> Folks who've worked with regular expressions before might know about ^ meaning "start-of-string" and correspondingly see $ as "end-of-string".
Huh. I always think of them as "start-of-line" and "end-of-line". I mean, a lot of the time when I'm working with regexes, I'm working with text a line at a time so the effect is the same, but that doesn't change how I think of those operators.
Maybe because a fair amount of the work I do with regexes (and, probably, how I was introduced to them) is via `grep`, so I'm often thinking of the inputs as "lines" rather than "strings"?
Even disregarding whether end-of-string is also an end-of-line (see all the other comments below), $ doesn't match the newline, just like other zero-width assertions such as \b, so the newline wouldn't be included in the matched text either way.
Problem is, plenty of software doesn't actually look at the matched text but rather just validates that there was a match (and then continues to use the full input that was matched against).
It's kind of driving me nuts that the article says ^ is "start of string" when it's actually "start of line", just like $ is "end of line". \A is apparently "start of string" like \Z is "end of string".
Usually ^ matches only at the beginning of the string, and $ matches only at the end of the string and immediately before the newline (if any) at the end of the string. When this flag is specified, ^ matches at the beginning of the string and at the beginning of each line within the string, immediately following each newline. Similarly, the $ metacharacter matches either at the end of the string or at the end of each line (immediately preceding each newline).
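This reads like Python's re.MULTILINE documentation; a minimal sketch of the difference it describes (assuming the standard re module):

```python
import re

s = "cat\ndog\n"

# Default mode: ^ anchors only at the start of the string and $ only at the
# end (or just before a trailing newline), so the inner \n blocks any match.
re.findall(r"^\w+$", s)                # []

# re.MULTILINE: ^ and $ also anchor around every internal newline.
re.findall(r"^\w+$", s, re.MULTILINE)  # ['cat', 'dog']
```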
In single-line [2] mode, the line starts at the start of the string and ends either at the end of the string (if there is no terminating newline) or just before the final newline (if there is one).
In multi-line mode, a line starts at the start of the string and after each newline, and ends before each newline, or at the end of the string if the last line has no terminating newline.
The confusion is that people think they are in string mode if they are not in multi-line mode. They are not: they are in single-line mode. ^ and $ still use the semantics of lines, and a terminating newline, if present, is still not part of the content of the line.
With \n\n\n in single-line mode, the non-greedy ^(\n+?)$ will capture only two of the newlines; the third is accounted for by the $, which matches just before it. If you make it greedy, ^(\n+)$ will capture all three newlines. So arguably the implementations that do not match cat\n with cat$ are the broken ones.
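For example, in Python (one of the implementations with this behavior):

```python
import re

# $ matches at the very end of the string or just before a final newline.
re.match(r"^(\n+?)$", "\n\n\n").group(1)  # '\n\n'   -- $ stops before the third \n
re.match(r"^(\n+)$", "\n\n\n").group(1)   # '\n\n\n' -- the greedy run reaches the end
re.search(r"cat$", "cat\n") is not None   # True
```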
The POSIX definition of a line is a sequence of non-newline characters (possibly zero) followed by a newline. Anything that does not end with a newline is not a [complete] line. So strictly speaking it would even be correct for cat$ not to match cat, because there is no terminating newline; it should only match cat\n. But since lines missing a terminating newline are a thing, it seems reasonable to be less strict.
Python violates that definition, however, by allowing internal newlines in strings. For example, /^c[^a]t$/ matches "c\nt\n", but according to POSIX that's not a line.
I suspect the real reason for Python's behavior starts with the early decision to include the terminating newline in the string returned by IOBase.readline().
Python's peculiar choice has some minor advantages: you can distinguish between files that do and don't end with a terminating newline (the latter are invalid according to POSIX, but common in practice, especially on Windows), and you can reconstruct the original file by simply concatenating the line strings, which is occasionally useful.
The downside of this choice is that as a caller you have to deal with strings that may-or-may-not contain a terminating newline character, which is annoying (I often end up calling rstrip() or strip() on every line returned by readline(), just to get rid of the newlines; read().splitlines() is an option too if you don't mind reading the entire file into memory upfront).
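For illustration, both flavors of the workaround (the file name here is made up):

```python
# Strip the maybe-present newline from each line as it is read.
with open("notes.txt") as f:
    lines = [ln.rstrip("\n") for ln in f]  # each raw line may or may not end in "\n"

# Or trade memory for convenience and let splitlines() handle the terminators.
with open("notes.txt") as f:
    lines = f.read().splitlines()          # newlines already removed
```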
My guess is that Python's behavior is just a hack to make re.match() easier to use with readline(), rather than based on any principled belief about what lines are.
Python's behavior is not a hack; it is the common behavior. $ matches at the end of the string or before the last character if that is a newline, which is logically the same as the end of a single line. But as you said, you can have additional newlines inside the string, which is also common behavior and not specific to Python. Personally, I think of it this way: you assume the string is a single line and match $ accordingly, either at the end of the string or before a terminating newline. If there are additional newlines, you treat them mostly as normal characters, with the exception that dot does not match newlines unless you set the single-line/dot-all flag.
The very post we're commenting on shows that that's not true: PHP, Python, Java and .NET (C#) share one behavior (accept "\n" as "$"), and ECMAScript (Javascript), Golang, and Rust share another behavior (do not accept "\n" as $).
Let's not argue about which is “the most common”; all of these languages are sufficiently common to say that there is no single common behavior.
> $ matches at the end of the string or before the last character if that is a newline, which is logically the same as the end of a single line.
Yes, that is Python's behavior (and PHP's, Java's, etc.). But you're just describing it, not motivating why it has to work that way or why it's more correct than the obvious alternative of only matching the end of the string.
Subjectively, I find it odd that /^cat$/ matches not just the obvious string "cat" but also the string "cat\n". And I think historically, it didn't. I tried several common tools that predate Python:
- awk 'BEGIN { print ("cat\n" ~ /^cat$/) }' prints 0
- in GNU ed, /^M/ does not match any lines
- in vim, /^M/ does not match any lines
- sed -n '/\n/p' does not print any lines
- grep -P '\n' does not match any lines
- (I wanted to try `grep -E` too but I don't know how to escape a newline)
- perl -e 'print ("cat\n" =~ /^cat$/)' prints 1
So the consensus seems to be that the classic UNIX line-based tools match the regex against the line excluding the newline terminator (which makes sense since it isn't part of the content of that line) and therefore $ only needs to match the end of the string.
The odd one out is Perl: it seems to have introduced the idea that $ can match just before a newline at the end of the string, probably for similar reasons as Python. All of this suggests to me that allowing $ to match both "\n" and "" at the end of the string was a hack designed to make it easier to deal with strings without control characters and strings that end with a single newline.
> So the consensus seems to be that the classic UNIX line-based tools match the regex against the line excluding the newline terminator (which makes sense since it isn't part of the content of that line) and therefore $ only needs to match the end of the string.
If you read a line, you usually remove the newline at the end, but you could also keep it, as Python does. If you remove the newline, then a line can never contain a newline, and the case cat\n can never occur. If you keep the newline, there will be exactly one newline as the last character, and you arguably want cat$ to match cat\n because that newline is the end of the line but not part of the content. It makes perfect sense that $ matches at the end of the string or before a newline as the last character, as it will do the right thing whether or not you strip the newline.
If you want cat$ to not match cat\n, then you are obviously not dealing with lines: you have a string with a newline at the end, but you consider this newline part of the content instead of a line terminator. But ^ and $ are made for lines, so they do not work as expected. I also get what people are complaining about: if you are not in multi-line mode and have a proper line with at most one newline at the end, then it will behave exactly as if you were in multi-line mode, which raises the question of why you would have those two modes to begin with. Non-multi-line mode only behaves differently if you have additional newlines, or one newline not at the end, that is, if you do not have a proper line. So why should $ still behave as if you were dealing with a line?
If you are not in multi-line mode, then a single line is expected, and consequently there is at most one newline, at the end of the string. You can of course pick an input that violates this and run a multi-line string with several newlines in it against the pattern. cat\n\n will not match cat$ because there is something between cat and the end of the line; it just happens to be a newline, but one without any special meaning, because it is not the last character and you did not say that the input is multi-line.
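That distinction is easy to check, e.g. in Python:

```python
import re

bool(re.search(r"cat$", "cat\n"))                  # True  -- the single trailing \n is special
bool(re.search(r"cat$", "cat\n\n"))                # False -- only the final \n gets that treatment
bool(re.search(r"cat$", "cat\n\n", re.MULTILINE))  # True  -- now every \n ends a line
```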
If you have a file containing `A\nB\nC`, the file is three lines long.
I guess it could be argued that a file containing `A\nB\nC\n` has four lines, with the fourth having zero length.
That a regex is applied to an in-memory string vs. a file doesn't feel to me like it should change the semantics.
Digging into the history a little, it looks like regexes were popularized in text editors and other file oriented tooling. In those contexts I imagine it would be far more common to want to discard or ignore the trailing zero length line than to process it like every other line in a file.
Technically the “newline” character is actually a line _terminator_. Hence “A\n” is one line, not two. The “\n” is always at the end of a line by definition.
Yes, that is a file with zero lines that ends with an "incomplete line". Processing of such files by standard line-oriented utilities is undefined in the Open Group spec. So, for instance, the effect of "grep"ping such a file is not defined. Heck, even "cat"ting such a file gives non-ideal results, such as colliding with the regular shell prompt. For this reason, a lot of software projects I work on check for and correct this condition whenever creating a commit.
No, it is valid for a file to have content but no lines.
Semantically, many libraries treat that as a line: while \n<EOF> means "the end of the last line", having just <EOF> adds complexity the user has to handle to read the remaining input. But by the book it's not "a line".
If I said "ten buckets of water" does that mean ten full buckets? Or does a bucket with a drop in it count as "a bucket of water?" If I asked for ten buckets of water and you brought me nine and one half-full, is that acceptable? What about ten half-full buckets?
A line ends in a newline. A file with no newlines in it has no lines.
That's beyond ridiculous. In most languages, when you read a line from a file and it doesn't have a \n terminator, you get that line back; they don't say "oops, this isn't a line, sorry".
I don't think you can meaningfully generalize to "most languages" here. To give an example, two extremely popular languages are C and Python. Both have a standard library function to read a line from a text stream - fgets() for C, readline() for Python. In both cases, the behavior is to read up to and including the newline character, but also to stop if EOF is encountered before then. Which means that the return value is different for terminated vs unterminated final lines in both languages - in particular, if there's no \n before EOF, the value returned is not a line (as it does not end with a newline), and you have to explicitly write your code to accommodate that.
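The Python side of that, sketched with an in-memory stream standing in for a real file:

```python
import io

io.StringIO("cat\n").readline()  # 'cat\n' -- the terminator is included when present
io.StringIO("cat").readline()    # 'cat'   -- unterminated text is still returned at EOF
```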
That's a relatively recent invention compared to tools like `wc` (or your favorite `sh` for that matter). See also: https://perldoc.perl.org/functions/chop wherein the norm was "just cut off the last character of the line, it will always be a newline"
I get that this is largely a semantic debate, but I find it a little ironic that so many programmers seem put off by the idea of a line count that starts at “0”.
Another way to look at it is that concatenating files should sum the line count. Concatenating two empty files produces an empty file, so 0 + 0 = 0. If “incomplete lines” are not counted as lines, then the maths still works out. If they counted as lines, it would end up as 1 + 1 = 1.
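That additivity is easy to spell out with a toy counter (a sketch of the counting rule, not of how wc is actually implemented):

```python
# A "line" is a newline terminator, so counts add up under concatenation.
def count_lines(data: str) -> int:
    return data.count("\n")

count_lines("a\n" + "b\n") == count_lines("a\n") + count_lines("b\n")  # True: 2 == 1 + 1
count_lines("a" + "b") == count_lines("a") + count_lines("b")          # True: 0 + 0 == 0
```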
No, a line is defined as a sequence of characters (bytes?) with a line terminator at the end.
Technically, as per POSIX, a file as you describe it is actually a binary file without any lines. Basically just random binary data that happens to kind of look like a line.
This isn't a weird state. It's a language problem. An 'incomplete line' isn't a type of line; it's an unfortunate name for a thing that is not a line. Just like 'wor' is an incomplete word (the word 'word'), but 'wor' is, of course, not a word.
Same thing for formalisms like equations in algebra or formulas in propositional logic: we have the phrase 'well-formed formula', and we might describe some sequences of terms as 'incomplete formulas' or perhaps 'ill-formed formulas', but those phrases don't describe anything that meets the formal system's definition of 'formula' at all: they are not formulas. 'Ill-formed formula' is not a compositional phrase where 'ill-formed' describes a feature of a 'formula'. It's a bit of convenient language for what we can intuitively or metaphorically recognize as a formula-ish thing.
That's a weird way to look at it. Binary files might not have "lines", but there's no reason they couldn't include a byte with value 10 (the ASCII value for \n). Software reading that file wouldn't know the difference, right?
Also, why couldn't you have a text file without any lines?
It's a file with zero complete lines. But it has 1 line, that's incomplete, right?
Because the Unix definition of text file requires the file to end with a newline. "Lines" only exist in the context of text files. If there's no terminating newline, it's (pedantically) not a text file and so has no lines. Now, in practice, if you open() that file in text mode, it doesn't TMK return an error if the terminating newline isn't present, but it's undefined behaviour.
And if you do have a terminating newline, then you have at least one line :).
That seems like a broken (maybe just bad?) definition/specification to me. A blob of JSON in a file isn't "text" if there's no newline character trailing it?
This does precisely nothing to solve the ambiguity issue when a final line lacks a newline. The representation of that newline isn't relevant to the problem.
The point is that having a sequence of two delimiters to signal the end of the logical line allows you to have single instances of either delimiter included within the text. This allows visual line breaks to be included within the same line as understood by the regex parser.
Despite the downvotes your comment received, I think you have a good point. There are two uses for a newline, first to signal the end of a line, for example when sending text over a serial connection, and second to separate two lines, for example in a text file.
To indicate that a serially received line is complete, the interpretation as a terminator makes perfect sense: abcd\n is a complete line, abc is a still-incomplete line. In a text file, the interpretation as a separator might be preferable because it gets rid of the issue of the last line not having a newline: a\nb\nc is three lines separated by two newlines, a\nb\nc\n is four lines separated by three newlines, and the last line is empty.
But then it might also be useful to have a terminator in a text file, to be able to detect an incompletely written line. So using two characters, one for each purpose, could solve the problem: \r means the line is complete, \n means a next line follows. abc is an incomplete line; abcd\r is a complete line with no line following; abcd\r\n is a complete line followed by a second, currently empty, incomplete line; abcd\r\n\r is two complete lines, the second one empty; abcd\r\nefg is a complete line followed by an incomplete line; abcd\r\nefg\r is two complete lines. You could even have two incomplete lines: abc\nefg.
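Written out as a toy parser (entirely hypothetical; no real text format works this way):

```python
# \r completes a line; \n separates lines.
def parse(text: str):
    return [
        (chunk[:-1], "complete") if chunk.endswith("\r") else (chunk, "incomplete")
        for chunk in text.split("\n")
    ]

parse("abcd\r\nefg")    # [('abcd', 'complete'), ('efg', 'incomplete')]
parse("abcd\r\nefg\r")  # [('abcd', 'complete'), ('efg', 'complete')]
parse("abc\nefg")       # [('abc', 'incomplete'), ('efg', 'incomplete')]
```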
But I think Windows always uses \r\n because this is how you get to a new line on a typewriter or a really old printer: you return the carriage and feed the paper one line. I do not think they had the idea of differentiating between terminator and separator; otherwise you could have only \r, and maybe even only \n sometimes. But in principle this could work quite nicely, I guess. You could start a line with \n and end it with \r; this would give you \r\n between lines and \r after the final line, or nothing if the final line is incomplete, or \r\n if the final line is incomplete and currently empty. The odd thing would be a newline as the very first character; maybe one could suppress that. This would also be compatible with Windows and *nix; it would just consider all *nix lines incomplete. Only abc\rdef\r would not really make sense: two complete lines, but the second one is not a new line.
If I ever get to write a new operating system, I will inflict this on humanity.
I mean, it was what everyone had agreed upon previously. Microsoft was the only party to follow through. For all the guff they get for not following standards, this was the one standard they did follow.
You don't have to love a company to acknowledge they did something right.
It doesn't indicate the start of a new line, or files would start with it. Files end with it, which is why it is a line terminator. And it is by definition: by the standard, by the way cat and/or your shell and/or your terminal work together, and by the way standard utilities like `wc` treat the file.
I don't know why no-one here sees this as a bad design...
If a line is missing a newline then we just disregard it?!
A much better way to deal with the newline is to treat it as a separator, like a comma. And, as in modern languages, we allow a trailing separator but ignore it, so that it is easier for tools to generate files.
Now all combinations of characters, including newline characters, have an interpretation without dropping anything.
I also always preferred the interpretation of a newline as a separator instead of a terminator for files, because I never liked the final newline causing a new empty line in the editor, and, like you, I thought it was bad design that you can have a somewhat invalid file.
But if you look beyond files, the interpretation as a terminator also makes perfect sense: when you receive text over a serial connection, it signals that the line is complete, which does not necessarily imply that another line will follow. The same in a file: if the terminating newline is missing, you can deduce that an incomplete write occurred and some data might be missing. If you decide to have a newline as a separator after the last line but to ignore it, then you cannot represent an empty last line.
I guess you would need two different characters, one terminator and one separator. You could start a line with \n and end it with \r: the \n separates the line from the one before, then \r terminates the line and marks it as complete. You would get \r\n between lines, as on Windows, and the last line would end with \r if complete, or would otherwise count as incomplete. Then again, you could almost get the same thing with \n only; you would just have to change the interpretation. Instead of \n giving you a line and no \n giving you not-a-line, you would say that \n gives you a complete line and no \n gives you an incomplete line. With that, however, you could not have an incomplete empty line.
This effort of building in redundancy is pointless. We just need a newline to know where to start the output on a new line. If you want to safeguard the proper content of a file, a whole lot more is needed.
POSIX getline() includes EOF as a line terminator:
    getline() reads an entire line from stream, storing the address of the buffer containing the text into *lineptr. The buffer is null-terminated and includes the newline character, if one was found.
    ...
    ... a delimiter character is not added if one was not present in the input before end of file was reached.
Your quoted documentation says otherwise. It says that a 'line' includes the delimiter, '\n', in the line buffer. It also says that if no delimiter is found before EOF is reached, the line buffer will not include the delimiter. That means the line buffer can clearly indicate an incomplete line by the absence of the delimiter. To be clear, EOF isn't a 'line terminator'; it's the end of the data stream.
Probably a vulnerability issue. Programmers would leave multiline mode on by mistake, then validate that some string only contains ^[a-zA-Z]*$… only for the string to contain an \n and an SQL injection on the second line.
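A sketch of that pitfall in Python (the input is invented):

```python
import re

user_input = "safe\n'; DROP TABLE users; --"

# With re.MULTILINE the anchors validate one line, not the whole string:
bool(re.search(r"^[a-zA-Z]*$", user_input, re.MULTILINE))  # True -- the first line passes
# \A/\Z anchors (or fullmatch) check the entire string:
bool(re.fullmatch(r"[a-zA-Z]*", user_input))               # False
```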
What is driving me nuts is that we have Unicode now, so there is no need to use common characters like $ or ^ to denote special regex state transitions.
If we were willing to ignore the ability to actually type it, you don't need Unicode for that; ASCII has a whole block of control characters at the beginning; I think ASCII 25 ("End of medium") works here.
The idea of changing a decades-old convention to instead use, as I assume you are implying, some character that requires special entry is beyond silly.
It’s not that silly. You constantly get into escape conundrums because you need to use a metacharacter which is also a metacharacter three levels deep in some embedding.
(But that might not solve that problem? Maybe the problem is mostly about using same-character delimiters for strings.)
And I guess that’s why Perl is so flexible with regards to delimiters and such.
Yes, languages really need some sort of "raw string" feature like Python (or make regex literals their own syntax like Perl does). That's the solution here, not using weird characters...
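For reference, the Python version of that feature:

```python
import re

# Without a raw string, every backslash in the regex must itself be escaped:
pattern = "\\bcat\\b"
# With a raw string, the pattern reads exactly the way the engine sees it:
pattern = r"\bcat\b"
re.search(pattern, "the cat sat")  # matches 'cat'
```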
Fine enough. But I wonder why strings have to use the same delimiter. Imagine if you had a list delimiter `|` and the answer to nested lists was “ohh, use raw list syntax, just make `###||` when you are three levels deep or something”.
It is quite nice what `sed` does. A sed search-and-replace is typically shown as `s/foo/bar/`, but you can actually use any punctuation character to separate the parts. Whatever follows the "s" will be used for that statement, so you can write `s|foo|bar|` or `s:foo:bar:`, even mixing and matching in the same script to have `s|baz|quux|; s:xyzzy:blorp:` and it will all work.
On the third hand, strings are of course a special case, because you always have special characters and whatnot, which makes raw strings useful. :) Not just doing `"` and stuff.
I don't think anyone who writes regexes would feel especially challenged by using the Alt+ or Ctrl+Shift+U key combos for Unicode entry. Having to escape fewer things in a pattern would be nice.
I write regexes all the time, and I don't know if I would be CHALLENGED by that, but it would be annoying. Escaping things is trivial, and since you do it all the time it is not anything extra to learn. Having to remember bespoke keystrokes for each character is a lot more to learn.
Regexes are one case where I think it's already extremely unbalanced wrt being easy to write but hard to read. Using stuff like special Unicode chars for this would make them harder to write but easier to read, which sounds like a fair deal to me. In general, I'd say that regexes should take time and effort to write, just because it's oh-so-easy to write something that kinda sorta works but has massive footguns.
I would also imagine that, if this became the norm, IDEs would quickly standardize around common notation - probably actually based on existing regex symbols and escapes - to quickly input that, similar to TeX-like notation for inputting math. So if you're inside a regex literal, you'd type, say, \A, and the editor itself would automatically replace it with the Unicode sigil for beginning-of-string.
Regexes originate from Perl, or at least were popularized by Perl, if I got this right. In Perl, readable code is not ranked as one of its top 100 priorities. Regexes could have originated from J, and then the situation could be even worse, though!
Regexes predate Perl quite substantially. Think grep and friends, if nothing else.
Certainly, making Perl-style regexes available as a separate library (PCRE) encouraged their widespread use, and lots of others copied or were inspired by them.
"Popularized" doesn't seem like quite the right word though, I don't disagree with the point, but if I shout "Hey everyone let's write regex's" at the office people throw stationary at me, which is not true of other popular things!
I took a look at Raku, which claims to be a better Perl, maybe, or closely related but more modern; it certainly looks nice. Although I am a big fan of typed languages, Raku piqued my interest.
Very nice, good to know. Yes, I know gradual typing; Python has a form of that, I think. I will check out Raku at some point; the type system will not go unnoticed. I didn't even know it had one!
ASCII restriction begets ASCII toothpick soup. Either lift that restriction or use balanced delimiters for strings in ASCII like backtick and single quote.
(“But backtick is annoying to type” said the Europeans.)
People say this all the time, but is it really always true? I have a ton of code that I wrote, that just works, and I never really look at it again, at least not with the level of inspection that requires parsing the regex in my head.
Even for code I wrote once and then never have to fix, I end up reading it multiple times while I create it and the lines around it. I think it really is always true.
Why not? Common characters are easier to type, and presumably if you are using a regex on a Unicode string it might include those special characters anyway, so what have you gained?
It's not really an issue if the string you're matching might have those characters. It's an issue if the regex you are matching that string against might need to match those characters verbatim. Which is actually pretty common with ()[]$ when you're matching phone numbers, prices, etc., so you end up having to escape a lot, and the regex is less readable, especially if it also has to use those same characters as regex operators. On the other hand, it would be very uncommon to want to literally match, say, ⦑⦒ or ⟦⟧.
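For instance, with a made-up phone-number pattern:

```python
import re

# The literal parentheses have to be escaped because ( and ) are metacharacters:
re.search(r"\(\d{3}\) \d{3}-\d{4}", "call (555) 123-4567")  # matches '(555) 123-4567'
```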
I'm the same, but now that I try in Perl, sure enough, $ seems to default to being a positive lookahead assertion for the end of the string. It does not match and consume an EOL character.
Only in multiline mode does it match at EOL characters, but even then it does not appear to consume them. In fact, I cannot construct a regex that captures the last character of one line, then consumes the newline, and then captures the first character of the next line, while using $. The capture group simply ends at $.
> Maybe because a fair amount of the work I do with regexes (and, probably, how I was introduced to them) is via `grep`, so I'm often thinking of the inputs as "lines" rather than "strings"?
Same, tho it'd be interesting to see if this behavior holds if the file ends without a trailing newline and your match is on the final newline-less line.
Yes exactly, they match the end of a line, not a newline character. Some examples from documentation:
man 7 regex: '$' (matching the null string at the end of a line)
pcre2pattern: The circumflex and dollar metacharacters are zero-width assertions. That is, they test for a particular condition being true without consuming any characters from the subject string. These two metacharacters are concerned with matching the starts and ends of lines. ... The dollar character is an assertion that is true only if the current matching point is at the end of the subject string, or immediately before a newline at the end of the string (by default), unless PCRE2_NOTEOL is set. Note, however, that it does not actually match the newline. Dollar need not be the last character of the pattern if a number of alternatives are involved, but it should be the last item in any branch in which it appears. Dollar has no special meaning in a character class.
I don't have (n)vi(m) open right now but I think this only applies to prepending spaces. For prepending tabs, 0 will take you to the first non-tab character as well.
If you have "set list" to make non-space whitespace visible, it'll go to the leftmost position. I did it long ago along with "set listchars=trail:.,tab:>-" so I can see not only where tabs are, but also their size/alignment without causing the text to shift.
I feel like this perspective will be split between folks who use regexes in code with strings, and more sysadmin folks who are used to consuming lines from files in scripts and at the CLI.
But yeah, it seems like a real misunderstanding from the "start/end of string" people.