Usually ^ matches only at the beginning of the string, and $ matches only at the end of the string and immediately before the newline (if any) at the end of the string. When this flag is specified, ^ matches at the beginning of the string and at the beginning of each line within the string, immediately following each newline. Similarly, the $ metacharacter matches at the end of the string and at the end of each line (immediately preceding each newline).
In single-line [2] mode, the line starts at the start of the string and ends either at the end of the string, if there is no terminating newline, or just before the final newline, if there is one.
In multi-line mode, a new line starts at the start of the string and after each newline, and ends before each newline or at the end of the string if the last line has no terminating newline.
The confusion is that people think they are in string mode if they are not in multi-line mode, but they are not: they are in single-line mode. ^ and $ still use the semantics of lines, and a terminating newline, if present, is still not part of the content of the line.
With \n\n\n in single-line mode, the non-greedy ^(\n+?)$ will capture only two of the newlines; the $ matches just before the third. If you make it greedy, ^(\n+)$ will capture all three newlines. So arguably the implementations that do not match cat\n with cat$ are the broken ones.
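A quick way to see all of this is Python's re module (just a sketch, but it is exactly the behavior the docs above describe):

```python
import re

s = "first\nsecond\n"

# Single-line (default) mode: ^ anchors only at the start of the string,
# $ anchors at the end of the string or just before a trailing newline.
print(re.findall(r"^\w+", s))                # ['first']
print(re.findall(r"\w+$", s))                # ['second']

# Multi-line mode: ^ also anchors after every newline, $ also before every newline.
print(re.findall(r"^\w+", s, re.MULTILINE))  # ['first', 'second']
print(re.findall(r"\w+$", s, re.MULTILINE))  # ['first', 'second']

# The \n\n\n example, in single-line mode:
print(len(re.match(r"^(\n+?)$", "\n\n\n").group(1)))  # 2 - $ matches before the last \n
print(len(re.match(r"^(\n+)$", "\n\n\n").group(1)))   # 3 - greedy takes all three
```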
The POSIX definition of a line is a sequence of non-newline characters (possibly zero of them) followed by a newline. Everything that does not end with a newline is not a [complete] line. So strictly speaking it would even be correct that cat$ does not match cat because there is no terminating newline; it should only match cat\n. But as lines missing a terminating newline are a thing, it seems reasonable to be less strict.
Python violates that definition, however, by allowing internal newlines in strings. For example /^c[^a]t$/ matches "c\nt\n", but according to POSIX that's not a line.
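Easy to confirm with Python's re (default flags):

```python
import re

# "c\nt\n" is two POSIX lines (it has an internal newline), yet this matches:
# [^a] happily matches the internal \n, and $ matches before the trailing \n.
print(bool(re.match(r"^c[^a]t$", "c\nt\n")))   # True
```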
I suspect the real reason for Python's behavior starts with the early decision to include the terminating newline in the string returned by IOBase.readline().
Python's peculiar choice has some minor advantages: you can distinguish between files that do and don't end with a terminating newline (the latter are invalid according to POSIX, but common in practice, especially on Windows), and you can reconstruct the original file by simply concatenating the line strings, which is occasionally useful.
The downside of this choice is that as a caller you have to deal with strings that may or may not contain a terminating newline character, which is annoying (I often end up calling rstrip() or strip() on every line returned by readline(), just to get rid of the newlines; read().splitlines() is an option too if you don't mind reading the entire file into memory upfront).
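A sketch of the dance I mean (plain Python I/O; "example.txt" is just a stand-in path):

```python
with open("example.txt") as f:
    for raw in f:                          # same semantics as repeated readline()
        line = raw.rstrip("\n")            # raw may or may not end with "\n"
        print(repr(line))

# Or, if reading the whole file up front is acceptable:
with open("example.txt") as f:
    for line in f.read().splitlines():     # splitlines() drops the terminators for you
        print(repr(line))
```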
My guess is that Python's behavior is just a hack to make re.match() easier to use with readline(), rather than based on any principled belief about what lines are.
Python's behavior is not a hack, it is the common behavior. $ matches at the end of the string or before the last character if that is a newline, which is logically the same as the end of a single line. But as you said, you can have additional newlines inside the string, which is also the common behavior and not specific to Python. Personally, I think of it this way: you just assume that the string is a single line and match $ accordingly, either at the end of the string or before a terminating newline. If there are additional newlines, you treat them mostly as normal characters, with the exception of dot not matching newlines unless you set the single-line/dot-all flag.
The very post we're commenting on shows that that's not true: PHP, Python, Java, and .NET (C#) share one behavior ($ matches before a trailing "\n"), while ECMAScript (JavaScript), Go, and Rust share another (it does not).
Let's not argue about which is “the most common”; all of these languages are sufficiently common to say that there is no single common behavior.
> $ matches at the end of the string or before the last character if that is a newline, which is logically the same as the end of a single line.
Yes, that is Python's behavior (and PHP's, Java's, etc.). You're just describing it, not motivating why it has to work that way or why it's more correct than the obvious alternative of matching only at the end of the string.
Subjectively, I find it odd that /^cat$/ matches not just the obvious string "cat" but also the string "cat\n". And I think historically, it didn't. I tried several common tools that predate Python:
- awk 'BEGIN { print ("cat\n" ~ /^cat$/) }' prints 0
- in GNU ed, /^M/ does not match any lines
- in vim, /^M/ does not match any lines
- sed -n '/\n/p' does not print any lines
- grep -P '\n' does not match any lines
- (I wanted to try `grep -E` too but I don't know how to escape a newline)
- perl -e 'print ("cat\n" =~ /^cat$/)' prints 1
So the consensus seems to be that the classic UNIX line-based tools match the regex against the line excluding the newline terminator (which makes sense since it isn't part of the content of that line) and therefore $ only needs to match the end of the string.
The odd one out is Perl: it seems to have introduced the idea that $ can match before a newline at the end of the string, probably for similar reasons as Python. All of this suggests to me that allowing $ to match both "\n" and "" at the end of the string was a hack designed to make it easier to deal with strings without control characters and strings that end with a single newline.
> So the consensus seems to be that the classic UNIX line-based tools match the regex against the line excluding the newline terminator (which makes sense since it isn't part of the content of that line) and therefore $ only needs to match the end of the string.
If you read a line, you usually remove the newline at the end, but you could also keep it as Python does. If you remove the newline, then a line can never contain a newline; the case cat\n can never occur. If you keep the newline, there will be exactly one newline as the last character, and you arguably want cat$ to match cat\n because that newline is the end of the line but not part of the content. It makes perfect sense that $ matches at the end of the string or before a newline as the last character, as it will do the right thing whether or not you strip the newline.
If you want cat$ to not match cat\n, then you are obviously not dealing with lines: you have a string with a newline at the end, but you consider this newline part of the content instead of a line terminator. But ^ and $ are made for lines, so they do not work as expected. I also get what people are complaining about: if you are not in multi-line mode and have a proper line with at most one newline at the end, then it will behave exactly as if you were in multi-line mode, which raises the question why you would have those two modes to begin with. Non-multi-line mode only behaves differently if you have additional newlines or one newline not at the end, that is, if you do not have a proper line, so why should $ still behave as if you were dealing with a line?
If you are not in multi-line mode, then a single line is expected and consequently there is at most one newline at the end of the string. You can of course pick an input that violates this and run it against a multi-line string with several newlines in it. cat\n\n will not match cat$ because there is something between cat and the end of the line; it just happens to be a newline, but it has no special meaning because it is not the last character and you did not say that the input is multi-line.
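In Python terms, that is exactly what you get (a minimal check, no MULTILINE flag):

```python
import re

pat = re.compile(r"cat$")            # single-line (default) mode

print(bool(pat.search("cat")))       # True  - end of string
print(bool(pat.search("cat\n")))     # True  - just before the terminating newline
print(bool(pat.search("cat\n\n")))   # False - the first \n is an ordinary character here
```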
If you have a file containing `A\nB\nC`, the file is three lines long.
I guess it could be argued that a file containing `A\nB\nC\n` has four lines, with the fourth having zero length.
That a regex is applied to an in-memory string vs. a file doesn't feel to me like it should change the semantics.
Digging into the history a little, it looks like regexes were popularized in text editors and other file oriented tooling. In those contexts I imagine it would be far more common to want to discard or ignore the trailing zero length line than to process it like every other line in a file.
Technically the “newline” character is actually a line _terminator_. Hence “A\n” is one line, not two. The “\n” is always at the end of a line by definition.
Yes, that is a file with zero lines that ends with an "incomplete line". Processing of such files by standard line-oriented utilities is undefined in the Open Group spec. So, for instance, the effect of grepping such a file is not defined. Heck, even catting such a file gives non-ideal results, such as colliding with the regular shell prompt. For this reason, a lot of software projects I work on check and correct this condition whenever creating a commit.
No, it is valid for a file to have content but no lines.
Semantically, many libraries treat that as a line because, while \n<EOF> means "the end of the last line", having just <EOF> adds additional complexity the user has to handle to read the remaining input. But by the book it's not "a line".
If I said "ten buckets of water" does that mean ten full buckets? Or does a bucket with a drop in it count as "a bucket of water?" If I asked for ten buckets of water and you brought me nine and one half-full, is that acceptable? What about ten half-full buckets?
A line ends in a newline. A file with no newlines in it has no lines.
That's beyond ridiculous. In most languages, when you are reading a line from a file and it doesn't have a \n terminator, it's going to give you that line, not say, oops, this isn't a line, sorry.
I don't think you can meaningfully generalize to "most languages" here. To give an example, two extremely popular languages are C and Python. Both have a standard library function to read a line from a text stream - fgets() for C, readline() for Python. In both cases, the behavior is to read up to and including the newline character, but also to stop if EOF is encountered before then. Which means that the return value is different for terminated vs unterminated final lines in both languages - in particular, if there's no \n before EOF, the value returned is not a line (as it does not end with a newline), and you have to explicitly write your code to accommodate that.
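For instance, with Python's readline() the caller has to make the terminated/unterminated distinction itself (a small sketch; "data.txt" is a made-up file name):

```python
with open("data.txt") as f:
    while True:
        line = f.readline()
        if line == "":                 # empty string means EOF, not an empty line
            break
        if line.endswith("\n"):
            content = line[:-1]        # a complete, terminated line
        else:
            content = line             # final line had no newline - handle it explicitly
        print(repr(content))
```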
That's a relatively recent invention compared to tools like `wc` (or your favorite `sh` for that matter). See also: https://perldoc.perl.org/functions/chop wherein the norm was "just cut off the last character of the line, it will always be a newline"
I get this is largely a semantic debate, but I find it a little ironic that so many programmers seem put off by the idea of a line count that starts at “0”.
Another way to look at it is that concatenating files should sum the line counts. Concatenating two empty files produces an empty file, so 0 + 0 = 0. If “incomplete lines” are not counted as lines, the maths still works out even for files that end without a newline: two files that each contain just `A` concatenate into a single incomplete line, and 0 + 0 = 0. If incomplete lines counted as lines, it would end up as 1 + 1 = 1.
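A toy illustration of the additivity argument, assuming a counter that, like wc -l, counts only terminating newlines:

```python
def count_lines(data: str) -> int:
    return data.count("\n")          # only terminated lines count, POSIX-style

# Terminated lines add up as expected: 1 + 1 == 2.
assert count_lines("A\n") + count_lines("B\n") == count_lines("A\n" + "B\n")

# Two files that are each one unterminated line concatenate into one unterminated
# line; counting them as 0 keeps the sum consistent (0 + 0 == 0), while counting
# them as 1 each would require 1 + 1 == 1.
assert count_lines("A") + count_lines("A") == count_lines("A" + "A")
```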
No, a line is defined as a sequence of characters (bytes?) with a line terminator at the end.
Technically, as per POSIX, a file as you describe is actually a binary file without any lines. Basically just random binary data that happens to kind of look like a line.
This isn't a weird state. It's a language problem. An 'incomplete line' isn't a type of line, it's an unfortunate name for a thing that is not a line. Just like how 'wor' is an incomplete word (namely, the word 'word'), but 'wor' is, of course, not a word.
Same thing for formalisms like equations in algebra or formulas in propositional logic— we have the phrase 'well-formed formula', and we might describe some sequences of terms as 'incomplete formulas' or perhaps 'ill-formed formulas', but those phrases don't describe anything that meets the formal system's definition of 'formula' at all— they are not formulas. 'Ill-formed formula' is not a compositional phrase where 'ill-formed' describes a feature of a 'formula'. It's a bit of convenient language for what we can intuitively or metaphorically recognize as a formula-ish thing.
That's a weird way to look at it. Binary files might not have "lines", but there's no reason they couldn't include a byte with value 10 (the ASCII value for \n). Software reading that file wouldn't know the difference, right?
Also, why couldn't you have a text file without any lines?
It's a file with zero complete lines. But it has 1 line, that's incomplete, right?
Because the Unix definition of text file requires the file to end with a newline. "Lines" only exist in the context of text files. If there's no terminating newline, it's (pedantically) not a text file and so has no lines. Now, in practice, if you open() that file in text mode, it doesn't TMK return an error if the terminating newline isn't present, but it's undefined behaviour.
And if you do have a terminating newline, then you have at least one line :).
That seems like a broken (maybe just bad?) definition/specification to me. A blob of JSON in a file isn't "text" if there's no newline character trailing it?
This does precisely nothing to solve the ambiguity issue when a final line lacks a newline. The representation of that newline isn't relevant to the problem.
The point is that having a sequence of two delimiters to signal the end of the logical line allows you to have single instances of either delimiter included within the text. This allows visual line breaks to be included within the same line as understood by the regex parser.
Despite the downvotes your comment received, I think you have a good point. There are two uses for a newline, first to signal the end of a line, for example when sending text over a serial connection, and second to separate two lines, for example in a text file.
To indicate that a serially received line is complete, the interpretation as a terminator makes perfect sense: abcd\n is a complete line, abc is still an incomplete line. In a text file the interpretation as a separator might be preferable because that gets rid of the issue of the last line not having a newline: a\nb\nc are three lines separated by two newlines, a\nb\nc\n are four lines separated by three newlines, and the last line is empty.
But then it might also be useful to have a terminator in a text file to be able to detect an incompletely written line. So using two characters, one for each purpose, could solve the problem: \r means the line is complete, \n means that a next line follows. abc is an incomplete line, abcd\r is a complete line and no line follows, abcd\r\n is a complete line followed by a second incomplete line that is currently empty. abcd\r\n\r are two complete lines, the second one empty. abcd\r\nefg is a complete line followed by an incomplete line. abcd\r\nefg\r are two complete lines. You could even have two incomplete lines: abc\nefg.
But I think Windows always uses \r\n because this is how you get to a new line on a typewriter or a really old printer: you return the carriage and feed the paper one line. I do not think that they had the idea of differentiating between terminator and separator, otherwise you could have only \r, and maybe even only \n sometimes. But in principle this could work quite nicely, I guess. You could start a line with \n and end it with \r; this would give you \r\n between lines and \r after the final line, or nothing if the final line is incomplete, or \r\n if the final line is incomplete and currently empty. The odd thing would be a newline as the very first character; maybe one could suppress that. This would also be compatible with Windows and *nix, it would just consider all *nix lines incomplete. Only abc\rdef\r would not really make sense: two complete lines, but the second one is not a new line.
If I ever get to write a new operating system, I will inflict this on humanity.
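For what it's worth, here is a toy parser for that hypothetical scheme (nothing standard, purely an illustration: \r marks a line as complete, \n says another line follows):

```python
def parse(stream: str):
    """Return (content, is_complete) pairs for the CR-terminator / LF-separator scheme."""
    lines = []
    current, started = "", True          # the first line needs no leading \n
    for ch in stream:
        if ch == "\r":                   # line is complete
            lines.append((current, True))
            current, started = "", False
        elif ch == "\n":
            if not started:              # \n just introduces the next line
                started = True
            else:                        # \n after content: previous line never completed
                lines.append((current, False))
                current = ""
        else:
            current += ch
            started = True
    if started and (current or stream.endswith("\n")):
        lines.append((current, False))   # trailing incomplete line, possibly empty
    return lines

print(parse("abcd\r\nefg\r"))  # [('abcd', True), ('efg', True)]
print(parse("abcd\r\n"))       # [('abcd', True), ('', False)]
print(parse("abc"))            # [('abc', False)]
```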
I mean, it was what everyone had agreed upon previously. Microsoft was the only party to follow through. For all the guff they get for not following standards, it was the one standard they did.
You don't have to love a company to acknowledge they did something right.
It doesn't indicate the start of a new line, or files would start with it. Files end with it, which is why it is a line terminator. And it is by definition: by the standard, by the way cat and/or your shell and/or your terminal work together, and by the way standard utilities like `wc` treat the file.
I don't know why no-one here sees this as a bad design...
If a line is missing a newline then we just disregard it?!
A way better way to deal with the newline is to treat it as a separator, like a comma. And, as in modern languages, we allow a trailing separator but ignore it, so that it is easier for tools to generate files.
Now all combinations of characters, including newline characters, have an interpretation without dropping anything.
I also always preferred the interpretation of a newline as a separator instead of a terminator for files, because I never liked the final newline causing a new empty line in the editor and, like you, I thought it was bad design that you can have a somewhat invalid file.
But if you look beyond files, the interpretation as a terminator also makes perfect sense: when you receive text over a serial connection, it signals that the line is complete, which does not necessarily imply that another line will follow. The same goes for a file: if the terminating newline is missing, you can deduce that an incomplete write occurred and some data might be missing. If you decide to have a newline as a separator after the last line but to ignore it, then you cannot represent an empty last line.
I guess you would need two different characters, one terminator and one separator. You could start a line with \n and end it with \r. The \n separates the line from the one before, then \r terminates the line and marks it as complete. You would get \r\n between lines as on Windows, and the last line would only have \r if complete or would otherwise count as incomplete. Then again, you could almost get the same thing with \n only; you would just have to change the interpretation: instead of \n giving you a line and no \n giving you not a line, you would say that \n gives you a complete line and no \n gives you an incomplete line. With that, however, you could not have an incomplete empty line.
This effort of building in redundancy is pointless. We just need a newline to know where to start the output on a new line. If you want to safeguard the proper content of a file, a whole lot more is needed.
POSIX getline() includes EOF as a line terminator:
getline() reads an entire line from stream, storing the address
of the buffer containing the text into *lineptr. The buffer is
null-terminated and includes the newline character, if one was
found.
...
... a delimiter character is not added if one was
not present in the input before end of file was reached.
Your quoted documentation says otherwise. It says that a 'line' includes the delimiter, '\n', in the line buffer. It also says that if no delimiter is found before EOF is reached, the line buffer will not include the delimiter. That means the line buffer can clearly indicate an incomplete line by the absence of the delimiter. To be clear, EOF isn't a 'line terminator', it's the end of the data stream.
Probably a vulnerability issue. Programmers would leave multiline mode on by mistake, then validate that some string matches only ^[a-zA-Z]*$… only for the string to contain a \n and an SQL injection on the second line.
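In Python, for example, the foot-gun looks like this (the payload is made up):

```python
import re

payload = "abc\n' OR '1'='1"   # hypothetical second-line injection

# MULTILINE left on by mistake: $ matches before the embedded newline,
# so a "letters only" validation passes even though a second line follows.
print(bool(re.match(r"^[a-zA-Z]*$", payload, re.MULTILINE)))  # True
print(bool(re.match(r"^[a-zA-Z]*$", payload)))                # False without the flag
```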
> Matches the start of the string, and in MULTILINE mode also matches immediately after each newline.
Or JavaScript:
> An input boundary is the start or end of the string; or, if the m flag is set, the start or end of a line.
\A and \Z are start/end of input regardless of mode… when they're available; that's not the case for all engines.
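In Python's re, for instance, \Z really is the absolute end of the string (some engines, such as Perl and .NET, spell that \z and let \Z accept a trailing newline):

```python
import re

print(bool(re.search(r"cat$", "cat\n")))    # True  - $ tolerates the trailing newline
print(bool(re.search(r"cat\Z", "cat\n")))   # False - \Z does not
print(bool(re.search(r"\Acat\Z", "cat")))   # True
```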