I don't think they even make compilers that don't warn about the top 2 issues, and 5 for sure. And after that it gets down to "I did something stupid and something stupid happened." I mean:
int ii = i/++i;
Seriously? As if defining the order of operations would magically take the suck out of that statement.
There are like 1.5 nuggets of real pain in here. (1 point for returning a stack array, 0.5 points for noticing that C is C.)
Yeah, I was quite underwhelmed by these complaints. Several I don't even think are correct:
#9:
#define DEVICE_COUNT 4
uint8 *szDevNames[DEVICE_COUNT] = {
"SelectSet 5000",
"SelectSet 7000"}; /* table has two entries of junk */
"junk" is incorrect. It has 2 entries of zeros, and that is something that you can count on. If you really don't like that (and I generally do like that) you can turn on gcc's -Wextra which will warn about it.
#17 complains about chars being signed, but really it should be complaining about chars' signedness not being spec'd (it's implementation-dependent whether plain "char" is signed or unsigned). If you really care, use the "signed" keyword. But really, complaining about overflow? Strange.
I think the problem is that a lot of languages encourage you to do something clever, whereas C actively and relentlessly punishes you for being clever.
My favorite example is from #19, which is something that javascript programmers do all of the time.
int value = a && b && fn(a->x,b->x);
(The author goes on to complain about how value will obviously be whatever fn(a->x, b->x) returns).
In javascript, that's actually good practice (I guess, I had a mentor once who encouraged it, but I never really bought into it). To a C programmer though, that just looks gross. (Or at least to me, and I like to pretend to be a C programmer).
I always wrap my assignment-conditionals with double parentheses. It sucks when you miss these but I usually type out the right sequence of equal signs when I mean equality.
3. Unhygienic macros
Treat macros like a search-and-replace with a little more intelligence, but respect how literally the pre-processor will take you. So, add parentheses.
4. Mismatched header files
I've not encountered this before, so I can't comment on it. Be careful with namespaces.
5. Phantom Returned Values
Luckily, gcc -Wall returns: warning: control reaches end of non-void function
6. Unpredictable struct construction
Can't comment on this one either, although I avoid literal assignments given in the example like the plague.
7. Indefinite order of evaluation
The example code just looks messy.
8. Easily changed block scope
Always use curly braces, that's what I say.
9. Permissive compilation
Not sure in the example why one would just remove the CALLIT macro and assume things to work. Of course, the comma in C means something. Not sure why one would put an assignment before a case in a switch either.
Yes, the points he makes are typical beginner mistakes, or at least things that 20 years of programmer conventions have found a way around. Not really big problems of the language.
My biggest gripe with C is string handling. Although the extremely insecure functions have been slowly phased out (such as gets), zero-terminated strings are an attack on sanity.
"My biggest gripe with C is string handling. Although the extremely insecure functions have been slowly phased out (such as gets), zero-terminated strings are an attack on sanity."
There are some nice safe string libraries out there, like this one:
Agreed, it is pretty easy to avoid the issue by using a library or framework, and bstring looks nice.
Still, I wish something like that was simply built-in, as external string libraries can cause interoperability issues: each framework defines its own string handling functions and format, making it necessary to convert between them in an application if you use them together.
(the worst thing is that this problem still exists with C++ as of today: even though it has a built-in string type, people insist on rolling their own)
Something similar to #17 burned me at my first job. Did you know that if a variable is declared as a "char", the compiler gets to decide whether it's signed or unsigned? (Consistently, of course - every "char" will be signed, or every "char" will be unsigned). I didn't at the time, but I sure do now!
My employer had code that ran on about 5 billion different platforms, and my job was to port it to a new one. Everything went fine, except for the weird random crashes that would happen periodically. The idea that a "char" could be an "unsigned char" by default was so far off my radar screen that I didn't figure out what was going on until a few days later when I reluctantly dived into the assembly. (It didn't help that it was a mobile platform with basically 0 support for gdb).
Turned out to be a 3 second fix - pass "-fsigned-char" to gcc. Nowadays in new code I always explicitly declare whether my chars are signed or unsigned.
You're supposed to use chars for character data, in which case it's not supposed to matter whether the integer representation is signed or unsigned. All you generally do with characters is store, compare for equality, and print. Those should all be safe to do without ever knowing if the underlying integer is considered signed or unsigned.
It's only when you start doing arithmetic on characters, or just want to use char to mean "byte" (or "octet") that it matters, and then it's a very good idea to be specific and say "unsigned char" if that is what you expect.
Yep. In their defense, this was a codebase with a 20 year lineage that ran on Linux, Solaris, HP-UX, AS/400, all variants of Windows including mobile, Irix, and some that I'm probably forgetting, and they had never run into this issue before.
Yes, making any assumption on the basic types in C is very dangerous for portability. That's why you should use typedef'd types such as uint8_t and friends (defined in stdint.h in C99, but easy to roll your own) instead of 'char'.
(except in cases where you don't care about the signedness or memory size)
... and I still love C. There's something utterly minimalist and carnal about C. The best part is how it's several orders of magnitude more productive than writing assembly code, all the while 99% as powerful and performant.
Of the handful of languages I just checked, a fair number of them—especially the popular ones—treat integers starting with 0 as octal. Python 2.6, Ruby, PHP, Perl, Scala, Java and D all believed that 010 == 8, while GHC, GNU Prolog, GNU Smalltalk, Scheme (Guile and Racket) and Common Lisp all treated it as being equal to 10. Python 3 actively rejects it, because they've changed the octal literal syntax to 0o10 to prevent this from sneaking up on people.
Some of the other complaints are valid—for example, I dream of a sensible module system when writing in C, which would alleviate #13—but a lot of the complaints are sort of petty, and I'd argue that this one, being true of many more languages than C, falls directly into the petty bin.
Seriously? I think this page reflects more on the person who wrote it than on C. A top 10 list with 19 items? Genius! Forgetting to include a </pre> tag on item #6? Classic sign of a sloppy programmer.
When I programmed FORTRAN, there were much uglier things to shoot yourself in the foot with. For example, parameter lists in function calls were not checked: you pass in a double, and the function expects an integer? Good luck with that, the compiler didn't warn about it and no value conversion would take place (the runtime would just interpret part of the bit pattern of the double as an int), and it would probably crash at runtime.
Luckily we have function prototypes in C nowadays ;-)
I have never made these kinds of mistakes. I have done the things in section 2 and section 19, but I did it on purpose and expected the result; it was not a mistake.
I use Enhanced CWEB for C programming. Many of these things will be caught because you can see in the printout of the book, that there are mistakes. (For example, it typesets octal numbers in italic)
The problem with -Werror is it causes trouble with autoconf-based scripts: Test programs that should've compiled can fail because of a warning as opposed to an actual error. I prefer to compile with -Wall -Wextra and not commit anything that raises warnings.
My standard approach is to only ever omit the braces when the entire if statement fits on a single line. I also enforce a hard 80-char limit on my lines, which prevents a single enormous line.
I personally hate an 80 character limit. I do wrap comments at 80 or 100 chars, but code isn't read like English prose--it's very easy to read a long line of code, and it makes a lot of sense (sometimes) to keep everything on one line. Especially if there are multiple similar lines where you can make the columns line up nicely.
I personally find that the limit means that you're not occasionally reading far away from the main body of code, which makes it easier for my eyes to track. An added bonus is the ability to put multiple files side-by-side, something not easily possible when lines can be of arbitrary length.