C Pitfalls
C is a powerful language that offers almost absolute control and maximum performance, which necessarily comes with responsibility and the danger of shooting oneself in the foot. Without knowledge of the pitfalls you may well find yourself falling into one of them.
This article focuses on C-specific/typical pitfalls, but of course C also comes with general programming pitfalls, such as those related to floating point, concurrency, bugs such as off by one and so on -- indeed, be aware of these too.
Unless specified otherwise, this article supposes the C99 standard of the C language.
Generally: be sure to check your programs with tools such as valgrind, splint, cppcheck, UBSan or ASan, and turn on compiler auto checks (-Wall, -Wextra, -pedantic, ...), it's quick, simple and reveals many bugs!
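For example a typical invocation with the warnings enabled might look like this (file names are just illustrative):

gcc -Wall -Wextra -pedantic -o program program.c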
Undefined/Unspecified Behavior
Undefined (completely unpredictable), unspecified (safe but potentially differing) and implementation-defined (consistent within an implementation but potentially differing between them) behavior poses a kind of unpredictability and sometimes non-intuitive, tricky behavior of certain operations, which may differ between compilers, platforms or runs because they are not exactly described by the language specification; this is mostly done on purpose, to allow implementations the freedom to do whatever is most efficient on a given platform. One has to be very careful about not letting such behavior break the program on platforms different from the one the program is developed on. Note that tools such as cppcheck can also help find undefined behavior in code. Description of some such behavior follows.
There are tools for detecting undefined behavior, see e.g. clang's UBSan (https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html).
Data type sizes, including int and char, may not be the same on each platform. Even though we almost take it for granted that char is 8 bits wide, in theory it can be different (even though sizeof(char) is always 1). The int (and unsigned int) type width should reflect the architecture's native integer type, so nowadays it's mostly 32 or 64 bits. To deal with these differences we can use the standard library limits.h and stdint.h headers.
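For example, a small sketch of using the fixed width types from stdint.h instead of relying on platform dependent sizes:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
  uint8_t small = 255;  // guaranteed exactly 8 bits
  int32_t big = 100000; // guaranteed exactly 32 bits

  printf("%u %ld\n",(unsigned int) small,(long) big);
  return 0;
}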
No specific endianness or even encoding of numbers is specified. Nowadays little endian and two's complement is what you'll encounter on most platforms, but e.g. PowerPC uses big endian ordering.
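For illustration, the byte order can be detected at run time, e.g. with a minimal sketch like this:

#include <stdio.h>

int main(void)
{
  unsigned int x = 1;

  if (*((unsigned char *) &x) == 1) // look at the first byte of x in memory
    puts("little endian");
  else
    puts("big endian");

  return 0;
}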
Unlike with global variables, values of uninitialized local variables are not defined. Global variables are automatically initialized to 0 but local ones are not -- this can lead to nasty bugs as sometimes local variables WILL happen to be initialized to 0 but stop being so e.g. under a different optimization level, so watch out. Demonstration:
int a; // auto initialized to zero

int main(void)
{
  int b; // undefined value!

  return 0;
}
Order of evaluation of operands and function arguments is generally not specified. I.e. in an expression or function call it is not defined which operands or arguments will be evaluated first; the order may be completely arbitrary and may even differ between evaluations of the same expression at different times. Some operators, such as && and || (which guarantee left to right evaluation), are exceptions to this, but these are few. This is demonstrated by the following code:
#include <stdio.h>

int x = 0;

int a(void)
{
  x += 1;
  return x;
}

int main(void)
{
  printf("%d %d\n",x,a()); // may print 0 1 or 1 1
  return 0;
}
Watch out especially for cases e.g. with pseudorandom generators, i.e. things like: if ((randomNum() % 2) & (randomNum() < x)) ... -- this may introduce undesired nondeterminism, i.e. the code may give different results on different computers, with different compilers or just between different runs. Using && here would probably help as that is a short circuit operator that has defined order of evaluation from left to right. You may always enforce a specific order of evaluation by just sequentially computing it in several steps, like for example: int cond = randomNum() % 2; cond &= randomNum() < x; if (cond) ...
Overflow behavior of signed type operations is not specified. Sometimes we suppose that e.g. addition of two signed integers that are past the data type's limit will produce a two's complement overflow (wrap around), but in fact this operation's behavior is undefined, as C99 doesn't say what representation should be used for numbers. For portability, predictability and preventing bugs it is safer to use unsigned types (but safety may come at the cost of performance, i.e. you prevent the compiler from performing some optimizations based on undefined behavior).
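For illustration, a small sketch of the difference (wrapping of unsigned types is guaranteed, overflow of signed ones is not):

#include <limits.h>
#include <stdio.h>

int main(void)
{
  unsigned int a = UINT_MAX;

  a += 1; // well defined: unsigned types wrap around, a is now 0
  printf("%u\n",a);

  // int b = INT_MAX; b += 1; // <- this would be undefined behavior!
  return 0;
}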
Bit shifts by the type's width or more are undefined, and so are bit shifts by negative values. So e.g. x >> 32 is undefined if x is a 32 bit int (note that operands narrower than int are first promoted to int, so it is the promoted type's width that counts).
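A possible way of guarding against this could look like the following (just a sketch, the function name is made up):

#include <stdint.h>

uint32_t shiftRightSafe(uint32_t x, unsigned int n)
{
  return n >= 32 ? 0 : x >> n; // a shift by 32 or more would be undefined
}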
Char data type signedness is not defined. The signedness can be explicitly "forced" by specifying signed char or unsigned char.
Floating point results are not precisely specified, no representation (such as IEEE 754) is mandated, and small differences in float operations may appear between different machines or e.g. compiler optimization settings -- this may lead to nondeterminism.
Memory Unsafety
Besides being extra careful about writing memory safe code, one needs to also know that some functions of the standard library are memory unsafe. This regards mainly string functions such as strcpy or strlen which do not check the string boundaries (i.e. they rely on being passed a zero terminated string and can potentially touch memory anywhere beyond it); safer alternatives are available, they have an n added in the name (strncpy, strnlen, ...) and allow specifying a length limit.
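For example a safer string copy might look like this (a sketch; note that strncpy itself does NOT zero terminate the result if the source is too long, so we terminate manually):

#include <stdio.h>
#include <string.h>

int main(void)
{
  char dst[8];

  strncpy(dst,"hello world",sizeof(dst) - 1); // copy at most 7 chars
  dst[sizeof(dst) - 1] = 0;                   // manually zero terminate

  puts(dst); // prints "hello w"
  return 0;
}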
Be careful with pointers, pointers are hard and prone to errors, use them wisely and sparingly, assign NULLs to freed pointers and so on and so forth.
Watch out for memory leaks, try to avoid dynamic allocation (static/automatic allocation mostly suffices) and if you have to use it, simplify it as much as you can and additionally double and triple check everything (manually as well as with tools like valgrind).
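A minimal sketch of careful dynamic allocation following the above advice:

#include <stdlib.h>

int main(void)
{
  int *p = malloc(100 * sizeof(int));

  if (p) // malloc may fail and return NULL
  {
    // ... work with p ...

    free(p);  // every malloc needs exactly one matching free
    p = NULL; // accidental use of p now crashes predictably instead of silently corrupting memory
  }

  return 0;
}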
Different Behavior Between C And C++ (And Different C Standards)
C is not a subset of C++, i.e. not every C program is a C++ program (for a simple example imagine a C program in which we use the word class as an identifier: it is a valid C program but not a C++ program). Furthermore a C program that is at the same time also a C++ program may behave differently when compiled as C vs C++, i.e. there may be a semantic difference. Of course, all of this may also apply between different standards of C, not just between C and C++.
For portability sake it is good to try to write C code that will also compile as C++ (and behave the same). For this we should know some basic differences in behavior between C and C++.
One difference is e.g. in the type of character literals: it is int in C but char in C++, so sizeof('x') will likely yield different values.
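This can be checked with a tiny program like this:

#include <stdio.h>

int main(void)
{
  // typically prints e.g. "4 1" when compiled as C but "1 1" as C++
  printf("%d %d\n",(int) sizeof('x'),(int) sizeof(char));
  return 0;
}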
Another difference lies for example in pointers to string literals. While in C it is possible to have non-const pointers such as
char *s = "abc";
C++ requires any such pointer to be const, i.e.:
const char *s = "abc";
C++ generally has stronger typing, e.g. C allows assigning a void pointer to any other pointer without a cast while C++ requires an explicit type cast, typically seen with malloc:
int *array1 = malloc(N * sizeof(int)); // valid only in C
int *array2 = (int *) malloc(N * sizeof(int)); // valid in both C and C++
C allows jumping (with e.g. goto or switch) over variable declarations with initialization, C++ prohibits it.
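For example the following is valid C99 but a C++ compiler refuses it (just an illustrative sketch):

#include <stdio.h>

int main(void)
{
  goto skip; // jumps over the initialization of y, C++ prohibits this

  int y = 10;

skip:

  y = 5; // y is in scope here, only its initialization was skipped
  printf("%d\n",y);
  return 0;
}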
And so on.
{ A quite detailed list is at https://en.wikipedia.org/wiki/Compatibility_of_C_and_C%2B%2B. ~drummyfish }
Compiler Optimizations
C compilers perform automatic optimizations and other transformations of the code, especially when you tell them to optimize aggressively (-O3), which is a standard practice to make programs run faster. However this makes compilers perform a lot of magic and may lead to unexpected and unintuitive undesired behavior such as bugs or even the "unoptimization of code". { I've seen a code I've written have bigger size when I set the -Os flag (optimize for smaller size). ~drummyfish }
Aggressive optimization may firstly lead to tiny bugs in your code manifesting in very weird ways: a line of code somewhere that somehow triggers some tricky undefined behavior may cause your program to crash in some completely different place. Compilers exploit undefined behavior to make all kinds of big brain reasoning and when they see code that MAY lead to undefined behavior, a lot of chain reasoning may lead to very weird compiled results. Remember that undefined behavior, such as overflow when adding signed integers, doesn't mean the result is undefined, it means that ANYTHING CAN HAPPEN: the program may just start printing nonsensical stuff on its own or your computer may explode. So it may happen that the line with undefined behavior will behave as you expect but somewhere later on the program will just shit itself. For these reasons if you encounter a very weird bug, try to disable optimizations and see if it goes away -- if it does, you may be dealing with this kind of stuff. Also check your program with tools like cppcheck.
Automatic optimizations may also be dangerous when writing multithreaded or very low level code (e.g. a driver) in which the compiler may have wrong assumptions about the code such as that nothing outside your program can change your program's memory. Consider e.g. the following code:
while (x)
  puts("X is set!");
Normally the compiler could optimize this to:
if (x)
  while (1)
    puts("X is set!");
As in typical code this works the same and is faster. However if the variable x is part of shared memory and can be changed by an outside process during the execution of the loop, this optimization can no longer be done as it results in different behavior. This can be prevented with the volatile keyword which tells the compiler to not perform such optimizations.
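For illustration, the loop above might then be written e.g. like this (a sketch; the variable is assumed to be set from outside, e.g. by another process or an interrupt):

#include <stdio.h>

volatile int x; // volatile: may change outside the program's control

void waitLoop(void)
{
  while (x) // the compiler now has to reload x on every iteration
    puts("X is set!");
}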
Of course this applies to other languages as well, but C is especially known for having a lot of undefined behavior, so be careful.
Other
Basic things: = is not ==, | is not ||, & is not &&, array indices start at 0 (not 1) and so on. There are also some deeper gotchas, like a/*b not being the same as a / *b (in the first one /* starts a comment).
Also watch out for this one: != is not =! :) I.e. if (x != 4) and if (x =! 4) are two different things: the first means not equal and is usually what you want, the latter is two operations, = and !. The tricky thing is that it also compiles and may work as expected in some cases but fail in others, leading to a very nasty bug. Same thing with -= vs =- and so on. See also downto operator.
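A small demonstration of how =! silently compiles into something different (a sketch):

#include <stdio.h>

int main(void)
{
  int x = 4;

  if (x =! 4) // parsed as x = (!4), i.e. x = 0, which is false
    puts("this never prints");

  printf("%d\n",x); // x was silently overwritten to 0
  return 0;
}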
Another common, mostly beginner mistake is a semicolon after an if or while condition -- this compiles but doesn't work correctly. Notice the difference between these two if statements:
if (a == b);
  puts("aaa"); // will always print

if (a == b)
  puts("aaa"); // will print only if a == b
Beginners similarly often forget breaks in switch statement, which works but usually not as you want -- thankfully compilers warn you about this.
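For illustration (a minimal sketch):

#include <stdio.h>

int main(void)
{
  int x = 0;

  switch (x)
  {
    case 0:
      puts("zero"); // missing break: execution falls through to the next case!

    case 1:
      puts("one"); // so this gets printed too when x == 0
      break;

    case 2:
      puts("two");
      break;
  }

  return 0;
}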
Also putchar('a') versus putchar("a") ;) Only the first one is correct of course.
Another possible gotcha: const char *myStrings[] = {"abc", "def", "ghi"}; vs const char *myStrings[] = {"abc", "def" "ghi"};. In the latter we forgot a comma, but it's still valid code; the array now holds only two strings, the second one being "defghi" (adjacent string literals get concatenated). Writing the expected array size would help spot this as it wouldn't match.
Stdlib API can be trollish, for example the file printing functions: fprintf expects the file pointer as the first argument while fputs expects it as the last, so to print hello you can do either fprintf(file,"hello") or fputs("hello",file) -- naturally this leads to fucking up the order sometimes, and doing so even compiles (both arguments are pointers); the running code then crashes.
Watch out for operator precedence! C infamously has weird precedence with some special operators, bracket expressions if unsure, or just to increase readability for others. Also nested ifs with elses can get tricky -- again, use curly brackets for clarity in your spaghetti code.
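A classic example of precedence biting (a sketch): == binds more tightly than &, so the first condition below is always false.

#include <stdio.h>

int main(void)
{
  int x = 2;

  if (x & 1 == 0) // parsed as x & (1 == 0), i.e. x & 0: always false!
    puts("never prints");

  if ((x & 1) == 0) // what was meant
    puts("x is even");

  return 0;
}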
Preprocessor can give you headaches if you use it in overcomplicated ways -- ifdefs and macros are fine, but too much nesting can create a real mess that's super hard to debug. It can also quite greatly slow down compilation. Try to keep the preprocessing code simple and flat.
Watch out for macro arguments: always bracket them because they get substituted on the text level. Consider e.g. a macro #define divide(a,b) a / b, and then doing divide(3 + 1,2) -- this gets expanded to 3 + 1 / 2 while you probably wanted (3 + 1) / 2, i.e. the macro should have been defined as #define divide(a,b) (a) / (b) (and ideally the whole expression should be bracketed too, i.e. #define divide(a,b) ((a) / (b))).
This may get some beginners: for (unsigned char i = 0; i < 256; ++i) { ... } -- this loop will never end because the data type is not big enough to surpass the iteration limit (i overflows back to 0 before ever reaching 256). Similarly this may happen e.g. with a 16 bit unsigned type and the limit 65536 and so on. New compilers will warn you about this.
This is not really a pitfall, rather a headscratcher, but don't forget to link the math library with the -lm flag when using the math.h header.