[C](c.md) is a powerful language that offers almost absolute control and maximum performance, which necessarily comes with responsibility and the danger of shooting oneself in the foot. Without knowledge of the pitfalls you may well find yourself falling into one of them.
Unless specified otherwise, this article supposes the C99 standard of the C language.
Undefined, unspecified and implementation-defined behaviors are kinds of unpredictable and sometimes non-intuitive behavior of certain operations that may differ between compilers, platforms or runs because they are not defined by the language specification; this is mostly done on purpose so as to leave some freedom to implementations, allowing them to implement the language in a way that is most efficient on a given platform. Such behavior may be completely random (unpredictable) or implementation-defined (consistent within each implementation but potentially different between implementations). In any case one has to be very careful about letting such behavior influence computations. Note that tools such as [cppcheck](cppcheck.md) can help find undefined behavior in code. Description of some such behavior follows.
**Data type sizes including int and char may not be the same on each platform**. Even though we almost take it for granted that char is 8 bits wide, in theory it can be wider. The int (and unsigned int) type width should reflect the architecture's native integer type, so nowadays it's mostly 32 or 64 bits. To deal with this we can use the standard library `limits.h` and `stdint.h` headers.
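For example, a small sketch of checking the real widths at runtime with `limits.h` and the fixed width types from `stdint.h` (the numbers printed will of course depend on the platform):

```
#include <stdio.h>
#include <limits.h>
#include <stdint.h>

int main(void)
{
  // CHAR_BIT and sizeof tell us the actual widths on this platform:
  printf("char: %d bits, int: %d bits\n",
    CHAR_BIT, (int) (sizeof(int) * CHAR_BIT));

  int32_t i = 0; // fixed width type, guaranteed to be exactly 32 bits everywhere

  printf("int32_t: %d bits\n", (int) (sizeof(i) * CHAR_BIT));

  return 0;
}
```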
**No specific [endianness](endian.md) or even encoding of numbers is specified**. Nowadays little endian and [two's complement](twos_complement.md) are what you'll encounter on most platforms, but e.g. [PowerPC](ppc.md) uses big endian ordering.
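A possible sketch of detecting the byte order at runtime (one common trick is to look at the first byte of a wider number):

```
#include <stdio.h>
#include <stdint.h>

int main(void)
{
  uint32_t number = 1;

  // on a little endian platform the lowest byte is stored first, so it holds 1
  if (*((uint8_t *) &number) == 1)
    puts("little endian");
  else
    puts("big endian");

  return 0;
}
```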
**Order of evaluation of operands and function arguments is not specified**. I.e. in an expression or function call it is not defined which operands or arguments will be evaluated first; the order may be completely random and may differ even between evaluations of the same expression at different times. This is demonstrated by the following code (a minimal sketch; which argument gets evaluated first may differ between compilers and even between runs):
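```
#include <stdio.h>

int evaluate(int x)
{
  printf("evaluating %d\n", x);
  return x;
}

int main(void)
{
  // the two arguments may be evaluated in any order, so this may print
  // "evaluating 1" before "evaluating 2" or the other way around
  int sum = evaluate(1) + evaluate(2);

  printf("sum: %d\n", sum);
  return 0;
}
```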
**Overflow behavior of signed type operations is undefined.** Sometimes we suppose that e.g. addition of two signed integers that are past the data type's limit will produce two's complement overflow (wrap around), but in fact this operation's behavior is undefined, C99 doesn't say what representation should be used for numbers. For [portability](portability.md), predictability and safety **it is safer to use unsigned types** (but safety may come at the cost of performance, i.e. you prevent the compiler from performing some optimizations based on undefined behavior).
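For example (a minimal sketch; the signed addition below is exactly the kind of thing to avoid):

```
#include <stdio.h>
#include <limits.h>

int main(void)
{
  unsigned int a = UINT_MAX;
  int b = INT_MAX;

  a = a + 1; // defined: unsigned overflow wraps around, a is now 0
  b = b + 1; // UNDEFINED: signed overflow, anything may happen

  printf("%u %d\n", a, b);
  return 0;
}
```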
**Bit shifts by the type width or more are undefined.** Also bit shifts by negative values are undefined. So e.g. `x << 32` is undefined if the type of `x` is 32 bits wide or narrower (keep in mind operands narrower than int are first promoted to int, so it's the promoted type's width that counts).
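For example (assuming `unsigned int` is 32 bits wide, as is typical):

```
#include <stdio.h>

int main(void)
{
  unsigned int x = 1;

  printf("%u\n", x << 31); // OK, shift by less than the type width
  printf("%u\n", x << 32); // UNDEFINED, shift by the full type width
  printf("%u\n", x << -1); // UNDEFINED, negative shift

  return 0;
}
```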
Besides being extra careful about writing memory safe code, one needs to also know that **some functions of the standard library are memory unsafe**. This concerns mainly string functions such as `strcpy` or `strlen` which do not check the string boundaries (i.e. they rely on being passed a properly zero terminated string and so can potentially touch memory anywhere beyond it); safer alternatives are available, they have an `n` added in the name (`strncpy`, `strnlen`, ...) and allow specifying a length limit.
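For example, a sketch of copying a string with `strncpy` (watch out though, `strncpy` itself won't add the terminating zero if the source string doesn't fit, so we add it manually):

```
#include <stdio.h>
#include <string.h>

int main(void)
{
  char buffer[8];

  strncpy(buffer, "hello world", sizeof(buffer) - 1); // copy at most 7 characters
  buffer[sizeof(buffer) - 1] = 0;                      // make sure it's zero terminated

  printf("%s\n", buffer); // prints "hello w"
  return 0;
}
```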
C is **not** a subset of C++, i.e. not every C program is a C++ program (for a simple example imagine a C program in which we use the word `class` as an identifier: it is a valid C program but not a C++ program). Furthermore a C program that is at the same time also a C++ program may behave differently when compiled as C vs C++, i.e. there may be a [semantic](semantics.md) difference. Of course, all of this may also apply between different standards of C, not just between C and C++.
For portability's sake it is good to try to write C code that will also compile as C++ (and behave the same). For this we should know some basic differences in behavior between C and C++.
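One well known example of such a difference (character constants have the type int in C but char in C++, so the following will typically print 4 when compiled as C and 1 when compiled as C++):

```
#include <stdio.h>

int main(void)
{
  printf("%d\n", (int) sizeof('a'));
  return 0;
}
```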
C compilers perform automatic optimizations and other transformations of the code, especially when you tell them to optimize aggressively (`-O3`), which is a standard practice to make programs run faster. However this makes compilers perform a lot of [magic](magic.md) and may lead to unexpected, unintuitive and undesired behavior such as bugs or even "unoptimization" of code. { I've seen a code I've written have bigger size when I set the `-Os` flag (optimize for smaller size). ~drummyfish }
Aggressive optimization may firstly lead to tiny bugs in your code manifesting in very weird ways: it may happen that a line of code somewhere that somehow triggers some tricky [undefined behavior](undefined_behavior.md) causes your program to crash in some completely different place. Compilers exploit undefined behavior to do all kinds of big brain reasoning and when they see code that MAY lead to undefined behavior, a lot of chain reasoning may lead to very weird compiled results. Remember that undefined behavior, such as overflow when adding signed integers, doesn't mean the result is undefined, it means that ANYTHING CAN HAPPEN, the program may just start printing nonsensical stuff on its own or your computer may explode. So it may happen that the line with undefined behavior will behave as you expect but somewhere later on the program will just shit itself. For these reasons if you encounter a very weird bug, try to disable optimizations and see if it goes away -- if it does, you may be dealing with this kind of stuff. Also check your program with tools like [cppcheck](cppcheck.md).
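As a rough illustration of the kind of reasoning compilers do, consider something like the following sketch (the function may well be compiled to always return 1, because the compiler is allowed to assume the signed addition never overflows):

```
#include <stdio.h>
#include <limits.h>

int addedIsGreater(int x)
{
  // signed overflow is undefined, so the compiler may assume it can't happen
  // and replace the whole expression with the constant 1
  return x + 1 > x;
}

int main(void)
{
  // with optimizations on this likely prints 1 even though the addition
  // overflows, without them it may print 0 -- both are "correct"
  printf("%d\n", addedIsGreater(INT_MAX));
  return 0;
}
```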
Automatic optimizations may also be dangerous when writing [multithreaded](multithreading.md) or very low level code (e.g. a driver) in which the compiler may have wrong assumptions about the code such as that nothing outside your program can change your program's memory. Consider e.g. the following code:
```
while (x)
  puts("X is set!");
```
Normally the compiler could optimize this to:
```
if (x)
  while (1)
    puts("X is set!");
```
As in typical code this works the same and is faster. However if the variable *x* is part of shared memory and can be changed by an outside process during the execution of the loop, this optimization can no longer be done as it results in different behavior. This can be prevented with the `volatile` keyword which tells the compiler to not perform such optimizations.
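In code that might look like this (a minimal sketch):

```
#include <stdio.h>

volatile int x; // volatile: x may be changed from outside the program

void waitForX(void)
{
  while (x)        // the compiler now has to reread x on every iteration
    puts("X is set!");
}
```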
Of course this applies to other languages as well, but C is especially known for having a lot of undefined behavior, so be careful.
## Other
Watch out for **operator precedence**! Bracket expressions if unsure, or just to increase readability for others.
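For example (a small sketch; here `==` binds more tightly than `&`, which trips up many people):

```
#include <stdio.h>

int main(void)
{
  int a = 1, b = 3;

  if (a & b == 3)   // this is a & (b == 3), i.e. 1 & 1, which is true
    puts("without brackets: true");

  if ((a & b) == 3) // this is (1 & 3) == 3, i.e. 1 == 3, which is false
    puts("with brackets: true");

  return 0;
}
```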
Also watch out for this one: `!=` is not `=!` :) I.e. `if (x != 4)` and `if (x =! 4)` are two different things, the first means *not equal* and is usually what you want, the latter is two operations, `=` (assignment) and `!` (logical not), i.e. it assigns the negation of 4 to *x*. The tricky thing is it also compiles and may seem to work as expected in some cases but fail in others, leading to a very nasty bug.
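A small demonstration of the difference (note how the second *if* silently overwrites *x*):

```
#include <stdio.h>

int main(void)
{
  int x = 4;

  if (x != 4)   // not equal: false here, x is 4
    puts("x is not 4");

  if (x =! 4)   // parsed as x = !4, i.e. x = 0, so the condition is false
    puts("this won't print either");

  printf("%d\n", x); // x has silently been changed to 0
  return 0;
}
```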