
Optimization

Optimization means making a program more efficient in terms of some metric such as speed or memory usage (but also others such as power consumption, network usage etc.) while preserving its functionality.

Unlike refactoring, optimization changes the behavior of the program to a more optimal one (but again, it doesn't change its functionality).

General Tips'n'Tricks

  • Tell your compiler to actually optimize (-O3, -Os etc.).
  • gprof is a utility you can use to profile your code.
  • <stdint.h> has fast type aliases such as uint_fast32_t, which picks the fastest type of at least the given width on the given platform.
  • Keywords such as inline, static and const can help the compiler optimize well.
  • Optimize the bottlenecks! Optimizing in the wrong place is a complete waste of time. If you're optimizing a part of code that's taking 1% of your program's run time, you will never speed up your program by more than that 1% even if you speed up the specific part by 10000%.
  • You can almost always trade space (memory usage) for time (CPU demand) and vice versa and you can also fine-tune this. You typically gain speed by precomputation (look up tables, more demanding on memory) and memory with compression (more demanding on CPU).
  • Avoid branches (ifs). They break prediction and instruction preloading and are often a source of great performance losses. Don't forget that you can compare and use the result of the comparison without any branching (e.g. x = (y == 5) + 1;).
  • Use iteration instead of recursion if possible (calling a function is pretty expensive).
  • You can use good-enough approximations instead of completely accurate calculations, e.g. taxicab distance instead of Euclidean distance, and gain speed or memory without trading.
  • Operations on static data can be accelerated with accelerating structures (look-up tables for functions, indices for database lookups, spatial grids for collision checking, ...).
  • Use powers of 2 whenever possible; this is efficient thanks to computers working in binary. Not only may this help with nice utilization and alignment of memory, but mainly multiplication and division can be optimized by the compiler to mere bit shifts, which is a tremendous speedup.
  • Write cache-friendly code (minimize long jumps in memory).
  • Compare to 0 if possible. There's usually an instruction that just checks the zero flag which is faster than loading and comparing two arbitrary numbers.
  • Consider moving computation from run time to compile time. E.g. if you make a resolution of your game constant (as opposed to a variable), the compiler will be able to partially precompute expressions with the display dimensions and so speed up your program (but you won't be able to dynamically change resolution).
  • On some platforms such as ARM the first arguments to a function may be passed via registers, so it may be better to have fewer parameters in functions.
  • Optimize when you already have working code. As Donald Knuth put it: "premature optimization is the root of all evil". Nevertheless you should get used to simple no-brainer efficient patterns by default and just write them automatically.
  • Use your own caches where they help; for example if you're frequently working with some database item, you'd better pull it into memory, work with it there, then write it back once you're done (as opposed to communicating with the DB back and forth).
  • A single compilation unit (one big program without linking) can help the compiler optimize better because it can see the whole code at once, not just its parts. It will also make your program compile faster.
  • Search literature for algorithms with better complexity class (sorts are a nice example).
  • For the sake of embedded platforms avoid floating point, as that is often painfully slow when emulated in software. Use fixed point.
  • Early branching can create a speed up (instead of branching inside the loop create two versions of the loop and branch in front of them). This is a kind of space-time tradeoff.
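The space-for-time tradeoff mentioned above (precomputation into look-up tables) can be sketched in C. The 8-bit popcount table below is a made-up illustrative example: 256 bytes of memory buy us four table lookups per 32-bit word instead of a loop over 32 bits.

```c
#include <stdint.h>

/* space-for-time sketch: an 8-bit popcount look-up table */
static uint8_t popcount_table[256];

void init_popcount_table(void)
{
    /* table[i/2] is always filled before table[i], so this recurrence works */
    for (int i = 0; i < 256; ++i)
        popcount_table[i] = (uint8_t)((i & 1) + popcount_table[i / 2]);
}

int popcount32(uint32_t x)
{
    /* four table lookups instead of 32 bit tests */
    return popcount_table[x & 0xff] +
           popcount_table[(x >> 8) & 0xff] +
           popcount_table[(x >> 16) & 0xff] +
           popcount_table[(x >> 24) & 0xff];
}
```

The precomputation cost (filling the table once) is amortized over all later calls, which is exactly the deal you're making with any look-up table.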
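The branch-avoidance tip can be made concrete: the x = (y == 5) + 1; pattern from the list, plus a classic branchless max. These are sketches, not benchmarks; whether they actually beat a branch depends on your compiler and CPU.

```c
/* comparison result used directly as an integer, no if needed */
int step_value(int y)
{
    return (y == 5) + 1;   /* 2 if y == 5, else 1 */
}

/* branchless max: mask is all ones if a < b, else all zeros */
int branchless_max(int a, int b)
{
    int mask = -(a < b);
    return (a & ~mask) | (b & mask);
}
```

Note that modern compilers often turn a plain ternary into a conditional move anyway, so measure before committing to the less readable form.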
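Fixed point, recommended above for FPU-less embedded platforms, can be sketched as follows. This is a minimal 16.16 format (value = raw / 65536); the type and function names are made up for the example.

```c
#include <stdint.h>

typedef int32_t fixed_t;        /* 16.16 fixed point: value = raw / 65536 */
#define FIXED_SHIFT 16
#define FIXED_ONE   (1 << FIXED_SHIFT)

fixed_t fixed_from_int(int x)   { return (fixed_t)x << FIXED_SHIFT; }
int     fixed_to_int(fixed_t x) { return (int)(x >> FIXED_SHIFT); }

fixed_t fixed_mul(fixed_t a, fixed_t b)
{
    /* widen to 64 bits so the intermediate product doesn't overflow */
    return (fixed_t)(((int64_t)a * b) >> FIXED_SHIFT);
}

fixed_t fixed_div(fixed_t a, fixed_t b)
{
    return (fixed_t)(((int64_t)a << FIXED_SHIFT) / b);
}
```

All arithmetic stays in plain integers, which is the whole point: no software float emulation gets pulled in.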
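The early-branching tip from the last bullet might look like this in C: instead of testing a loop-invariant flag on every iteration, the loop is written twice and the branch taken once, up front. Function and parameter names are invented for the sketch.

```c
/* early branching: the do_scale test is hoisted out of the loop
   by duplicating the loop body -- a space-time tradeoff */
void scale_or_copy(int *dst, const int *src, int n, int do_scale, int k)
{
    if (do_scale) {
        for (int i = 0; i < n; ++i)
            dst[i] = src[i] * k;
    } else {
        for (int i = 0; i < n; ++i)
            dst[i] = src[i];
    }
}
```

The naive version with the if inside the loop tests do_scale n times; this one tests it once, at the cost of a slightly bigger binary.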

When to Actually Optimize?

Nubs often ask this. Generally, fine, sophisticated optimization should come as one of the last steps in development, when you actually have a working thing. These are optimizations requiring significant energy/time to implement -- you don't want to spend resources on them at a stage when they may well be dropped in the end, or when they won't matter because they'll fall outside the bottleneck. However, there are two "exceptions".

The highest-level optimization is done as part of the initial design of the program, before any line of code gets written. This includes the choice of data structures and mathematical models you're going to be using, the very foundation around which you'll be building your castle. This happens in your head at the time you're forming an idea for a program, e.g. you're choosing between server-client or P2P, monolithic or micro kernel, raytraced or rasterized graphics etc. These choices greatly affect the performance of your program but can hardly be changed once the program is completed, so they need to be made beforehand. This requires wide knowledge and experience.

Another kind of optimization done during development is just automatically writing good code, i.e. being familiar with specific patterns and using them without much thought. For example if you're computing some value inside a loop and this value doesn't change between iterations, you just automatically put the computation of that value before the loop. Without this you'd simply end up with shitty code that would have to be rewritten line by line at the end.