Automatic optimization is typically performed by the compiler; usually the programmer has the option to tell the compiler how much and in what way to optimize (no optimization, mild optimization, aggressive optimization, optimization for speed or size; check e.g. the man pages of [gcc](gcc.md) where you can see how to turn on even specific types of optimizations). Some compilers perform extremely complex reasoning to make the code more efficient -- the whole area of optimization is a huge science, so here we'll only take a look at the very basic techniques. We see optimizations as transformations of the code that keep the semantics the same but minimize or maximize some measure (e.g. execution time, memory usage, power usage, network usage etc.). Automatic optimizations are usually performed on the intermediate representation (e.g. [bytecode](bytecode.md)) as that's the ideal place (we only write the optimizer once), however some are specific to a concrete instruction set -- these are sometimes called *peephole* optimizations and have to be delayed until code generation.
There also exist **dynamic optimization** techniques performed at runtime by the platform running the program (interpreter, emulator, virtual machine, ...).
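The basic idea behind many dynamic optimizations can be sketched like this (a made-up minimal example: here both versions compute the same thing, but in a real virtual machine the "fast" one would be e.g. generated native code):

```c
#include <assert.h>

#define HOT_THRESHOLD 10 // how many calls until code counts as "hot"

int hitCount = 0;
int useFastPath = 0; // becomes 1 once the code is judged "hot"

// slow generic version (stands for interpreted bytecode)
int squareSlow(int x) { return x * x; }

// fast version (stands for natively compiled code)
int squareFast(int x) { return x * x; }

int square(int x)
{
  // the runtime counts executions and, once the code runs often
  // enough, switches to the faster version
  if (!useFastPath && ++hitCount >= HOT_THRESHOLD)
    useFastPath = 1;

  return useFastPath ? squareFast(x) : squareSlow(x);
}
```

This way the runtime spends effort optimizing only the parts of the program that actually matter for performance.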
The following are some common methods of automatic optimization (also note that virtually any method from the above mentioned manual optimizations can be applied as long as the compiler can detect the possibility of applying it):
{ Tip: man pages of gcc or possibly other compilers detail specific optimizations they perform under the flags that turn them on, so see these man pages for a similar overview. ~drummyfish }
- **Generating [lookup tables](lut.md)**: if the optimizer judges some function to be critical in terms of speed, it may auto generate a lookup table for it, i.e. precompute its values and so sacrifice some memory for making it run extremely fast.
- **Dead code removal**: parts of code that aren't used can be just removed, making the generated program smaller -- this includes e.g. functions that are present in a [library](library.md) which however aren't used by the specific program or blocks of code that become unreachable e.g. due to some `#define` that makes an if condition always false etc.
- **[Compression](compression.md)**: compression methods may be applied to make data smaller and optimize for size (for the price of increased CPU usage).
- **[Dynamic recompilation](dynamic_recompilation.md)/[JIT](jit.md) compilation** (typical for interpreted/emulated programs): these terms seem to not have very clear definitions but the basic idea is that of compiling the program late and/or only compiling certain parts of it: we may compile the program as soon as it gets executed OR keep compiling parts of it as it runs, e.g. while interpreting some kind of [bytecode](bytecode.md) we may be turning parts of it into faster native code. Compiling parts of the program as it is running has advantages and may in theory even result in a faster running program than that produced by a traditional compiler because a dynamic compiler has more information about the program: it can measure which parts of the program take most computational time and these can be turned into native code, resulting in significant optimization.
- ...
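The lookup table point from the list above can be sketched in C like this (written by hand here -- an optimizer would generate something similar automatically; this one trades 256 bytes of memory for counting set bits one byte at a time instead of one bit at a time):

```c
#include <assert.h>

#define BITS 8

unsigned char popcountTable[1 << BITS]; // precomputed bit counts of all bytes

void initPopcountTable(void)
{
  for (int i = 0; i < (1 << BITS); ++i)
  {
    int count = 0;

    for (int j = 0; j < BITS; ++j) // count bits the slow way, once
      count += (i >> j) & 1;

    popcountTable[i] = count;
  }
}

// fast: four table lookups instead of looping over 32 bits
int popcount32(unsigned int x)
{
  return popcountTable[x & 0xff] + popcountTable[(x >> 8) & 0xff] +
    popcountTable[(x >> 16) & 0xff] + popcountTable[(x >> 24) & 0xff];
}
```

A compiler may do the same with a pure function whose argument range is small: precompute all its values at compile time and replace calls with table accesses.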
## See Also