Approximation
Approximating means calculating or representing something with less than the best possible precision -- estimating -- purposefully allowing some margin of error in results and using simpler mathematical models than the most accurate ones: this is typically done in order to save resources (CPU cycles, memory etc.) and reduce complexity so that our projects and analysis stay manageable. Simulating the real world on a computer is always an approximation, as we cannot capture the infinitely complex and fine nature of the real world with a machine of limited resources, but even within this we need to consider how much, in what ways and where to simplify.
Using approximations however doesn't have to imply a decrease in precision of the final result -- approximations also serve optimization very well. E.g. approximate metrics help in heuristic algorithms such as A*. Another use of approximations in optimization is as a quick preliminary check before running an expensive precise algorithm: e.g. using bounding spheres helps speed up collision detection (if the bounding spheres of two objects don't collide, we know the objects can't possibly collide either and we don't have to expensively check this).
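Here is a minimal C sketch of such a pre-check (the struct and function names are made up for illustration): the squared distance between the sphere centers is compared against the squared sum of radii, which as a bonus avoids computing a square root entirely.

```
typedef struct { float x, y, z, radius; } Sphere; /* hypothetical bounding sphere */

/* Cheap preliminary test: if the bounding spheres don't touch, the objects
   can't collide and the expensive precise test can be skipped. */
int spheresMayCollide(Sphere a, Sphere b)
{
  float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  float r = a.radius + b.radius;

  /* compare squared quantities to avoid the sqrt */
  return dx * dx + dy * dy + dz * dz <= r * r;
}
```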
Examples of approximations:
- Distances: instead of the expensive Euclidean distance (sqrt(dx^2 + dy^2)) we may use the Chebyshev distance (max(dx,dy)) or the Taxicab distance (dx + dy); see the sketch after this list.
- Engineering approximations ("guesstimations"): e.g. sin(x) = x for "small" values of x or pi = 3 (integer instead of float).
- Physics engines: complex triangle meshes are approximated with simple analytical shapes such as spheres, cuboids and capsules or at least convex hulls which are much easier and faster to deal with. They also approximate relativistic physics with Newtonian.
- Addition/subtraction (of integers) can sometimes be approximated with logical OR/AND operations, as they behave a bit similarly. This can be used e.g. for brightening/darkening of pixel colors in 332 or 565 format -- without the approximation addition of colors in these formats is very expensive (it basically requires conversion to RGB, addition, clamping and a conversion back); see the sketch after this list.
- Square root/square (integer) can be roughly approximated too. E.g. to get a quick "almost square" of a number you can try something like doubling each binary digit and shifting everything right, e.g. 101 -> 11001 -- it's not very accurate but may be good enough e.g. for some graphics effects and may be especially effective as a hardware implementation as it works instantly and uses literally no logic gates (you just reorder bits)! A bit improved version may construct a pair of digits from each digit as the logical AND (upper bit) and logical OR (lower bit) of the bit with its lower neighbor (the lowest bit may still just be doubled), e.g. 1101 -> 11010111. Square root can similarly be roughly estimated by reducing each pair of bits with logical OR down to a single bit (e.g. 101100 -> 110); see the sketch after this list. { Dunno if this is actually used anywhere, I came up with this once before I fell asleep. ~drummyfish } A famous hack in Quake, called fast inverse square root, uses a similar approximation in floating point.
- 3D graphics is almost completely about approximations, e.g. basically all shapes are approximated with triangle meshes, screen space effects (like SSAO) are used to approximate global illumination, reflections etc. Similarly ray tracing neglects indirect lighting etc.
- Real numbers are practically always approximated with floating point or fixed point (rational numbers).
- Numerical methods offer generality and typically yield approximate solutions while their precision vs speed can be adjusted via parameters such as the number of iterations; see the sketch after this list.
- A Taylor series approximates a given mathematical function and can be used e.g. to estimate solutions of differential equations; see the sketch after this list.
- Primitive music synthesis often uses simple functions like triangle/saw/square waves to approximate sine waves (though many times it's done for the actual sound of these waves, sometimes it may be simply to save on resources); see the sketch after this list.
- ...
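A small C sketch of the distance approximations mentioned above (function names are made up for illustration); one possible compromise, also shown, is to average the Chebyshev and Taxicab values, which tends to land closer to the Euclidean distance:

```
#include <math.h>
#include <stdlib.h>

/* exact Euclidean distance: expensive because of the square root */
float distEuclidean(float dx, float dy)
{
  return sqrtf(dx * dx + dy * dy);
}

/* Chebyshev distance: maximum of the coordinate differences */
int distChebyshev(int dx, int dy)
{
  dx = abs(dx); dy = abs(dy);
  return dx > dy ? dx : dy;
}

/* Taxicab (Manhattan) distance: sum of the coordinate differences */
int distTaxicab(int dx, int dy)
{
  return abs(dx) + abs(dy);
}

/* averaging the two approximations usually gets closer to Euclidean */
int distApprox(int dx, int dy)
{
  return (distChebyshev(dx, dy) + distTaxicab(dx, dy)) / 2;
}
```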
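A C sketch of the OR/AND color trick for 16bit 565 pixels; the specific mask here (0x18E3, setting the low bits of each channel) is just an illustrative choice, not a standard constant. Because OR produces no carries, channels can never overflow into each other, so no unpacking or clamping is needed:

```
#include <stdint.h>

/* RGB565 pixel: RRRRRGGGGGGBBBBB. Adding a brightness value exactly would
   require unpacking, adding with clamping and repacking; ORing a constant
   is a one-instruction approximation of that. */

#define BRIGHTEN_MASK 0x18E3 /* low 2 bits of R and B, low 3 bits of G */

uint16_t brightenApprox(uint16_t pixel)
{
  return pixel | BRIGHTEN_MASK;  /* roughly: add a small value to each channel */
}

uint16_t darkenApprox(uint16_t pixel)
{
  return pixel & ~BRIGHTEN_MASK; /* roughly: subtract a small value from each channel */
}
```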
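And a C sketch of the bit-reordering square (the basic bit-doubling version) and the OR-pair square root estimate described above -- in software the loop obviously isn't free, the point of the trick is hardware (or mental arithmetic), so take this purely as a demonstration of what the bit manipulation does:

```
#include <stdint.h>

/* rough square: duplicate every bit, then shift the result right by one,
   e.g. 101 -> 110011 -> 11001 (here 5 * 5 = 25 happens to be exact) */
uint32_t squareApprox(uint16_t x)
{
  uint32_t r = 0;

  for (int i = 15; i >= 0; --i)
  {
    uint32_t bit = (x >> i) & 1;
    r = (r << 2) | (bit << 1) | bit; /* write the bit twice */
  }

  return r >> 1;
}

/* rough square root: reduce each pair of bits to one with logical OR,
   e.g. 101100 -> 110 (sqrt(44) ~= 6.6, result 6) */
uint16_t sqrtApprox(uint32_t x)
{
  uint16_t r = 0;

  for (int i = 15; i >= 0; --i)
  {
    uint32_t pair = (x >> (2 * i)) & 3;
    r = (r << 1) | (pair != 0); /* 1 if either bit of the pair is set */
  }

  return r;
}
```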
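As an example of a numerical method whose precision is tuned by iteration count, here is a tiny Newton iteration for square roots (a sketch, not production code):

```
/* Newton's method for sqrt(a): precision vs speed is controlled by the
   number of iterations -- more iterations give a better approximation. */
double sqrtNewton(double a, int iterations)
{
  double x = a > 1 ? a : 1; /* initial guess */

  for (int i = 0; i < iterations; ++i)
    x = 0.5 * (x + a / x);  /* refine: x_next = (x + a / x) / 2 */

  return x;
}
```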
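A Taylor series example: keeping just the first few terms of the series for sin around 0 gives a cheap polynomial approximation (the crudest version, sin(x) ~= x, is the engineering approximation mentioned above):

```
/* sin(x) approximated by the first terms of its Taylor series around 0:
   sin(x) ~= x - x^3/3! + x^5/5!, good for "small" x */
double sinTaylor(double x)
{
  double x2 = x * x;
  return x * (1.0 - x2 / 6.0 + x2 * x2 / 120.0);
}
```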
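Finally a sketch of the primitive oscillators mentioned above: square, saw and triangle waves as cheap stand-ins for a sine, each taking a phase in [0,1) and returning a sample in [-1,1] (names and conventions are just illustrative):

```
double squareWave(double phase)   { return phase < 0.5 ? 1.0 : -1.0; }
double sawWave(double phase)      { return 2.0 * phase - 1.0; }
double triangleWave(double phase) { return phase < 0.5 ?
                                      4.0 * phase - 1.0 : 3.0 - 4.0 * phase; }
```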