# Double Buffering
In [computer graphics](graphics.md) double buffering is a technique of rendering in which we do not draw directly to [video RAM](vram.md), but to a second "back buffer" instead, and only copy the rendered frame from the back buffer to the video RAM (the "front buffer") once the rendering has been completed; this prevents flickering and stops incompletely rendered frames from ever appearing on the display. Double buffering requires a significant amount of extra memory for the back buffer, however it is practically a necessity for the way graphics is rendered today.

```
here we are                          this is seen
 drawing                              on display
    |                                     |
    V                                     V
.--------.   when drawing is done    .--------.
|        |      we copy this         |        |
|  back  | ------------------------> | front  |
| buffer |                           | buffer |
|________|                           |________|
```
In most libraries and frameworks today you don't have to care about double buffering, it's done automatically behind the scenes; the framework merely needs to know when you've finished rendering the frame, which is why you often have to indicate the end of rendering with some special command such as `flip`, `endFrame` etc. If you're going lower level, you may need to implement double buffering yourself.
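
If you do, a minimal sketch in C might look something like the following (assuming a 320x240 screen with 1 byte per pixel and a `frontBuffer` pointer to video RAM that would be obtained in some platform specific way):

```
#include <stdint.h>
#include <string.h>

#define SCREEN_W 320
#define SCREEN_H 240

uint8_t backBuffer[SCREEN_W * SCREEN_H]; /* we only ever draw here */

uint8_t *frontBuffer; /* points to actual video RAM, obtained in some
                         platform specific way */

void drawPixel(int x, int y, uint8_t color)
{
  backBuffer[y * SCREEN_W + x] = color; /* nothing shows on screen yet */
}

void flip(void) /* call once the whole frame is finished */
{
  memcpy(frontBuffer, backBuffer, SCREEN_W * SCREEN_H);
}
```
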
Though we encounter the term mostly in computer graphics, the principle of using a second buffer to ensure the result is only presented once it's ready can also be applied elsewhere.

Let's take a small example: say we're rendering a frame in a 3D game. First we render the environment, then on top of it we render the enemies, then effects such as explosions, and on top of all this we render the [GUI](gui.md). Without double buffering we'd simply be drawing all these pixels into the front buffer, i.e. the memory that is immediately shown on the display. This would lead to the user literally seeing how first the environment appears, then the enemies are drawn over it, then the effects and then the GUI. Even if all this redrawing takes an extremely short time, the final frame is also only shown for a very short time before the next one starts appearing, so as a result the user will see heavy flickering: the environment may look kind of normal, but the enemies, effects and GUI may appear transparent because they are only visible for a fraction of the frame. The user might also be able to see e.g. enemies that are supposed to be hidden behind some object if that object is rendered after the enemies. With double buffering this won't happen because we perform the rendering into the back buffer, a memory which doesn't show on the display. Only once we have completed the frame in the back buffer do we copy it to the front buffer, pixel by pixel. Here the user may see the display changing from the old frame to the new one from top to bottom, but he will never see anything temporary, and since the old and new frames are usually very similar, this top-to-bottom update may not even be distracting (it is addressed by [vertical synchronization](vsync.md) if we really want to get rid of it).
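
To illustrate, such a frame might be rendered with something like the following sketch (the layer drawing functions are made up, building on the previous example's `drawPixel` and `flip`):

```
/* declarations of the (made up) layer drawing functions, all of which
   draw with drawPixel, i.e. only into the back buffer: */
void drawEnvironment(void);
void drawEnemies(void);
void drawEffects(void);
void drawGUI(void);

void renderFrame(void)
{
  drawEnvironment();
  drawEnemies();
  drawEffects();
  drawGUI();

  flip(); /* only now the display changes, showing the complete frame */
}
```
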
There also exists [triple buffering](triple_buffering.md) which uses yet another additional buffer to increase [FPS](fps.md). With double buffering we can't start rendering a new frame until the back buffer has been copied to the front buffer, which may further be delayed by [vertical synchronization](vsync.md), i.e. we have to wait and waste some time. With triple buffering we can immediately start rendering into the second back buffer while the first one is still being copied to the front buffer. Of course this consumes significantly more memory. Also note that triple buffering can only be considered if the hardware supports rendering and copying data in parallel, and if the FPS is actually limited by this waiting -- mostly you'll find your FPS bottleneck is elsewhere, in which case it makes no sense to try to implement triple buffering. On small devices like embedded ones you probably shouldn't even think about this.
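
A conceptual sketch of the buffer juggling might look like this (`startAsyncCopyToFront` is a made up function standing for whatever parallel copy mechanism the platform offers):

```
uint8_t backBuffers[2][SCREEN_W * SCREEN_H]; /* two back buffers */
int drawBuffer = 0; /* index of the buffer we currently draw into */

/* made up function standing for whatever parallel copy mechanism the
   platform offers (DMA, another thread, the GPU, ...): */
void startAsyncCopyToFront(const uint8_t *buffer);

void presentFrame(void)
{
  startAsyncCopyToFront(backBuffers[drawBuffer]); /* copy runs in parallel */

  drawBuffer = 1 - drawBuffer; /* immediately start drawing the next
                                  frame into the other buffer */
}
```
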
Double buffering can be made more efficient with so called page flipping, i.e. the ability to switch the back and front buffer without having to physically copy the data, simply by changing the [pointer](pointer.md) to the displayed buffer. This has to be supported by the hardware.
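
A sketch of the idea (`setDisplayAddress` is a made up function standing for the hardware's way of choosing which buffer is displayed):

```
uint8_t buffer0[SCREEN_W * SCREEN_H];
uint8_t buffer1[SCREEN_W * SCREEN_H];

uint8_t *frontBuffer = buffer0; /* the buffer currently displayed */
uint8_t *backBuffer = buffer1;  /* the buffer we currently draw into */

/* made up function standing for the hardware mechanism that sets the
   address the display reads pixels from: */
void setDisplayAddress(uint8_t *buffer);

void flipPages(void)
{
  uint8_t *tmp = frontBuffer; /* just swap the two pointers, no copying */
  frontBuffer = backBuffer;
  backBuffer = tmp;

  setDisplayAddress(frontBuffer);
}
```
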
**When do we actually need double buffering?** Not always, we can avoid it or reduce its memory requirements if we need to, e.g. with so called **[frameless rendering](frameless.md)** -- we may want to do this e.g. in [embedded](embedded.md) programming where we want to save every byte of RAM. Mainstream computers nowadays simply always run at a very high FPS and keep redrawing the screen even if the image doesn't change, but if you write a program that only occasionally changes what's on the screen (e.g. an e-book reader), you may simply leave out double buffering and render directly to the front buffer whenever the screen needs to change; the user probably won't notice any flicker during a single quick redraw. You also don't need double buffering if you're able to compute the final color of each pixel right away: for example with [ray tracing](ray_tracing.md) you don't need any double buffering, unless of course you're doing some complex [postprocessing](postprocessing.md). Double buffering is only needed if we compute a pixel color that may still change before the frame is finished. You may also use only a partial double buffer if that is possible (which may not always be the case): you can e.g. split the screen into 16 regions and render region by region, using only a 1/16th size double buffer, as sketched below. Using a [palette](palette.md) can also make the back buffer smaller: if we use e.g. a 256 color palette, we only need 1 byte for each pixel of the back buffer instead of some 3 bytes for full [RGB](rgb.md). The same goes for using a smaller resolution than the actual native resolution of the screen.
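
For illustration, the mentioned region-by-region rendering with a 1/16th size back buffer might look something like this (`renderRegion` is a made up function, `SCREEN_W`, `SCREEN_H` and `frontBuffer` are as in the first example):

```
#define STRIP_H (SCREEN_H / 16) /* render the screen in 16 strips */

uint8_t stripBuffer[SCREEN_W * STRIP_H]; /* only 1/16 of a full frame */

/* made up function that draws the given screen rectangle into buffer
   (this requires being able to render any region on demand): */
void renderRegion(uint8_t *buffer, int x, int y, int w, int h);

void renderFrameInStrips(void)
{
  for (int strip = 0; strip < 16; ++strip)
  {
    renderRegion(stripBuffer, 0, strip * STRIP_H, SCREEN_W, STRIP_H);

    memcpy(frontBuffer + strip * STRIP_H * SCREEN_W, stripBuffer,
      SCREEN_W * STRIP_H); /* copy the finished strip to video RAM */
  }
}
```
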