Physics Engine

{ LRS now has a very small 3D physics engine called tinyphysicsengine. ~drummyfish }

A physics engine is a piece of software (usually a library or framework) whose purpose is to simulate the mechanical laws of real life physics, i.e. things such as forces, rigid and soft body collisions, particle motion, fluid dynamics etc. Where to draw the line between physics engines and "other software" is not exactly clear; a lot of software somehow takes real life physics into account without being called a "physics engine", typically e.g. 3D rendering software, but in general if a program focuses on motion, forces, collisions etc., it may fall into this category.

{ When it comes to classic 3D rigid body physics engines, they're extremely hard to make, much harder than for example an advanced 3D rendering engine, especially when you want to make them LRS (without floating point, ...) and/or general and somewhat physically correct (being able to simulate e.g. the Dzhanibekov effect, satisfying all the conservation laws, continuous collision detection etc.). Good knowledge of mechanics and things like quaternions and 3D rotations is just the beginning; difficulties arise in every aspect of the engine, and of those there are many. As I've found, 32 bit fixed point is not enough for a general engine (even though it is enough for a rendering engine), as you'll run into precision problems when you need to represent both relatively high and low energies. You'll also run into stability issues such as keeping contacts stable, situations with multiple objects stacked on top of each other starting to bounce on their own etc. Even things such as deciding in what order to resolve collisions are very difficult; they can lead to many bugs such as a car not being able to drive on a straight road made of several segments. Collision detection alone for all combinations of basic shapes (sphere, cuboid, cylinder, capsule, ... let alone general triangle mesh) is hard, as you want to detect general cases (not only e.g. surface collisions) and you want to extract all the parameters of the collision (collision location, depth, normal etc.) AND you want to make it fast. And of course you'll want to add acceleration structures and many other things on top. So think twice before deciding to write your own physics engine.

A sane approach may be to write a simplified engine specifically for your program, for example a Minetest-like game may just need non-rotating capsules in a voxel environment, which is not that hard. You can also get away with a bit of cheating and faking, e.g. simulating rigid bodies as really stiff soft bodies; it may not be as efficient and precise, but it's simpler to program and it may be good enough. Well, that's basically what tinyphysicsengine does anyway. The old PlayStation game Rally Cross apparently did something similar too. ~drummyfish }
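
As a tiny illustration of the "stiff soft body" trick mentioned above, here is a hypothetical sketch (made up for this article, NOT tinyphysicsengine's actual code or API) of two unit mass points connected by a very stiff damped spring; a "rigid" body is then just a network of such points and springs:

  #include <math.h>

  typedef struct { float pos[3], vel[3]; } Point; /* unit mass point */

  #define STIFFNESS 500.0f  /* high stiffness makes the body look rigid */
  #define DAMPING 8.0f      /* damping prevents endless oscillation */
  #define REST_LEN 1.0f     /* length of the spring when relaxed */
  #define DT 0.005f         /* stiff springs need a small time step */

  /* accumulates one tick worth of spring force into the velocities of a and
     b; positions are then integrated separately (pos += vel * DT) */
  void applySpring(Point *a, Point *b)
  {
    float d[3], len = 0;

    for (int i = 0; i < 3; ++i)
    {
      d[i] = b->pos[i] - a->pos[i];
      len += d[i] * d[i];
    }

    len = sqrtf(len);

    if (len == 0)
      return; /* degenerate case, no defined spring direction */

    for (int i = 0; i < 3; ++i)
    {
      float force = STIFFNESS * (len - REST_LEN) * d[i] / len  /* Hooke's law  */
        + DAMPING * (b->vel[i] - a->vel[i]);                   /* damp sliding */

      a->vel[i] += force * DT;  /* unit mass: acceleration equals force */
      b->vel[i] -= force * DT;
    }
  }

The price of this trick is visible in the constants: very stiff springs demand a very small time step (or a smarter integrator), which is exactly the kind of tradeoff hinted at in the note above.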

Physics engine is quite a wide term -- even though one usually imagines something akin to the typical real time 3D rigid body engine used in games such as GTA, there are many other types with vastly different purposes, features and even basic paradigms: some may e.g. be specialized just for computing precise ballistic trajectories for the army, only spitting out numbers without providing any visualization, some may serve for simulating and forecasting weather, some may simulate the evolution of our Universe etc. Some common classifications and possible characteristics of physics engines follow:

  • 2D vs 3D: 2D engines are generally much simpler to implement than 3D ones, for example because of the much simpler math for rotations and collision detection. Graphics and physics are usually loosely interconnected (though they should be decoupled) in that the way in which we represent graphics (2D, general 3D, BSP, voxels, ...) usually also determines how we compute physics, so that there may also exist e.g. "pseudo 3D" physics engines as part of "pseudo 3D" renderers, e.g. the one used in Doom.
  • real time vs offline: Real time ones are mostly intended to be used in the entertainment industry, i.e. games, movies etc., as they can compute somewhat realistic looking results quickly, for the price of dropping high accuracy (they use many approximations). Scientific engines may prefer to be offline and take a longer time to compute more precise results.
  • rigid body vs soft body: Rigid body engines don't allow bodies to deform while soft body ones do -- in real life all bodies are soft, but neglecting this detail and considering shapes rigid can have benefits (such as being able to consider the body as a whole and not having to simulate all its individual points). Of course, a complex engine may implement both rigid and soft body physics.
  • paradigm: The basic approach to implementing the simulation, e.g. being impulse-based (applying impulses to correct errors), constraint-based (solving equations to satisfy imposed constraints), penalty-based (trying to find equilibriums of forces) etc.
  • discrete vs continuous collision detection: Discrete collision detection only detects collisions at single points in time (at each engine tick) and is simpler to implement than continuous collision detection. Discrete engines are less accurate; consider e.g. that a very fast moving object can pass through a thin wall because at one tick it is still in front of it while at the next tick it is already behind it (see the sketch below this list). Continuous collision detection won't allow this to happen, but it is more difficult to program, may be slower etc. For games discrete collisions are usually good enough.
  • purpose and accuracy: The basic categories are precise, scientific and often special-purpose engines, and engines meant for entertainment and less accurate visualizations such as games and movies.
  • features: fluid, cloth, particles, ragdoll, inverse kinematics, GPU acceleration, determinism, voxels, acceleration data structures, ...: These are additional features the engine may have, such as the ability to simulate fluids (which is a huge field of its own) or cloth; some go as far as e.g. integrating motion captured animations of humans with physics to create smooth realistic animations, e.g. of running over walking pedestrians with a car, and so on.
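
To make the discrete vs continuous distinction from the list above more concrete, here is a tiny hypothetical 1D example (invented purely for illustration): a fast ball flies towards a thin wall occupying the interval from 10 to 10.5; the discrete test only looks at the new position and misses the wall entirely ("tunneling"), while the continuous (swept) test checks the whole path travelled during the tick:

  #include <stdio.h>

  /* discrete test: is the ball inside the wall at its new position? */
  int discreteHit(float x)
  {
    return x >= 10.0f && x <= 10.5f;
  }

  /* continuous test: did the path from the old to the new position cross
     the wall? (assumes movement to the right) */
  int continuousHit(float xOld, float xNew)
  {
    return xOld <= 10.5f && xNew >= 10.0f;
  }

  int main(void)
  {
    float x = 0, velocity = 25.0f, dt = 1.0f; /* one tick moves the ball by 25 units */
    float xNew = x + velocity * dt;

    printf("discrete: %d, continuous: %d\n", discreteHit(xNew), continuousHit(x, xNew));
    /* prints "discrete: 0, continuous: 1" -- the discrete test tunnels through */

    return 0;
  }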

A typical physics engine will work something like this: we create a so called physics world, a data structure that represents the space in which the simulation takes place (it is similar to a scene in rendering engines). We then populate this world with physics elements such as rigid bodies (which can have attributes such as mass, elasticity etc.). These bodies are normally basic geometric shapes such as spheres, cylinders, boxes or capsules, or objects composed of several such basic shapes. This is unlike with rendering engines in which we normally have triangle meshes -- in physics engines triangle meshes are extremely slow to process, so for the sake of the physics engine we approximate the mesh with some of the above basic shapes (for example a creature in a game that's rendered as a hi-poly 3D model may in the physics engine be represented just as a simple sphere). Furthermore the bodies can be static (cannot move, which is sometimes implemented by setting their mass to infinity) or dynamic (can move); static bodies normally represent the environment (e.g. the game level), dynamic ones the entities in it (player, NPCs, projectiles, ...). Making a body static has performance benefits as its movement doesn't have to be calculated and the engine can also precalculate some things for it that will make e.g. collision detection faster. We then simulate the physics of the world in so called ticks (similar to frames in rendering); in simple cases one tick can be equivalent to one rendering frame, but properly it shouldn't be so (physics shouldn't be affected by the rendering speed, and for the physics simulation we can usually get away with a smaller "FPS" than for rendering, saving some performance). Usually one tick has a fixed time length (e.g. 1/60th of a second). In each tick the engine performs collision detection, i.e. it finds out which bodies are touching or penetrating other bodies (this is accelerated with things such as bounding spheres). Then it performs so called collision resolution, i.e. updating the positions, velocities and forces so that the bodies no longer penetrate each other and react to the collisions as they would in the real world (e.g. a ball will bounce after hitting the floor). There can be many more things, for example constraints: we may e.g. say that one body must never get further away from another body than 10 meters (imagine it's tied to it by a rope) and the engine will try to make it so that this always holds. The engine will also offer a number of other functions such as casting rays and calculating where they hit (obviously useful for shooter games).
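
The following is a minimal hypothetical sketch (invented names, not the API of any real engine) of what one such tick might look like for a world of unit mass dynamic spheres, with naive Euler integration, brute force sphere vs sphere collision detection and a crude resolution that separates the spheres and exchanges their velocities along the collision normal:

  #include <math.h>

  #define BODIES 4
  #define TICK_LENGTH (1.0f / 60.0f)  /* fixed length of one tick in seconds */

  typedef struct { float pos[3], vel[3], radius; } Body; /* unit mass sphere */

  Body world[BODIES]; /* the physics world, here just an array of bodies */

  void worldTick(void)
  {
    /* 1. integrate motion: apply gravity, update positions (Euler step) */
    for (int i = 0; i < BODIES; ++i)
    {
      world[i].vel[1] -= 9.81f * TICK_LENGTH;

      for (int j = 0; j < 3; ++j)
        world[i].pos[j] += world[i].vel[j] * TICK_LENGTH;
    }

    /* 2. collision detection and resolution, brute force sphere vs sphere */
    for (int i = 0; i < BODIES; ++i)
      for (int j = i + 1; j < BODIES; ++j)
      {
        float normal[3], dist = 0, vRel = 0;

        for (int k = 0; k < 3; ++k)
        {
          normal[k] = world[j].pos[k] - world[i].pos[k];
          dist += normal[k] * normal[k];
        }

        dist = sqrtf(dist);

        float depth = world[i].radius + world[j].radius - dist; /* penetration */

        if (depth <= 0 || dist == 0)
          continue; /* no collision (or degenerate case), nothing to resolve */

        for (int k = 0; k < 3; ++k)
        {
          normal[k] /= dist;                                       /* unit normal */
          vRel += (world[j].vel[k] - world[i].vel[k]) * normal[k]; /* approach speed */
        }

        for (int k = 0; k < 3; ++k)
        {
          world[i].pos[k] -= normal[k] * depth / 2; /* separate the bodies */
          world[j].pos[k] += normal[k] * depth / 2;

          if (vRel < 0) /* approaching: bounce (elastic, equal masses) */
          {
            world[i].vel[k] += normal[k] * vRel;
            world[j].vel[k] -= normal[k] * vRel;
          }
        }
      }
  }

A real engine of course adds static bodies, more shapes, friction, restitution, constraints, acceleration structures etc., but the basic integrate, detect, resolve structure of the tick stays the same.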

Integrating physics with graphics: you will most likely use some kind of graphics engine along with the physics engine, even if just for debugging. As said above, keep in mind that the graphics and physics engines should be strictly separated (decoupled, for a number of reasons such as reusability, easier debugging, being able to switch graphics and physics engines etc.), even though they closely interact and may affect each other in their design, e.g. by the data structures you choose for your program (voxel graphics will imply voxel physics etc.). In your program you will have a physics world and a graphics scene, both containing their own elements: the scene has graphics elements such as 3D models or particle systems, the physics world has elements such as rigid bodies and force fields. Some of the graphical and physics entities are connected, for example a 3D model of a tree may be connected to a physics rigid body of a cone shape. NOT ALL graphics elements have counterparts in the physics simulation (e.g. a smoke effect or a light isn't present in the physics simulation) and vice versa (e.g. the player in a first person game has no 3D model but still has some physics shape). The connection between graphics and physics elements should be made above both engines (i.e. do NOT add pointers to physics objects to graphics elements etc.). This means that e.g. in a game you create a higher abstract environment -- for example a level -- which stands above the graphics scene and physics world and has its own game elements; each game element may be connected to a graphics or physics element. These game elements have attributes such as a position which gets updated according to the physics engine and which is transferred to the graphics elements for rendering. Furthermore remember that graphics and physics should often run at different "FPS": graphics engines normally try to render as fast as they can, i.e. reach the highest FPS, while physics engines often have a time step, called a tick, of fixed length (e.g. 1/30th of a second) -- this is so that they stay deterministic and accurate, and also because physics can run at a much lower FPS without the user noticing (interpolation can be used in the graphics engine to smooth out the physics animation for rendering). "Modern" engines often implement graphics and physics in separate threads, however this is not suckless; in most cases we recommend the KISS approach of a single thread (in the main loop keep a timer for when the next physics tick should be simulated).
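
One possible shape of the single threaded KISS main loop recommended above is sketched here (getTimeMs, worldTick and renderScene are hypothetical placeholders standing for your platform's millisecond timer, your physics tick and your rendering code):

  /* assumed to exist elsewhere; the names are made up for this sketch */
  unsigned long getTimeMs(void);
  void worldTick(void);
  void renderScene(void);

  #define PHYSICS_TICK_MS 33  /* fixed physics tick, roughly 1/30th of a second */

  void mainLoop(void)
  {
    unsigned long nextTick = getTimeMs();

    while (1)
    {
      unsigned long now = getTimeMs();

      while (now >= nextTick)   /* run all physics ticks that are due */
      {
        worldTick();            /* advance the physics world by one fixed tick */
        nextTick += PHYSICS_TICK_MS;
      }

      /* copy positions from physics elements to the game elements standing
         above them, optionally interpolating by (nextTick - now), then draw
         the graphics scene as fast as possible */
      renderScene();
    }
  }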

Existing Engines

One of the best and most famous FOSS 3D physics engines is Bullet (zlib license); it has many features (rigid and soft bodies, GPU acceleration, constraints, ...) and has been used in many projects (Blender, Godot, ...). Box2D is a famous 2D physics engine under the MIT license, written in C++. Tinyphysicsengine is a KISS LRS 3D physics engine made by drummyfish.