**Animation**: the main approaches to animation are these (again, just the important ones, you may encounter other ways too):
- **composed of separate parts**: Here we craft a larger model out of smaller constituent models (i.e. body equals a head plus torso plus arms plus legs etc.) and then animate the main model by transforming the smaller parts. This is quite natural for machines, for example the wheels of a car, but in the industry biological characters were successfully animated this way too (see e.g. Morrowind). The advantage lies mainly in that you don't require any sophisticated subsystem for model deformation, you just move models around (though it's useful to at least have parenting of models implemented so that you can attach models to one another). But understandably this may look bad and show ugliness such as model intersections etc.
- **keyframe [morphing](morphing.md)**: The mostly sufficient [KISS](kiss.md) way based on creating a few model "poses" and just [interpolating](interpolation.md) between them -- i.e. for example a running character may have 4 keyframes: legs together, right leg in front, legs together, left leg in front; now to make smooth animation we just gradually deform one keyframe into the next (a minimal code sketch of this follows below the list). Now let's stress that this is a single model, each keyframe is just differently shaped, i.e. its vertices are at different positions, so the model is animating by really being deformed into different shapes. This was used in old games like [Quake](quake.md), it looks good and works well -- use it if you can.
- **skeletal animation**: The [mainstream](mainstream.md), gigantically [bloated](bloat.md) way, used in practically all 3D games since around 2005. Here we firstly set up a *skeleton* for the model, i.e. an additional "model made out of sticks (so called *bones*)", and then we have to painstakingly rig (or *skin*) the model, i.e. we attach the skeleton to the model (more technically assign weights to the model's vertices so as to make them deform correctly). Now we have basically a puppet we can move quite easily: if we move the arm bone, the whole arm moves and so on. Now animations are still made with keyframes (i.e. making poses in certain moments in time between which we interpolate), the advantage over morphing is just that we don't have to manually reshape the model on the level of individual vertices, we only manipulate the bones and the model deforms itself, so it's a bit more comfortable, but this requires firstly much more work and secondly there needs to be a hugely bloated skeletal system programmed in (complex math, complex model format, skinning GUI, ...). Bones have more advantages, you can e.g. make procedural animations, ragdoll physics, you can attach things like weapons to the bones etc., but it's mostly not worth it. Even if you have a rigged skeletal model, you can still export its animation in the simple *keyframe morphing* format so as to at least keep your engine simple. Though skeletal animation was mostly intended for characters, nowadays it's just used for animating everything (like book pages having their own bones etc.) because many engines don't even support anything simpler.
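To make the keyframe morphing described above a bit more concrete, here is a minimal sketch in [C](c.md) (the tiny model, the function names and everything else here are made up just for illustration): every keyframe stores the same vertices in the same order and animating simply means [interpolating](interpolation.md) each vertex coordinate between two keyframes.

```
#include <stdio.h>

#define VERTEX_COUNT 3 /* a single triangle, enough for illustration */

/* two keyframes ("poses") of the same model, each vertex stored as x, y, z */
float keyframeA[VERTEX_COUNT * 3] = { 0, 0, 0,   1, 0, 0,   0, 1, 0 };
float keyframeB[VERTEX_COUNT * 3] = { 0, 0, 0,   1, 0, 1,   0, 2, 0 };

/* t goes from 0 (pose A) to 1 (pose B), result receives the deformed model */
void morph(const float *a, const float *b, float t, float *result)
{
  for (int i = 0; i < VERTEX_COUNT * 3; ++i)
    result[i] = a[i] + (b[i] - a[i]) * t;
}

int main(void)
{
  float frame[VERTEX_COUNT * 3];

  morph(keyframeA, keyframeB, 0.5, frame); /* pose halfway between A and B */

  for (int i = 0; i < VERTEX_COUNT; ++i)
    printf("%f %f %f\n", frame[3 * i], frame[3 * i + 1], frame[3 * i + 2]);

  return 0;
}
```

In a real game you'd of course keep more keyframes and pick the two to interpolate between according to the current animation time, but the core of the technique really is just this one loop.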
**Texturing** must be briefly mentioned as well as an important part of traditional 3D modeling. In the common, narrower sense a texture is a plain 2D image which is stretched onto the model's surface in order to conjure more detail, just like we glue a wallpaper onto a wall -- without textures our models show only flat looking surfaces with a constant [color](color.md) (at best we may assign each polygon a different color, but that won't make for a very realistic model). The application of texture on the model is called *texture mapping* -- you may also come across the term *UV mapping* because texturing is essentially about making what we call a *UV map*. This just means we assign each model vertex 2D coordinates inside the texture; we traditionally call these two coordinates *U* and *V*, hence the term *UV mapping*. *UV* coordinates are just coordinates within the texture image; they are not in pixels but are typically [normalized](normalization.md) to a [float](float.md) in range <0,1> (i.e. 0.5 meaning middle of the image etc.) -- this is so as to stay independent of the texture [resolution](resolution.md) (you can later swap the texture for a different resolution one and it will still work). By assigning each vertex its UV texture coordinates we basically achieve the "stretching", i.e. we say which part of the texture will show up e.g. on the character's face etc. (Advanced note: if you want to allow "tears" in the texture, you have to assign UV coordinates per triangle, not per vertex.) Now let's also mention a model can have multiple textures at once -- the most basic one (usually called *diffuse*) specifies the surface color, but additional textures may be used for things like transparency, normals (see [normal mapping](normal_mapping.md)), displacement, material properties like metallicity and so on (see also [PBR](pbr.md)). The model may even have multiple UV maps, the UV coordinates may be animated and so on and so forth. Finally we'll also say that there exists 3D texturing that doesn't use images, 3D textures are mostly [procedurally generated](procgen.md), but this is beyond our scope now.
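To illustrate what the normalized UV coordinates mean in practice, here is a small sketch in C (the texture struct and the names are made up just for illustration, and clamping/wrapping/filtering is left out) of how a renderer might fetch the texture color for given UV coordinates -- notice the coordinates only get converted to pixels by multiplying by the resolution at the very last moment, which is exactly why the same UV map keeps working when you swap the texture for one with a different resolution:

```
#include <stdint.h>

/* hypothetical texture: width * height RGB pixels stored row by row */
typedef struct
{
  int width, height;
  const uint8_t *pixels; /* width * height * 3 bytes */
} Texture;

/* fetch the texture color at normalized UV coordinates (both in <0,1>) */
void textureSample(const Texture *t, float u, float v, uint8_t rgb[3])
{
  int x = (int) (u * (t->width - 1));  /* normalized -> pixel coordinates */
  int y = (int) (v * (t->height - 1));

  const uint8_t *p = t->pixels + 3 * (y * t->width + x);

  rgb[0] = p[0];
  rgb[1] = p[1];
  rgb[2] = p[2];
}
```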
We may do many, many more things with 3D models, for example **[subdivide](subdivision.md)** them (automatically break polygons down into more polygons to smooth them out), apply boolean operations to them (see above), **sculpt** them (make them from virtual clay), **[optimize](optimization.md)** them (reduce their polygon count, make better topology, ...), apply various modifiers, 3D print them, make them out of paper (see [origami](origami.md)) etcetc.
TODO: other types of models, texturing etcetc.
*WORK IN PROGRESS*
**Are you dreaming about 3D modeling?** Or do you perhaps already know a bit about it and **just want some advice to get better** by any chance? Then let this section serve to share a few words of advice on doing just that.
Let us preface by examining the **[hacker](hacking.md) chad way of making 3D models**, i.e. the [LRS](lrs.md) way 3D models would ideally be made. Remember, **you don't need any program to create 3D models**, you don't have to be a Blender whore, you can make 3D models perfectly fine without Blender or any similar program, and even without computers. Sure, a certain kind of highly artistic, animated, very high poly models will be very hard or near impossible to make without an interactive tool like Blender, but you can still make very complex 3D models, such as that of a whole city, without any fancy tools. Of course people were making statues and similar kinds of "physical 3D models" for thousands of years -- sometimes it's actually simpler to make the model by hand out of clay and later scan it into the computer, you can just make a physical wireframe model, measure the positions of vertices, hand type them into a file and you have a perfectly valid 3D model -- you may also easily make a polygonal model out of paper, BUT even virtual 3D models can simply be made with pen and paper, it's just numbers, vertices and [triangles](triangle.md), very manageable if you keep it simple and well organized. You can directly write the models in text formats like obj or collada. The first computer 3D models were actually made by hand, just with pen and paper, because there were simply no computers fast enough to even allow real time manipulation of 3D models; back then the modelers simply measured positions of an object's "key points" (vertices) in 3D space, which can simply be done with tools like rulers and strings, no need for complex 3D scanners (but if you have a digital camera, you have a quite advanced 3D scanner already). They then fed the manually made models to the computer to visualize them, but again, you don't even need a computer to draw a 3D model, in fact there is a whole area called [descriptive geometry](descriptive_geometry.md) that's all about drawing 3D models on paper and which was used by engineers before computers came. Anyway, you don't have to go as far as avoiding computers of course -- if you have a programmable computer, you already have the luxury which the first 3D artists didn't have, a whole new world opens up to you, you can now make very complex 3D models just with your programming language of choice. Imagine you want to make the said 3D model of a city just using the [C](c.md) programming language. You can first define the terrain as a [heightmap](heightmap.md), simply a 2D array of numbers, then you write a simple piece of code that will iterate over this array and convert it to the obj format (a very simple plain text 3D format, it will be like 20 lines of code) -- now you have the basic terrain, you can render it with any tool that can load 3D models in obj format (basically every 3D tool), AND you may of course write your own 3D visualizer, there is nothing difficult about it, you don't even have to use perspective, just draw it in orthographic projection (again, that will be probably like 20 lines of code). Now you may start adding houses to your terrain -- make a C array of vertices and another array of triangle indices, manually make a simple 3D model of a house (a basic shape will have fewer than 20 vertices, you can cut it out of paper to see what it will look like). That's your house geometry, now just keep making instances of this house and placing them on the terrain, i.e.
you make some kind of struct that will keep the house transformation (its position, rotation and scale) and each such struct will represent one house having the geometry you created (if you later improve the house model, all houses will be updated like this). You don't have to worry about placing the houses vertically, their height will be computed automatically so they sit right on the terrain. Now you can update your model exporter to take into account the houses, it will output the obj model along with them and again, you can view this whole model in any 3D software or with your own tools. You can continue by adding trees, roads, simple materials (maybe just something like per triangle colors) and so on. This approach may actually even be superior for some projects just as scripting is superior to many GUI programs: you can collaborate on this model just like you can collaborate on any other text program, you can automate things greatly, you'll be independent of proprietary formats and platforms etcetc. This is how 3D models would ideally be made.
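Just to show how simple the start of such a city project can be, here is a minimal sketch in C of the heightmap-to-obj step described above (the tiny hardcoded heightmap and all the names are made up purely for illustration); houses, trees and roads would then be appended to the output in a similar spirit:

```
#include <stdio.h>

#define W 4 /* tiny hardcoded heightmap, just for illustration */
#define H 4

int heightmap[H][W] =
{
  {0, 0, 1, 2},
  {0, 1, 2, 2},
  {0, 1, 1, 1},
  {0, 0, 0, 1}
};

int main(void)
{
  /* vertices: one per heightmap cell */
  for (int y = 0; y < H; ++y)
    for (int x = 0; x < W; ++x)
      printf("v %d %d %d\n", x, heightmap[y][x], y);

  /* faces: two triangles per grid square (obj indices start at 1) */
  for (int y = 0; y < H - 1; ++y)
    for (int x = 0; x < W - 1; ++x)
    {
      int i = y * W + x + 1;

      printf("f %d %d %d\n", i, i + 1, i + W);
      printf("f %d %d %d\n", i + 1, i + W + 1, i + W);
    }

  return 0;
}
```

Redirect the output to a file like *terrain.obj* and you can already open it in basically any 3D software.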
OK, back to the mainstream now. Nowadays as a [FOSS](foss.md) user you will most likely do 3D modeling with [Blender](blender.md) -- we recommend it for starting to learn 3D modeling as it is powerful, [free](free_software.md), gratis, has many tutorials etc. Do NOT use anything [proprietary](proprietary.md) no matter what anyone tells you! Once you know a bit about the art, you may play around with alternative programs or approaches (such as writing programs that generate 3D models etc.). However **as a beginner just start with Blender**, which is from now on the software we'll suppose you're using in this article.