3D Model

In the world of computers (especially in computer graphics, but also in physics simulations, 3D printing etc.) a 3D model is a representation of a three dimensional object, for example of a real life object such as a car, a tree or a dog, but possibly also of something more abstract like a fractal or a function plot surface. 3D models can be displayed using various 3D rendering techniques and are used mostly to simulate the real world on computers (e.g. in games), as the real world is, as we know, three dimensional. 3D models can be created in several ways, e.g. manually with 3D modeling software (such as Blender) by 3D artists, by 3D scanning real world objects, or automatically by procedural generation.

There is a plethora of different 3D model types and the topic is very large when viewed in its whole scope, because 3D models can be used and represented in many ways (and everything is made yet more complex by the different methods of 3D rendering) -- the mainstream "game" 3D models that most people are used to seeing are polygonal (basically made of triangles), boundary-representation (recording only the surface, not the volume), textured (with "pictures" on their surface) 3D models, but be aware that many different ways of representation are possible and in common use by the industry, for example various volume representations, voxel models, point clouds, implicit surfaces, spline surfaces, constructive solid geometry, wireframe etc. Models may also bear additional information and features, e.g. materials, bone rigs for animation, animation key frames, density information, LODs, even scripts and so on.
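To make the mainstream polygonal representation a bit more concrete, here is a tiny sketch in C of how such a model is commonly kept in memory (the type and variable names are made up just for illustration, this is not any standard API): an array of vertices plus an array of triangles that index into it.

#include <stdio.h>

/* minimal polygonal mesh: an array of vertices plus an array of triangles,
   each triangle being three indices into the vertex array */
typedef struct { float x, y, z; } Vertex;
typedef struct { unsigned int v[3]; } Triangle;

typedef struct
{
  Vertex *vertices;
  unsigned int vertexCount;
  Triangle *triangles;
  unsigned int triangleCount;
} Mesh;

int main(void)
{
  /* the simplest possible mesh: a single triangle */
  Vertex v[3] = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };
  Triangle t[1] = { {{0, 1, 2}} };
  Mesh m = { v, 3, t, 1 };

  printf("mesh: %u vertices, %u triangles\n", m.vertexCount, m.triangleCount);
  return 0;
}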

3D formats: the situation here is not as simple as it is with images or audio, but there are a few formats that in practice will suffice for most of your models. Firstly the most KISS one is (Wavefront) obj -- this is supported by almost every 3D software, it's a text format that's easy to parse and it's even human readable and editable; obj supports most things you will ever need like UV maps and normals, and you can even hack it into a primitive keyframe animation. So if you can, use obj as your first choice. If you need something a little more advanced, use COLLADA (.dae extension) -- this is a bit more bloated than obj as it's an XML, but it's still human readable and has more features, for example skeletal animation, instancing, model hierarchy and so on. Another noteworthy format is e.g. stl, used a lot in 3D printing. For other than polygonal models you may have to search a bit or just represent your model in some sane way, for example a heightmap is naturally saved as a grayscale image, a voxel model may be saved in some dead simple text format and so on. Also always be sure to distribute your model in a universal format, i.e. don't just share Blender's project file or anything like that -- that's like sharing pictures in Photoshop format or sending someone a Word document. Yes, you should also share the project file if possible, but it's more important to release the model in a widely supported, future proof and non discriminating format.
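To show just how easy obj is to parse, here is a rough sketch in C (only a toy, no error checking) that reads the vertex (v) and face (f) lines of a plainly triangulated obj file and ignores everything else; the file name is just an example (a concrete house.obj appears later in this article and works with this).

#include <stdio.h>

int main(void)
{
  char line[256];
  unsigned int vertices = 0, triangles = 0;
  FILE *file = fopen("house.obj", "r"); /* example file name */

  if (!file)
    return 1;

  /* only handles "v x y z" and plain "f a b c" lines, everything else
     (normals, UVs, comments, faces with slashes, ...) is simply skipped */
  while (fgets(line, sizeof(line), file))
  {
    float x, y, z;
    unsigned int a, b, c;

    if (sscanf(line, "v %f %f %f", &x, &y, &z) == 3)
      vertices++;   /* here we would store the vertex */
    else if (sscanf(line, "f %u %u %u", &a, &b, &c) == 3)
      triangles++;  /* obj indices start at 1, subtract 1 before storing */
  }

  printf("%u vertices, %u triangles\n", vertices, triangles);
  fclose(file);
  return 0;
}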

Let's now take a closer look at a basic classification of 3D models (we only mention the important categories, this is not an exhaustive list):

  • by representation:
    • boundary representation: Captures only the object's boundary, i.e. its surface, so we really end up with just a hollow "shell" of the object without any volume inside. This is mostly good enough for programs such as games -- with opaque objects we only ever see their surface anyway (with transparent objects we get into a bit of trouble but can still manage to "fake" some convincing look).
      • smooth spline surfaces: Model the boundary with smooth mathematical functions; popular are e.g. NURBS. This gives very nice models that look completely smooth even up close. The disadvantage is that it's not so easy to render these models directly in real time, so the models are typically converted to polygonal models during rendering anyway, sometimes using things like adaptive subdivision to still keep the model as smooth as possible.
      • polygonal: The surface is represented with polygons such as triangles or quads that are connected to each other by their edges. These models have sharp edges and to look smooth have to make use of many polygons, but it turns out this representation is convenient, they can be easily edited and most importantly quickly rendered in real time, which is why most game 3D models use this representation. These models are composed of three essential elements: vertices (points in space), edges (lines connecting the vertices) and polygons (flat surfaces between the edges).
        • triangular: Polygonal models that only contain triangles (three sided polygons), oftentimes automatically created from general polygonal models. Models are normally automatically triangulated right before rendering because a GPU can basically just draw triangles.
      • ...
    • volume representation: Explicitly provides information about the object's whole volume, i.e. it is possible to tell whether any given point is inside the model or outside of it, and usually also provides more information, e.g. about density, material, index of refraction and so on. Such models naturally allow more precise simulations, they may fit better into physics engines and can also naturally be nicely rendered with ray tracing and similar image order rendering methods. For real time entertainment graphics this is mostly overkill and the model would have to be converted to boundary representation anyway, so volumetric models are rather used in science and the engineering industry.
      • voxel: Analogy of 2D bitmap images extended to 3D, i.e. the model exists in a 3D grid and is represented with small cubes called voxels (the 3D analog of the 2D pixel). The model is therefore rough with "staircases" -- this is very famously seen in the game Minecraft, but it is also very common e.g. in medical devices like MRI which scan the human body as a series of layers, each one captured essentially as a 2D image. Voxel models are convenient for various dynamic simulations, cellular automata, procedural generation and so on. Voxel models can be converted to other types of models, e.g. to polygonal ones (see the Marching Cubes algorithm).
      • constructive solid geometry (CSG): Represents the model as a tree (or a more general hierarchical graph) of basic geometric shapes combined together with so called boolean operations -- basically set operations like union, subtraction and so on. This is widely used in CAD applications for a variety of reasons, e.g. the models are quite precise and smooth, easily parametrized, and their description is similar to their physical manufacturing by machines (e.g. "make a sphere, dig a hole in it", ...) and so on.
      • implicit surfaces, signed distance function: Describe the model by a distance function, i.e. a function f(x,y,z) which for any point in space gives the distance to the object's boundary, with this distance being negative inside the object. This has some nice advanced use cases (a small code sketch of such a function follows after this list).
      • heightmaps: Typically used for modeling terrain; represent the terrain height at each 2D coordinate, normally with a grayscale bitmap image. Advantages include the simplicity of the representation and the ability to edit the heightmap with image editing tools, among the disadvantages are the limited resolution of the heightmap and the inability to represent e.g. overhangs.
      • ...
    • point cloud: Captures only individual points, sometimes with additional attributes such as the color of each point, something akin to its size, orientation and so on. This is typically what we get as raw data from some 3D scanners (see photogrammetry). The advantage of point clouds is simplicity and they can be relatively easily rendered (just by drawing points on the screen); the disadvantage is that the model has no surface or volume, there are "holes" in it: a point cloud therefore has to be very dense to really be useful and for that it can take a lot of storage space. Point clouds may be converted to a more desirable format with special algorithms.
    • wireframe: Records only edges, again potentially with attributes like their color etc. Just as with point clouds, a wireframe model has no surface or volume, but it at least has some information about which points are interconnected. Nowadays wireframe is not used so much as a model representation but rather as one of the viewing modes.
  • by features:
    • UV mapped: Having UV map, i.e. being ready to be textured.
    • textured: Having one or more textures.
    • rigged: Having bone rig set up for skeletal animation.
    • animated: Having predefined animation (e.g. idle, running, attacking, ...).
    • with materials: Having one or more materials defined and assigned to parts of the model. Materials define properties such as texture, metallicity, transparency and so on.
    • with smoothing groups: Having information important for correct shading (so that sharp edges look sharp and smooth edges look smooth).
    • with subdivision weights: Having information that's important for correct automatic subdivision (geometrical smoothing).
    • ...
  • by detail, resolution and fidelity:
    • low poly: Relatively low total count of polygons.
    • mid poly: Polygon count somewhere between low and high poly.
    • high poly: Relatively high total count of polygons.
    • ...
  • by artistic style:
    • realistic
    • stylized
    • abstract
    • ...
  • by intended use:
    • real time: Made for real time graphics, i.e. optimized for speed.
    • offline: Made for offline rendering, optimized for detail and visual quality.
    • for animation: Made with animation in mind -- this requires extra effort on correct topology so that the model deforms well.
    • ...
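Here is the small sketch promised at the implicit surface/SDF item above: in C, a signed distance function for a sphere of a given radius centered at the origin really is just a one liner (negative inside, zero on the surface, positive outside), and that one function is all e.g. a raymarcher needs to know about the shape.

#include <stdio.h>
#include <math.h>

/* signed distance from point [x,y,z] to a sphere of given radius centered
   at the origin: negative inside, zero on the surface, positive outside
   (compile e.g. with: cc sdf.c -lm) */
float sphereSDF(float x, float y, float z, float radius)
{
  return sqrtf(x * x + y * y + z * z) - radius;
}

int main(void)
{
  printf("%f\n", sphereSDF(0, 0, 0, 1)); /* -1.0, center of the sphere   */
  printf("%f\n", sphereSDF(1, 0, 0, 1)); /*  0.0, exactly on the surface */
  printf("%f\n", sphereSDF(2, 0, 0, 1)); /*  1.0, one unit outside       */
  return 0;
}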

Animation: the main approaches to animation are these (again, just the important ones, you may encounter other ways too):

  • model made of separate parts: Here we make a bigger model out of smaller models (i.e. a body is a head plus torso plus arms plus legs etc.) and then animate the big model by transforming the small models. This is very natural e.g. for animating machines, for example the wheels of a car, but characters were animated this way too (see e.g. Morrowind). The advantage is that you don't need any sophisticated subsystem for deforming models or anything, you just move models around (though it's useful to at least have parenting of models so that attached models stick together). But it can look bad and there may be some ugliness like models intersecting etc.
  • keyframe morphing: The mostly sufficient KISS way, based on having a few model "poses" and just interpolating between them -- i.e. for example a running character may have 4 keyframes: legs together, right leg in front, legs together, left leg in front; now to make a smooth animation we just gradually deform one keyframe into the next. Let's stress that this is a single model, each keyframe is just differently shaped, i.e. its vertices are at different positions, so the model animates by really being deformed into different shapes. This was used in old games like Quake, it looks good and works well -- use it if you can (a small code sketch of the interpolation follows below this list).
  • skeletal animation: The mainstream, gigantically bloated way, used in practically all 3D games since about 2005. Here we firstly make a skeleton for the model, i.e. an additional "model made of sticks" (so called bones), and then we have to painstakingly rig (or skin) the model, i.e. attach the skeleton to it (more technically assign weights to the model's vertices so as to make them deform correctly). Now we have basically a puppet we can move quite easily: if we move the arm bone, the whole arm moves and so on. Animations are still made with keyframes (i.e. making poses in certain moments in time between which we interpolate); the advantage over morphing is just that we don't have to manually reshape the model on the level of individual vertices, we only manipulate the bones and the model deforms itself, so it's a bit more comfortable, but this firstly requires much more work and secondly there needs to be a hugely bloated skeletal system programmed in (complex math, complex model format, skinning GUI, ...). Bones have more advantages, you can e.g. make procedural animations, ragdoll physics, you can attach things like weapons to the bones etc., but it's mostly not worth it. Even if you have a rigged skeletal model, you can still export its animation in the simple keyframe morphing format so as to at least keep your engine simple. Though skeletal animation was mostly intended for characters, nowadays it's just used for animating everything (book pages have their own bones etc.) because many engines don't even support anything simpler.
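And here is the sketch of keyframe morphing promised above (plain C, everything hardcoded just for illustration): animating the model really only means linearly interpolating every vertex between two keyframe poses of the same model.

#include <stdio.h>

#define VERTEX_COUNT 3 /* a tiny example "model" with 3 vertices */

/* two keyframe poses of the same model: the vertex count and order are the
   same, only the vertex positions differ */
float keyframeA[VERTEX_COUNT][3] = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };
float keyframeB[VERTEX_COUNT][3] = { {0, 0, 1}, {1, 0, 1}, {0, 2, 0} };

/* computes the animated pose at time t (0.0 = pose A, 1.0 = pose B) */
void interpolatePose(float t, float result[VERTEX_COUNT][3])
{
  for (int i = 0; i < VERTEX_COUNT; ++i)
    for (int j = 0; j < 3; ++j)
      result[i][j] = keyframeA[i][j] + t * (keyframeB[i][j] - keyframeA[i][j]);
}

int main(void)
{
  float pose[VERTEX_COUNT][3];

  interpolatePose(0.5, pose); /* halfway between the two keyframes */

  for (int i = 0; i < VERTEX_COUNT; ++i)
    printf("%f %f %f\n", pose[i][0], pose[i][1], pose[i][2]);

  return 0;
}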

Let us also briefly mention texturing, an important part of making traditional 3D models. In the common, narrower sense a texture is a 2D image that is stretched onto the model surface to give the model more detail, just like we put wallpaper on a wall -- without textures our models have flat looking surfaces with just a constant color (at best we may assign each polygon a different color, but that won't make for a very realistic model). Putting a texture on the model is called texture mapping -- you may also hear the term UV mapping because texturing is essentially about making what we call a UV map. This just means we assign each model vertex 2D coordinates inside the texture; we traditionally call these two coordinates U and V, hence the term UV mapping. UV coordinates are just coordinates within the texture image; they are not in pixels but are typically normalized to floats in range <0,1> (i.e. 0.5 meaning the middle of the image etc.) -- this is so as to stay independent of the texture resolution (you can later swap the texture for one with a different resolution and it will still work). By assigning each vertex its UV texture coordinates we basically achieve the "stretching", i.e. we say which part of the texture will show up e.g. on the character's face and so on. (Advanced note: if you want to allow "tears" in the texture, you have to assign UV coordinates per triangle, not per vertex.) Now let's also mention that a model can have multiple textures at once -- the most basic one (usually called diffuse) specifies the surface color, but additional textures may be used for things like transparency, normals (see normal mapping), displacement, material properties like metallicity and so on (see also PBR). The model may even have multiple UV maps, the UV coordinates may be animated and so on and so forth. Finally we'll also say that there exists 3D texturing that doesn't use images; 3D textures are mostly procedurally generated, but this is beyond our scope now.
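As a small illustration of what the UV coordinates actually do, here is a hedged C sketch (texture size and values made up) of nearest neighbor texture sampling: converting normalized UV coordinates in range <0,1> to a concrete pixel of the texture is essentially what a renderer does for every drawn pixel of a textured triangle.

#include <stdio.h>

#define TEXTURE_W 4
#define TEXTURE_H 4

/* a tiny 4x4 single channel "texture", values made up for illustration */
unsigned char texture[TEXTURE_H][TEXTURE_W] =
{
  {  0,  50, 100, 150},
  { 10,  60, 110, 160},
  { 20,  70, 120, 170},
  { 30,  80, 130, 180}
};

/* nearest neighbor sampling: UV coordinates in <0,1> are independent of the
   texture resolution, here we convert them to concrete pixel indices */
unsigned char sampleTexture(float u, float v)
{
  int x = (int) (u * (TEXTURE_W - 1) + 0.5);
  int y = (int) (v * (TEXTURE_H - 1) + 0.5);
  return texture[y][x];
}

int main(void)
{
  printf("%d\n", sampleTexture(0.0, 0.0)); /* top left pixel     */
  printf("%d\n", sampleTexture(1.0, 0.0)); /* top right pixel    */
  printf("%d\n", sampleTexture(0.5, 0.5)); /* roughly the middle */
  return 0;
}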

We may do many, many more things with 3D models, for example subdivide them (automatically break polygons down into more polygons to smooth them out), apply boolean operations to them (see above), sculpt them (make them from virtual clay), optimize them (reduce their polygon count, make better topology, ...), apply various modifiers, 3D print them, make them out of paper (see origami) etcetc.

{ Holy crab, there is a lot to say about 3D models. ~drummyfish }

Example

Let's take a look at a simple polygonal 3D model. The following is a primitive, very low poly model of a house, basically just a cube with roof:

               I
             .:..  
           .' :':::..
        _-' H.' '.   ''-. 
      .'    .:...'.......''..G
    .' ...'' :    '.    ..' :
  .::''......:.....'.-''    :
 E:          :      :F      :
  :          :      :       :
  :          :      :       :
  :          :......:.......:
  :        .' D     :     .' C    
  :     .''         :   -'
  :  .''            : .'
  ::'...............:'
 A                   B

In a computer it would firstly be represented by an array of vertices, e.g.:

-2 -2 -2  (A)
 2 -2 -2  (B)
 2 -2  2  (C)
-2 -2  2  (D)
-2  2 -2  (E)
 2  2 -2  (F)
 2  2  2  (G)
-2  2  2  (H)
 0  3  0  (I)

Along with triangles (specified as indices into the vertex array, here with letters):

ABC ACD          (bottom)
AFB AEF          (front wall)
BGC BFG          (right wall)
CGH CHD          (back wall)
DHE DEA          (left wall)
EIF FIG GIH HIE  (roof)  

We see the model consists of 9 vertices and 14 triangles. Notice that the order in which we specify triangles follows the rule that looking at the front side of a triangle, its vertices are specified clockwise (or counterclockwise, depending on the chosen convention) -- sometimes this may not matter, but many 3D engines perform so called backface culling, i.e. they only draw the front faces, and then some faces would be invisible from the outside if their winding was incorrect, so it's better to stick to the rule if possible.
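If you want to check the winding yourself, the test is simple; here is a hedged sketch in C: once a triangle is projected to 2D (screen space), the sign of the cross product of two of its edges says whether the vertices appear clockwise or counterclockwise, which is exactly what backface culling looks at (with the usual caveat that the sign flips if the y axis points down, as it does in typical screen coordinates).

#include <stdio.h>

/* returns positive if the 2D triangle ABC is counterclockwise, negative if
   clockwise and zero if degenerate (assuming the y axis points up) */
float triangleWinding(float ax, float ay, float bx, float by,
  float cx, float cy)
{
  return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

int main(void)
{
  /* the same triangle with counterclockwise and clockwise vertex order */
  printf("%f\n", triangleWinding(0, 0, 1, 0, 0, 1)); /* positive (CCW) */
  printf("%f\n", triangleWinding(0, 0, 0, 1, 1, 0)); /* negative (CW)  */
  return 0;
}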

The following is our house model in obj format -- notice how simple it is (you can copy paste this into a file called house.obj and open it in Blender):

# simple house model
v 2.000000 -2.000000 -2.000000
v 2.000000 -2.000000 2.000000
v -2.000000 -2.000000 2.000000
v -1.999999 -2.000000 -2.000000
v 2.000001 2.000000 -2.000000
v 1.999999 2.000000 2.000000
v -2.000001 2.000000 2.000000
v -2.000000 2.000000 -2.000000
v -2.000001 2.000000 2.000000
v 0.000000 3.000000 0.000000
vn 1.0000 0.0000 0.0000
vn -0.0000 0.0000 1.0000
vn 0.0000 -1.0000 0.0000
vn 0.0000 0.0000 -1.0000
vn -1.0000 -0.0000 -0.0000
vn -0.0000 0.8944 0.4472
vn 0.4472 0.8944 0.0000
vn 0.0000 0.8944 -0.4472
vn -0.4472 0.8944 -0.0000
s off
f 6 2 5
f 2 1 5
f 6 9 3
f 3 2 6
f 4 1 3
f 2 3 1
f 5 1 8
f 4 8 1
f 8 4 9
f 4 3 9
f 9 6 10
f 6 5 10
f 8 10 5
f 8 9 10

And here is the same model again, now in COLLADA format (it is an XML so it's much more verbose; again you can copy paste this into a file called house.dae and open it in Blender):

<?xml version="1.0" encoding="utf-8"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- simple house model -->
  <asset>
    <contributor> <author>drummyfish</author> </contributor>
    <unit name="meter" meter="1"/>
    <up_axis>Z_UP</up_axis>
  </asset>
  <library_geometries>
    <geometry id="house-mesh" name="house">
      <mesh>
        <source id="house-mesh-positions">
          <float_array id="house-mesh-positions-array" count="30">
             2  2 -2      2 -2 -2      -2 -2 -2      -2  2 -2
             2  2  2      2 -2  2      -2 -2  2      -2  2  2
            -2 -2  2      0  0  3
          </float_array>
          <technique_common>
            <accessor source="#house-mesh-positions-array" count="10" stride="3">
              <param name="X" type="float"/>
              <param name="Y" type="float"/>
              <param name="Z" type="float"/>
            </accessor>
          </technique_common>
        </source>
        <vertices id="house-mesh-vertices">
          <input semantic="POSITION" source="#house-mesh-positions"/>
        </vertices>
        <triangles material="Material-material" count="14">
          <input semantic="VERTEX" source="#house-mesh-vertices" offset="0"/>
          <p>
            5 1 4   1 0 4    5 8 2    2 1 5    3 0 2    1 2 0    4 0 7
            3 7 0   7 3 8    3 2 8    8 5 9    5 4 9    7 9 4    7 8 9
          </p>
        </triangles>
      </mesh>
    </geometry>
  </library_geometries>
  <library_visual_scenes>
    <visual_scene id="Scene" name="Scene">
      <node id="house" name="house" type="NODE">
        <translate sid="location">0 0 0</translate>
        <rotate sid="rotationZ">0 0 1 0</rotate>
        <rotate sid="rotationY">0 1 0 0</rotate>
        <rotate sid="rotationX">1 0 0 0</rotate>
        <scale sid="scale">1 1 1</scale>
        <instance_geometry url="#house-mesh" name="house"/>
      </node>
    </visual_scene>
  </library_visual_scenes>
  <scene> <instance_visual_scene url="#Scene"/> </scene>
</COLLADA>

TODO: other types of models, texturing etcetc.

3D Modeling: Learning It And Doing It Right

WORK IN PROGRESS

Do you want to start 3D modeling? Or do you already know a bit about it and just want some advice to get better? Then let us share a few words of advice here.

Let us preface with mentioning the hacker chad way of making 3D models, i.e. the LRS way 3D models should ideally be made. Remember, you don't need any program to create 3D models, you don't have to be married to Blender, you can make 3D models perfectly fine without Blender or any similar program, and even without computers. Sure, a certain kind of highly artistic, animated, very high poly model will be very hard or impossible to make without an interactive tool like Blender, but you can still make very complex 3D models, such as that of a whole city, without any fancy tools. Of course people were making statues and similar kinds of "physical 3D models" for thousands of years -- sometimes it's actually simpler to make the model by hand out of clay and later scan it into the computer. You can just make a physical wireframe model, measure the positions of vertices, hand type them into a file and you have a perfectly valid 3D model -- you may also easily make a polygonal model out of paper. BUT even virtual 3D models can simply be made with pen and paper, it's just numbers, vertices and triangles, very manageable if you keep it simple and well organized. You can directly write the models in text formats like obj or collada. The first computer 3D models were actually made by hand, just with pen and paper, because there were simply no computers fast enough to even allow real time manipulation of 3D models; back then the modelers simply measured the positions of an object's "key points" (vertices) in 3D space, which can simply be done with tools like rulers and strings, no need for complex 3D scanners (but if you have a digital camera, you have a quite advanced 3D scanner already). They then fed the manually made models to the computer to visualize them, but again, you don't even need a computer to draw a 3D model, in fact there is a whole area called descriptive geometry that's all about drawing 3D models on paper and which was used by engineers before computers came.

Anyway, you don't have to go as far as avoiding computers of course -- if you have a programmable computer, you already have a luxury which the first 3D artists didn't have, a whole new world opens up to you, you can now make very complex 3D models just with your programming language of choice. Imagine you want to make the said 3D model of a city just using the C programming language. You can first define the terrain as a heightmap, simply a 2D array of numbers, then you write a simple piece of code that will iterate over this array and convert it to the obj format (a very simple plain text 3D format, it will be like 20 lines of code) -- now you have the basic terrain, you can render it with any tool that can load 3D models in obj format (basically every 3D tool), AND you may of course write your own 3D visualizer, there is nothing difficult about it, you don't even have to use perspective, just draw it in orthographic projection (again, that will probably be like 20 lines of code). Now you may start adding houses to your terrain -- make a C array of vertices and another array of triangle indices, manually make a simple 3D model of a house (a basic shape will have fewer than 20 vertices, you can cut it out of paper to see what it will look like). That's your house geometry, now just keep making instances of this house and placing them on the terrain, i.e. you make some kind of struct that will keep the house transformation (its position, rotation and scale) and each such struct will represent one house having the geometry you created (if you later improve the house model, all houses will be updated like this). You don't have to worry about placing the houses vertically, their height can be computed automatically so they sit right on the terrain. Now you can update your model exporter to take the houses into account, it will output the obj model along with them and again, you can view this whole model in any 3D software or with your own tools. You can continue by adding trees, roads, simple materials (maybe just something like per triangle colors) and so on. This approach may actually even be superior for some projects, just as scripting is superior to many GUI programs: you can collaborate on this model just like you can collaborate on any other text-based project, you can automate things greatly, you'll be independent of proprietary formats and platforms etcetc. This is how 3D models would ideally be made.
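To make the above a bit more concrete, here is a hedged sketch of the mentioned heightmap-to-obj exporter in C (the map size and height values are made up for illustration): it prints one vertex per heightmap cell and two triangles per grid square, which is already a valid obj you can redirect to a file (e.g. ./terrain > terrain.obj) and open in basically any 3D viewer.

#include <stdio.h>

#define MAP_W 4
#define MAP_H 4

/* a tiny example heightmap, values made up */
int heightmap[MAP_H][MAP_W] =
{
  {0, 0, 1, 2},
  {0, 1, 2, 2},
  {1, 2, 3, 2},
  {1, 1, 2, 1}
};

int main(void)
{
  /* one vertex per heightmap cell (the y axis is up here) */
  for (int y = 0; y < MAP_H; ++y)
    for (int x = 0; x < MAP_W; ++x)
      printf("v %d %d %d\n", x, heightmap[y][x], y);

  /* two triangles per grid square; obj indices start at 1 */
  for (int y = 0; y < MAP_H - 1; ++y)
    for (int x = 0; x < MAP_W - 1; ++x)
    {
      int i = y * MAP_W + x + 1;

      printf("f %d %d %d\n", i, i + MAP_W, i + 1);
      printf("f %d %d %d\n", i + 1, i + MAP_W, i + MAP_W + 1);
    }

  return 0;
}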

OK, back to the mainstream now. Nowadays as a FOSS user you will most likely do 3D modeling with Blender -- we recommend it for learning 3D modeling as it is powerful, free, gratis, has many tutorials etc. Do NOT use anything proprietary no matter what anyone tells you! Once you know a bit about the art, you may play around with alternative programs or approaches (such as writing programs that generate 3D models etc.). However as a beginner just start with Blender, which is the software we'll suppose you're using from now on in this article.

Start extremely simple and learn bottom-up, i.e. learn about fundamentals and low level concepts and start with very simple models (e.g. a simple untextured low-poly shape of a house, a box with a roof), then keep creating more complex models in small steps. Do NOT fall into the trap of "quick and easy magic 3D modeling" such as sculpting or some "smart apps" without knowing what's going on at the low level, you'll end up creating extremely ugly, inefficient models in bad formats, like someone wanting to create space rockets without learning anything about math or physics first. Remember to practice, practice, practice -- eventually you learn by doing, so try to make small projects and share your results on sites such as opengameart to get feedback and some mental satisfaction and reward for your effort. The following is an outline of possible steps you may take towards becoming an alright 3D artist:

  1. Learn what a 3D model actually is, basic technical details about how a computer represents it and roughly how 3D rendering works. It is EXTREMELY important to have at least some idea about the fundamentals, i.e. you should learn at least the following:
    • 3D models that are used today consist of vertices and triangles (polygons with more sides are usually supported in modeling software, but everything is eventually broken down to triangles); computers usually store an array of vertices and an array of triangles, the triangles being given as indices pointing into the vertex array. Triangles have facing (a front and a back side, determined by the order of their vertices). These 3D models only represent the boundary (not the volume). All this is called the model's geometry.
    • Normals are vectors "perpendicular to the surface"; they can be explicitly modified and stored or computed automatically, and they are extremely important because they say how the model interacts with light (they are used in shading of the model), i.e. which edges appear sharp or smooth. Normal maps are textures that can be used to modify normals to make the surface seem rough or otherwise deformed without actually modifying the geometry. You HAVE TO understand normals (a small sketch of computing one follows after this list).
    • Textures are images (or similar image-like data) that can be mapped to the model surface to "paint it" (or give it other material properties). They are mapped to models by assigning vertices UV texture coordinates. To make textures you'll need some basics of 2D image editing (see e.g. GIMP).
    • 3D rendering (and also modeling) works with the concept of a scene in which a number of models reside, as well as a virtual camera (or multiple ones), lights and other objects. These objects have transformations (normally translation, rotation and scale, represented by matrices) and may form a hierarchy, a so called scene graph (some objects may be parents of other objects, meaning the child transformations are relative to parents) etc.
    • A 3D renderer will draw the triangles the model consists of by applying shading to determine the color of each pixel of the rasterized triangle. Shading takes into account, among other things, the texture(s) of the model, its material properties and the light falling on the model (in which the model's normals play a big role). Shading can be modified by creating shaders (if you don't create custom shaders, some default one will be used).
    • Briefly learn about other concepts such as low/high poly modeling, basic 3D formats such as OBJ and COLLADA (which features they support etc.), other possible model representations (voxels, point clouds, ...) etc.
  2. Manually create a few extremely simple low-poly untextured models, e.g. that of a simple house, laptop, hammer, bottle etc. Keep the vertex and triangle count very low (under 100), make the model by MANUALLY creating every vertex and triangle and focus only on learning this low level geometry manipulation well (how to create a vertex, how to split an edge, how to rotate a triangle, ...), making the model conform to good practice and getting familiar with the tools you're using, i.e. learn the key binds, locking movement direction to principal axes, learn manipulating your 3D view, setting up the free/side/front/top view with reference images etc. Make the model nice! I.e. make it have correctly facing triangles (turn backface culling on to check this), avoid intersecting triangles, unnecessary triangles and vertices, remove all duplicate vertices (don't have multiple vertices with the same position), connect all that should be connected, avoid badly shaped triangles (e.g. extremely acute/long ones) etc. Also learn about normals and make them nice! I.e. try automatic normal generation (fiddle e.g. with angle thresholds for sharp/smooth edges), see how they affect the model's look, try manually marking some edges sharp, try out smoothing groups etc. Save your final models in OBJ format (one of the simplest and most common formats supporting all you need at this stage). All this will be a lot to learn, that's why you must not try to create a complex model at this stage. You can keep yourself "motivated" e.g. by aiming to create a low-poly model collection you can share at opengameart or somewhere :)
  3. Learn texturing -- just take the models you have and try to put a simple texture on them by drawing a simple image, then unwrapping the UV coordinates and MANUALLY editing the UV map to fit on the model. Again the goal is to get familiar with the tools and concepts now; experiment with helpers such as unwrapping by "projecting from 3D view", using "smart" UV unwrap etc. Make the UV map nice! Just as model geometry, UV maps also have good practice -- e.g. you should utilize as many texture pixels as possible (otherwise you're wasting space in the image), watch out for color bleeding, the mapping should have a kind of "uniform pixel density" (or possibly increased density on triangles where more detail is supposed to be), some pixels of the texture may be mapped to multiple triangles if possible (to utilize them efficiently) etc. Only make a simple diffuse texture (don't do PBR, material textures etc., that's too advanced for now). Try out texture painting and manual texture creation in a 2D image program, get familiar with both.
  4. Learn modifiers and advanced tools. Modifiers help you e.g. with the creation of symmetric models: you only model one side and the other one gets mirrored. The subdivide modifier will automatically create a higher poly version of your model (but you need to help it by telling it which sides are sharp etc.). Boolean operations allow you to apply set operations like unification or subtraction of shapes (but usually create a messy geometry you have to repair!). There are many tools, experiment and learn about their pros and cons, try to incorporate them into your modeling.
  5. Learn retopology and possibly sculpting. Topology is an extremely important concept -- it says what the structure of triangles/polygons is, how they are distributed, how they are connected, which curves their edges follow etc. Good topology has certain rules (e.g. ideally only being composed of quads, being denser where the shape has more detail and sparser where it's flat, having edges so that animation won't deform the model badly etc.). Topology is important for efficiency (you utilize your polygon budget well), texturing and especially animation (nice deformation of the model). Creating more complex models is almost always done in the following two steps:
    • Creating the shape while ignoring topology, for example with sculpting (but also other techniques, e.g. just throwing shapes together). The goal is to just make the desired shape.
    • Retopology: creating a nice topology for the shape while keeping the shape unchanged. This is done by modeling again from scratch with the "stick to surface" option turned on, i.e. whenever you create or move a vertex, it sticks to the nearest surface (the surface of the created shape). Here you just try to create a new "envelope" over the existing shape while focusing on making the envelope's topology nice.
  6. Learn about materials and shaders. At this point you may learn about how to create custom shaders, how to create transparent materials, apply multiple textures, how to make realistic skin, PBR shaders etc. You should at least be aware of basic shading concepts and commonly encountered techniques such as Phong shading, subsurface scattering, screen space effects etc. because you'll encounter them in shader editors and you should e.g. know what performance penalties to expect.
  7. Learn animation. First learn about keyframes and interpolation and try to animate basic transformations of a model, e.g. animate a car driving through a city by keyframing its position and rotation. Then learn about animating the model's geometry -- first the simple, old way of morphing between different shapes (shape keys in Blender). Finally learn the hardest type of animation: skeletal animation. Learn about bones, armatures, rigging, inverse kinematics etc.
  8. Now you can go crazy and learn all the uber features such as hair, physics simulation, NURBS surfaces, boob physics etc.
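As a complement to the point about normals above, here is a hedged C sketch of how a face normal is typically computed: the cross product of two triangle edges, normalized to unit length (per vertex normals are then usually obtained by averaging the normals of the adjacent faces, possibly respecting sharp edges).

#include <stdio.h>
#include <math.h>

/* computes the unit length normal of the triangle given by points a, b, c
   as the normalized cross product of two of its edges
   (compile e.g. with: cc normal.c -lm) */
void triangleNormal(const float a[3], const float b[3], const float c[3],
  float normal[3])
{
  float u[3], v[3], length;

  for (int i = 0; i < 3; ++i)
  {
    u[i] = b[i] - a[i];
    v[i] = c[i] - a[i];
  }

  normal[0] = u[1] * v[2] - u[2] * v[1];
  normal[1] = u[2] * v[0] - u[0] * v[2];
  normal[2] = u[0] * v[1] - u[1] * v[0];

  length = sqrtf(normal[0] * normal[0] + normal[1] * normal[1] +
    normal[2] * normal[2]);

  if (length > 0)
    for (int i = 0; i < 3; ++i)
      normal[i] /= length;
}

int main(void)
{
  float a[3] = {0, 0, 0}, b[3] = {1, 0, 0}, c[3] = {0, 1, 0}, normal[3];

  triangleNormal(a, b, c, normal);

  /* prints 0 0 1: the triangle lies in the XY plane */
  printf("%f %f %f\n", normal[0], normal[1], normal[2]);
  return 0;
}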

Don't forget to stick to LRS principles! This is important so that your models are friendly to good technology. I.e. even if "modern" desktops don't really care about polygon count anymore, still take the effort to optimize your model so as to not use more polygons than necessary! Your models may potentially be used on small, non-consumerist computers with software renderers and a low amount of RAM. Low-poly is better than high-poly (you can still prepare your model for automatic subdivision so that obtaining a higher poly model from it automatically is possible). Don't use complex stuff such as PBR or skeletal animation unless necessary -- you should mostly be able to get away with a simple diffuse texture and simple keyframe morphing animation, just like in old games! If you do use complex stuff, make it optional (e.g. make a normal map but don't rely on it being used in the end).

Good luck with your modeling!