Understanding UV Mapping and Textures
In this article, we’ll examine UV mapping techniques and the different types of textures.
What are UVs?
UV mapping is the 3D modeling process of projecting a 2D image onto a 3D model’s surface. The term “UV” refers to the bidimensional (2D) nature of the process: the letters “U” and “V” denote the axes of the 2D texture because “X”, “Y” and “Z” are already used to denote the axes of the 3D model.
The 3D model is unfolded at the seams and laid out flat on a 2D plan, not unlike the process used for pattern-making in sewing. Once the mapping is complete, the artist can produce a custom image based on the “pattern” and apply it to the 3D model. This process makes it possible to produce models rich in color and detail. Other processes exist to color models, but they have much narrower limitations.
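The core idea can be sketched in a few lines of code: a (u, v) pair in the range [0, 1] selects a texel in the 2D image. The function name and the tiny 2×2 "texture" below are illustrative, not taken from any particular 3D package:

```python
def sample_nearest(texture, u, v):
    """Nearest-neighbour lookup of a texel from normalized UV coordinates."""
    height = len(texture)
    width = len(texture[0])
    # Clamp to [0, 1] so coordinates at the border don't wrap unexpectedly.
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 2x2 "texture": two dark texels on the first row, two light ones below.
tex = [[(30, 30, 30), (40, 40, 40)],
       [(200, 200, 200), (220, 220, 220)]]

print(sample_nearest(tex, 0.1, 0.9))  # → (200, 200, 200)
```

Real renderers interpolate between neighbouring texels (bilinear filtering and mipmapping), but the principle is the same: the UVs stored on each vertex tell the renderer where on the flat image each point of the surface lands.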
Let’s look at what mapping looks like, using the common example of a simple cube:
This example clearly shows how the model is unfolded in a single piece onto a 2D canvas. As a general rule, it is best to keep the number of seams to a minimum, since the most labour-intensive part of the process is optimizing their number and position on the model. Ideally, the seams will be invisible, since they should be located on a cut or edge of your model (for example, the seams on a pair of pants, or the joint between two metal plates), or where the user won’t notice them (for example, if your model is a person, under the arms, at the back of the head, under hair or a hat).
A few years ago, we saw the advent of automated UV-generation tools giving the user varying levels of control over the process. Their quality has steadily increased and they are now up to a professional standard. However, they must be handled with care, as the seams often end up in places that, though acceptable, are visible. Newer tools allow the artist to paint directly on the model, doing away with UVs entirely. On the other hand, textures created in 2D applications such as Photoshop require manually optimised mapping. Your decision about whether to use automated mapping will therefore depend on which technique you plan to use to create textures.
Another important point is that the UV mapping process can only begin once the 3D modelling is done and, due to technical limitations, must be completed before any form of animation.
The following two examples illustrate the importance of doing things in the right order. Notice how the grid is warped.
—If you stretch the cube after UV mapping:
—If you add new geometry:
Textures with Varied Effects
Let’s say we have our mesh, i.e. the 3D monochrome model with no textures, animation, or effects, and its UV mapping. You’d think that you can simply place the texture on the model and be done with it. Unfortunately, this will probably not yield optimal results. In order to produce quality objects, you need to use specialised textures which may look odd but add subtlety and richness and enhance the realism of your work.
Let’s start with the most intuitive, visible texture that “colors” your model. It can be a Diffuse map or an Albedo map:
— Diffuse map textures give your model its basic color. Not only does this define the color your brain will associate with the object, it is also used by your software to shade reflected light. Objects surrounding your model will be tinted with this color; the effect will be subtle, almost invisible, but it will make your work look much more realistic.
— Albedo map textures are similar to Diffuse map textures and perform the same function. The difference is that the Albedo map must not contain contrasting lighting; it shows the basic color of the object with no shadows or glare.
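How the base color interacts with light can be illustrated with the classic Lambertian diffuse term: the sampled albedo scaled by the cosine of the angle between the surface normal and the light direction. This is a generic sketch of that formula, not the implementation of any specific renderer:

```python
import math

def lambert_diffuse(albedo, normal, light_dir, light_color=(1.0, 1.0, 1.0)):
    """Lambertian diffuse term: albedo * light_color * max(0, N.L), per channel."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n = norm(normal)
    l = norm(light_dir)
    # Surfaces facing away from the light receive nothing, hence the clamp.
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(a * c * ndotl for a, c in zip(albedo, light_color))

# Surface facing straight up, white light arriving from directly above:
print(lambert_diffuse((0.8, 0.2, 0.2), (0, 1, 0), (0, 1, 0)))  # → (0.8, 0.2, 0.2)
```

Note that the albedo itself carries no lighting at all; the shading comes entirely from the `max(0, N.L)` factor, which is exactly why baked shadows or glare in an Albedo map would be wrong.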
Then there is the Specular map, which comes in two varieties: the Specular Level Map and the Specular Color Map. These textures control the amount and color of the light reflected by the object. This is not the kind of light you’d see reflected in a mirror, but rather the light reflected by everyday objects around us: walls, a table, etc. This is called diffuse lighting. It should be noted that the color of the reflection is often white or the same color as the Diffuse map. The Specular Level Map can also be used to simulate shadows in cracks or other fine details, also known as Ambient Occlusion.
Ambient Occlusion Maps are usually employed to enhance the realism of a model. AO, as it is often called, simulates shadows generated by the environment, especially on concave areas of models. The effect is especially noticeable when compared to a model without it. Notice how the image on the right, with AO, displays better tonal shading, as well as a slight darkening of the surface immediately beneath the sphere.
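Conceptually, a baked AO value is just a per-pixel multiplier on the ambient contribution: 0 means fully occluded, 1 means fully open. A minimal sketch, with a deliberately simplified additive lighting model:

```python
def apply_ambient_occlusion(ambient_color, ao_value, lit_color):
    """Darken the ambient term by a baked AO factor (0 = fully occluded,
    1 = unoccluded), then add the direct lighting contribution."""
    return tuple(a * ao_value + d for a, d in zip(ambient_color, lit_color))

# A fully occluded crevice receives no ambient light at all:
print(apply_ambient_occlusion((0.2, 0.2, 0.2), 0.0, (0.1, 0.1, 0.1)))
```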
The next texture, the Bump Map, simulates relief, and it is a good starting point for understanding the subtlety of Normal Maps. By definition, Bump Maps are black and white, since the greyscale value indicating the height is the only information they carry. This texture is essentially a shortcut that lets you show relief on a surface without adding polygons to the model. As an illustration, take a simple, flat, square surface defined by four connected vertices. Normally, this square would have to be perfectly flat, since the four vertices lie on the same plane. The Bump Map grades every pixel on a scale of 0 to 100% to determine its height above the surface. Below is a side-by-side illustration of the same square, before and after the application of a Bump Map. On the right, the black-and-white texture was used as a Bump Map to simulate relief.
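Under the hood, a renderer turns the greyscale heights into perturbed shading normals by looking at how the height changes between neighbouring pixels. A sketch of that idea using central differences (a standard finite-difference gradient; the function name is my own):

```python
import math

def bump_normal(height_map, x, y, strength=1.0):
    """Derive a perturbed shading normal from a greyscale height map
    by taking the finite-difference gradient around pixel (x, y)."""
    h, w = len(height_map), len(height_map[0])
    def at(ix, iy):  # clamp reads at the borders
        return height_map[min(max(iy, 0), h - 1)][min(max(ix, 0), w - 1)]
    dx = (at(x + 1, y) - at(x - 1, y)) * strength
    dy = (at(x, y + 1) - at(x, y - 1)) * strength
    # Normal of the height field z = f(x, y) is proportional to (-dz/dx, -dz/dy, 1).
    n = (0.0 - dx, 0.0 - dy, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# A flat region leaves the normal undisturbed:
flat = [[0.5] * 3 for _ in range(3)]
print(bump_normal(flat, 1, 1))  # → (0.0, 0.0, 1.0)
```

The geometry never moves; only the normal used for lighting changes, which is why the silhouette of a bump-mapped surface stays flat.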
Nowadays, Normal Maps are often used to control surface shading in real time. The process is basically the same, except that the value used is no longer a single greyscale level but the three channels of an RGB image: red, green and blue. By combining these three values, you can perturb the surface normal in three directions rather than just along the plane's axis. The advantage is that your object keeps a more realistic appearance when viewed from an oblique angle.
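The RGB encoding is a simple remapping: each 8-bit channel in [0, 255] stands for a component in [-1, 1]. A decoding sketch (the exact convention can vary between engines, e.g. whether green is flipped):

```python
import math

def decode_normal(rgb):
    """Decode a tangent-space normal from an 8-bit RGB texel:
    each channel maps [0, 255] -> [-1, 1], then the vector is renormalized."""
    n = tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# The characteristic lavender-blue of a "flat" normal map, (128, 128, 255),
# decodes to a normal pointing almost exactly along +Z:
print(decode_normal((128, 128, 255)))
```

This remapping is also why untouched areas of a tangent-space Normal Map look uniformly blue: a normal of (0, 0, 1) encodes to roughly (128, 128, 255).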
There are several types of Normal Maps. The first is the Tangent Space Normal Map, designed specifically for models that are going to be deformed, for example animated bodies. Then there are Object Space and World Space Normal Maps, both designed for models that will not be deformed. The Object Space Normal Map is optimized for objects that move as a whole, for example a chair that is going to be picked up. The World Space Normal Map is designed for completely static objects, like walls or a floor.
When we create animation for video or film, we usually use Displacement Maps which, unlike Bump and Normal Maps, generate additional polygons at render time for superior quality. The drawback is that the process is demanding on your system, significantly slowing render times, which is hard to justify when working on real-time animation.
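The key difference from bump and normal mapping is that displacement actually moves geometry: each vertex is pushed along its normal by the sampled map value. A minimal sketch with hypothetical names:

```python
def displace(vertices, normals, heights, amount=1.0):
    """Move each vertex along its own normal by the sampled displacement
    value. Unlike Bump/Normal Maps, this changes the actual geometry."""
    out = []
    for (vx, vy, vz), (nx, ny, nz), h in zip(vertices, normals, heights):
        d = h * amount
        out.append((vx + nx * d, vy + ny * d, vz + nz * d))
    return out

# One vertex at the origin with an up-facing normal, displaced by 0.25:
print(displace([(0.0, 0.0, 0.0)], [(0.0, 0.0, 1.0)], [0.25]))  # → [(0.0, 0.0, 0.25)]
```

In practice the mesh is first subdivided heavily so there are enough vertices to carry the detail, which is exactly where the render-time cost comes from.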
The last map we’re going to look at is the Reflection Map. This map is used to create a reflection that will be visible on the model. For example, if we apply a cloudy sky texture to a sphere, it will reflect the cloudy sky texture even if there is no cloudy sky in the scene. It should be noted that the reflection will be visible on the entire model, unless you use several different materials.
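The renderer finds what the surface "sees" by mirroring the view direction about the surface normal and using the result to look up the reflection texture. The reflection formula itself is standard; the example values are illustrative:

```python
def reflect(direction, normal):
    """Mirror a view direction about a unit surface normal: r = d - 2(d.n)n."""
    d_dot_n = sum(a * b for a, b in zip(direction, normal))
    return tuple(d - 2.0 * d_dot_n * n for d, n in zip(direction, normal))

# Looking straight down at an up-facing surface bounces straight back up:
print(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # → (0.0, 0.0, 1.0)
```

The reflected vector is then converted to texture coordinates (for example, into a cube map or spherical environment map), which is how a sphere can "reflect" a cloudy sky that exists only as an image.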
Using Textures: Materials
Since I’ve just mentioned materials, let’s explain what they are. Materials are a string of predefined instructions telling the software how to manage the model’s reactions to light, texture by texture. When used by 3D artists, materials are like a container in which all the textures for a single entity are gathered and configured. Some materials work differently and don’t use the Texture maps we’ve just presented, instead using algorithms that can be configured to various degrees. These are called procedural materials. For example, this type of material can give the appearance of wood through an algorithm, dispensing with the need for UV mapping. Since these materials are application scripts, the artist must know which materials are provided by the platform being used, and be able to make do with whatever material is available.
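To make the procedural idea concrete, here is a toy "wood rings" function: instead of sampling a painted image through UVs, the material computes a value directly from the surface position. This is a deliberately simplified sketch, not the algorithm of any particular application:

```python
import math

def wood_rings(x, y, ring_count=8.0):
    """Toy procedural 'wood' value in [0, 1]: concentric rings around the
    origin, computed from position alone -- no UV mapping required."""
    r = math.sqrt(x * x + y * y)
    return 0.5 + 0.5 * math.sin(r * ring_count * math.pi)

# Sampling at different points yields alternating light and dark rings:
print(wood_rings(0.0, 0.0), wood_rings(0.3, 0.4))
```

Real procedural wood materials layer noise and distortion on top of this kind of base pattern, with parameters the artist can tune, but the principle is the same: color is a function of position, so no mapping or image file is needed.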
And let’s not forget Toon Shading (named after animated cartoons; also called Cel Shading, or celluloid shading, referring to the celluloid sheets formerly used to make cartoons). This material simulates the look of cartoons, with defined edges and a more restricted shading palette.
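The "restricted palette" is typically achieved by quantizing the diffuse term into a few flat bands instead of a smooth gradient. A minimal sketch of that idea (band count and names are illustrative):

```python
import math

def toon_shade(base_color, normal, light_dir, bands=3):
    """Toon/Cel shading sketch: snap the diffuse intensity to one of a
    small number of flat bands, producing cartoon-style stepped shading."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n, l = norm(normal), norm(light_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    # Quantize the smooth 0..1 intensity into `bands` discrete levels.
    level = math.ceil(ndotl * bands) / bands
    return tuple(c * level for c in base_color)

# Fully lit face gets the full base color; an unlit face gets black:
print(toon_shade((0.9, 0.3, 0.3), (0, 1, 0), (0, 1, 0)))
```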
Now let’s go back to automated UV mapping. As mentioned, many applications now allow artists to paint directly on the model. Some of these methods are based on an automatic UV creation system, which solves the problem of hiding the seams. Another approach, called Polypainting, allows you to paint directly on the model without even using UV. With this method, color is directly associated to the model’s geometry. Each vertex can be painted a different color, so that the surface is shaded with various layers of a combination of colors on each point of the surface.
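Polypainting's per-vertex storage can be sketched with barycentric interpolation: each triangle corner carries its own color, and every point inside the triangle is a weighted blend of the three. The function name is my own, not from any sculpting package:

```python
def interpolate_vertex_colors(colors, bary):
    """Blend three per-vertex colors using barycentric weights (w0, w1, w2),
    which sum to 1 -- the basis of polypainted surfaces without UVs."""
    return tuple(
        sum(c[i] * w for c, w in zip(colors, bary))
        for i in range(3)
    )

corners = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
# The triangle's centre blends the three corner colors equally:
print(interpolate_vertex_colors(corners, (1 / 3, 1 / 3, 1 / 3)))
```

Because the color lives on the geometry itself, painting resolution is tied to vertex density: a finely subdivided mesh can hold fine color detail, while a coarse one cannot.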
These processes are often used to create program-generated static objects which are then touched up with software like Photoshop. Some programs allow users to extract and convert the information in image files, which can then be used as textures.
The advantage over traditional UV mapping is that it is far more intuitive, eliminating the technical side of the work, which artists appreciate. The drawback is that the resulting UVs are extremely complex, generating fragmented texture files that compress poorly and cannot practically be inspected or revised by hand. This can be a major hurdle for any future adjustments or touch-ups.