This is something that was bugging me for a little while. Cycles offers leaps and bounds in rendering quality over the classic Blender engine; the downside is that we all need to learn how to use it in a new way.
I’m going to cover the simplest thing I can think of first, which is texturing. It’s targeted at people who have never textured a thing before in their lives; so if you’re already familiar with a section, feel free to skip ahead.
Blender is an excellent 3D modeling tool (and incidentally a decent video editor as a side effect), but it isn’t meant for 2D work, and that’s what textures either are or ultimately rely on. Additionally, most of us are GIMP, Adobe Photoshop, or Inkscape fans and aficionados to begin with, and an additional image editor just isn’t necessary. So, Blender lets you use your image editor of choice, and settles for handling proper mapping of images to UV coordinates.
So, what is a UV coordinate?
It’s safe to say that we’re all quite familiar with X, Y, Z, rho, phi, theta, and their related coordinate systems; but it’s bad form to refer to a coordinate on a texture via X and Y. Yes, textures are arguably two-dimensional, but XYZ is used for points in space.
You don’t know, on the basis of a texture alone, where that point in space is going to be; nor should there be any confusion between the location of a point on a texture and the location of that point in 3D space (which, yeah, actually comes up a lot).
Additionally, as you come to understand textures better, you’ll discover two things. The first is that their coordinates are arguably six-dimensional–you need to consider color and alpha as well. But that’s the easy and boring part. The other note is that you can do a lot more with them than color a mesh; they’re just as useful for bump mapping, light mapping, expressing frequency density, and even animation. With a proper set of tools, there really isn’t any limit to what you can do with a texture.
So, nominally, U is a horizontal coordinate of an element on a texture, and V is a vertical component.
As a side note, you will occasionally find texture coordinates under the names S and T. The difference is a matter of convention, but in most cases this refers to whether the vertical component–V or T–moves “upward” or “downward”. The V axis generally faces “down” the image, the T axis faces “up”. However, if you understand why “up” and “down” are in quotation marks here, and recognize their arbitrary nature, then you already know how silly this can be.
Suffice it to say that they are a respectable alternative, and a flag that you might need to do some rotating or inverting of your image to get it to behave; and in the rare case of needing third and fourth components, the S/T convention extends more naturally (OpenGL, for instance, continues with P and Q).
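Following the convention above–V running “down” the image and T running “up”–converting between the two is a one-liner. A quick sketch in plain Python (my own helper, not anything Blender ships):

```python
def v_to_t(v: float) -> float:
    """Convert a V coordinate (origin at top, axis pointing down)
    to a T coordinate (origin at bottom, axis pointing up).
    The flip is its own inverse, so the same function converts back."""
    return 1.0 - v

# A point a quarter of the way "down" the image is
# three quarters of the way "up" it.
print(v_to_t(0.25))           # 0.75
# Flipping twice returns the original value.
print(v_to_t(v_to_t(0.25)))   # 0.25
```

This is exactly the “rotating or inverting” flag in practice: if your image comes in upside down, odds are the two conventions got crossed somewhere.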
Unwrapping a Mesh
I’m skipping over the techniques of creating a 3D shape (mesh, here) in Blender, as it’s somewhat tangential to the subject.
Remember to use the Cycles rendering engine, as it all works quite differently from the classic Blender rendering engine.
We’ll start with a basic cube; the starter file will work.
Drag from the top right corner of your 3D view until you have a new subwindow. From its bottom left menu, select “UV/Image Editor”. What you see will initially be fairly boring: a simple rectangle in the middle of the cell.
This is the direct view of your texture, in UV coordinates. Every face of your shape (in this case, the cube) samples its coloring from a position within the UV window. Of course, your cube isn’t even textured yet, so we’ll get started with something called unwrapping.
Remember when you were in grade school, and you learned to create a cube from a simple piece of paper? This classic drawing, a pair of scissors, and a little tape?
That was basically your first UV unwrap.
If you wanted to, you could have sketched a texture onto the image, and presuming you knew which edges were going to wrap to which, you could “texture” that paper cube.
UV unwrapping really isn’t any different than that; save that it’s digital, and with the Cycles material editor you can do much cooler things with it.
To unwrap, go to your 3D view, switch to edit mode, and hit “U”. A menu will pop up.
These are all different methods of UV unwrapping. I’m literally not even familiar with every method by which this can be done–it extends well beyond the menu–but I am going to go over some of my favorites.
Smart UV Project
This will basically unwrap your shape so that you have every face visible.
It is particularly helpful for simpler meshes, like our cube, or some UV spheres. Every modification made to each square’s (or polygon’s) image will be “painted” onto your mesh.
It’s easy to see that this default unfolding is quite different from the example above, which also works; unfortunately, if you tried to cut a cube out of a flat piece of paper like this, I imagine you’d have quite a bit of trouble folding it. (Cycles couldn’t care less.)
You’ll notice that the outline shape is styled exactly like a mesh; and in fact, you can grab and drag every point in it. Once a texture is assigned to it, any modification of a particular outline will alter how the image is painted onto the corresponding face; you basically have free rein with your texture.
This is possible because Blender uses a technique known as lerping (linear interpolation, for long) to determine the appropriate color for each pixel of your render. It basically lets you find a happy medium between one color and another, among other things. 3D programmers lerp roughly as often as they click the mouse, so clearly it’s useful and important.
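To make the idea concrete, here’s lerp and its two-dimensional cousin, bilinear interpolation, in plain Python–a sketch of the concept, not Blender’s actual sampling code:

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation: blend from a (at t=0) to b (at t=1)."""
    return a + (b - a) * t

def bilerp(c00, c10, c01, c11, u, v):
    """Bilinearly interpolate four corner values at fractional
    position (u, v): lerp across U along both rows, then lerp
    those two results across V."""
    top = lerp(c00, c10, u)
    bottom = lerp(c01, c11, u)
    return lerp(top, bottom, v)

# Halfway between black (0.0) and white (1.0) is mid-gray.
print(lerp(0.0, 1.0, 0.5))                    # 0.5
# The dead center of a 0/1 checker of corners averages out, too.
print(bilerp(0.0, 1.0, 1.0, 0.0, 0.5, 0.5))   # 0.5
```

When you drag a UV point around, this is what smooths the stretched texture over the face instead of leaving blocky gaps.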
However, a complete unwrap can be a nightmare for 2D artists, especially when it comes to complex meshes (like, say, the human body). For that, we have other methods.
Project from View
…and Project from View (Bounds).
Try hitting U again over the 3D view in edit mode, and this time, select Project from View. You’ll notice that the point map projected over the texture is a literal copying of the camera view (in this case, that means your view) of the mesh, flattened out. This has drawbacks, but it can be a godsend to 2D artists.
Ever see one of those paintings on a street or sidewalk with an absolutely convincing illusion of depth–the classic 3D street art? Then, as you walk around it or look at it from another angle, the depth disappears? That’s analogous to Project from View.
On exporting (which I’ll get to in a second), an artist can paint/draw/render whatever they feel the character should look like, from that angle. They will get a blindingly realistic portrayal, from that angle, in the final render. The drawback is that the projection has to stretch (via lerping) at increasingly radical angles, and if you aren’t careful, this can kill the magic in the final render.
The other thing worth noting is that, by default, the texture painting punches through to the other side of the object as well; but that isn’t as much of a problem as you might think. Most of your edit-mode vertex manipulation tools also work on the UV editor; and you also have the ability to manipulate the texture further in the Node editor, which we’ll cover later on.
The only difference that Project from View (Bounds) makes is that the projection is stretched out to cover the entire bounds of the UV image, which is usually a good thing; otherwise the unused space is still stored in the image, and it isn’t helpful in the end. However, if memory is a concern, it can be a good idea to keep the projection shrunk down to a minimum and simply crop your image accordingly.
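Conceptually, Project from View just discards the view’s depth axis, and the (Bounds) variant then stretches the result to fill the unit square. A toy Python sketch of that idea (function names are mine, not Blender’s API):

```python
def project_from_view(verts):
    """Flatten view-space (x, y, z) vertices to (u, v) by
    discarding the depth axis -- the essence of Project from View."""
    return [(x, y) for x, y, z in verts]

def normalize_to_bounds(uvs):
    """Stretch (u, v) points so their bounding box exactly covers
    the unit square -- the extra step in the (Bounds) variant."""
    us = [u for u, v in uvs]
    vs = [v for u, v in uvs]
    du = (max(us) - min(us)) or 1.0   # guard against a zero-width box
    dv = (max(vs) - min(vs)) or 1.0
    return [((u - min(us)) / du, (v - min(vs)) / dv) for u, v in uvs]

# A tilted square seen from the front: depth (z) varies, but the
# projection ignores it entirely.
square = [(2.0, 2.0, 5.0), (4.0, 2.0, 9.0), (4.0, 4.0, 5.0), (2.0, 4.0, 9.0)]
print(normalize_to_bounds(project_from_view(square)))
# [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Note how the varying depth values vanish without a trace–which is precisely why the illusion only holds from the original viewing angle.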
Exporting the UV Image
Under the UV/Image Editor’s UV menu, there is an option titled Export UV Layout (as of Blender 2.78). This will save your UV layout–inclusive of the vertices and segments–as a 2D image.
The default is to save as a PNG; however, in the bottom left you’ll see that SVG and EPS (Encapsulated PostScript) are also available. That should be handy for all you vector-graphics jockeys.
Go ahead and export your cube mapping. Open it in your favorite graphics editor.
You’ll find that all of the edges and vertices have been imprinted on the image; it is absolutely ready for you to paint a texture.
I can’t offer many texturing tips here–it’s too far from the core focus of this post. However, I might in the future.
Make your texture and save it.
Importing the UV Image
This is where we finally make the magic happen.
There are effectively two parts to applying a texture to a mesh. The first is in the Properties pane under the texture panel.
This all assumes that you still have your cube selected. See the square image at the top of the panel, the one with the checkerboard pattern on it? That’s texturing.
You’ll also notice the ubiquitous open image box, which I had my mouse over in this instance. Click on that, and guide it to your texture file. There are a number of options beneath it, but the most important two are Color Space, and Bounds Handling. The others can certainly be helpful, but they’re a bit beyond the scope of this tutorial. (I recommend that you either play with them until you’ve figured them out, like I would; or find a more specific tutorial.)
Color Space simply informs Blender of whether you’re using a colored image, or an image of non-color data. More than anything, this affects the way light plays on the surface of it.
I’m bringing it up mostly because the average uninformed user thinks that non-color data means black-and-white, which it doesn’t. (Try it out and see.) By default, for color data, Cycles converts the traditional sRGB color space to a linear color space. If your image is meant to carry raw data rather than color–a bump map, say–you probably don’t want that conversion; Non-Color Data prevents it. If you compare the two in a render, you’ll find that non-color data lacks a certain feeling of depth.
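If you’re curious what that conversion actually does, this is the standard sRGB-to-linear transfer function, in plain Python (the math itself, not Cycles’ code):

```python
def srgb_to_linear(c: float) -> float:
    """Standard sRGB-to-linear transfer function (IEC 61966-2-1):
    a small linear toe near black, a ~2.4 power curve elsewhere.
    This is the kind of conversion a color-data image gets, and a
    Non-Color Data image skips."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Mid-gray in sRGB is much darker in linear light:
print(round(srgb_to_linear(0.5), 4))  # 0.214
# The endpoints are fixed points -- black stays black, white stays white.
print(srgb_to_linear(0.0), srgb_to_linear(1.0))
```

That compression toward dark values is exactly the “feeling of depth” mentioned above: skip the conversion on a color image and the shading lands in the wrong part of the curve.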
Bounds Handling is basically how your texture behaves when the polygon reaches the edge of it. The options are Clip, Extend, and Repeat.
Clip basically means stop rendering texture there, and return to the default for out-of-bounds points. This can be quite handy for, say, decals.
Extend means stretch over the entire scope of the mesh.
Repeat, the default, means start over from the opposing edge of the texture.
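The three behaviors above can be modeled in a few lines of Python–a toy sketch of one coordinate axis, not Blender’s actual sampler:

```python
def handle_bounds(coord: float, mode: str):
    """Map an out-of-bounds texture coordinate according to the
    three bounds-handling modes.  Returns None under 'clip' to
    signal "fall back to the default out-of-bounds color"."""
    if 0.0 <= coord <= 1.0:
        return coord                      # in bounds: nothing to do
    if mode == "clip":
        return None                       # default color takes over
    if mode == "extend":
        return min(max(coord, 0.0), 1.0)  # smear the edge texel outward
    if mode == "repeat":
        return coord % 1.0                # wrap to the opposing edge
    raise ValueError(f"unknown mode: {mode}")

print(handle_bounds(1.25, "clip"))    # None
print(handle_bounds(1.25, "extend"))  # 1.0
print(handle_bounds(1.25, "repeat"))  # 0.25
```

The decal trick falls out of the clip case: everywhere the coordinate leaves the image, the underlying material simply shows through.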
You might notice, if you skipped ahead and tried rendering, that your mesh doesn’t have a texture yet. Well, that’s because we haven’t told it what to do with that image. As I said earlier, there are countless possibilities.
Go to your materials tab–that’s the one right next to texture, with the checkered sphere on it–and add a new material. Enable nodes with the giant button under surface, and for the moment, choose “Diffuse BSDF” under the shader menu.
Remember that corner that you dragged to create the new cell in the Blender window? We’re going to do it again; this time, I suggest you do it from the UV/Image Editor cell’s top right corner, to the left. You’ll get another cell. Switch it (via the bottom left menu) to Node Editor.
This is where a lot of the changes with Cycles, as far as process goes, come into play. You can build your shaders with a spaghetti graph. Many of the controls from the 3D view are still available. Hover over an empty spot, hit Shift+A, and select Texture, then Image Texture.
Click on the image icon on the left of the Image Texture node, and you’ll get a list of available (loaded) images. Pick out your texture image.
Drag from the yellow circle on the right of Image Texture to the yellow circle on the left of Diffuse BSDF; and the color will be mapped to the image texture.
However, your cube just went black, didn’t it? WTH, right? Not so much; the problem is that, like I said earlier, images can also be used for animations, which may involve panning and scaling. So, Cycles needs to know which UV coordinate it’s generating a color for.
Shift+A in an exposed area and select Input, then UV Map. Drag from the innocuous-looking blue circle on the right of the UV Map node to the blue circle on the left of Image Texture (labeled “Vector”).
Now you’re ready to go. Make sure your lighting is right (I suggest an area lamp tilted to face the cube, tuned to around 2000 in strength), hit F12 to render an image, and observe your beautiful textured cube.