CMSC 23700 Lab 3: Texturing Objects & Per-Pixel Lighting Computations

Lab 3 Objectives

To bring more realism into our scenes, we are going to learn about two important concepts in computer graphics: texture mapping and lighting. We won't go into full detail about how they work (you'll learn that in lecture) but instead learn how to set them up in OpenGL. In this week's lab you'll learn about the following:

  1. Mapping 2D textures onto the faces of a 3D object.
  2. Creating, configuring, and sampling textures in OpenGL.
  3. Performing per-pixel lighting computations in the fragment shader.

Texture Mapping

At a high level, texture mapping means applying any type of image to one or more faces of a 3D object. Texturing an object involves three steps:

  1. Loading the image (i.e., the texture) into a texture object, which allows OpenGL to supply it to the shader program.
  2. Loading a set of coordinates, known as texture coordinates, for each vertex. Since a triangle can be transformed (i.e., translated, rotated, scaled), we attach texture coordinates to the vertices so that the texture follows their movement and stays fixed to the face of the object. During rasterization, the GPU will interpolate the texture coordinates across the triangle and provide them to the fragment shader.
  3. Inside the fragment shader, we map these interpolated texture coordinates into the texture. This action is known as sampling, which results in a texel (i.e., a pixel in a texture) being produced. The texel often contains a color, which is used as the "out" color for the fragment.
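
For example, the sampling step in the fragment shader comes down to a single call to the built-in texture function. The following is a minimal sketch; the names sampler, f_texCoord, and fragColor are placeholders, not names required by the lab:

uniform sampler2D sampler;   // the texture to sample from
in vec2 f_texCoord;          // interpolated texture coordinates from the rasterizer
out vec4 fragColor;          // the fragment's output color

void main ()
{
    fragColor = texture(sampler, f_texCoord);   // look up the texel at (U,V)
}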

OpenGL supports several types of textures such as 1D, 2D, and 3D that can be used for different techniques. In this course, we will mostly be working with 2D textures.

Texture coordinates are specified in texture space, which is simply the normalized range [0,1]. By convention, we use U and V as the axes of texture space, where U corresponds to the x-axis and V corresponds to the y-axis. OpenGL treats U as increasing from left to right and V as increasing from bottom to top, so the origin (0,0) is the bottom-left corner of the texture.

Thus, for this lab we are going to specify the texture coordinates for the cube as follows:
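
As an illustration, a plausible UV assignment for a single wall is sketched below; the vector type (glm::vec2) and the corner ordering are assumptions that you should match to your own wall data:

// one possible UV assignment for the four corners of a wall; the order
// must line up with the order of the wall's vertex positions
glm::vec2 texCoords[4] = {
    glm::vec2(0.0f, 1.0f),   // upper-left corner of the texture
    glm::vec2(0.0f, 0.0f),   // lower-left corner
    glm::vec2(1.0f, 0.0f),   // lower-right corner
    glm::vec2(1.0f, 1.0f)    // upper-right corner
};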

Problem for yourself: Why have normalized texture coordinates?

Part 1: Textures in OpenGL

The sections below discuss creating textures and using them during the rendering process.

Generate/Set Up a Texture

The CS237 library provides a few wrapper functions around the OpenGL texture functions that allow you to easily load an image and create/set up a texture based on that image:

//1: load the image from a PNG file
cs237::image2d * image = new cs237::image2d("PATH_TO_PNG_FILE");

//2: create a texture object from the image data
cs237::texture2D * texture = new cs237::texture2D(GL_TEXTURE_2D, image);

//3: bind the texture to its target
texture->Bind();

//4: set the sampling parameters for the texture
texture->Parameter(GL_TEXTURE_MIN_FILTER, GL_LINEAR);
texture->Parameter(GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  1. The cs237::image2d type is part of the cs237 texture library. Creating a cs237::image2d object loads a 2D image from a PNG file into memory, which can then be loaded into an OpenGL texture object. You only need to specify the file path when creating the object, and it handles the rest.
  2. Next, we create the OpenGL texture from the image data we previously loaded. The constructor takes two arguments: the target and the image data that will be loaded into the texture. The target is OpenGL specific and tells OpenGL what type of texture this will be. In this course it will almost always be GL_TEXTURE_2D, but there are other 2D targets you can use (e.g., GL_PROXY_TEXTURE_2D, GL_TEXTURE_RECTANGLE, etc.).
  3. We are about to set up additional parameters for this texture, so we need to tell OpenGL that "Hey, the upcoming texture calls are related to this particular texture object", similar to binding a VBO and then pushing data into the buffer. The texture already knows its target (i.e., GL_TEXTURE_2D), so the Bind() function binds it to that target. Multiple texture objects can be bound simultaneously, as long as they are bound to different targets. For example, I could bind a 3D texture to the GL_TEXTURE_3D target and a 2D texture to the GL_TEXTURE_2D target and work on them both at the same time.
  4. The Parameter function of a texture initializes many aspects of the texture sampling operation. Rarely are a texture and the triangle it covers exactly proportional to each other; usually the triangle is a bit smaller or larger than the texture, and we use these parameters to tell OpenGL how to handle the mismatch. The magnification parameter (GL_TEXTURE_MAG_FILTER) handles the case where the triangle is larger than the texture (i.e., it is very close to the camera), which means several pixels are covered by the same texel. The minification parameter (GL_TEXTURE_MIN_FILTER) handles the case where the triangle is smaller than the texture (i.e., it is very far from the camera), which means several texels are covered by the same pixel. Here we select the linear interpolation filter (GL_LINEAR) for both parameters, which provides good-looking results.

Rendering a Texture

The code below needs to run whenever you use a texture to render an object:

//1: Activate the first texture unit. 
CS237_CHECK(glActiveTexture(GL_TEXTURE0));

//2: Bind the texture to its target.
texture->Bind();

//3: Assign the uniform sampler to the first texture unit.
cs237::SetUniform(_samplerLoc, 0);
  1. In order to use a texture inside the shader program, we need to bind it to a texture unit. The texture then lives inside the texture unit, and the shader uses the texture unit to get access to the texture object. A benefit of a texture unit is that it can hold several texture objects simultaneously, as long as the textures are of different types (i.e., 1D, 2D, or 3D textures). Thus, before every draw call we need to activate a texture unit and then bind the texture object to the activated texture unit so that it can be sampled within the fragment shader.

    The function glActiveTexture activates a texture unit. The function takes an enumeration (i.e., GL_TEXTURE0, GL_TEXTURE1, GL_TEXTURE2, and so on) that corresponds to which texture unit to activate. You can have more than one texture unit activated simultaneously, and the number available depends on the capability of your graphics card.

  2. Next, with the texture unit activated (i.e., texture unit 0), we bind our texture to this texture unit by calling the Bind() function again.
  3. Lastly, we need to assign the uniform sampler variable defined in the fragment shader to the texture unit that it will use for sampling.

    The function cs237::SetUniform sets the index of the texture unit to use for sampling. _samplerLoc is the uniform location of the sampler defined in the fragment shader, and the 0 tells the sampler that it should sample from texture unit 0. For example, if you had the following sampler variable defined in your fragment shader:

    uniform sampler2D sampler;

    then you will use the UniformLocation function to get its location:

    _samplerLoc(sh->UniformLocation ("sampler")),

    where "shader" is the shader program and then you can assign the texture unit it points to by using glUniform1i.

    Note: the actual index of the texture unit is used here, and NOT the OpenGL enum GL_TEXTURE0 (which has a different value).
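
Putting the pieces together, the per-draw texture setup might look like the following sketch inside your rendering code (the member names _texture and _samplerLoc are illustrative, following the snippets above):

CS237_CHECK(glActiveTexture(GL_TEXTURE0));   // 1: activate texture unit 0
this->_texture->Bind();                      // 2: bind the texture to unit 0
cs237::SetUniform(this->_samplerLoc, 0);     // 3: point the sampler at unit 0
// ... then issue the draw call for the mesh as usual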

Part 2: Shaders & Per-Pixel Light Computations Setup

In order to perform per-pixel lighting on an object, you will need to supply normals for each vertex defined within your mesh. In OpenGL, you will pass these normals into the vertex shader (along with your vertex positions and texture coordinates) by using vertex attributes. This means you will need to create a VBO for your normals and set their attribute information, just like vertex positions. You will then pass the normal from the vertex shader as an "out" variable to the rasterizer.
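
A minimal sketch of that setup follows, assuming the same pattern used for the position VBO in earlier labs; normBuf, NORM_LOC, numVerts, and this->norms are placeholder names, and glm::vec3 stands in for whatever 3D vector type your mesh uses:

// create and fill a VBO for the per-vertex normals (assumes the mesh's VAO is bound)
GLuint normBuf;
CS237_CHECK(glGenBuffers(1, &normBuf));
CS237_CHECK(glBindBuffer(GL_ARRAY_BUFFER, normBuf));
CS237_CHECK(glBufferData(GL_ARRAY_BUFFER, numVerts * sizeof(glm::vec3), this->norms, GL_STATIC_DRAW));

// NORM_LOC is the attribute location of the normal "in" variable in the vertex shader
CS237_CHECK(glVertexAttribPointer(NORM_LOC, 3, GL_FLOAT, GL_FALSE, 0, nullptr));
CS237_CHECK(glEnableVertexAttribArray(NORM_LOC));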

Note: the interpolated normal vectors are not guaranteed to be unit length, so you will need to renormalize them in the fragment shader. See the fragment shader below.

There will also be additional information you'll need to pass to the fragment shader in relation to lighting (e.g., ambient, direction, intensity, etc.). This can be done by defining these properties as uniform variables and using them in the fragment shader where the lighting will be computed.
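
On the C++ side, this might look like the sketch below. The uniform names are illustrative, and the vector-valued cs237::SetUniform overloads are an assumption, so check the cs237 header for the actual interface:

// look up the lighting uniforms once after building the shader program
// (names must match the declarations in your fragment shader)
_lightDirLoc = sh->UniformLocation("lightDir");
_lightIntLoc = sh->UniformLocation("lightIntensity");
_ambIntLoc   = sh->UniformLocation("ambIntensity");

// set them before drawing (assumes SetUniform overloads for vec3 uniforms)
cs237::SetUniform(_lightDirLoc, glm::vec3(-1.0f, -1.0f, -1.0f));
cs237::SetUniform(_lightIntLoc, glm::vec3(0.9f, 0.9f, 0.9f));
cs237::SetUniform(_ambIntLoc,   glm::vec3(0.1f, 0.1f, 0.1f));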

Your shader source files should look something like this:
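
(The pair below is a minimal sketch rather than the required solution; the GLSL version line, the variable names, and the ambient-plus-diffuse lighting model are assumptions to adapt to your own code.)

// vertex shader -- a minimal sketch
#version 410

uniform mat4 modelView;    // model-view transform
uniform mat4 projection;   // projection transform

in vec3 position;          // per-vertex position
in vec2 texCoord;          // per-vertex texture coordinate
in vec3 normal;            // per-vertex normal

out vec2 f_texCoord;       // passed to the rasterizer for interpolation
out vec3 f_normal;

void main ()
{
    gl_Position = projection * modelView * vec4(position, 1.0);
    f_texCoord = texCoord;
    f_normal = normal;
}

// fragment shader -- a minimal sketch of per-pixel ambient + diffuse lighting
#version 410

uniform sampler2D sampler;      // the texture bound to texture unit 0
uniform vec3 lightDir;          // direction toward the light
uniform vec3 lightIntensity;    // directional light intensity
uniform vec3 ambIntensity;      // ambient light intensity

in vec2 f_texCoord;             // interpolated texture coordinate
in vec3 f_normal;               // interpolated normal (not unit length!)

out vec4 fragColor;

void main ()
{
    // renormalize the interpolated normal
    vec3 norm = normalize(f_normal);

    // per-pixel lighting: ambient plus Lambertian diffuse
    vec3 light = ambIntensity + max(dot(norm, normalize(lightDir)), 0.0) * lightIntensity;

    // modulate the sampled texel by the computed lighting
    fragColor = texture(sampler, f_texCoord) * vec4(light, 1.0);
}

Note that this sketch assumes the light direction is expressed in the same coordinate space as the normals; in general you may need to transform the normals (e.g., by a normal matrix) before lighting.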


Part 3: Actual Lab (Texture a Cube)

Inside your lab3/src directory you will see an updated view.hxx and view.cxx that include a new representation of the cube. A cube is now defined by six walls, each of which includes information such as the vertex positions, color, texture coordinates, and the normal for each vertex. You should convert these walls into a mesh representation by using the Mesh struct definition and the loading functions; since each wall becomes its own mesh, you will probably want to store the meshes in an array (one per wall). Use GL_TRIANGLE_FAN instead of GL_TRIANGLES as the drawing primitive target; a sketch of this conversion is shown below.
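
The following sketch assumes a Mesh struct like the one from the previous labs, with LoadVertices/LoadTexCoords/LoadNormals-style helpers; these names and the cube.wall fields are placeholders for your own interface:

// convert the six walls into meshes (names are placeholders)
Mesh *walls[6];
for (int i = 0;  i < 6;  i++) {
    // GL_TRIANGLE_FAN lets the four corners of a wall form two triangles
    walls[i] = new Mesh(GL_TRIANGLE_FAN);
    walls[i]->LoadVertices(4, cube.wall[i].corners);
    walls[i]->LoadTexCoords(4, cube.wall[i].texCoords);
    walls[i]->LoadNormals(4, cube.wall[i].norms);
}

Here are a couple of additional notes about getting texturing working: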