GLSL Shaders
Introduction

An Introduction to These Pages
by David Cornette
Computer Science Department
University of Southern Maine
Faculty Advisor: Dr. Bruce MacLeod


These pages detail the Master's project that I have done on GLSL shaders.  This page gives an overview of the project, and on the other pages you will find additional information about the topic, as well as the source code for my project.

In computer graphics, a common goal is to make photorealistic images, that is, images that look as much as possible like photographs, or like how things look in the real world.  The surfaces of real objects are usually quite complex, with a variety of different colors arranged in different patterns.  Even objects that appear to have a single color will show some variations across their surface.  These may be slight variations in color, or they might be small bumps and pits, or perhaps variations in how much light different points of the surface reflect.  This means that objects in computer graphics should not all be made of solid, undifferentiated colors if they are to look realistic.  To the right is an example of a solid colored computer graphics object.  Notice how it looks too perfect, and does not look very realistic.  The teapot shape is often used for demonstration purposes in computer graphics, since it is relatively simple but has more interesting geometry than a sphere or a torus.

[Image: Solid colored teapot rendered using the OpenGL fixed functionality.]

[Image: A teapot with decal texture mapping.]

One way to create variations and details on the surface of a computer-generated object is to use texture maps.  A texture map is a picture which is placed on the surface of the object like a decal.  Since a picture is two dimensional, there must be a mapping between the three-dimensional surface of the object and the two-dimensional image.  For complex objects like living creatures, it can be difficult to create this mapping in such a way that there are no visible seams and the texture does not get stretched.  On the left is an image of the teapot with a simple image file wrapped onto its surface.  Notice the stretching and pixelation of the image at the top of the handle.  This is a common problem with image mapping.  To eliminate the pixelation, it is necessary to use a larger image.
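In GLSL terms, the decal lookup itself is a single texture fetch in the fragment shader.  The sketch below is illustrative rather than part of the project code: the sampler name decal is a placeholder that the host program would bind to a texture unit, and it assumes the vertex stage has written the texture coordinate into the built-in gl_TexCoord[0].

// Illustrative fragment shader for decal texturing (not project code).
uniform sampler2D decal;  // placeholder name; bound by the host program

void main()
{
    // Look up the image color at this fragment's interpolated
    // two-dimensional texture coordinate.
    gl_FragColor = texture2D(decal, gl_TexCoord[0].st);
}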

There are additional problems with trying to use a photograph as a texture map.  A photograph taken at an oblique angle will not look right when mapped onto a surface.  Furthermore, the lighting of the real world object can cause problems.  Consider a photograph of a brick wall which is lit by the sun's rays coming from the right hand side.  If the image is mapped onto a wall in a computer generated world and the scene is lit with lights coming from the left, the wall will not look right.  Another problem with using texture maps to color the surface of an object is that any image is of a limited size.  In order to texture a large brick wall with a photograph, there are two possibilities.  The photograph can be taken of a large wall from far away, in which case the wall will lack detail when viewed from up close, or a close-up photograph may be taken and the image tiled over the larger wall.  This might look acceptable up close, but the tiling would be noticeable from further away.

A solution to these shortcomings of texture maps is to use procedural textures.  In general, a procedural texture is one which is generated by an algorithm rather than stored as data.  Typically, a procedural texture is a function which takes as input the three-dimensional coordinates of a point on the surface of the object and returns the color of the object at that point.  The function may take other parameters as well, allowing a single function to be used for a variety of similar but different textures.  The algorithm will usually require far less storage than any image map that could be used, and a procedural texture is not limited to any particular size or scale.  Procedural textures can also be defined over more than two dimensions, which means that objects can appear to be carved out of some material.  The function can even be defined over four dimensions, allowing it to vary over time.  To do something similar with texture maps would require many different images, perhaps one for each frame that is to be rendered.
The function that defines a procedural texture is often part of a shader.  A shader is a computer program that calculates how to render a particular object.  In addition to calculating colors with procedural textures,  shaders can deform the surface of an object, make it appear to be bumpy, and determine how the light sources in the scene illuminate the object.
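To make this concrete, here is a small sketch of what such a shader pair might look like in GLSL.  It is illustrative rather than taken from the project source: the checker colors are arbitrary, and the scale uniform is a hypothetical parameter the host program would set.  The vertex shader passes each point's object-space position to the fragment shader, which computes a color from those three-dimensional coordinates.

// Illustrative 3D checker procedural texture (not project code).
// Vertex shader: pass the object-space position along.
varying vec3 objectPos;

void main()
{
    objectPos = gl_Vertex.xyz;  // 3D coordinates of this surface point
    gl_Position = ftransform();
}

// Fragment shader: compute a color from the 3D position.
varying vec3 objectPos;
uniform float scale;  // hypothetical parameter: checker cells per unit

void main()
{
    // Divide space into cubes and alternate between two colors,
    // so the object appears carved out of a checkered material.
    vec3 cell = floor(objectPos * scale);
    float parity = mod(cell.x + cell.y + cell.z, 2.0);
    gl_FragColor = mix(vec4(0.9, 0.9, 0.8, 1.0),
                       vec4(0.2, 0.3, 0.6, 1.0),
                       parity);
}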

Shaders have been around since the 1980s.  One of the most popular languages for writing shaders is the RenderMan shading language.  RenderMan has been used in the production of many popular motion pictures.  The rendering of these motion pictures can take a long time.  Render farms with many powerful computers are used for this process.  Once they have rendered all of the individual frames, those frames are assembled into the movie.

When rendering a movie, this waiting is acceptable.  However, for other purposes, such as scientific visualization, computer aided design, and computer gaming, images must be rendered in real time.  To speed up the rendering in these cases, accelerated graphics cards are used to do most of the rendering work.
Until recently, procedural texturing was not possible with this hardware rendering.  That has changed: many video cards are now programmable, and these graphics cards are capable of running shaders directly.  This enables procedural textures to be used for real time rendering.

There are three main languages that are used to write these real time shaders: HLSL, which can be used with the DirectX API; Cg, which was created by NVIDIA; and GLSL, which is part of the new version 2.0 of the OpenGL standard.
Before a shader can be used, there must be a model for it to shade.  3D artists create these models using some sort of modeling software.  There are a number of different modeling programs, such as Maya, 3D Studio Max, Lightwave, and Blender.  Here is an image of Blender, and a model of its mascot, Suzanne.

[Image: The modeling software Blender, with its mascot.]

Once the artist has completed making the model, it needs to be stored in a file in some format.  The dragon model seen on these pages was modeled in Blender and exported to an .obj file, which is a text-based 3D file format.
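As a point of reference, an .obj file describes geometry as plain text: "v" lines give vertex positions, "vn" lines give normals, and "f" lines build faces by indexing (starting from 1) into those lists.  The tiny illustrative file below describes a single triangle; it is not an excerpt from the dragon model.

# A minimal .obj file: one triangle sharing a single normal.
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vn 0.0 0.0 1.0
f 1//1 2//1 3//1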

A program can then read in the exported file and use it to draw the object.  The images found on these pages were produced by a program written in Java.  The Java program can load the files exported from Blender, and it uses JOGL to display them.  JOGL is a binding of the OpenGL API for Java.  Even though Java is an object-oriented language, JOGL was designed to be a fairly flat API, so using it is very similar to using OpenGL in C.

The Java program must read in the source code of the GLSL shader, and then tell OpenGL to compile it and use it for drawing.  Here is a snippet of code that does this; it assumes that java.io.BufferedReader and java.io.FileReader are imported and that the enclosing method handles any IOException.

// Create handles for a vertex shader and a fragment shader.
int v = gl.glCreateShader(GL.GL_VERTEX_SHADER);
int f = gl.glCreateShader(GL.GL_FRAGMENT_SHADER);

// Read the vertex shader source file into a single string.
BufferedReader brv = new BufferedReader(new FileReader("vertexshader.glsl"));
String vsrc = "";
String line;
while ((line = brv.readLine()) != null) {
  vsrc += line + "\n";
}
brv.close();

// glShaderSource expects an array of strings; passing null for the
// length array means each string is used in its entirety.
gl.glShaderSource(v, 1, new String[] { vsrc }, (int[]) null, 0);
gl.glCompileShader(v);

// Read and compile the fragment shader in the same way.  The line
// variable declared above is reused here.
BufferedReader brf = new BufferedReader(new FileReader("fragmentshader.glsl"));
String fsrc = "";
while ((line = brf.readLine()) != null) {
  fsrc += line + "\n";
}
brf.close();
gl.glShaderSource(f, 1, new String[] { fsrc }, (int[]) null, 0);
gl.glCompileShader(f);

// Attach both compiled shaders to a program object, link the program,
// and make it the one used for subsequent drawing.
int shaderprogram = gl.glCreateProgram();
gl.glAttachShader(shaderprogram, v);
gl.glAttachShader(shaderprogram, f);
gl.glLinkProgram(shaderprogram);
gl.glValidateProgram(shaderprogram);

gl.glUseProgram(shaderprogram);

Once this is done, any polygons drawn using JOGL method calls will be drawn using the GLSL shader, rather than the ordinary OpenGL fixed functionality.
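For completeness, here is a minimal sketch of what vertexshader.glsl and fragmentshader.glsl might contain.  These are placeholder shaders that simply draw everything in a solid color; they are not the shaders used to produce the images on these pages.

// vertexshader.glsl (minimal sketch): transform the vertex just as the
// fixed functionality would.
void main()
{
    gl_Position = ftransform();
}

// fragmentshader.glsl (minimal sketch): color every fragment solid orange.
void main()
{
    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);
}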