I have read up to chapter 5 in the CG textbook (almost halfway done) and I thought it would be good to do a general summary of what I have learned so far. Granted I might be misinformed or have fragmented knowledge about some aspects, but I hope I will be corrected in the comments.
What are Shaders and How Do They Work?
First off, we are talking about the programming language Cg, created by NVIDIA. The point of shaders and the Cg language is to let you communicate with your graphics card through code, controlling the shape, appearance and motion of the objects it draws. Essentially it allows you to control the graphics pipeline, like a boss. Cg programs control how vertices and fragments (potential pixels) are processed, which means our Cg programs are executed inside our graphics cards.
Shaders are a powerful rendering tool for developers because they allow us to utilise our graphics cards. Since our CPUs are suited toward general purpose operating system and application needs, it is better to hand rendering to the GPU, which is tailor built for it. GPUs are built to efficiently process and rasterize millions of vertices and billions of fragments per second. The great thing about Cg is that it gives you the advantages of a high level language (readability and ease of use) while approaching the performance of low level assembly code. Cg does not provide pointers or memory allocation tools, but it does support vectors, matrices and many other math operations that make graphics calculations easier.
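As a quick illustration of that vector support, here is a small sketch of my own (not from the book) showing how Cg lets you write graphics math directly. `reflectSketch` is a hypothetical helper name; Cg actually ships a built-in `reflect()` that does the same job:

```cg
// Sketch: Cg's built-in vector types and math intrinsics.
// float3 is a first-class vector type, and dot() is a built-in.
float3 reflectSketch(float3 incident, float3 normal)
{
    // r = i - 2 (n . i) n  -- mirror a direction about a surface normal
    return incident - 2.0 * dot(normal, incident) * normal;
}
```

Notice there is no loop over components and no vector library to import; the language itself understands the math, which is exactly the "high level readability" advantage mentioned above.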
Cg is not meant to be used as a full-fledged programming language. We still build our 3D applications in C++ (or any language), then use a shader language (Cg, HLSL, GLSL, RenderMan etc.) to offload and optimise our graphics work on the GPU.
The Graphics Pipeline
In order to understand how shaders work, we have to have a general understanding of how the graphics pipeline (a series of stages operating in parallel) works. First your 3D application sends several geometric primitives (polygons, lines and points) to your GPU. Each vertex has a position in 3D space along with attributes such as its colour, texture coordinates and a normal vector.
Vertex Transformation
This is the first processing stage of the pipeline, where several mathematical operations are performed on each vertex. These operations can be:
- Transformations (vertex space to screen space) for the rasterizer
- Generating texture coordinates for texturing, and lighting the vertex to determine its colour
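The steps above can be sketched as a minimal Cg vertex program. This is an illustrative example of my own (the struct and parameter names are not from the book); the semantics after each colon tell the GPU how a value feeds the rest of the pipeline:

```cg
// Illustrative output structure for a Cg vertex program.
struct VertexOutput {
    float4 position : POSITION;  // clip-space position for the rasterizer
    float4 color    : COLOR;     // colour interpolated across the primitive
};

VertexOutput main(float4 position : POSITION,
                  float4 color    : COLOR,
                  uniform float4x4 modelViewProj)
{
    VertexOutput OUT;
    // Transform the vertex from object space to clip space
    OUT.position = mul(modelViewProj, position);
    OUT.color    = color;        // pass the colour through unchanged
    return OUT;
}
```

The `modelViewProj` matrix is a uniform parameter supplied by the 3D application, which is exactly the division of labour described earlier: the C++ side sets up the data, the Cg side transforms every vertex.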
Primitive Assembly and Rasterization
Once the vertices are processed, they are sent to this stage of the pipeline. First the primitive assembly step assembles each vertex into geometric primitives. This will result in a sequence of triangles, lines or points.
After assembly, the primitives need to be clipped. Since we are limited to a screen, we cannot view the entire scene. So, according to our view frustum, we clip polygons to it and discard those facing away from the viewer (culling). Once clipping is done, the next step is to rasterize: the process of determining which pixels are covered by a geometric primitive.
The last important item in this process is to understand the difference between pixels and fragments. A pixel represents a location in the frame buffer that holds colour, depth and other information. A fragment is the state needed to generate and update a particular pixel.
Fragment Texturing and Colouring
Now that we have all our fragments, the next set of operations determines each one's final colour. This stage performs texturing and other math operations that influence the final colour of each fragment.
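Here is a sketch of what a Cg fragment program for this stage might look like (again my own illustration, not from the book; `decal` is a hypothetical sampler name). It samples a texture and tints it by the colour interpolated from the vertices:

```cg
// Illustrative Cg fragment program: texture lookup plus a tint.
float4 main(float2 texCoord : TEXCOORD0,
            float4 color    : COLOR,
            uniform sampler2D decal) : COLOR
{
    // tex2D fetches a filtered texel at the given texture coordinate;
    // multiplying by the interpolated vertex colour tints the result.
    return tex2D(decal, texCoord) * color;
}
```

The program runs once per fragment, so for a full-screen primitive it may execute millions of times per frame, which is why the GPU's parallel fragment throughput mentioned earlier matters so much.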
Raster Operations
This is the last stage of the graphics pipeline, and also one of the more complex ones. Once the completed fragments come out of the previous stage, the graphics API performs several operations on the incoming data. Some of them are:
- Pixel ownership test
- Scissor test
- Alpha test
- Stencil test
- Depth test
- Logic operations
Programmable Graphics Pipeline
So what was the point of talking about the pipeline? Now that we know how the fixed pipeline works and how normal graphics APIs send information to be processed, we can see where our shaders are executed and what they do.
The two important parts of this diagram are the programmable vertex processor that runs our Cg vertex programs and the programmable fragment processor that runs our Cg fragment programs. The biggest difference between the two is that the fragment processor allows for texturing.
Now that we know how the graphics pipeline works, we can create programs to manipulate it however we want. Next week we shall take a look at how to write Cg programs and how they fit into your 3D application.
Thank you for reading, -Moose