Level Up: Aftermath


I am writing this after commuting home from the Level Up convention/award ceremony/event.

It was truly a great and interesting event for a number of reasons.  I did not plan on going until I saw that our Game Design class that night was cancelled.  At that point the only thing stopping me from going was my own laziness.  I tried to get my other group members to join me, but I only managed to grab Kevin.  Probably the best one to bring anyway.

While we never had any intention of showing off our game, and hadn't worked on it exclusively for Level Up, as soon as we got there we began debugging.

At this point our game had no sounds, no HUD and an imbalanced combat system.  We did not have our blacksmith master menu system that would let the player level up their items and complete side quests.  We just had several battles linked together by a little rotating exclamation mark.

We left with Gary’s group and managed to get to the convention by 3pm; from then until 5pm Kevin and I were debugging.  I was fixing the shader lighting and Kevin was balancing the game for the showcase.

We essentially prepared for a huge QA session to gather a whole bunch of data.  I wish we had put more effort into getting a HUD into the game.  Even without the HUD, I learned a lot about the game that I would not have been able to find out through normal testing.

What needed to be fixed

Aside from the obvious points of making the game look prettier by adding higher resolution textures, the HUD/GUI and the rest of the shaders, we noticed that the collision system was a bit wonky and people were having issues with depth perception.  There needed to be stronger collision feedback for the player, since most of the time people were asking “am I even killing this monster?”  Also, the AI were updating way too fast and were very difficult.  People were getting cornered and died fairly quickly.  We need to allow players to exit the battle and upgrade their equipment rather than re-spawning them in the same level.

The great thing was that there were a lot of artsy girls and male programmers, along with some gamers, who tested our game.  I noticed that some guys were very aggressive and tried to parry and dodge while moving back and attacking.  Many inexperienced gamers just stood there, got surrounded and managed to die.  Granted, when Dr. Nacke and Dr. Hogue tested our game they both had issues with the missing HUD and the other obvious problems.

I also noticed that some people were having issues with the camera movement.  We need to tone down the intensity of the camera movement, since some people ended up staring into the ground because the camera acceleration was too fast for people with slower reflexes.  It would be good to let the player adjust the camera acceleration to their preference in the final version.

When people were in the forest area, they would walk around aimlessly and end up having a horde of enemies chase them while they ran backwards and attacked.  The forest area should have regions where only some beasts spawn and patrol.  Many times people spawned and were surprised that a wolf immediately began chasing them.  We need better and smoother transitions between levels that give players a safe zone they can chill in.  Players need to make the decision to enter a dangerous area and provoke an enemy attack.  Rather than having the AI walk towards you in a line, they should recognise that they are colliding with each other and push away to corner the player.  The AI need more behaviour that will allow for a more robust battle system than just the player running backwards and attacking.
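That “push away from each other” idea is essentially separation steering.  A minimal sketch of what I mean, with made-up names and constants (not code from our game):

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Push overlapping agents apart so a pack fans out around the player
// instead of stacking into a single-file line.
std::vector<Vec2> separate(const std::vector<Vec2>& agents,
                           float radius, float strength) {
    std::vector<Vec2> pushes(agents.size(), Vec2{0.0f, 0.0f});
    for (size_t i = 0; i < agents.size(); ++i) {
        for (size_t j = 0; j < agents.size(); ++j) {
            if (i == j) continue;
            float dx = agents[i].x - agents[j].x;
            float dy = agents[i].y - agents[j].y;
            float d = std::sqrt(dx * dx + dy * dy);
            if (d > 0.0f && d < radius) {
                // Push away from the neighbour, harder when closer.
                pushes[i].x += (dx / d) * strength * (radius - d) / radius;
                pushes[i].y += (dy / d) * strength * (radius - d) / radius;
            }
        }
    }
    return pushes;
}
```

Each frame you would add the returned push to each wolf's seek velocity, so the group naturally spreads out and flanks instead of forming a conga line.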

Overall, we need to give the player more and better feedback during gameplay.  We need to place our HUD into the game to tell the player how much stamina and health they have left.  We need sounds to reveal the location of enemies and tell players when they are actually hitting one.  We need to add more AI behaviour instead of endless seeking.

Lessons Learned

We went there as a 2-man team from a group of 6 people.  We went in unprepared, with few expectations.  By this time next year, we should have our game completed and use this event to test it with a different crowd, with several different versions to test different things.  There were different types of people playing our game, and it was evident in the way they played it.

For example:

  • Experienced gamers adapted very easily to the WASD controls and managed to survive for a while
  • Inexperienced gamers got completely destroyed and would have preferred a controller layout with simpler buttons
  • Players had to bend down or kneel on the floor and were in a very uncomfortable position playing the game, which gave a negative feel to the game before they even started

Next time we should prepare a variety of versions of our game with different control layouts, and a proper chair and screen for people to play on.  Also polish.  Our game needs to be very polished.  By this time next year everything should be implemented, leaving us to just perform balance and fun testing.

Is this level too hard? Do we need to increase our collision response? Should you be sprinting faster? Do you need more stamina?  We should be finding the answers to these questions in preparation for the final GameCon.

Final Notes

Overall it was a good event and I did learn a lot.  I got to bond with some fellow classmates from all years of the program.  We debugged and found errors in our game and code.

Some of the attendees offered awesome advice, and seeing them play the game and enjoy it regardless of the state it was in really made me happy.  Considering this was an unfinished game, I kept wondering how much better it could have been, and I deeply regret not preparing a better version for this day.

People were really impressed with the simple fact that we built our game from the ground up without using any middleware.  Some people even had a hard time believing that we were not outsourcing anything to external programs like Havok for physics.  This probably made me the most proud to be a part of UOIT Game Development.  That, and the fact that one group using Unity toon shading had several members who had no idea how to do edge detection and did not know what the Sobel operator was.

I am looking forward to attending more of these events and to building better polished games in the near future.  Time to get some sleep. GOOD NIGHT!

Thank you for reading,
– Moose

Shaders 103 – Lighting


By now you should know what shaders are, how they work, and how to integrate them into your code.  Since I have spent a lot of time putting lighting and whatnot into our game, I have become a bit of an expert with it.  So today I am going to go over how to do some fragment-based lighting.

Changes from OpenGL and Movement Matrices

While I didn’t do the lighting in our game last semester, I can tell you this much: you can’t take old OpenGL code with lighting and a whole bunch of glTranslate and glRotate calls and expect it to work with shaders.

The first thing we have to do is build a whole bunch of matrix functions: perspective, look-at, rotation, translation, multiplication, inversion and point transformation.  When you download the Cg API, some of the sample code does have these functions built in, but it expects you to know what they do and how they work.

Here is how we will now be rendering objects instead of using the ‘gl’ draw calls.

/*** Render brass solid sphere ***/

setBrassMaterial();

/* modelMatrix = translateMatrix * rotateMatrix */
makeRotateMatrix(70, 1, 1, 1, rotateMatrix);
makeTranslateMatrix(2, 0, 0, translateMatrix);
multMatrix(modelMatrix, translateMatrix, rotateMatrix);

/* invModelMatrix = inverse(modelMatrix) */
invertMatrix(invModelMatrix, modelMatrix);

/* Transform world-space eye and light positions to sphere's object-space. */
transform(objSpaceEyePosition, invModelMatrix, eyePosition);
cgSetParameter3fv(myCgFragmentParam_eyePosition, objSpaceEyePosition);
transform(objSpaceLightPosition, invModelMatrix, lightPosition);
cgSetParameter3fv(myCgFragmentParam_lightPosition, objSpaceLightPosition);

/* modelViewMatrix = viewMatrix * modelMatrix */
multMatrix(modelViewMatrix, viewMatrix, modelMatrix);

/* modelViewProj = projectionMatrix * modelViewMatrix */
multMatrix(modelViewProjMatrix, myProjectionMatrix, modelViewMatrix);

/* Set matrix parameter with row-major matrix. */
cgSetMatrixParameterfr(myCgVertexParam_modelViewProj, modelViewProjMatrix);
glutSolidSphere(2.0, 40, 40);

Now this may seem like a lot, but it is necessary for working with shaders.

The beginning, where we call the setBrassMaterial() function, is where we set the object’s material parameters.  We will get to that a bit later.  For now, think of it as your glColor call.

The first part, where we create the matrix using a simple rotation and translation, is fairly straightforward.  You pass in those parameters just as if you were making a normal glRotate or glTranslate call.  You can replace them with variables if you need the object to move; for now this object is stationary, so we do not.

The next part is where you multiply them to get your modelMatrix and invert it to get the inverse model matrix.  This is so we can calculate lighting in the sphere’s object space.  We then update the eye and light Cg parameters that we will see later.

The last bit of code builds the modelView and modelViewProj matrices, sends the result to the vertex shader, and actually draws the sphere.

Using Materials

The book uses this method of creating functions that set the emissive, ambient, diffuse, specular and shininess values, like this:

static void setBrassMaterial(void)
{
  const float brassEmissive[3] = {0.0, 0.0, 0.0},
              brassAmbient[3]  = {0.33, 0.22, 0.03},
              brassDiffuse[3]  = {0.78, 0.57, 0.11},
              brassSpecular[3] = {0.99, 0.91, 0.81},
              brassShininess   = 27.8;

  cgSetParameter3fv(myCgFragmentParam_Ke, brassEmissive);
  checkForCgError("setting Ke parameter");
  cgSetParameter3fv(myCgFragmentParam_Ka, brassAmbient);
  checkForCgError("setting Ka parameter");
  cgSetParameter3fv(myCgFragmentParam_Kd, brassDiffuse);
  checkForCgError("setting Kd parameter");
  cgSetParameter3fv(myCgFragmentParam_Ks, brassSpecular);
  checkForCgError("setting Ks parameter");
  cgSetParameter1f(myCgFragmentParam_shininess, brassShininess);
  checkForCgError("setting shininess parameter");
}


So this function just sets the colour of each of the light parameters that we want.  Using this we can make several material functions for different objects and control them independently in whatever way we want.  You can make a character, enemy and level material.  Right before you load your character, you can make their lighting bright so that they stand out.  For enemies, you can give them a bit of a red highlight to show the player that they pose a threat.

What to Initialise

Now that we are in our initCg() function, let us break it down into a vertex and a fragment area.

Vertex Initialisation

myCgVertexProfile = cgGLGetLatestProfile(CG_GL_VERTEX);
checkForCgError("selecting vertex profile");

myCgVertexProgram =
  cgCreateProgramFromFile(
    myCgContext,              /* Cg runtime context */
    CG_SOURCE,                /* Program in human-readable form */
    myVertexProgramFileName,  /* Name of file containing program */
    myCgVertexProfile,        /* Profile: OpenGL ARB vertex program */
    myVertexProgramName,      /* Entry function name */
    NULL);                    /* No extra compiler options */
checkForCgError("creating vertex program from file");
cgGLLoadProgram(myCgVertexProgram);
checkForCgError("loading vertex program");

#define GET_VERTEX_PARAM(name) \
myCgVertexParam_##name = \
cgGetNamedParameter(myCgVertexProgram, #name); \
checkForCgError("could not get " #name " parameter");

GET_VERTEX_PARAM(modelViewProj);


This is a fairly simple vertex initialisation.  The main point is to see that we grab the modelViewProj parameter.  If you go back up to our draw code, you can see where we update the myCgVertexParam_modelViewProj parameter.

Vertex Shader Code

void v_fragmentLighting(
float4 position : POSITION,
float3 normal   : NORMAL,

out float4 oPosition : POSITION,
out float3 objectPos : TEXCOORD0,
out float3 oNormal   : TEXCOORD1,

uniform float4x4 modelViewProj)
{
  oPosition = mul(modelViewProj, position);
  objectPos = position.xyz;
  oNormal = normal;
}

As you can see, this vertex shader is still simple.  We multiply our position by the modelViewProj matrix to get the clip-space position, and pass the object-space position and normal through to the fragment shader.

Fragment Initialisation

#define GET_FRAGMENT_PARAM(name) \
myCgFragmentParam_##name = \
cgGetNamedParameter(myCgFragmentProgram, #name); \
checkForCgError("could not get " #name " parameter");


GET_FRAGMENT_PARAM(globalAmbient);
GET_FRAGMENT_PARAM(lightColor);

/* Set light source color parameters once. */
cgSetParameter3fv(myCgFragmentParam_globalAmbient, myGlobalAmbient);
cgSetParameter3fv(myCgFragmentParam_lightColor, myLightColor);

This is not the full initialisation code.  This smidgen contains the new parameters that we will be passing into our fragment shader to compute our lighting.

Fragment Shader Code

void basicLight(
float4 position : TEXCOORD0,
float3 normal   : TEXCOORD1,

out float4 color : COLOR,

uniform float3 globalAmbient,
uniform float3 lightColor,
uniform float3 lightPosition,
uniform float3 eyePosition,
uniform float3 Ke,
uniform float3 Ka,
uniform float3 Kd,
uniform float3 Ks,
uniform float shininess)
{
  float3 P = position.xyz;
  float3 N = normalize(normal);

  // Compute emissive term
  float3 emissive = Ke;

  // Compute ambient term
  float3 ambient = Ka * globalAmbient;

  // Compute the diffuse term
  float3 L = normalize(lightPosition - P);
  float diffuseLight = max(dot(L, N), 0);
  float3 diffuse = Kd * lightColor * diffuseLight;

  // Compute the specular term
  float3 V = normalize(eyePosition - P);
  float3 H = normalize(L + V);
  float specularLight = pow(max(dot(H, N), 0), shininess);
  if (diffuseLight <= 0) specularLight = 0;
  float3 specular = Ks * lightColor * specularLight;

  color.xyz = emissive + ambient + diffuse + specular;
  color.w = 1;
}

This code takes the parameters that we pass in from our C++ code and computes emissive, ambient, diffuse and specular lighting.  Emissive and ambient are fairly easy to compute; diffuse and specular require some more work.

Emissive Light

Emissive light is light that is emitted, or given off, by a surface.  This can be used to simulate glowing.
Equation: emissive = Ke
Ke is the material’s emissive color

Ambient Light

Ambient light is light that has bounced around between different objects.  This can be used to set the mood of your environments: a grey ambient for smoggy cities, or a nice bright yellow ambient for forests and nature environments.
Equation: ambient = Ka * globalAmbient
Ka is the material’s ambient reflectance
globalAmbient is the color of the incoming ambient light

Diffuse Light

Diffuse light is light reflected off a surface equally in all directions.  Even if an object has small nooks and crannies, light will bounce off its rough surface.
Equation: diffuse = Kd * lightColor * max(N dot L, 0)
Kd is the material’s diffuse color
lightColor is the color of the incoming diffuse light
N is the normalised surface normal
L is the normalised vector toward the light source
P is the point being shaded

Specular Light

Specular lighting is light scattered from a surface around the mirror direction.  It is most noticeable on very shiny and metallic materials.  Unlike the above types of light, specular depends on the direction the viewer is looking from.  It also takes into account how shiny a surface is.
Equation:  specular = Ks * lightColor * facing * (max(N dot H, 0))^shininess
Ks is the material’s specular color
lightColor is the color of the incoming specular light
N is the normalized surface normal
V is the normalized vector toward the viewpoint
L is the normalized vector toward the light source
H is the normalized vector that is halfway between V and L
P is the point being shaded
facing is 1 if N dot L is greater than 0, and 0 otherwise


Then you add all the lights together and that is lighting in a nutshell.
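To check your understanding (or to debug the shader), the whole model is easy to reimplement on the CPU.  Here is a C++ translation of the fragment shader above — a sketch with my own little vector helpers, not code from our game:

```cpp
#include <algorithm>
#include <cmath>

struct V3 { float x, y, z; };

static V3 sub(V3 a, V3 b)      { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 add(V3 a, V3 b)      { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V3 mul(V3 a, V3 b)      { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static V3 scale(V3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(V3 a, V3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 normalize(V3 a)      { float l = std::sqrt(dot(a, a)); return scale(a, 1.0f / l); }

// CPU version of the fragment shader: emissive + ambient + diffuse +
// specular, summed exactly per the equations in this post.
V3 shade(V3 P, V3 N, V3 lightPos, V3 eyePos,
         V3 globalAmbient, V3 lightColor,
         V3 Ke, V3 Ka, V3 Kd, V3 Ks, float shininess) {
    V3 emissive = Ke;
    V3 ambient  = mul(Ka, globalAmbient);

    V3 L = normalize(sub(lightPos, P));
    float diffuseLight = std::max(dot(L, N), 0.0f);
    V3 diffuse = scale(mul(Kd, lightColor), diffuseLight);

    V3 V = normalize(sub(eyePos, P));
    V3 H = normalize(add(L, V));
    float specularLight = std::pow(std::max(dot(H, N), 0.0f), shininess);
    if (diffuseLight <= 0.0f) specularLight = 0.0f;  // facing term
    V3 specular = scale(mul(Ks, lightColor), specularLight);

    return add(add(emissive, ambient), add(diffuse, specular));
}
```

A handy sanity check: with the normal pointing straight at both the light and the eye, zero Ke/Ka/Ks, and pure white Kd and lightColor, the diffuse term alone should give (1, 1, 1).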


Thank you for reading,
– Moose

Shaders 102 – Code Integration


Today I am going to talk about how to integrate shaders into your code, starting with the layout of a shader program and how to hook it into your graphics application.

Basic Shader Code

Here is a vertex shader:

//vertex shader from chapter 3 of the Cg textbook

struct C3E2v_Output {
  float4 position : POSITION;
  float3 color    : COLOR;
  float2 texCoord : TEXCOORD0;
};

C3E2v_Output C3E2v_varying(
  float2 position : POSITION,
  float4 color    : COLOR,
  float2 texCoord : TEXCOORD0)
{
  C3E2v_Output OUT;

  OUT.position = float4(position, 0, 1);
  OUT.color = color;
  OUT.texCoord = texCoord;

  return OUT;
}


The program first begins with an output structure as follows:

struct C3E2v_Output {
  float4 position : POSITION;
  float3 color    : COLOR;
  float2 texCoord : TEXCOORD0;
};

Since we know that this is the vertex shader, and it has to pass values to the rest of the graphics pipeline, this structure holds the values that our shader will output.  By defining an output structure we can manipulate the items inside it.  Basically this is like a variable declaration for a function that outputs position, color and texture coordinates.

Last week we talked about how Cg has vectors and matrices integrated into their variable declaration.

  • float4 position : is essentially [x,y,z,w] where w=1.  If this were written in C++ it would be float position[4] = {x,y,z,w}
  • float3 color : is similar to the vector above, but this is used to represent our colour channels =[r,g,b]
  • float2 texCoord : is our U and V coordinates in our texture = [u,v]
While this program does not showcase the matrix declaration of Cg, here are some examples
  • float4x4 matrix1 : this is a four by four matrix with 16 elements
  • half3x2 matrix2 : this is a three by two matrix with 6 elements
  • fixed2x4 matrix3 : this is a two by four matrix with 8 elements
If you wanted to declare a matrix with some values, you would do it like this:
float2x3 matrix4 = { 1.0, 2.0, 3.0,
                     4.0, 5.0, 6.0 };


Next we have our entry function:

C3E2v_Output C3E2v_varying(
float2 position : POSITION,
float4 color : COLOR,
float2 texCoord : TEXCOORD0)

This is what defines our fragment or vertex program; it is similar to the main function in C/C++.  What it tells us is that our shader takes in position, colour and texture coordinates.  Since our structure above has the same parameters as our input, we know that we are going to manipulate these parameters and then output them.

Last we have our function body:

C3E2v_Output OUT;

OUT.position = float4(position,0,1);
OUT.color = color;
OUT.texCoord = texCoord;

return OUT;

First we start off by creating our structure object called OUT.  This code really doesn’t do much but set values in the structure equal to the inputs and pass them to the next stage in the pipeline.  The interesting piece of code is the OUT.position = float4(position,0,1) part.  It takes the incoming position, which only has two components (x,y), and converts it into a float4 by filling the last two components with 0 and 1 to get (x,y,0,1).

3D Graphics Application Integration

Creating Variables

So knowing how that code works is great; however, implementing Cg in your C++ code is where I normally spend most of my time when working with shaders.  The actual shader code is fairly easy to work with, but integrating it into your graphics application is where the real pain is.  The Cg book doesn’t really cover this explicitly, but it does have examples in the API under your /Program Files/NVIDIA Corporation/Cg/examples/ folder.

To start there are a number of things you have to declare, I normally do these as global variables:

static CGcontext myCgContext;
static CGprofile myCgVertexProfile,
                 myCgFragmentProfile;
static CGprogram myCgVertexProgram,
                 myCgFragmentProgram;
static CGparameter myCgVertexParam_constantColor;

static const char *myProgramName = "Varying Parameter",
*myVertexProgramFileName = "C3E2v_varying.cg",
*myVertexProgramName = "C3E2v_varying",
*myFragmentProgramFileName = "C2E2f_passthru.cg",
*myFragmentProgramName = "C2E2f_passthru";

Obviously the names of these do not have to be the same as what I have written, but the idea is to teach you what each one is.

The first part, creating the CGcontext, is the part I know the least about.  I believe it is where you initialise the Cg runtime.  So just be sure to ALWAYS do this.

The next part is creating your vertex and fragment profiles.  This is another thing to always do.

The next two parts are where you are given a lot of freedom.  The CGparameter variables will vary from program to program; these are the parameters that you set in your graphics application and send to your shader.  constantColor is just a variable we can send to our shader to replace the colour of every pixel or vertex.  Later on I will post on how we can send in parameters like diffuse light colour, light position, attenuation parameters and much more.

The last part is the program names.  This is where you define the name of the entry function for each shader and the name of its file.  Common entry names are fragment_passthru or vertex_passthru.

Initialise Shaders

The next step is where you actually create the shader program.  Somewhere before your glut main loop you should create an initCg() void function where you place all your initialisations.  The book places everything in main, which I find to be messy, so don’t do that.  It creates a lot of hard-to-read clutter.

myCgContext = cgCreateContext();
checkForCgError("creating context");
cgSetParameterSettingMode(myCgContext, CG_DEFERRED_PARAMETER_SETTING);

myCgVertexProfile = cgGLGetLatestProfile(CG_GL_VERTEX);
checkForCgError("selecting vertex profile");

myCgVertexProgram =
  cgCreateProgramFromFile(
    myCgContext,              // Cg runtime context
    CG_SOURCE,                // Program in human-readable form
    myVertexProgramFileName,  // Name of file containing program
    myCgVertexProfile,        // Profile: OpenGL ARB vertex program
    myVertexProgramName,      // Entry function name
    NULL);                    // No extra compiler options
checkForCgError("creating vertex program from file");
cgGLLoadProgram(myCgVertexProgram);
checkForCgError("loading vertex program");

This code is fairly simple.  The beginning part creates the CgContext.  Then we create our vertex profile.  Lastly we tell the runtime where our Cg file is and the name of our entry function.  You would do something similar for the fragment shader.

The checkForCgError calls help you debug.  If anything goes wrong in the shader setup at that point, the function will output an error message for that stage.

The other thing you can place in your initCg function is a GET_PARAM macro that fetches the handles for the variables you will pass to your shader program.

#define GET_PARAM(name) \
myCgVertexParam_##name = \
cgGetNamedParameter(myCgVertexProgram, #name); \
checkForCgError("could not get " #name " parameter");

GET_PARAM(modelViewProj);
GET_PARAM(globalAmbient);
GET_PARAM(eyePosition);


#define GET_PARAM2(varname, cgname) \
myCgVertexParam_##varname = \
cgGetNamedParameter(myCgVertexProgram, cgname); \
checkForCgError("could not get " cgname " parameter");

GET_PARAM2(material_Ke, "material.Ke");
GET_PARAM2(material_Ka, "material.Ka");
GET_PARAM2(material_Kd, "material.Kd");
GET_PARAM2(material_Ks, "material.Ks");
GET_PARAM2(material_shininess, "material.shininess");

This is an example of fetching parameters directly from your entry function, and from other structures you can make.  The first batch of code grabs the modelViewProj matrix, the global ambient colour and the eye position of the camera.  These would be items you list in your entry function’s input parameters.

The other batch of code is an example of sending parameters to a structure called material.  It has emissive, ambient, diffuse and specular components, along with a shininess parameter.  This is helpful for creating virtual objects that reflect light differently, so things can look like rubber or plastic.
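If the # and ## in those macros look cryptic: ## pastes tokens together to build the variable name, while # turns the argument into a string literal for the error message.  A standalone C++ illustration with made-up names (nothing here is Cg API):

```cpp
#include <cstring>

// Hypothetical stand-ins for the Cg handle and name lookup.
static int myParam_color = 0;
static const char *lookedUpName = nullptr;

// name -> myParam_<name> via ##, and the string "name" via #.
#define GET_PARAM_DEMO(name) \
    myParam_##name = 1;      \
    lookedUpName = #name;

void runDemo() { GET_PARAM_DEMO(color) }
```

After runDemo(), myParam_color has been assigned and lookedUpName points at "color" — the same mechanics GET_PARAM uses to build myCgVertexParam_modelViewProj and its matching error string.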

Enabling and Disabling

The last and simplest thing to do is to enable and disable your shaders during your OpenGL draw loop.

When you are drawing your objects, you need to bind your vertex and fragment shaders and enable their profiles:

//Enable Shaders
cgGLBindProgram(myCgVertexProgram);
checkForCgError("binding vertex program");
cgGLEnableProfile(myCgVertexProfile);
checkForCgError("enabling vertex profile");

cgGLBindProgram(myCgFragmentProgram);
checkForCgError("binding fragment program");
cgGLEnableProfile(myCgFragmentProfile);
checkForCgError("enabling fragment profile");

Once that is done, you carry on with the rest of your draw calls, and at the end of your draw loop you disable them:

// Disable Shaders
cgGLDisableProfile(myCgVertexProfile);
checkForCgError("disabling vertex profile");

cgGLDisableProfile(myCgFragmentProfile);
checkForCgError("disabling fragment profile");


That is pretty much a very basic description of how shaders are integrated into your C++ graphics application.

Thank you for reading,
– Moose

Shaders 101 – Intro



I have read up to chapter 5 in the Cg textbook (almost halfway done) and I thought it would be good to do a general summary of what I have learned so far.  Granted, I might be misinformed or have fragmented knowledge about some aspects, but I hope I will be corrected in the comments.

What are Shaders and How Do They Work?

First off, we are talking about the programming language Cg, created by NVIDIA.  The point of shaders and the Cg language is to let you communicate via code with your graphics card to control the shape, appearance and motion of objects drawn.  Essentially it allows you to control the graphics pipeline, like a boss.  Cg programs control how vertices and fragments (potential pixels) are processed, which means that our Cg programs are executed on our graphics cards.

Shaders are a powerful rendering tool for developers because they allow us to utilise our graphics cards.  Since our CPUs are more suited toward general-purpose operating system and application needs, it is better to use the GPU, which is tailor-built for graphics rendering.  GPUs are built to efficiently process and rasterize millions of vertices and billions of fragments per second.  The great thing about Cg is that it gives you the advantages of a high-level language (readability and ease of use) while giving you the performance of low-level assembly code.  Cg does not provide pointers or memory allocation tools, but it does support vectors, matrices and many other math operations that make graphics calculations easier.

Cg is not meant to be used as a full-fledged programming language.  We still need to build our 3D applications in C++ (or any language), then use our shader language (Cg, HLSL, GLSL, RenderMan etc.) to optimise our graphics using the GPU.

The Graphics Pipeline

Graphics Pipeline: from the Cg textbook

In order to understand how shaders work, we have to have a general understanding of how the graphics pipeline (a series of stages operating in parallel) works.  First your 3D application sends several geometric primitives (polygons, lines and points) to your GPU.  Each vertex has a position in 3D space along with its colour, texture coordinate and a normal vector.

Vertex Transformation

This is the first processing stage of the pipeline.  Several mathematical operations are performed on each vertex.  These operations can be:

  • Transformations (vertex space to screen space) for the rasterizer
  • Generating texture coordinates for texturing and lighting to determine its colour

Primitive Assembly and Rasterization

Once the vertices are processed, they are sent to this stage of the pipeline.  First the primitive assembly step assembles each vertex into geometric primitives.  This will result in a sequence of triangles, lines or points.

Geometric Primitives

After assembly, the primitives need to be clipped.  Since we are limited to a screen, we cannot view the entire scene, so according to our view frustum we clip polygons and discard those outside it (culling).  Once the primitives are clipped, the next step is to rasterize.  This is the process of determining which pixels are covered by a geometric primitive.


The last important item in this process is to understand the difference between pixels and fragments.  A pixel represents a location in the frame buffer that has colour, depth and other information.  A fragment is the data needed to generate and update one of those pixels.

Fragment Texturing and Colouring

Now that we have all our fragments, the next set of operations determines each one’s final colour.  This stage performs texturing and other math operations that influence the final colour of each fragment.

Raster Operations

This is the last stage of the graphics pipeline, and one of the more complex ones.  Once the completed fragments come out of the previous stage, the graphics API performs several operations on the incoming data.  Some of them are:

  • Pixel ownership test
  • Scissor test
  • Alpha test
  • Stencil test
  • Depth test
  • Blending
  • Dithering
  • Logic operations
This is pretty much the above process in a nutshell.
Pipeline In a Nutshell

Programmable Graphics Pipeline

So what was the point of talking about the pipeline?  Now that we know how the fixed pipeline works and how normal graphics APIs send information to be processed, we can see where our shaders are executed and what they do.

Programmable Graphics Pipeline

The two important parts of this diagram are the programmable vertex processor that runs our Cg vertex programs, and the programmable fragment processor that runs our Cg fragment programs.  The biggest difference between the two is that the fragment processor allows for texturing.


Now that we know how the graphics pipeline works we can create programs to manipulate the pipeline however we want.  Next week we shall take a look at how to make programs for Cg and how they work in your 3D application.

Thank you for reading,

Technology: New Controller Design


I recently came across an interesting article that showcases a new input device (controller) with a red ‘tactor’ in the middle of each joystick that can vibrate and move independently of the controller.

A new direction for game controllers: Prototypes tug at thumb tips to enhance video gaming

The cool thing about the controller is that it brings a new level of interaction to console games.  Normally the only feedback a user gets from the controller is a small vibration, typically used to emulate car crashes and explosions.  This new vibrating tactor allows for more ways of physically interacting with the player.

  • First you can see the user moving both the white pad and the red tactor like a normal joystick
  • You can also see the internal tactor moving independently giving an interesting user response
  • They begin showing examples of in-game applications where the tactor is useful

When moving in prone, we see the tactor begin to swivel and move, similar to the way we snake-crawl when moving in prone.

When bouncing we see the tactor jump up and down.

I just thought that this was fairly interesting and worth sharing.


Game Construction: Terraria and 2D Level Generation

Hello Reader!

Before we start, I’d like to go on a bit of a tangent about the concept of a console developed by Valve, popularized as the ‘Steam Box‘.  What does this mean for future games and the future of the industry?  These are the rumoured specs: Core i7 CPU, 8GB of RAM, and an NVIDIA GPU.

Rumoured Steambox

Valve is the world leader in online distribution of PC/Mac video games.  There are other services like this — EA has Origin, but their customer service is a nightmare.  Steam, on the other hand, is loved by its community and is truly a great service.  With Sony, Nintendo and Microsoft (the Big Three) pushing the market toward consoles, Steam is keeping PC gaming alive.  OnLive tried releasing a cloud gaming console, however it is not doing so well.

This Steambox, or ‘GabeCube’, would help drive PC game development and change the future of how we play video games.  Valve could offer better licensing fees to compete with the Big Three.  The most important element of the Steambox would be the move to all-digital distribution.  The Steambox would also help solve many of the issues people have with consoles right now: first off the limited technical specifications, then overpriced games and online gaming fees.

Here is a patent for a controller design that Valve applied for back in 2009/10.

Getting back on track with the main topic I wanted to cover today:


My post last week on Minecraft’s level generation was fairly abstract since I don’t have much knowledge of 3D noise generation and the use of voxels. This week I will try to conceptualize how the 2D level generation in Terraria works.


The technique that I have seen for 2D level generation works well when working with underground items like rocks, dirt and ore.  However things like grass, snowflakes and other biological entities are generated using a different method that is much more complex and harder to conceptualize.



We know we are dealing with just dirt and rocks below ground.  Let’s get biblical and start dividing the heavens from the earth.  The first step, similar to Minecraft, is to take a function that can divide up our world and tell us what is solid and what is air.  This part is fairly easy because we all know a great function for dividing things into two parts linearly.

GRADIENT!  Given two points, a gradient creates a line using parameters like P1=(x1,y1,z1,w1,u1,v1) to P2=(x2,y2,z2,w2,u2,v2).  Here we have our points and our texture coordinates, but all we need is to set up a simple gradient from -1 to 1 and align it with the Y-axis.  Then we get something like this:
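As a rough sketch of the idea (the function name and signature are mine, and Python is used just for illustration), the vertical gradient simply maps a row index to the range [-1, 1]:

```python
def vertical_gradient(y, height):
    """Map a row index y (0 at the top) to a value in [-1, 1] along the Y-axis."""
    # 0 maps to -1, the last row maps to +1, linearly in between
    return 2.0 * (y / (height - 1)) - 1.0
```

Sampling this for every row of the world image gives the smooth vertical ramp described above.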

Simple Gradient

Using this as a starting point we can move on to the next part.  Similar to toon/cel shading, we need a step function.  There are too many levels here to work with and we only need two: solid or air.
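A hypothetical step function (names are mine) that binarizes the gradient into solid/air could be as small as:

```python
def step(value, threshold=0.0):
    """Collapse a continuous value into two levels: 1 = solid ground, 0 = air."""
    return 1 if value >= threshold else 0
```

Applying `step` to every gradient sample turns the smooth ramp into the hard air/ground split shown in the image.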

After using the step function

Now this is much better.  We can clearly see what is air and what is ground.  Unfortunately, this is boring.  We need to remove that horizontal divide in the middle, because straight lines and linearity in games are boring.  The great thing is that we have another function that can help us with this.  However, this is more of a technique than a function.

TURBULENCE!  All this technique does is translate points, or rather translate the input coordinates of a function.  What we want to do is scatter that centre area given a range and offset of random numbers.  Imagine the top of our ‘level’ as y=1 and the bottom as y=-1, leaving the centre at y=0.  Given a range from y=-0.25 to y=0.25 we want to create some turbulence.  This would be the result:
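A minimal sketch of turbulence, assuming a deterministic per-cell random offset (all names here are mine, not from any particular noise library):

```python
import random

def turbulent_sample(x, y, base_fn, amplitude=0.25, seed=0):
    """Perturb the Y input coordinate before sampling base_fn."""
    # Deterministic per-cell RNG so the same world regenerates identically
    rng = random.Random(hash((seed, x, y)))
    offset = rng.uniform(-amplitude, amplitude)
    # Sample the underlying field at the vertically shifted coordinate
    return base_fn(x, y + offset)
```

Feeding the gradient-plus-step field in as `base_fn` scatters the flat divide into the noisy band shown in the image.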

After Applying Turbulence

Better than having the flat line; however, if a world had random blocks floating in the air we would have a serious problem.  This will not be our finished product, but it gives us something to work with.  Right now we have a messy, chaotic pattern and we want a nice smooth terrain shape.  This is where it gets complicated: we begin to use fractals.

The fractals I am talking about combine noise sources using different fractal methods.  First we have the basic noise groups:

  • Value Noise – Generated by assigning a random value in the range (-1,1) at every lattice point, then interpolating between lattice values to fill in each cell
  • Gradient Noise – Perlin’s original noise function, using a method similar to Value noise; however, it uses a wave-like pattern at the lattice points for a smoother output
  • Gradval Noise – A hybrid of Value and Gradient noise, mainly created to hide artefacts that occur during interpolation in Gradient noise
  • Simplex Noise – An improved version of Perlin noise using a ‘simplex’ instead of a hypercube; in 2D a simplex is an equilateral triangle and in 3D it is a tetrahedron
  • White Noise – A more chaotic, random noise, as seen on many old TVs; it generates a random signal with no pattern

Just so we can get a better understanding of how noise is generated let us go through a few step-by-step examples:

All this image is, is Value noise without interpolating the values.  You can easily see how each grid element has a single value.
Now if we add linear interpolation between the lattice coordinates we get something like this:

You can experiment by using cubic and quintic interpolation to get varying results, but that is the process in a nutshell.
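The process in the nutshell above can be sketched as code.  This is a hypothetical 2D value-noise implementation with bilinear interpolation (all names and the seeding scheme are mine, and it assumes non-negative coordinates):

```python
import random

def value_noise(x, y, cell=4.0, seed=42):
    """2D value noise: random values at lattice points, bilinearly interpolated."""
    def lattice(ix, iy):
        # Deterministic random value in (-1, 1) for each lattice point
        rng = random.Random(hash((seed, ix, iy)))
        return rng.uniform(-1.0, 1.0)

    gx, gy = x / cell, y / cell
    ix, iy = int(gx), int(gy)      # cell the sample falls in
    fx, fy = gx - ix, gy - iy      # fractional position inside the cell
    # Interpolate along x on the top and bottom edges, then along y
    top = lattice(ix, iy) * (1 - fx) + lattice(ix + 1, iy) * fx
    bot = lattice(ix, iy + 1) * (1 - fx) + lattice(ix + 1, iy + 1) * fx
    return top * (1 - fy) + bot * fy
```

Swapping the linear blend for a cubic or quintic ease curve is exactly the experiment suggested above.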

Getting back on track on how to implement fractals in our level design: what we want to do is take our level and begin distorting it so that each point gets randomly offset by a value determined by the fractal.

Fractal generated by a function

Here is an example of a fractal in the scope of our level.  So how is this useful to us?  Let’s say the black areas represent -0.25 and white represents 0.25.  Anywhere the fractal is at its darkest we move the point down by 0.25, and up by 0.25 where it is white.  Values in between are interpolated.  Why is this good?  It will give us smooth hills because of the gradient of the fractal, and we can get floating islands and overhangs with this.
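The offset step above can be sketched as a tiny wrapper (again, names are mine; `fractal_fn` is assumed to return values in [-1, 1]):

```python
def distorted_solid(x, y, solid_fn, fractal_fn, amplitude=0.25):
    """Query the solid/air field at a point vertically shifted by the fractal."""
    # Darkest fractal value (-1) shifts the sample down by amplitude,
    # white (+1) shifts it up; intermediate values interpolate
    offset = fractal_fn(x, y) * amplitude
    return solid_fn(x, y + offset)
```

Because neighbouring fractal values change smoothly, the terrain boundary is displaced smoothly too, which is where the hills come from.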

Fractal Distortion Applied

There is a lot more we can still do to this image; it’s just a matter of adding more filters and functions to tweak it how you want.  We can take a horizontal gradient and create a heightmap if we want to get rid of the floating islands and have different types of terrain.

Horizontal Gradient

Taking this gradient we can apply a similar offset technique to move our terrain up or down as needed.

After Heightmap Adjustment

Closing Remarks

This process is very similar to the idea of shaders. Each point is just data and we can manipulate it as we want.  If we want to add caves we can create a fractal like:

Cave Shape

And using a step function and offset we can apply this to our level and get:

Smooth Cave System

Now if you want to make that cave system smoother, just add another fractal to perturb each area a bit.
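The cave step can be combined with the terrain test in one small function.  This is a sketch under my own naming and an invented threshold, not the article's exact method:

```python
def carve_caves(x, y, terrain_solid, cave_fractal, threshold=0.6):
    """A cell is solid only if the terrain is solid AND no cave passes through."""
    # Open air stays open
    if not terrain_solid(x, y):
        return False
    # Step function: high cave-fractal values carve open tunnels
    return cave_fractal(x, y) < threshold
```

Running a second, lower-amplitude fractal over the carved cells is the perturbation mentioned above for smoothing the final cave shape.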

Final Cave Shape


Source – [http://accidentalnoise.sourceforge.net/minecraftworlds.html]

Thank you for reading
– Moose

Game Construction: Minecraft

Hello Reader!


With the great success of Minecraft behind us, I have always wondered how they managed to develop a random 3D world.  Generating a cube is fairly simple; the difficult part is how they were able to adjust the terrain level.  I did a bit of research and found out that this is called “procedural generation” of a world.  Heightmaps can be generated using a noise function like Perlin noise in a purely mathematical process.  Once these heightmaps are generated they are applied to a series of voxel grids or objects.

Also, generating an entire world at once is very processor intensive, so many developers subdivide the area into smaller chunks and generate those on demand.


My goal today is to gain a better understanding on how 3D worlds are generated and hopefully apply this to one of my future games.


Basic Terrain

First we must define what basic terrain actually is.  We need to cut our world up into solid and open areas.  Solid areas will be where the actual terrain like grass, dirt and sand is.  Open areas will be the air and perhaps the water.

Perlin Noise

Here is a good noise function to use as a heightmap.  Since this is just an image we would be sampling from, it is very efficient in terms of data storage.  The only problem with this heightmap is that it works well for a 2D game, not 3D.  Games like Minecraft are volumetric; they have cliffs, caves, tunnels and overhangs.  We need a function that operates in 3D space to determine if the terrain is solid or open.


The next major step is to take a function into which we can input (x,y,z), referencing a single voxel cell in the world, and apply a noise function at that point.  This function could also sample from a noise function and apply it to a voxel cube.
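One common way to write such an (x,y,z) test is as a density function.  This is a hypothetical sketch (the constants and names are mine, not Minecraft's actual code); `noise3d` stands in for any 3D noise source:

```python
def is_solid(x, y, z, noise3d, ground_level=32.0, amplitude=8.0):
    """Density test for one voxel: positive below the displaced surface."""
    # The noise displaces a flat ground plane; because it also varies
    # with y, caves and overhangs become possible, unlike a 2D heightmap
    density = (ground_level - y) + amplitude * noise3d(x, y, z)
    return density > 0.0
```

With zero noise this degenerates to a flat world at `ground_level`, which makes it easy to verify before plugging in real noise.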

Noise applied to Cube

This is a basic heightmap applied to the top of the voxel cube.  If we were to take a noise function like this:


And apply it to each side of the cube, we can get:

Noise Applied to Each Face

Closing Remarks

I haven’t gone too much into the coding aspect of how these are generated since I am not too familiar with the generation and manipulation of voxels.  The general idea is fairly easy to grasp: create a bunch of noise samples and apply each to a subdivision of the world to get the desired effect.

Minecraft Map

Looking at this map of a Minecraft world we can see the different types of terrain and “biomes” that they have.  The developers probably sample different noise functions for different heightmaps of the different biome types they want to generate.  As for the rarity of gold/diamonds/iron under the terrain, they most likely use another noise function that affects lower subdivisions and places them over their respective terrain.

For example: diamonds only spawn once the player is at a low enough elevation, and they normally spawn near lava.  To do this I would have different noise functions for different elevations and areas using some nested “if” and “else” statements.  If everything was completely random we would have deserts appearing underground.  By having set elevations for different biomes the game is a bit more realistic and it creates a better player experience.
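The nested if/else idea could look something like this.  Every depth and threshold here is invented for illustration, not taken from Minecraft:

```python
def ore_at(y, noise_value):
    """Hypothetical elevation-gated ore placement (thresholds are made up)."""
    # Rarer ores need both a deeper elevation and a higher noise spike
    if y < 16 and noise_value > 0.85:
        return "diamond"
    if y < 40 and noise_value > 0.70:
        return "gold"
    if y < 64 and noise_value > 0.50:
        return "iron"
    return None
```

Gating on elevation first is what keeps deserts (or diamonds) from appearing at the wrong depth even when the noise spikes.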

I was able to find a good noise generating open source software for C++ online called libnoise that would be great for a project like this.  It even has a few tutorials that help you create terrain .bmp files for heightmaps and what not.


Next week I will talk about the 2D level generation of Terraria.


Thank you for reading
– Moose

Week 6 Objectives and Game Construction: Character Customization


Quick update of what happened last week and other shader-related stuff.

So the homework questions got a base XP!  This really made my weekend.  Beyond the normal intrinsic psychological motivators, the added XP gives me the extrinsic motivation I needed to drive me to complete those questions.

I put a slight hold on further reading of the CG book to get a few easy questions complete for class or the tutorial, so that will fill my shader appetite for this week.  Aside from that, I will try to finish the CG text over reading week in between preparing for my 3 midterms.  I am extremely thankful that we do not have midterms for Game Design or Intermediate Computer Graphics.  That would have been brutal.


Game Construction: Character Customization

Last week I said I would blog about Procedural Animation; however, I did not get a chance to find any examples to share.  In any case, my short attention span found something more interesting: this awesome video of character customization from EVE Online! (watch in HD)


  • It seems like the player hovers their mouse over control points to modify the mesh
  • The player is able to drag, pinch and pull vertices to affect the mesh
  • The hair seems to be another model on its own while having its own physical properties in terms of adjustments and colour
  • The body customization is pretty amazing, being able to adjust so many human proportions so easily.  This is the best character customization I have seen in a game yet.
  • The clothes also seem to be an object on its own

Here is another example that I would like to breakdown from Skyrim

  • The part that we care about starts around 2:20
  • In comparison to EVE’s customization, Skyrim’s system uses sliders for pre-sets.  It seems like they just load new models for each physical change to the model and modify the texture map on other sliders
  • The Skin tone slider adjusts the texture map while the weight slider makes the mesh look bigger and scales it up
  • When the player adjusts more physical features of the face, it seems like he just applies a transformation to control points on the face mesh that are saved to a default state when playing the game
  • Skyrim’s customization is mainly on the face of the avatar as opposed to the entire body


EVE’s character customization is similar to a terrain editor or a modelling program where users can physically edit the mesh of a model.  However, the user interface is much less complicated than a modelling program while still having several tools, like SPORE’s creature creator.

The developers would place control points in areas and set a minimum and maximum on the edits the player can make to each control point.  While this is similar to a slider, it works well only on PCs because of mouse input, and not so well on consoles with analog sticks and d-pads.

In terms of how they did it, for areas where they wanted to pull geometry closer together, the control point would push vertices closer together or farther apart depending on the user input.  The more complicated thing is how they managed to get the texture map to edit with the model.  When the user edited the lips and eyes, the texture moved along very well with the model.  I think that the same control points were mapped to the texture map to produce these results.

Closing Statements

As we can see, a simple tool like a mesh editor can be used as a tool for character customization.  If someone were to get a skeletal animation system working, all they would have to do is create a joint system for editing the mesh and make those joints the control points for the user.

Another way to do this, morphing between OBJ files, is to use a slider like in Skyrim and have a combination of models to use as a canvas.  By adding more adjustments to that model, the user can save them and export the result to be used as the main player avatar.
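The slider-driven morph is just a linear blend between corresponding vertices of two meshes.  A minimal sketch (names are mine; it assumes both meshes share the same vertex ordering):

```python
def morph(base_verts, target_verts, t):
    """Blend two vertex lists: t=0 gives the base mesh, t=1 the target."""
    # Lerp each (x, y, z) component of every corresponding vertex pair
    return [
        tuple(b + t * (g - b) for b, g in zip(bv, tv))
        for bv, tv in zip(base_verts, target_verts)
    ]
```

Several sliders can stack by chaining morphs toward different target meshes, which is roughly how preset-based systems like Skyrim's appear to work.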


Thank you for reading


Week 5 Objectives and Mesh Skinning/Skeletal Animation


I’d like to start by saying that I am behind on my work.  I am behind on not being ahead? Paradoxical.  I had hoped that by this time I would have been fairly far along in my understanding of shader programming; however, I have only managed to finish up to chapter 4 in the CG textbook after going at it from Friday till tonight.  Last night I did have a huge discussion with one of my roommates for about 5 hours about “Procedural Animation”.  I might blog about that next week since it is something I would like to research and implement in a game one of these days.

Skeletal Animation/Mesh Skinning

This week I spent a bit of time researching how to properly rig some of our characters to a joint system so we can use an animation blending system with BVH files rather than keyframing everything.  Our end goal is to head into the motion capture area in the game lab and record some movements to incorporate into our game.  Worst case scenario, we will use Pinocchio to automatically rig our characters; however, when we tried using it last semester we ran into several problems and ended up wasting time debugging.

That is a pretty big scope to measure up to if we don’t get skeletal animation working.  I understand the concept and have found some base code online for creating a basis for our skeletal animation, which is not too bad; the only problem is weighting the joint structure to the mesh.  We tried exporting that from Maya and it was causing some issues.  It would be best if we could export the weights/mesh from Maya, load the BVH and perform our mesh skinning.  However, I ended up pausing that research since I read a few articles on performing mesh skinning on the GPU and thought it would be better to do that.

I did find a decent 2D Bone System tutorial on GPWiki, however their 3D one is not yet completed.


Overall this week I caught up to the tutorials that we have done in terms of knowing what the CG code does, however I am not yet at a point to complete any homework questions.  In the next few days I hope to catch up with the lecture material (lighting/diffuse/specular) and demo homework questions M2 and M3 (toon shading and diffuse/specular/distance light in both vertex and pixel/fragment shaders).

I also feel I should be able to complete the E8 question to make an object glow like in Batman: Arkham City.  I assume that the professor means this (http://youtu.be/YrWWUlgoLzQ?hd=1&t=3m25s) when the enemy character has a green glow that ripples down his body.

I am probably undershooting my scope a bit here with the homework questions, but in order to accomplish them I have about 60 pages of reading to get through.  Unfortunately I am a slow reader; add debugging to the mix and that will take some time.  Instead of writing about reading it, I am now actually going to get started on it.


I need to be more productive and power through these CG tutorials to get some XP in the next few days. 3 chapters down 7 more to go!

Thank you for reading,

Week 4 Objectives + Global Game Jam


Looking back on my Week 3 Objectives, I realize that I really didn’t do anything, which is a bit sad.  BUT!  I spent the weekend at the Global Game Jam and my team and I made a disco zombie game.

Prior Objectives

  • Start CG programming and tutorials to finish homework questions
  • Learn how to do cel-shading/toon shading
  • Figure out how to properly do mesh skinning
  • See if I can export the weighting of the joint structure from maya to our game engine
  • Work on inverse kinematics to reduce foot skating

Aside from reading the first chapter in the CG tutorial book I have not really found the time to do the other objectives.  However tonight I plan on working through the second chapter of that book so that I can begin cutting away at a few homework questions.

Week 4: Objectives

Our modelling professor (Derek F.) said “Whatever you think you can do, take that time and multiply it by three and that is how long it will take you.”  Clearly I overestimated what I could do in one week, but nonetheless I have a set few goals for the next three weeks.

As of now my priority is learning shaders.  The mesh skinning, IK and toon shading will soon follow.

Global Game Jam

My GDW team and two other guys (Mike A. and David Y.) created the game Zombie Fever!  I learned a lot, mostly about scope and how fast I can create pixel art.  We decided to use GameMaker for our basic game engine; Branden and Kevin took the reins on that while David, Tyler and I created lots of pixel art.  Mike A. tried converting himself from pictures into pixel art, and the other Mike found awesome disco music to use while post-processing it in Soundbooth.

Timmy The Zombie

This is the zombie I made, he is supposed to be moonwalking in the game.

The game started from us looking at this year’s theme, the ouroboros; we started talking about infinite loops and the circle of life.  My idea was to create a game about zombies that follows the cycle: human turns into zombie, zombie eats human, zombie turns back into human.  Then we got thinking more; we wanted a bright colour palette, and somehow disco Dance Dance Revolution made it into our game.  By the end our concept was one main character (the pixelated version of Mike) who is turned into a zombie and dances his way back to humanity.

HERE is the final build of the game if you choose to play it.
HERE is just the .exe.

Till next week
– Moose