Spatial Indexing and Quadtrees

This post is going to be me breaking down and trying to understand this blog post (by Nick) about spatial indexing with quadtrees and Hilbert curves.  Hopefully I can learn how this algorithm would be useful for terrain and environment representation.  I have a few ideas already, but I’d like to break down and understand the algorithm first.

Spatial Databases and Spatial Indices

In order to understand what a spatial index is, we need to know what a spatial database (WIKI) is.  A spatial database is essentially a collection of data that stores points, lines, polygons and other object data.  In simple terms: a database that stores information about objects in space.  Many spatial databases feature the following functionality:

  • Spatial Measurements: Distance between points
  • Spatial Functionality: Modify existing features while creating new ones
  • Spatial Predicates: Allows for binary (true/false) queries like, “is there a boss located in that cave 1 km from here?”
  • Constructor Functions: Create new features in the database with an SQL query specifying the vertices (node points) that make up lines.  If the first and last vertices are connected (i.e. identical), the feature is a polygon
  • Observer Functions: Queries that return specific information like the location of the center of a circle

[ASIDE] SQL – Structured Query Language, designed for managing data in relational database management systems
[ASIDE] SQL Queries – Queries which retrieve data based on specific criteria.  The most important element of SQL

Spatial Queries

Spatial indices are used by the spatial database to make faster queries.
These queries are essentially functions with the following template: functionName (parameter(s)) : return type

Types of queries:

  • Distance(geometry, geometry) : number
  • Equals(geometry, geometry) : boolean
  • Disjoint(geometry, geometry) : boolean
  • Intersects(geometry, geometry) : boolean
  • Touches(geometry, geometry) : boolean
  • Crosses(geometry, geometry) : boolean
  • Overlaps(geometry, geometry) : boolean
  • Contains(geometry, geometry) : boolean
  • Length(geometry) : number
  • Area(geometry) : number
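To get a feel for what a few of these signatures mean in practice, here is a rough C++ sketch (the types and names are my own, simplified to points and axis-aligned boxes; a real spatial database supports much richer geometry):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal geometry types for illustration only.
struct Point { double x, y; };
struct Box   { Point min, max; };  // axis-aligned bounding box

// Distance(geometry, geometry) : number
double Distance(const Point& a, const Point& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Intersects(geometry, geometry) : boolean (two boxes overlap?)
bool Intersects(const Box& a, const Box& b) {
    return a.min.x <= b.max.x && b.min.x <= a.max.x &&
           a.min.y <= b.max.y && b.min.y <= a.max.y;
}

// Contains(geometry, geometry) : boolean (box contains point?)
bool Contains(const Box& b, const Point& p) {
    return b.min.x <= p.x && p.x <= b.max.x &&
           b.min.y <= p.y && p.y <= b.max.y;
}
```

The boolean predicates are exactly the kind of test a spatial index exists to accelerate: the index prunes candidates so these checks run on as few geometries as possible.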


Trees. OH NO! I personally hated my class on algorithms and data structures, except for the part on trees.  Trees were simple to understand and they related to the more interesting computer animation algorithms (skeletal animation, global vs. local transformations).  Trees are simply data structures of linked nodes.

So quadtrees are used as a form of spatial index.  They are a special type of tree where each internal node has exactly 4 children.  They are heavily used for partitioning 2D space by recursively subdividing it into four quadrants or regions.

This is an example of a 2D quadtree.  Ignore the points for now if they confuse you; they are just point data, there to show how a quadtree divides an indexed area.

Nick describes the quadtree very well on his blog.  Each node in the tree is represented by a bounding box covering a part of the space that is indexed, with the root node covering the entire area.  Each node is either a leaf node, which holds one or more indexed points and has no children, or an internal node, which has exactly 4 children, one for each quadrant obtained by dividing the covered area in half along each axis.  This next picture should make things a bit clearer:

Representation of how a quadtree is structured internally

Querying a quadtree is very simple.  Since we have a good graphical representation of our quadtree, we are essentially dealing with intersecting areas.  We know that the root encompasses the entire area of the quadtree, and each child covers a subsection of it.  So to query, we examine each child node and check whether the area inside that quadrant intersects our query area.  Once we reach a leaf node, we search each entry within that area to see if it intersects the query area, and return it if it does.
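As a sketch of that recursive query (this is my own illustration, not Nick's code; the class layout, capacity and names are assumptions), it might look like:

```cpp
#include <cassert>
#include <cmath>
#include <memory>
#include <utility>
#include <vector>

// Bounding box stored as a centre plus half-width.
struct AABB {
    double x, y, half;
    bool contains(double px, double py) const {
        return px >= x - half && px <= x + half &&
               py >= y - half && py <= y + half;
    }
    bool intersects(const AABB& o) const {
        return std::abs(x - o.x) <= half + o.half &&
               std::abs(y - o.y) <= half + o.half;
    }
};

struct Quadtree {
    static const size_t kCapacity = 4;           // points per node before splitting
    AABB bounds;
    std::vector<std::pair<double, double>> points;
    std::unique_ptr<Quadtree> child[4];          // NW, NE, SW, SE

    explicit Quadtree(AABB b) : bounds(b) {}

    bool insert(double px, double py) {
        if (!bounds.contains(px, py)) return false;
        if (!child[0] && points.size() < kCapacity) {
            points.push_back({px, py});
            return true;
        }
        if (!child[0]) subdivide();
        for (auto& c : child)
            if (c->insert(px, py)) return true;
        return false;
    }

    void subdivide() {                            // split area in half on each axis
        double h = bounds.half / 2;
        child[0].reset(new Quadtree({bounds.x - h, bounds.y + h, h}));
        child[1].reset(new Quadtree({bounds.x + h, bounds.y + h, h}));
        child[2].reset(new Quadtree({bounds.x - h, bounds.y - h, h}));
        child[3].reset(new Quadtree({bounds.x + h, bounds.y - h, h}));
    }

    // The query described above: skip subtrees whose quadrant misses the
    // query box, and test individual entries where they are stored.
    void query(const AABB& range,
               std::vector<std::pair<double, double>>& out) const {
        if (!bounds.intersects(range)) return;
        for (auto& p : points)
            if (range.contains(p.first, p.second)) out.push_back(p);
        if (child[0])
            for (auto& c : child) c->query(range, out);
    }
};
```

The key property is that a whole subtree is discarded with a single box-intersection test, which is what makes the lookup fast compared with scanning every point.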

Geohashing and Hilbert Curves

Both of these are ways of querying without recursively walking down the quadtree; instead they map the quadrants onto a one-dimensional ordering that can be searched directly.

The order in which geohashing visits each quad
The order in which a Hilbert curve visits each quad
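The geohash-style visiting order above can be sketched in code: interleave the bits of a cell's x and y indices into one number, and the cells come out in that "Z" pattern as the number increases.  (This is my own sketch; `interleave` is a name I made up, and real geohashes go on to encode the interleaved bits as base-32 strings.)

```cpp
#include <cassert>
#include <cstdint>

// Z-order linearisation: even bits of the code come from x, odd bits
// from y, so incrementing the code walks the quads in the Z pattern.
uint64_t interleave(uint32_t x, uint32_t y) {
    uint64_t code = 0;
    for (int i = 0; i < 32; ++i) {
        code |= (uint64_t)((x >> i) & 1u) << (2 * i);      // even bits: x
        code |= (uint64_t)((y >> i) & 1u) << (2 * i + 1);  // odd bits: y
    }
    return code;
}
```

With the 2D space flattened to one number like this, an ordinary sorted index (a B-tree, say) can serve range queries; the Hilbert curve is a fancier flattening with better locality.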


Nick goes into a lot more detail on the geohashing and Hilbert curve methods; if you are so inclined to learn more about them, click here.  However, as a game developer, quadtrees and spatial databases make me think of maps in games: ways of representing them (mini-maps), or maybe ways of organising our scene managers in Ogre?  Who knows, but now that I am more familiar with the quadtree data structure, I am sure my team and I can think of many ways to use this data structure in our game.

Shaders 102 – Code Integration


Today I am going to talk about how to integrate shaders into your code, starting with the layout of a shader program and then how to hook it into your application.

Basic Shader Code

Here is a vertex shader:

//vertex shader from chapter 3 of the Cg textbook

struct C3E2v_Output {
float4 position : POSITION;
float3 color : COLOR;
float2 texCoord : TEXCOORD0;
};

C3E2v_Output C3E2v_varying(
float2 position : POSITION,
float4 color : COLOR,
float2 texCoord : TEXCOORD0)
{
C3E2v_Output OUT;

OUT.position = float4(position,0,1);
OUT.color = color;
OUT.texCoord = texCoord;

return OUT;
}

The program first begins with an output structure as follows:

struct C3E2v_Output {
float4 position : POSITION;
float3 color : COLOR;
float2 texCoord : TEXCOORD0;
};
Since we know that this is the vertex shader, and it has to pass values to the rest of the graphics pipeline, this structure holds the values that our shader will output.  By defining an output structure we can manipulate the items inside it.  Basically, this is like a declaration of the values the function outputs: position, colour and texture coordinates.

Last week we talked about how Cg has vectors and matrices integrated into their variable declaration.

  • float4 position : essentially [x,y,z,w], where w=1 for positions.  If this were written in C++ it would be float position[4] = {x,y,z,w};
  • float3 color : similar to the vector above, but used to represent our colour channels, [r,g,b]
  • float2 texCoord : our U and V coordinates in the texture, [u,v]
While this program does not showcase the matrix declarations of Cg, here are some examples:
  • float4x4 matrix1 : a four-by-four matrix with 16 elements
  • half3x2 matrix2 : a three-by-two matrix with 6 elements
  • fixed2x4 matrix3 : a two-by-four matrix with 8 elements
If you wanted to declare a matrix with some values, you would do it like this (a float2x3 has two rows of three columns, filled row by row):
float2x3 matrix4 = { 1.0, 2.0, 3.0,
                     4.0, 5.0, 6.0 };
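Continuing the C++ analogy from the vector bullets above, the same row-by-row layout in plain C++ would be a two-dimensional array (a sketch for illustration; `m` is just a name I picked):

```cpp
#include <cassert>

// C++ analogue of a 2x3 matrix: two rows of three columns,
// initialised row by row.
float m[2][3] = {{1.0f, 2.0f, 3.0f},
                 {4.0f, 5.0f, 6.0f}};
```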


Next we have our entry function:

C3E2v_Output C3E2v_varying(
float2 position : POSITION,
float4 color : COLOR,
float2 texCoord : TEXCOORD0)

This is what defines our fragment or vertex program; it is similar to the main function in C/C++.  What this tells us is that our shader takes in position, colour and texture coordinates.  Since our output structure above has the same kinds of parameters as our input, we know that we are going to manipulate these parameters and then output them.

Last we have our function body:

C3E2v_Output OUT;

OUT.position = float4(position,0,1);
OUT.color = color;
OUT.texCoord = texCoord;

return OUT;

First we start off by creating our structure object, called OUT.  This code really doesn’t do much beyond setting values in the structure equal to the inputs and passing them to the next stage in the pipeline.  The interesting piece of code is the OUT.position = float4(position,0,1) part.  It takes the incoming position, which has only two components (x,y), and converts it into a float4 by filling the last two components with a 0 and a 1 to get (x,y,0,1).
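That promotion can be mimicked in plain C++ to make it concrete (these struct names shadow the Cg types purely for illustration; `promote` is my own name):

```cpp
#include <cassert>

// What float4(position, 0, 1) does: a two-component position is
// promoted to a homogeneous 4-vector (x, y, 0, 1).
struct float2 { float x, y; };
struct float4 { float x, y, z, w; };

float4 promote(float2 p) {
    return {p.x, p.y, 0.0f, 1.0f};   // z = 0, w = 1
}
```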

3D Graphics Application Integration

Creating Variables

So knowing how that code works is great; however, implementing Cg code in your C++ code is where I normally spend most of my time working with shaders.  The actual shader code is fairly easy to work with, but integrating it into your graphics application is where the real pain is.  The Cg book doesn’t really cover this explicitly, but it does have examples in the API samples in your /Program Files/NVIDIA Corporation/Cg/examples/ folder.

To start there are a number of things you have to declare, I normally do these as global variables:

static CGcontext myCgContext;
static CGprofile myCgVertexProfile, myCgFragmentProfile;
static CGprogram myCgVertexProgram, myCgFragmentProgram;
static CGparameter myCgVertexParam_constantColor;

static const char *myProgramName = "Varying Parameter",
*myVertexProgramFileName = "",
*myVertexProgramName = "C3E2v_varying",
*myFragmentProgramFileName = "",
*myFragmentProgramName = "C2E2f_passthru";

Obviously the names of these do not have to be the same as what I have written, but the idea is to teach you what each one is.

The first part, creating the CGcontext, is the part I know the least about.  I believe it is where you initialise the Cg runtime, so just be sure to ALWAYS do this.

The next part is creating your vertex and fragment profile.  This is another thing to always do.

The next two parts are where you are given a lot of freedom.  The CGparameters will vary from program to program; these are essentially values that you take from your graphics application and send to your shader.  constantColor is just a variable that we can send to our shader to replace the colour of every pixel or vertex.  Later on I will post about how to send in parameters like diffuse light colour, light position, attenuation parameters and much more.

The last part is the program names.  This is where you record the entry function for each shader and the name of its file.  Common entry-function names are fragment_passthru or vertex_passthru.

Initialise Shaders

The next step is where you actually create the shader program.  Somewhere before your GLUT main loop you should call an initCg() void function where you place all your initialisations.  The book places everything in main, which I find to be stupid, so don’t do that.  It creates a lot of hard-to-read clutter.

myCgContext = cgCreateContext();
checkForCgError("creating context");
cgSetParameterSettingMode(myCgContext, CG_DEFERRED_PARAMETER_SETTING);

myCgVertexProfile = cgGLGetLatestProfile(CG_GL_VERTEX);
checkForCgError("selecting vertex profile");

myCgVertexProgram =
cgCreateProgramFromFile(
myCgContext,              // Cg runtime context
CG_SOURCE,                // Program in human-readable form
myVertexProgramFileName,  // Name of file containing program
myCgVertexProfile,        // Profile: OpenGL ARB vertex program
myVertexProgramName,      // Entry function name
NULL);                    // No extra compiler options
checkForCgError("creating vertex program from file");

cgGLLoadProgram(myCgVertexProgram);
checkForCgError("loading vertex program");

This code is fairly simple.  The beginning part creates the CgContext.  Then we create our vertex profile.  Lastly we tell the program where our Cg file is and the name of our entry function.  You would do something similar for the fragment shader.

The checkForCgError calls help you debug.  If anything goes wrong in the shader at that point, your checkForCgError function will output an error code.

The other thing you can place in your initCg function is a GET_PARAM statement, which looks up handles to named parameters in your shader program so you can set them later.

#define GET_PARAM(name) \
myCgVertexParam_##name = \
cgGetNamedParameter(myCgVertexProgram, #name); \
checkForCgError("could not get " #name " parameter");


#define GET_PARAM2(varname, cgname) \
myCgVertexParam_##varname = \
cgGetNamedParameter(myCgVertexProgram, cgname); \
checkForCgError("could not get " cgname " parameter");

GET_PARAM(modelViewProj);
GET_PARAM(globalAmbient);
GET_PARAM(eyePosition);

GET_PARAM2(material_Ke, "material.Ke");
GET_PARAM2(material_Ka, "material.Ka");
GET_PARAM2(material_Kd, "material.Kd");
GET_PARAM2(material_Ks, "material.Ks");
GET_PARAM2(material_shininess, "material.shininess");

This is an example of getting handles to parameters you send directly to your entry function, and to other structures you can make.  The first batch of code grabs the modelViewProj matrix, the globalAmbient colour and the eyePosition of the camera; these would be items you list in your entry function’s input parameters.

The other batch of code is an example of sending parameters to a structure called material, which has emissive, ambient, diffuse and specular lighting along with a shininess parameter.  This is helpful for creating virtual objects that reflect light differently, so things can look like rubber or plastic.

Enabling and Disabling

The last and simplest thing to do is to enable and disable your shaders during your OpenGL draw loop.

When you are drawing your objects, you need to bind your vertex and fragment programs and enable their profiles:

//Enable Shaders
cgGLBindProgram(myCgVertexProgram);
checkForCgError("binding vertex program");

cgGLEnableProfile(myCgVertexProfile);
checkForCgError("enabling vertex profile");

cgGLBindProgram(myCgFragmentProgram);
checkForCgError("binding fragment program");

cgGLEnableProfile(myCgFragmentProfile);
checkForCgError("enabling fragment profile");

Once that is done you would go on with the rest of your draw calls and at the end of your draw loop you would disable them by:

// Disable Shaders
cgGLDisableProfile(myCgVertexProfile);
checkForCgError("disabling vertex profile");

cgGLDisableProfile(myCgFragmentProfile);
checkForCgError("disabling fragment profile");


That is pretty much a very basic description of how shaders are integrated into your C++ graphics application.

Thank you for reading,
– Moose

Shaders 101 – Intro



I have read up to chapter 5 in the CG textbook (almost halfway done) and I thought it would be good to do a general summary of what I have learned so far.  Granted I might be misinformed or have fragmented knowledge about some aspects, but I hope I will be corrected in the comments.

What are Shaders and How Do They Work?

First off, we are talking about the programming language Cg, created by NVIDIA.  The point of shaders and the Cg language is to help you communicate via code with your graphics card to control the shape, appearance and motion of objects drawn.  Essentially it allows you to control the graphics pipeline, like a boss.  Cg programs control how vertices and fragments (potential pixels) are processed, which means our Cg programs are executed inside our graphics cards.

Shaders are a powerful rendering tool for developers because they allow us to utilise our graphics cards.  Since our CPUs are more suited toward general-purpose operating system and application needs, it is better to use the GPU, which is purpose-built for graphics rendering.  GPUs are built to efficiently process and rasterize millions of vertices and billions of fragments per second.  The great thing about Cg is that it gives you the advantages of a high-level language (readability and ease of use) while giving you the performance of low-level assembly code.  Cg does not provide pointers or memory allocation tools, but it supports vectors, matrices and many other math operations that make graphics calculations easier.

Cg is not meant to be used as a full-fledged programming language.  We still need to build our 3D applications in C++ (or another language), then use our shader language (Cg, HLSL, GLSL, RenderMan etc.) to optimise our graphics using the GPU.

The Graphics Pipeline

Graphics Pipeline: From the CG Textbook

In order to understand how shaders work, we need a general understanding of how the graphics pipeline (a series of stages operating in parallel) works.  First your 3D application sends several geometric primitives (polygons, lines and points) to your GPU.  Each vertex has a position in 3D space along with its colour, texture coordinate and a normal vector.

Vertex Transformation

This is the first processing stage of the pipeline.  Several mathematical operations are performed on each vertex.  These operations can be:

  • Transformations (from model space to screen space) for the rasterizer
  • Generating texture coordinates for texturing, and lighting calculations to determine the vertex’s colour

Primitive Assembly and Rasterization

Once the vertices are processed, they are sent to this stage of the pipeline.  First, the primitive assembly step assembles the vertices into geometric primitives, resulting in a sequence of triangles, lines or points.

Geometric Primitives

After assembly, the primitives need to be clipped.  Since we are limited to a screen, we cannot view the entire scene, so polygons are clipped against the view frustum and discarded if they fall outside it (culling).  Once clipping is done, the next step is to rasterize: the process of determining which pixels are covered by a geometric primitive.
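The coverage test at the heart of rasterization can be sketched with edge functions (my own illustration, not from the Cg book; it assumes counter-clockwise triangle winding):

```cpp
#include <cassert>

// A point is covered by a counter-clockwise triangle if it lies on the
// non-negative side of all three edges.
struct P { float x, y; };

// Twice the signed area of triangle abc; sign tells which side of edge
// ab the point c is on.
float edge(P a, P b, P c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

bool covered(P a, P b, P c, P p) {
    return edge(a, b, p) >= 0 && edge(b, c, p) >= 0 && edge(c, a, p) >= 0;
}
```

A rasterizer runs a test like this (heavily optimised and in parallel) for every candidate pixel centre inside the triangle's bounding box.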


The last important thing in this process is to understand the difference between pixels and fragments.  A pixel represents a location in the frame buffer that has colour, depth and other information.  A fragment is the data needed to generate and update those pixels.
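That distinction can be sketched as data (the field choices here are my own illustration, not a real API):

```cpp
#include <cassert>

// What the framebuffer actually stores at one location.
struct Pixel {
    float r, g, b, a;
    float depth;
};

// What rasterization produces for each covered pixel: everything needed
// to generate and update that pixel.
struct Fragment {
    int   x, y;        // which framebuffer location it targets
    float r, g, b, a;  // interpolated colour
    float depth;       // interpolated depth, used by the depth test
};

// For example, the depth test decides whether a fragment updates its pixel.
bool depthTestPasses(const Fragment& f, const Pixel& p) {
    return f.depth < p.depth;   // closer fragment wins
}
```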

Fragment Texturing and Colouring

Now that we have all our fragments, the next set of operations determines each fragment’s final colour.  This stage performs texturing and other math operations that influence the final colour of each fragment.

Raster Operations

This is the last stage of the graphics pipeline.  It is also one of the more complex stages.  Once the completed fragments come out of the previous stage, the graphics API performs several operations on the incoming data.  Some of them are:

  • Pixel ownership test
  • Scissor test
  • Alpha test
  • Stencil test
  • Depth test
  • Blending
  • Dithering
  • Logic operations
This is pretty much the above process in a nutshell.
Pipeline In a Nutshell

Programmable Graphics Pipeline

So what was the point of talking about the pipeline?  Now that we know how the fixed pipeline works and how normal graphics APIs send information to be processed, we can see where our shaders are executed and what they do.

Programmable Graphics Pipeline

The two important parts of this diagram are the programmable vertex processor, which runs our Cg vertex programs, and the programmable fragment processor, which runs our Cg fragment programs.  The biggest difference between them is that the fragment processor allows for texturing.


Now that we know how the graphics pipeline works, we can create programs to manipulate the pipeline however we want.  Next week we shall take a look at how to write programs in Cg and how they work in your 3D application.

Thank you for reading,