Photo Blog 109

5 09 2010

While I was playing around with some more product photography (perhaps the next update will explain) Alyona decided to do some painting. I spun the camera around on the tripod and managed to catch this without her seeing. I like her pose and look of utter concentration. That and our random assortment of stuff around the apartment!

Painting (f8.0 1/10s 47mm)

Graphics – MGP – Lighting

3 07 2010

Now that our surface has normals we can light it with the aforementioned Phong illumination model. This model breaks lighting into three components: ambient, diffuse and specular.

Ambient light is that light which is everywhere due to light reflecting off other surfaces. If you look at the underside of your chair you will notice that there is some light, despite there not being a light shining directly on it. Ideally one would use a global illumination method to calculate this properly; under Phong it is just a constant term.

Diffuse light is that which results from direct illumination by a light source. The defining characteristic of diffuse light is that it is strongest when the light points directly at the surface and weaker when it arrives at an angle. Of course, when the angle between the two reaches 90 degrees there is no diffuse light at all (likewise on the underside of an object). Under Phong this is calculated using the angle between the surface normal and the direction to the light source.
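Written out in the usual textbook notation (the symbols k_d for the surface's diffuse reflectance and i_d for the light's diffuse intensity are assumptions here, not names from the project; N is the unit surface normal and L the unit direction to the light), the diffuse term is:

```latex
I_d = k_d \,(\hat{L} \cdot \hat{N})\, i_d
```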

Here the angle between the direction to the light (L) and the surface is captured by the dot product between the normal (N) and the light direction. For unit vectors this has the property that it is 1 when they point in the same direction and 0 when they are 90 degrees apart.

Specular light is that which comes from the light source itself reflecting off the surface and into the camera. This allows us to model shiny (although not reflective) surfaces. Under Phong we now add the direction to the camera and compare the angle between this vector and the reflected vector from the light source. If they match up we have a reflection and if they don’t, neither do we.

The alpha in this equation determines how wide the specular highlight will be: a small alpha equates to a larger, softer highlight, while a large alpha gives a small, sharp one.
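With the same assumed textbook symbols (k_s for the specular reflectance, V for the unit direction to the viewer, and R for the light direction reflected about the normal), the specular term being described is:

```latex
I_s = k_s \,(\hat{R} \cdot \hat{V})^{\alpha}\, i_s,
\qquad
\hat{R} = 2(\hat{L} \cdot \hat{N})\hat{N} - \hat{L}
```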

But we can get the same results from a slightly different formula that is a little more efficient.
Instead of having to reflect the light vector to get the vector to compare against the view, we can use what is known as the 'Half Vector'. If we add the Light Vector and View Vector we get something which points directly away from the surface when they are reflected exactly around the normal (i.e. when there should be a specular reflection) and points elsewhere when they aren't. As such we can compare this half vector with the normal for the same effect, making the equation to use:
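Using H for the half vector (this variant is usually known as Blinn-Phong; symbol names are the assumed textbook ones, as above):

```latex
\hat{H} = \frac{\hat{L} + \hat{V}}{\lVert \hat{L} + \hat{V} \rVert},
\qquad
I_s = k_s \,(\hat{N} \cdot \hat{H})^{\alpha}\, i_s
```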

The Wikipedia article has more if you are interested, and is where I got the following demonstrative picture, which clearly shows how each of these three components combines to create the final lighting of a strange object:

Phong Components (from the Wikipedia Article)

That is all well and good, but we need to convert this into code. We can do this using the GLSL shaders introduced last time. The Vertex shader looks like this:

varying vec3 normal;
varying vec3 eyeDir;
varying vec3 lightDir;
void main() {
    gl_Position = ftransform();
    normal = normalize(gl_NormalMatrix * gl_Normal);
    // move the vertex into eye space
    vec3 vertex = vec3(gl_ModelViewMatrix * gl_Vertex);
    // gl_LightSource[0] is a struct, so take the position's xyz
    lightDir = normalize(gl_LightSource[0].position.xyz - vertex);
    eyeDir = normalize(-vertex);
    gl_FrontColor = gl_Color;
}

Here we calculate all the vectors we need for computing the lighting. These are then interpolated across each triangle so that they are approximately correct at the per-pixel level, which is far cheaper than computing them from scratch at each pixel. This approach can have problems for very large triangles, where the interpolated vectors can drift noticeably from unit length. Our glass is tessellated finely enough that this isn't an issue here.

The Fragment shader then looks like this:

varying vec3 normal;
varying vec3 eyeDir;
varying vec3 lightDir;
void main() {
    // re-normalize: the interpolated vectors are slightly shorter than unit length
    vec3 n = normalize(normal);
    vec3 l = normalize(lightDir);
    vec3 e = normalize(eyeDir);
    vec3 ambient = vec3(0.2, 0.2, 0.2);
    vec3 diffuse = vec3(0.8, 0.8, 0.8) * max(dot(n, l), 0.0);
    vec3 specular = vec3(pow(max(dot(n, normalize(l + e)), 0.0), 30.0));
    vec3 lightCol = ambient + diffuse + specular;
    gl_FragColor = vec4(gl_Color.rgb * lightCol, 1.0);
}

Here we calculate the 3 components (ambient, diffuse and specular) to create the intensity of the light at that point. This is then multiplied with the colour of the surface to get the final colour of the pixel.

This then gives us this image for our glass:

Phong Illumination

Graphics – MGP – Intro to Shaders

10 06 2010

Before we can go any further into the project we need to introduce the concept of shaders and the modification of the Fixed Functionality Pipeline.

In OpenGL the normal pipeline looks like that in the image below. Vertices, normals and colours go in one end, are transformed into their final location (depending on camera, matrix transformations etc) and then the colour of each pixel is determined before being put on the screen.

OpenGL Pipeline Simplified

In our use of shaders we are changing two stages of this pipeline by replacing the fixed functionality with little programs of our own. The two areas that we can replace are marked in green in the diagram.

Replacing the FFP
We are going to use GLSL shaders, which means they are written in the GLSL language and supplied to the graphics driver as source code at runtime. The graphics driver then compiles them into a shader and returns a reference for us to use.

To create the reference we need to follow the steps below:

  1. Create handles for our shaders
  2. Send the source code for each to the graphics driver
  3. Compile the shaders
  4. Check for compilation problems
  5. Attach the Fragment and Vertex shaders to a program
  6. Link the program
  7. Validate the program
  8. Check for linker errors

The code to do this is as follows:

// references we will need
GLuint shaderHandle, vsHandle, fsHandle;
GLint result;
const GLchar* vsSource = "…";
const GLchar* fsSource = "…";

// compile the vertex shader part
vsHandle = glCreateShader (GL_VERTEX_SHADER); // Step 1
glShaderSource (vsHandle, 1, &vsSource, NULL); // Step 2
glCompileShader (vsHandle); // Step 3
glGetShaderiv (vsHandle, GL_COMPILE_STATUS, &result); // Step 4
if( result == GL_FALSE )
    ; // ohoh - need to handle the fail case (print what is wrong etc)

// compile the fragment shader part
fsHandle = glCreateShader (GL_FRAGMENT_SHADER); // Step 1
glShaderSource (fsHandle, 1, &fsSource, NULL); // Step 2
glCompileShader (fsHandle); // Step 3
glGetShaderiv (fsHandle, GL_COMPILE_STATUS, &result); // Step 4
if( result == GL_FALSE )
    ; // ohoh - need to handle the fail case (print what is wrong etc)

// link them together
shaderHandle = glCreateProgram (); // Step 1
glAttachShader (shaderHandle, vsHandle); // Step 5
glAttachShader (shaderHandle, fsHandle); // Step 5
glLinkProgram (shaderHandle); // Step 6
glValidateProgram (shaderHandle);	// Step 7
glGetProgramiv (shaderHandle, GL_LINK_STATUS, &result); // Step 8
if( result == GL_FALSE )
    ; // ohoh - need to handle the fail case (print what is wrong etc)

And that is pretty much it. There is a whole bunch of extra error checking you can do, and you can even fetch and print the compilation log (via glGetShaderInfoLog) if you want, but the above is the simplest case.

Now that you have a proper shader you need to be able to turn it on and off. You enable a particular shader much like you would lighting: every draw command submitted until the shader is turned off will use the new functionality.

As such the proper way to use it would be something like:

glUseProgram (shaderHandle); // Turn it on
// Do some drawing
glUseProgram(0); // Turn it off

Writing a (really) simple shader
The above code explains how to create a new shader and how to use it to draw, but assumes you have some code to submit to the graphics driver. Here we will describe the simplest possible shader, plus a small variation that lets you use the normal at each point as the colour (for visualization, much like the image in the previous graphics post).

If all you want is to replicate the fixed functionality, the following vertex and fragment shaders will do the trick:

// Vertex shader
varying vec3 normal;
void main ()
{
	normal = normalize(gl_NormalMatrix * gl_Normal);
	gl_FrontColor = gl_Color;
	gl_Position = ftransform();
}

// Fragment shader
void main ()
{
	gl_FragColor = gl_Color;
}

This is pretty boring though (and does even less than the normal pipeline) so let's change the fragment shader a bit to use the normal instead of the colour.

// Fragment shader
varying vec3 normal;
void main ()
{
	gl_FragColor = vec4(normal, 1.0);
}

This will use the x, y and z components of the normal as the red, green and blue components respectively.
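Note that the pipeline clamps colour components into the [0,1] range, so wherever a normal component is negative that channel simply reads as 0. A tiny sketch of that mapping (illustrative only; the function name is invented):

```c
/* Clamp a normal component into a displayable colour channel,
   the way the fixed-point framebuffer does with gl_FragColor. */
double channel_from_component(double n) {
    if (n < 0.0) return 0.0;
    if (n > 1.0) return 1.0;
    return n;
}
```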

And here is that image again to show where we are at:


For further reading on what functionality is available when writing these shaders, the GLSL spec is the place to go.

Graphics – MGP – Normals

7 06 2010

Last time we ended up with a solid but flat shaded model of our glass. The next step to making this look more realistic is to include per-pixel lighting. For our purposes we will use Phong Illumination but this (and every other lighting formula) requires that we know the normal at each point on the surface.

A normal is the vector that is perpendicular to the surface. For flat surfaces this is easy to picture, but for curved surfaces it can be easier to think of it as the cross product of two tangents to the surface (in essence the two tangents define a flat plane and the normal is then perpendicular to this). More information on normals can be found over on Wikipedia.

For the specific case of our surface of revolution things are a bit easier than this. We have a 2D curve defined as a Bézier curve, for which we can find the 2D tangent by solving the derivative of the Bézier equation. i.e. we use:
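In symbols, with P_i the control points and b_{i,n}(t) the Bernstein basis functions (this is the standard identity, not copied from the project code):

```latex
B'(t) \;=\; \sum_{i=0}^{n} P_i \, b'_{i,n}(t)
\;=\; n \sum_{i=0}^{n-1} \bigl( P_{i+1} - P_i \bigr)\, b_{i,\,n-1}(t)
```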


This is easier than it looks and mostly reuses the code from the previous post on creating a surface.

But we need the vector that is perpendicular to the surface, not tangential to it, so there is one further step before we can use this: we rotate the tangent anti-clockwise by 90 degrees to get the orthogonal vector.

The function itself looks something like this:

-(NSPoint) tangentOnCurve:(double) t
{
	NSPoint ret = NSMakePoint(0, 0);
	double bern;
	// iterate over all points, adding the influence of each to the tangent
	for( int i = 0; i < [mControlPoints count]; i++ )
	{
		bern = Basis_Derv([mControlPoints count], i, t);
		// rotate by 90 degrees as we accumulate: (x, y) -> (-y, x)
		ret.x -= [[mControlPoints objectAtIndex: i] pointValue].y * bern;
		ret.y += [[mControlPoints objectAtIndex: i] pointValue].x * bern;
	}
	// we then normalize this result (make sure the vector has a length of 1)
	double len = sqrt( ret.x*ret.x + ret.y*ret.y );
	if( len > 0.0 )
	{
		ret.x /= len;
		ret.y /= len;
	}
	return ret;
}

The rotation is handled inside the loop, where the x and y components are swapped (with a sign change) to create the orthogonal vector.

This gives us a 2D vector which we know is associated with a 2D point that is being rotated around the z-axis to create the surface of revolution. The same rotation can be applied to the normal vector to create the final 3D vector.

Carrying on from last time, we need to put these normals into an array and upload them to the graphics card for use. As with the vertices, we can get the normals into graphics memory using:

// generate the buffer
glGenBuffers(1, &normalRef);
// fill the buffer
glBindBuffer(GL_ARRAY_BUFFER, normalRef);
glBufferData(GL_ARRAY_BUFFER, numberVerts * 3 * sizeof(float), normals, GL_STATIC_DRAW);

And then when drawing there are a couple of extra lines for the normals:

// let it know which arrays we will be supplying
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);

// let it know reference/offsets for the draw array
glBindBuffer(GL_ARRAY_BUFFER, drawRef);
glVertexPointer(3, GL_FLOAT, 0, 0);

// let it know reference/offsets for the normals array
glBindBuffer(GL_ARRAY_BUFFER, normalRef);
glNormalPointer(GL_FLOAT, 0, 0);

// tell it to draw the specified number of vertices
glDrawArrays(GL_TRIANGLE_STRIP, 0, numberVerts);

// turn stuff off again
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);

If we were to colour the surface using these vectors (a false colouring using the normal's x, y, z as the r, g, b colour components) we get an image like the one below:


Next: Lighting

Photo Blog 108

7 06 2010

Something I have been interested in for a while but never had an impetus to explore is product photography. Recently, however, someone said they needed some stuff done, and simultaneously a tutorial popped up on one of the blogs I read. This was reason enough, so I tried it out yesterday and made myself a light tent. It really was quite easy and just consisted of finding some nice diffuser material at the Look Sharp store on Victoria St and butchering one of my storage boxes for the task. The final setup looked like this:

Setup Shot

The two sides and top had been replaced with the semi-transparent material and I had two identical desk lamps shining from the sides. Unfortunately we had no paper big enough to act as the backdrop so used two pieces stuck together. This resulted in a line in the background but it is easy to imagine what it would look like without this…

So now on to the results. I didn't have much time to play around before we had to head out, but here are two examples of what I was able to achieve. It definitely required longer exposures than normal and a tripod to keep things steady. Even at small apertures focus was an issue this close, so I had to pull the camera back a bit and then crop the final image.

Jewellery Example 1 (f22 2.5s 105mm)

And a second example:

Jewellery Example 2 (f22 1.3s 105mm)

The effect of having nice diffused lighting is obvious and I’m definitely keen to try some more!

Photo Blog 107

23 05 2010

My birthday was last weekend and for it I was given a brand new Speedlight for my camera. I'm still very much learning how to use it but took it along to my brother's 21st last night (we weren't really born a week apart, but this is when he chose to celebrate it). Played around a bit and got a few OK shots.

The two I decided to post are here for rather different reasons; the first was a bit of a test to see what it could *really* do and came out quite artistic. I angled the flash straight up and then shot into a mirror on one side of the bar. The result is a cool reflection of the partygoers with myself in the middle behind the camera.

Reflections (f4.0 1/30s 24mm)

It really is quite amazing that it can produce that much light and get a correct exposure at 1/30s. The camera was otherwise saying it would need 4 seconds! This was with the flash set to Manual at 1/1 power, so it really is a demonstration of what it can do…

The second photo is more personal and is a scene replayed at every 21st the world over. Stories from Mum & Dad to embarrass the birthday boy :-)

Typical 21st (f4.0 1/30s 45mm)

I still need to learn a lot more; to start with I think I need to fiddle with the white balance a little and work out how not to get over-saturated pictures (the above has a slight de-saturation applied in post).

Graphics – MGP – Tessellation

23 05 2010

Now that we have all the points necessary we want to draw them as a surface, and the first step is to generate a set of vertices to get them displayed.

Before we create the vertices we need to know the draw command we will be using with OpenGL, as this determines the order in which we need them. For our purposes it makes the most sense to use GL_TRIANGLE_STRIP, which needs vertices alternating between two parallel tracks. Remembering that GL_TRIANGLE_STRIP uses the last two vertices plus the next supplied one to define each triangle, this is equivalent to needing vertex co-ordinates in the order of the diagram below.

Triangle Strip Ordering
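To make the alternation concrete, here is a small sketch (plain C, with invented names, not project code) that writes out the index order for one strip band between grid rows lt and lt+1 of a grid stored row-major with resS vertices per row:

```c
/* Vertex order for one GL_TRIANGLE_STRIP band between rows lt and lt+1
   of a grid stored row-major with resS vertices per row. */
void strip_order(int lt, int resS, int idx[]) {
    for (int ls = 0; ls < resS; ls++) {
        idx[ls * 2 + 0] = (lt + 1) * resS + ls; /* upper row */
        idx[ls * 2 + 1] = lt * resS + ls;       /* lower row */
    }
}
```

For a band between rows 0 and 1 of a 3-wide grid this yields 3, 0, 4, 1, 5, 2 — after the first two indices, each new one completes one triangle.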

With this in mind, and given a particular resolution we can use the following to get the set we need:

float vertices_l[(resT+1)*resS*6];
int vertexCount = (resT+1)*resS*2;
int index = 0;
NSPoint3 vertex;
for( int lt = 0; lt < resT; lt++ )
{
	for( int ls = 0; ls < resS; ls++ )
	{
		index = (lt*resS + ls)*6;
		// vertex from the upper row
		vertex = [self pointOnSurfaceT: (lt+1)/(double)resT S: ls/(double)resS];
		vertices_l[index + 0] = vertex.x;
		vertices_l[index + 1] = vertex.y;
		vertices_l[index + 2] = vertex.z;
		index += 3;

		// vertex from the lower row
		vertex = [self pointOnSurfaceT: lt/(double)resT S: ls/(double)resS];
		vertices_l[index + 0] = vertex.x;
		vertices_l[index + 1] = vertex.y;
		vertices_l[index + 2] = vertex.z;
	}
}

Now of course we could keep this as an array and use immediate mode (the ol’ glBegin() / glEnd() combination) and iterate over it each frame but this would be terribly inefficient. To be a bit smarter we will use vertex arrays and load the data into the graphics card, which is not only faster but also sets us up for the final stage in the project where we transform the vertex data on the GPU.

Using vertex arrays is almost as easy as using immediate mode and follows 3 basic steps:

  1. Create the array reference
  2. Load the vertex data into the array
  3. Use the array whenever we want to draw the object

The first two steps are done only once with the reference being used whenever we need it. The code to perform these two steps is:

// generate the buffer
glGenBuffers(1, &drawRef);
// fill the buffer
glBindBuffer(GL_ARRAY_BUFFER, drawRef);
glBufferData(GL_ARRAY_BUFFER, numberVerts * 3 * sizeof(float), vertices, GL_STATIC_DRAW);

Whenever we want to use this buffer to draw the object we can do so as follows:

// enable the vertex array and let it know which buffer we are talking about
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, drawRef);
// let it know offsets
glVertexPointer(3, GL_FLOAT, 0, 0);
// tell it to draw the specified number of vertices
glDrawArrays(GL_TRIANGLE_STRIP, 0, numberVerts);
// turn stuff off again
glDisableClientState(GL_VERTEX_ARRAY);

Which is dropped in wherever you would normally do the glBegin()/glEnd() bit.

You can see where the GL_TRIANGLE_STRIP bit comes in and how many fewer instructions need to be sent to the graphics driver each draw. And I do say graphics driver, because it is up to the driver where it stores the data. You can give it a hint, however, which is what GL_STATIC_DRAW is doing in the first code snippet: STATIC means we intend to fill the buffer once and draw it many times, which strongly suggests the data should be kept in VRAM.

With all this work we get the following image:


Which isn’t really that impressive but there is a ways to go yet… next time: normals and lighting