Graphics – MGP – Melting Glass

15 10 2010

So finally we come to the actual melting of the glass, building on the foundations of the previous post. To recap: in the previous post we set up a process whereby we render our geometry to an FBO, run a shader over this FBO (where each fragment corresponds to exactly one vertex of the geometry) and then use the resulting information as the new position/normal/colour for each vertex in the geometry.

Why not just do this in a vertex shader?
The interesting thing about this type of feedback is that a change made to a vertex position in one time step carries over to the next, so changes accumulate over time. A vertex shader alone can't achieve this: its output is used for that frame's rasterization and then thrown away, so the underlying vertex data never changes.

Getting Started
Before we can add the melting it is important to establish the most basic shader set for this type of operation. I was naughty and didn't include this in the previous post but should have. The vertex shader that runs when rendering to the FBO is very simple (it just passes everything through):

void main()
{
	// position the full-screen quad; the modelview matrix is ignored,
	// only the ortho projection set up for the FBO pass matters here
	gl_Position = gl_ProjectionMatrix*gl_Vertex;
	// pass the texture coordinate through so the fragment shader
	// knows which texel (i.e. which vertex) it is processing
	gl_TexCoord[0] = gl_MultiTexCoord0;
}

The fragment shader has a little more base code because it has to make sure the position, normal and colours are all passed on to the relevant render targets:

// references to the incoming textures (really the vertices etc)
uniform sampler2DRect colourTex; 
uniform sampler2DRect vertexTex;
uniform sampler2DRect normalTex;
void main()
{
    //retrieve the specific vertex that this fragment refers to
    vec3 vertex = texture2DRect(vertexTex,gl_TexCoord[0].st).xyz;
    vec4 color  = texture2DRect(colourTex,gl_TexCoord[0].st);
    vec3 normal = texture2DRect(normalTex,gl_TexCoord[0].st).xyz;
    // write the information back out to the various render targets
    gl_FragData[0] = vec4(vertex,1.0); 
    gl_FragData[1] = color; 
    gl_FragData[2] = vec4(normal,1.0);
}

With this in place we should finally have the geometry appearing again!

Melting

The user interacts with the melting by right-clicking and dragging over the surface of the glass to heat areas up. These areas then 'flow' in the direction of gravity while also cooling back down to a static state.

Translating mouse clicks
This means the first thing we need to know is where the user is clicking, not just in the viewport but also on our geometry. Luckily GLU provides an easy-to-use function for just this purpose. Using gluUnProject we can turn a given x and y window coordinate into a position within the volume the user can see. The only other things we need to provide are the current modelview and projection matrices, the viewport, and how 'deep' into the scene we want to reference.

The usage of gluunproject is thus:

GLint viewport[4];
GLdouble projection[16];
GLdouble modelview[16];                
GLdouble out_point[3]; 

glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv( GL_VIEWPORT, viewport );

gluUnProject( x, y, z, modelview, projection, viewport, &out_point[0], &out_point[1], &out_point[2]);

This should populate out_point with the position in 3D space. The only missing piece is the 'z' component of where we want to hit, for which there are two strategies:

  • The first is to use a number between 0 and 1 that approximately matches the plane we want to affect. For me the number 0.8 worked well.
  • The second is to read the exact depth of the first object at the selected point

If we want to take the second approach we have to make sure we have not yet cleared the depth buffer from the previous rendering and then read the depth component for the pixel in question:

glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z);
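
One gotcha worth mentioning: most windowing toolkits report mouse coordinates with the origin at the top-left, while OpenGL's window coordinates start at the bottom-left, so the y value usually needs flipping against the viewport height first. A minimal sketch tying it all together (mouseX/mouseY are whatever your event handler provides):

// flip y: window toolkits use a top-left origin, OpenGL a bottom-left one
GLdouble winX = (GLdouble)mouseX;
GLdouble winY = (GLdouble)(viewport[3] - mouseY);
GLfloat z;
glReadPixels((GLint)winX, (GLint)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z);
gluUnProject(winX, winY, (GLdouble)z, modelview, projection, viewport,
             &out_point[0], &out_point[1], &out_point[2]);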

‘Heating up’ the glass
Now that we know the point the user is clicking on, we can determine how close each vertex is to it and apply some function to increase the heat in that area. We then store the amount of 'heat' each vertex has in its colour channels so that on the next iteration we can recall the value and thus retain this information. Handily, we humans already use colour as an indication of heat, so storing this information in the colour channel makes a lot of sense.

To determine the amount of heat we will apply to each vertex we first determine the distance between the vertex and the ‘affect point’ that the user is clicking.

uniform vec3 affect;
...
 float diff = 0.1 / dot( vertex - affect, vertex - affect ); // i.e. 0.1/d^2

We can then use this to determine how much heat we want to apply:

  float addInt = smoothstep( 0.0, MAX_INTENSITY, diff); // smoothly map the falloff into [0,1]

And determine how much heat will be lost through cooling:

  float remInt = (color.r + color.g + color.b) * COOL_PERCENT;

And the final intensity for this vertex (in the range [0,3]):

  float intensity = (color.r + color.g + color.b) + (addInt - remInt) * deltaTime;

To store this intensity for next time we can modify how the colour is stored so that instead of just the previous colour it is now:

  gl_FragData[1] = clamp(vec4(intensity,intensity-1.0,intensity-2.0,1.0),0.0,1.0); // colour

which means that red, green, and blue will be 'filled' in sequence up to a white-hot heat (e.g. an intensity of 1.5 stores as (1.0, 0.5, 0.0), a bright orange).

For the purposes of completeness: I used the values of 1.0 and 0.5 for MAX_INTENSITY and COOL_PERCENT respectively.
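
Putting these pieces together, the heat-update part of the FBO fragment shader ends up looking something like this. This is a sketch: it assumes affect and deltaTime are passed in as uniforms, and only shows the colour output (the position and normal writes stay as in the base shader above).

uniform sampler2DRect colourTex;
uniform sampler2DRect vertexTex;
uniform vec3 affect;     // the 3D point the user is heating
uniform float deltaTime; // seconds since the last simulation step

const float MAX_INTENSITY = 1.0;
const float COOL_PERCENT  = 0.5;

void main()
{
    vec3 vertex = texture2DRect(vertexTex, gl_TexCoord[0].st).xyz;
    vec4 color  = texture2DRect(colourTex, gl_TexCoord[0].st);

    // heat added falls off with the squared distance from the click point
    float diff   = 0.1 / dot(vertex - affect, vertex - affect);
    float addInt = smoothstep(0.0, MAX_INTENSITY, diff);
    // a fixed fraction of the current heat is lost to cooling
    float remInt = (color.r + color.g + color.b) * COOL_PERCENT;
    float intensity = (color.r + color.g + color.b) + (addInt - remInt) * deltaTime;

    // red, green and blue 'fill up' in sequence as intensity runs from 0 to 3
    gl_FragData[1] = clamp(vec4(intensity, intensity - 1.0, intensity - 2.0, 1.0), 0.0, 1.0);
}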

I’m melting…
After calculating the intensity in the previous step we now have enough information to advect the vertex to a new position. Step one in this process is knowing which way is 'down'. If we treat the view's 'y axis' as vertical, then the row of the modelview matrix that produces eye-space y gives us the view's 'up' direction in model space; negating it (as we do in the shader below) gives the direction the glass should flow. Thus gravity becomes:

float gravity[3];
gravity[0] = (float)modelview[1];
gravity[1] = (float)modelview[5];
gravity[2] = (float)modelview[9];
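
Both this gravity vector and the affect point from earlier then have to be handed to the shader as uniforms each frame. A sketch, assuming the linked GLSL program handle is available as programHandle (a hypothetical name; with the shader wrapper used above it would come from the shader object):

// the program must be active (glUseProgram) when calling glUniform*
GLint gravityLoc = glGetUniformLocation(programHandle, "gravity");
GLint affectLoc  = glGetUniformLocation(programHandle, "affect");
glUniform3f(gravityLoc, gravity[0], gravity[1], gravity[2]);
glUniform3f(affectLoc, (float)out_point[0], (float)out_point[1], (float)out_point[2]);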

And the amount to move the vertex by vertically can be calculated in the shader by:

 vec3 transform = -(intensity*gravity*deltaTime*MAX_MELT);

 gl_FragData[0] = vec4(vertex + transform,1.0); // vertex

where MAX_MELT is just a scale factor of 0.02.

With this the glass will start melting when right-clicked!!

The only thing that will look a little strange is that while advecting the vertices we leave the normals as they are, which isn't exactly correct. However, because we don't have direct access to the surrounding vertices we can't easily reconstruct the exact normal, and in my experiments keeping it the same works well enough as an approximation unless large changes are made to the geometry…

So the results:

Melting Glass

And a short video:





Graphics – MGP – Rendering Geometry to an FBO

7 10 2010

The title of this post might seem a little strange both in content and intention. Hopefully by the end of it I will have explained what I mean by “rendering geometry to an FBO” and why we would want to do it.

First the why
Now that we have a reasonable approximation of a glass the next and final step is to make it melt. This involves simulating the process of heating up glass, letting gravity cause the heated (and now less viscous) glass to flow, and then letting the glass cool over time into its new shape.

This sounds pretty complicated (it isn’t) but most importantly it requires us to be able to take the position of each vertex in the glass, do something with it, and then store the new position for the next time we want to draw it. The naïve approach would be to keep all the info in main memory, do our thing, and then update the graphics card with the new positions every time. This has a few issues though:

  1. Even a fast computer is slow
  2. We end up transferring a lot of data to the graphics card every frame
  3. It isn’t anywhere near as cool as getting the GPU to do it!

And now the how…
Back in the post on tessellation we described how we upload the geometry into buffers on the graphics card so that we don't have to upload it again each time we want to draw it.

So the idea behind rendering the geometry to an FBO and reading it back out is that we can use the existing vertex/normal/colours buffers as ‘textures’, run calculations on the graphics card within a shader, and then read the information back out into the same buffers to continue the rendering process as we have previously.

Based on this we can break down what we have to do into 4 steps: create an FBO to render into, interpret the existing buffers as textures, run a shader over them, and copy the result back into the original buffers.

1. Create an FBO
I'm not going to post the exact code I use for setting up the FBOs as there is a bit more going on (and this post is going to be long enough) but here are the basics. This code sets up a variable number of colour attachments to take advantage of the multiple render targets that FBOs can have.

// Create a reference to an fbo
glGenFramebuffersEXT(1, (GLuint*)&FBOId);
// bind to the new fbo
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, FBOId);
	
// Create the texture representing the results that will be written to
for( int i=0;i<num_draw_buffers;i++)
{
	glGenTextures(1, (GLuint*)&TexId[i]);
	glBindTexture(GL_TEXTURE_RECTANGLE_ARB, TexId[i]);
	glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB_FLOAT32_APPLE,  width, height, 0, GL_RGB, GL_FLOAT, 0x0);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT + i, GL_TEXTURE_RECTANGLE_ARB, TexId[i], 0);
}
 
// Check the final status of this frame buffer
int status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
valid = (status == GL_FRAMEBUFFER_COMPLETE_EXT);
// Unbind FBO so we can continue as normal
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

The other setup we need to do is to allocate texture references that we can eventually associate with the buffers.

But first: a non-obvious issue. Buffers holding vertex, normal and colour information are one-dimensional and reasonably free of space limitations. Textures are not: each graphics card/driver imposes its own maximum dimensions. Thus we need to wrap our 1D buffer into a 2D layout whose width is no larger than the maximum texture width the card supports.
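
The maximum width itself can be queried from the driver; for the rectangle textures used here the relevant limit is:

GLint max_width;
glGetIntegerv(GL_MAX_RECTANGLE_TEXTURE_SIZE_ARB, &max_width);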

// calculate the width and height of the fbo we need
if( numberVerts < max_width )
{
	renderWidth = numberVerts;
	renderHeight = 1;
	renderOverrun = 0;
}
else {
	renderWidth = max_width;
	renderHeight = (numberVerts + max_width - 1) / max_width; // round up, without adding a spare row when it divides evenly
	renderOverrun = max_width * renderHeight - numberVerts;
}

And now that we have our dimensions we can allocate the required texture references. As you may notice, the data argument of the call is NULL, which means we aren't actually providing data, just allocating a texture of the given size.

We need to use GL_RGB_FLOAT32_APPLE as the internal format so we keep the full floating-point precision that would otherwise be lost. We also use GL_TEXTURE_RECTANGLE_ARB instead of standard 2D textures because rectangle textures are addressed with unnormalized (per-texel) coordinates, which gives us exact access to each element of the texture.

glGenTextures(1, (GLuint*)&vertexTex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, vertexTex);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB_FLOAT32_APPLE,  renderWidth, renderHeight, 0, GL_RGB, GL_FLOAT, 0x0);

2. Interpret the existing Buffers as Textures
This step is one of the simplest and involves only three calls for each buffer: bind the texture, bind the buffer, and 'update' the texture from the buffer. The trick is to not provide data when updating the texture, which tells the card to use the bound buffer as the data source instead.

glBindTexture(GL_TEXTURE_RECTANGLE_ARB, vertexTex);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, vertexBufferRef);
// NULL data + a bound pixel-unpack buffer = copy straight from the buffer
glTexSubImage2D( GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, renderWidth, renderHeight, GL_RGB, GL_FLOAT, NULL );
// unbind so later texture uploads behave normally again
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

3. Render the data into the FBO
Rendering the data into the FBO involves setting up one quad that covers the entire frame so that each fragment corresponds to one vertex. Thus each time the fragment shader runs it will advect one vertex (and update colours and normals). For this to happen we have to change the frame buffer we are rendering into, bind the textures we updated in Step 2, enable the shader, render our quad, and then disable everything again.

// Step 1: Change the frame buffer and set up the view port to be rendering 
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, FBOId);
	
glPushAttrib(GL_VIEWPORT_BIT);
glViewport(0,0, width, height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);	// Clear Screen And Depth Buffer	// OUCH!! Big performance hit
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0.0,width,0.0,height);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity(); // the FBO vertex shader ignores the modelview matrix, but reset it to be safe
	
GLenum dbuffers[] = {GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_COLOR_ATTACHMENT2_EXT};
glDrawBuffers(numTexId, dbuffers);

// Step 2: bind the textures
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glActiveTextureARB( GL_TEXTURE0_ARB );
glBindTexture(GL_TEXTURE_RECTANGLE_ARB,vertexTex);
// (the colour and normal textures are bound to texture units 1 and 2 in the same way)

// Step 3: enable the shader
shader->enableShader();

// Step 4: render the quad - yes it is immediate mode. yes that is bad
glBegin(GL_QUADS);
glMultiTexCoord2f(GL_TEXTURE0, 0,0); 
glVertex2f(0, 0);
glMultiTexCoord2f(GL_TEXTURE0, renderWidth,0); 
glVertex2f(renderWidth, 0);
glMultiTexCoord2f(GL_TEXTURE0, renderWidth,renderHeight); 
glVertex2f(renderWidth, renderHeight);
glMultiTexCoord2f(GL_TEXTURE0, 0,renderHeight); 
glVertex2f(0, renderHeight);
glEnd();

// Step 5: turn everything back to normal
shader->disableShader();
glPopAttrib();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();

4. Read the FBO render targets back into the Textures
This is also an easy step as we are just reversing the process from Step 2. The difference is that we now bind a render target as our source (instead of the buffer) and read into the buffer using glReadPixels instead of glTexSubImage2D. Once again, passing NULL for the 'data' argument tells GL to write into the bound pixel-pack buffer.

// bind to the FBO so we can reference its render targets
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo->getID());
		
// read the output of each render target back into the buffers provided
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glBindBuffer(GL_PIXEL_PACK_BUFFER,vertexBufferRef);
glReadPixels(0, 0, renderWidth, renderHeight, GL_RGB, GL_FLOAT, 0);
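
The same three lines are then repeated for the other render targets. Sketched here for the colour and normal buffers (colourBufferRef and normalBufferRef are assumed to be the buffer objects behind those attributes, mirroring vertexBufferRef above):

glReadBuffer(GL_COLOR_ATTACHMENT1_EXT);
glBindBuffer(GL_PIXEL_PACK_BUFFER, colourBufferRef);
glReadPixels(0, 0, renderWidth, renderHeight, GL_RGB, GL_FLOAT, 0);

glReadBuffer(GL_COLOR_ATTACHMENT2_EXT);
glBindBuffer(GL_PIXEL_PACK_BUFFER, normalBufferRef);
glReadPixels(0, 0, renderWidth, renderHeight, GL_RGB, GL_FLOAT, 0);

// unbind both so later reads/draws behave normally again
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);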

And the final output!
If something didn't work then the final output will be nothing at all (doh!) but if it did you will be greeted with the impressive sight of… exactly what we had when we started. But now we are ready to add the melting.

Glass

Addendum: actually, because I haven't yet specified the shaders used when rendering into the FBO, nothing will be passed on to the next rendering step, so nothing will be shown… till the next post.





Photo Blog 112

26 09 2010

While it is a subject that has been done to death, the Golden Gate Bridge is pretty spectacular and I don't think anyone goes there without taking a photo. We arrived in the early afternoon and there was already the stereotypical fog rolling in off the sea, which made it all quite dramatic, if a little cold.

The Bridge (f14 45mm 1/100s)
All I’ve done is convert to Black and White using a blue filter (which makes the bridge’s red closer to black) and I really like the almost antique feel. Maybe I should add some grain or something…





Photo Blog 111

12 09 2010

As mentioned in the last photo blog I've been working on some more product photography. The new setup is a bit simpler than before, using just one light and the box set up in a way that is easier to manipulate. I also had the props/background that my Aunt needed for her shots, which meant that the white backdrop wasn't necessary. This made it heaps easier to set up each individual item and cut the time required down substantially.

The setup:

The Setup

The only real issue was making sure that I was using a custom white balance that matched the lighting, as otherwise it would have been a pain in post-processing. In the end the only changes needed in post were a bit of cropping and use of the 'heal' brush in Photoshop to remove some artifacts near the edges.

A couple example images:

Example Shot 1 (f16 2.5s 105mm)

Example Shot 2 (f16 2.5s 105mm)





Graphics – MGP – Glass

8 09 2010

With this installment we will finally make our object look like it is made of glass. Luckily we have already been introduced to shaders in a previous post so all that remains to be done is to describe how we can incorporate our new environment/cube map into our shader and use it to fake reflection and refraction.

The first things we are going to need for our calculations are the normal and position associated with each pixel. These can be calculated in the vertex shader, along with the usual set of basic operations. As such our vertex shader is quite simple, as shown in Figure 1.

varying vec3 normal;
varying vec3 view;

void main()
{
    // the basics that have to be done
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
	
    // calculate the view vector in eye-space using the ModelView matrix
    view = (gl_ModelViewMatrix * gl_Vertex).xyz;
    // calculate the normal vector in eye-space using the normal matrix
    normal = normalize(gl_NormalMatrix * gl_Normal);
}
// Figure 1

Using the same code as in the previous post the cube map texture should automatically be available to us in the GLSL shader as a texture. The only difference is that the type of the texture is samplerCube instead of sampler2D. Thus we begin our fragment shader like this:

// our cube map texture
uniform samplerCube tex;
//variables passed in from our vertex shader
varying vec3 view;
varying vec3 normal;

void main()
{	

When we want to access our cube map we do so through the use of a vector which expresses the direction we wish to sample the ‘cube’ that surrounds our object. For example, by calculating the reflection vector of the view around the normal and sampling this we can see what would be ‘reflected’ if our object was a perfect mirror.

    // create the reflection vector 
    vec3 reflect_v = normalize(reflect(view,-normal));
    // sample the cube map to find the reflected color
    vec3 colReflect = textureCube(tex,reflect_v).rgb;

GLSL provides a very useful function which we use here for performing the reflection of ‘view’ around ‘-normal’ so we don’t even have to think about it.

We can do exactly the same thing to find the colour that would come from refracting through the surface. This refraction would of course depend on the refractive indices of the two materials. Because we are trying to model glass this means that we are going from a refractive index of 1.0 (air) to 1.2 (glass).

    const float eta = 1.00 / 1.20;
    // create the refraction vector 
    vec3 refract_v = normalize(refract(view,-normal,eta));
    // sample the cube map to find the refracted color
    vec3 colRefract = textureCube(tex,refract_v).rgb;

There is one major limitation to this of course. A real simulation would model the effect of coming out the other side of the glass, and perhaps even entering another material if there are other objects in the scene / concave objects. This is pretty serious but for our purposes it works ok because we limit ourselves to only one object and it kind of looks ‘good enough’.

Finally we composite the reflected and refracted components together, along with a bit of specular lighting from the phong model discussed previously.

    // calculate the specular lighting
    float light = pow(max(dot(normal, gl_LightSource[0].halfVector.xyz),0.0),1000.0);
    // coefficients for each of the calculated components (reflection, refraction, lighting). These DON'T have to add to 1.0
    const vec3 coeff = vec3( 0.3, 0.6, 0.5 );
    gl_FragColor = vec4(coeff.x * colReflect + coeff.y * colRefract + coeff.z * vec3(light,light,light) + gl_Color.rgb,1.0);
}

This gives us a final result that looks something like this:

Glass





Photo Blog 110

7 09 2010

Recently I got to do a bit of road trippin' between Las Vegas and San Francisco, taking the route through Death Valley and Yosemite National Park. There is some stunning scenery through there as well as some rather interesting extremes: in 12 hours we went from -200 feet to 10,000 feet and from 46ºC to 3ºC. Luckily I was traveling with a couple of other photographers so there were plenty of opportunities for stopping and taking photos.

Obviously there are lots of photos from there that I’m proud of but one in particular would have to be a panorama I took while leaving Death Valley. So here it is:

Death Valley Panorama

I really do suggest you click on it to view it at a higher resolution as seeing it in a small frame like above doesn’t do it justice.





Graphics – MGP – Environment Mapping

6 09 2010

From the previous step we have a well-illuminated cup that looks like it is made of clay, but hardly like glass. To make it a little more realistic we are going to use environment mapping. The idea is to define a set of textures representing the six sides of a cube surrounding our scene. These are stored in a cube map, which can be accessed like any other texture: at any point we can ask what colour lies in a particular direction, and so create pseudo reflection and refraction by asking what part of the scene would be reflected or refracted.

Creating the Cube Map
The first step is to acquire a cube map. The one I use is one I found quite a while ago of the Swedish Royal Palace at night. It has a lot of interesting colours and lighting, and while it isn't the natural habitat for a wine glass it suits our purposes well enough. You can often find textures appropriate for this sort of thing around the internet; one good collection in particular is Emil Persson's over here.

These images then need to be loaded into graphics memory so they can be used. This is done using the same glTexImage2D calls as usual but with the targets in Figure 1 instead.

GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
GL_TEXTURE_CUBE_MAP_POSITIVE_Z
GL_TEXTURE_CUBE_MAP_NEGATIVE_X
GL_TEXTURE_CUBE_MAP_POSITIVE_X
GL_TEXTURE_CUBE_MAP_NEGATIVE_Y
GL_TEXTURE_CUBE_MAP_POSITIVE_Y
// Figure 1
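
In code, the loading loop looks something like the sketch below. It assumes the six face images have already been decoded into byte arrays faceData[i] with dimensions faceWidth × faceHeight (those names are just for illustration); the six targets are contiguous enums, so we can iterate from GL_TEXTURE_CUBE_MAP_POSITIVE_X.

GLuint cubeTex;
glGenTextures(1, &cubeTex);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
// upload one face per target: +X, -X, +Y, -Y, +Z, -Z
for (int i = 0; i < 6; i++)
	glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB,
	             faceWidth, faceHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, faceData[i]);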

Once all 6 textures are loaded we can generate the environment map needed for reflections by calling the functions outlined in Figure 2.

glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
// Figure 2

And finally, every time we want to employ our new Environment Map all we have to do is turn it on like we would any other texture.

// Enable
glEnable(GL_TEXTURE_CUBE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);

// Draw
DrawObject();

// Disable
glDisable(GL_TEXTURE_CUBE_MAP);
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);
glDisable(GL_TEXTURE_GEN_R);
// Figure 3

The result we now get looks a lot more like pewter than glass, but we can already see the basic effect of environment mapping and how it might allow us to perform refraction as well as reflection.

Environment Mapping

This cube/environment map will now be available for us in the shader so we can make more customized requests of it.





Photo Blog 109

5 09 2010

While I was playing around with some more product photography (perhaps the next update will explain) Alyona decided to do some painting. I spun the camera around on the tripod and managed to catch this without her seeing. I like her pose and look of utter concentration. That and our random assortment of stuff around the apartment!

Painting (f8.0 1/10s 47mm)





Graphics – MGP – Lighting

3 07 2010

Now that our surface has normals we can light it with the aforementioned Phong illumination model. This model breaks lighting into three components: ambient, diffuse and specular.

Ambient light is the light that is everywhere due to light reflecting off other surfaces. If you look at the underside of your chair you will notice that there is some light, despite there not being a light shining directly on it. Ideally one would use a global illumination method to calculate this properly. Under Phong this is just a constant.

Diffuse light is the light that results from direct illumination by a light source. The defining characteristic of diffuse light is that it is strongest when the light is pointing directly at the surface and weaker when it arrives at an angle. Of course, when the angle between the two hits 90 degrees there is no diffuse light at all (nor on the underside of an object). Under Phong this is calculated using the angle between the surface normal and the direction to the light source.
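
In equation form this is the standard Phong diffuse term, with k_d the surface's diffuse reflectance and i_d the light's diffuse intensity:

$$I_d = k_d \, \max(N \cdot L,\, 0) \, i_d$$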

Here the angle between the direction to the light (L) and the surface is captured using the dot product between the normal (N) and the light direction. This has the property that it is 1 when they point in the same direction and 0 when they are 90 degrees apart.

Specular light is the light that comes from the light source itself, reflecting off the surface and into the camera. This allows us to model shiny (although not mirror-reflective) surfaces. Under Phong we now add the direction to the camera and compare this vector with the light vector reflected about the normal. If they line up we see a highlight; if they don't, we don't.
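
Again in equation form, with R the light direction reflected about the normal and V the direction to the viewer:

$$I_s = k_s \, \max(R \cdot V,\, 0)^{\alpha} \, i_s$$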

The alpha in this equation determines how wide the specular highlight will be: a small alpha equates to a larger highlight.

But we can get essentially the same result from a slightly different formula that is a little more efficient (the Blinn-Phong variant).
Instead of having to reflect the light vector to compare against the view vector, we can use what is known as the 'half vector'. If we add the light vector and view vector, the result points directly away from the surface when the two are reflections of each other about the normal (i.e. when there should be a specular reflection) and points elsewhere when they aren't. As such we can compare this half vector with the normal for the same effect, making the equation:
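
$$H = \frac{L + V}{\lVert L + V \rVert}, \qquad I_s = k_s \, \max(N \cdot H,\, 0)^{\alpha} \, i_s$$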

The Wikipedia article has more if you are interested, and is where I got the following demonstrative picture, which clearly shows how each of these three components is used to create the final lighting of a strange object:

Phong Components (from the Wikipedia Article)

That is all well and good but we need to convert this into code. We can do this using the GLSL shaders introduced last time. As such the Vertex shader looks like this:

varying vec3 normal;
varying vec3 eyeDir;
varying vec3 lightDir;
void main() {
    gl_Position = ftransform();
    normal = normalize(gl_NormalMatrix * gl_Normal);
    vec3 vertex = vec3(gl_ModelViewMatrix * gl_Vertex);
    lightDir = normalize(gl_LightSource[0].position.xyz - vertex);
    eyeDir = normalize(-vertex);
    gl_FrontColor = gl_Color;
}

Here we calculate all the vectors we need for computing the lighting. These are then interpolated between the vertices so they are (approximately) correct at the per-pixel level, which is far more efficient than computing them at each pixel. This approach can have problems for very large triangles, where the interpolated vectors can drift noticeably from the true values. Our glass is fairly high resolution so this isn't an issue here.

The Fragment shader then looks like this:

varying vec3 normal;
varying vec3 eyeDir;
varying vec3 lightDir;
void main() {
    vec3 ambient = vec3(0.2,0.2,0.2);
    vec3 diffuse = vec3(0.8,0.8,0.8) * max(dot(normal,lightDir), 0.0);
    // the half-vector specular term; pow() returns a float so splat it to a vec3
    vec3 specular = vec3(pow(max(dot(normal,normalize(lightDir+eyeDir)),0.0),30.0));
    vec3 lightCol = diffuse + specular + ambient;
    gl_FragColor = vec4(gl_Color.rgb * lightCol,1.0);
}

Here we calculate the 3 components (ambient, diffuse and specular) to create the intensity of the light at that point. This is then multiplied with the colour of the surface to get the final colour of the pixel.

This then gives us this image for our glass:

Phong Illumination





Graphics – MGP – Intro to Shaders

10 06 2010

Intro
Before we can go any further into the project we need to introduce the concept of shaders and the modification of the Fixed Functionality Pipeline.

In OpenGL the normal pipeline looks like that in the image below. Vertices, normals and colours go in one end, are transformed into their final location (depending on camera, matrix transformations etc) and then the colour of each pixel is determined before being put on the screen.

OpenGL Pipeline Simplified

In our use of shaders we are changing two stages of this pipeline by replacing the fixed functionality with little programs of our own. The two areas that we can replace are marked in green in the diagram.

Replacing the FFP
We are going to use GLSL shaders which means they are written in the GLSL language and supplied to the graphics driver as code at runtime. The graphics driver then compiles them into a shader and returns a reference for us to use.

To create the reference we need to follow the steps below:

  1. Create handles for our shaders
  2. Send the source code for each to the graphics driver
  3. Compile the shaders
  4. Check for compilation problems
  5. Tell our shader about the Fragment and Vertex shaders
  6. Link shader
  7. Validate shader
  8. Check for linker errors

The code to do this is as follows:

// references we will need
GLuint shaderHandle, vsHandle, fsHandle;
GLint result;
const GLchar* vsSource = "…";
const GLchar* fsSource = "…";

// compile the vertex shader part
vsHandle = glCreateShader (GL_VERTEX_SHADER_ARB); // Step 1
glShaderSource (vsHandle, 1, &vsSource, NULL); // Step 2
glCompileShader (vsHandle); // Step 3
glGetShaderiv (vsHandle, GL_COMPILE_STATUS, &result); // Step 4
if( result == GL_FALSE )
    ; // ohoh - need to handle the fail case (print what is wrong etc)

// compile the fragment shader part
fsHandle = glCreateShader (GL_FRAGMENT_SHADER_ARB); // Step 1
glShaderSource (fsHandle, 1, &fsSource, NULL); // Step 2
glCompileShader (fsHandle); // Step 3
glGetShaderiv (fsHandle, GL_COMPILE_STATUS, &result); // Step 4
if( result == GL_FALSE )
    ; // ohoh - need to handle the fail case (print what is wrong etc)

// link them together
shaderHandle = glCreateProgram (); // Step 1
glAttachShader (shaderHandle, vsHandle); // Step 5
glAttachShader (shaderHandle, fsHandle); // Step 5
glLinkProgram (shaderHandle); // Step 6
glValidateProgram (shaderHandle);	// Step 7
glGetProgramiv (shaderHandle, GL_LINK_STATUS, &result); // Step 8
if( result == GL_FALSE )
    ; // ohoh - need to handle the fail case (print what is wrong etc)

And that is pretty much it. There is a whole bunch of extra error checking you can do and you can even get and print the compilation errors if you want but the above is the simplest case.
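
If you do want to see why a compile failed, the driver's info log can be fetched and printed. A minimal sketch:

GLchar infoLog[1024];
GLsizei logLength;
glGetShaderInfoLog(vsHandle, sizeof(infoLog), &logLength, infoLog);
printf("vertex shader compile log: %s\n", infoLog);

(glGetProgramInfoLog does the same for link-stage errors.)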

Now that you have a working shader program you need to be able to turn it on and off. You enable a particular shader much like you would lighting, and every draw command submitted until it is turned off will use the new functionality.

As such the proper way to use it would be something like:

glUseProgram (shaderHandle); // Turn it on
// Do some drawing
glUseProgram(0); // Turn it off

Writing a (really) simple shader
The above code explains how to create a new shader and how to use it when drawing, but assumes you have some code to submit to the graphics driver. Here we will describe the simplest possible shader, plus a small variation that uses the normal at each point as the colour (for visualization, much like the image in the previous graphics post).

If all you want is to replicate the fixed functionality, the following vertex and fragment shaders will do the trick:

// Vertex shader
varying vec3 normal;
void main ()
{
	// gl_Normal is read-only, so pass the eye-space normal on via a varying
	normal = gl_NormalMatrix * normalize(gl_Normal);
	gl_FrontColor = gl_Color;
	gl_Position = ftransform();
}
// Fragment shader
void main ()
{
	// the vertex shader's gl_FrontColor arrives here as gl_Color
	gl_FragColor = gl_Color;
}

This is pretty boring though (it does even less than the normal pipeline) so let's change the fragment shader a bit to use the normal instead of the colour.

// Fragment shader
varying vec3 normal;
void main ()
{
	gl_FragColor = vec4( normal, 1.0);
}

This uses the x, y and z components of the normal as the red, green and blue components respectively (negative components simply clamp to black).

And here is that image again to show where we are at:

Normals

For further reading into what functionality is available when creating these shaders the GLSL spec is the place to go: OpenGL.org