Graphics – MGP – Melting Glass

15 10 2010

So finally we come to the actual melting of the glass by building on the foundations of the previous post. To recap, in the previous post we set up a process whereby we could render our geometry to an FBO, run a shader over this FBO (where each fragment corresponds to exactly one vertex of the geometry) and then use the resulting information as the new position/normal/colour for each vertex in the geometry.

Why not just do this in a vertex shader?
An interesting thing about this type of feedback is that changing the vertex position in one time step can be carried over to the next, something that can’t be achieved by just modifying a vertex position in a shader.

Getting Started
Before we can add the melting it is important to establish the most basic shader set for this type of operation. I was naughty and didn’t include this in the previous post but should have. The vertex shader that runs when rendering to the FBO is very simple (it does nothing interesting):

void main()
{
	gl_Position = gl_ProjectionMatrix*gl_Vertex;
	gl_TexCoord[0] = gl_MultiTexCoord0;
}

The fragment shader has a little more base code because it has to make sure the position, normal and colours are all passed on to the relevant render targets:

// references to the incoming textures (really the vertices etc)
uniform sampler2DRect colourTex; 
uniform sampler2DRect vertexTex;
uniform sampler2DRect normalTex;
void main()
{
    //retrieve the specific vertex that this fragment refers to
    vec3 vertex = texture2DRect(vertexTex,gl_TexCoord[0].st).xyz;
    vec4 color  = texture2DRect(colourTex,gl_TexCoord[0].st);
    vec3 normal = texture2DRect(normalTex,gl_TexCoord[0].st).xyz;
    // write the information back out to the various render targets
    gl_FragData[0] = vec4(vertex,1.0);
    gl_FragData[1] = color;
    gl_FragData[2] = vec4(normal,1.0);
}

With this in place we should finally have the geometry appearing again!


The user interacts with the melting by right-clicking and dragging over the surface of the glass to heat areas up. These areas then ‘flow’ in the direction of gravity as they also cool back down to a static state.

Translating mouse clicks
This means the first thing we need to know is where the user is clicking, not just in the viewport but also on our geometry. Luckily GLU provides us with an easy-to-use function for just this purpose. By using gluUnProject we can turn a provided x and y coordinate into a position within the volume the user can see. The only other things we need to provide are the current modelview and projection matrices, the viewport, and how ‘deep’ into the scene we want to reference.

The usage of gluUnProject is thus:

GLint viewport[4];
GLdouble projection[16];
GLdouble modelview[16];
GLdouble out_point[3];

glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);

gluUnProject( x, y, z, modelview, projection, viewport, &out_point[0], &out_point[1], &out_point[2]);

Which should populate out_point with the position in 3D space. The only missing piece is the ‘z’ component of where we want to hit, for which there are two strategies:

  • The first is to use a number between 0 and 1 that matches approximately the plane we want to affect. For me the number 0.8 worked well
  • The second is to read the exact depth of the first object at the selected point

If we want to take the second approach we have to make sure we have not yet cleared the depth buffer from the previous rendering, and then read the depth component for the pixel in question. Note that OpenGL’s window origin is the bottom-left corner, so a mouse y coordinate (which usually has its origin at the top-left) needs to be flipped first:

glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z);
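That flip is a one-liner, but it is easy to get backwards; here it is as a minimal sketch (the function name is mine, and the viewport height of 600 used in the checks is just an illustrative value):

```cpp
#include <cassert>

// Convert a mouse y coordinate (origin top-left) into an OpenGL window
// y coordinate (origin bottom-left), as needed by glReadPixels/gluUnProject.
int flipMouseY(int mouseY, int viewportHeight)
{
    return viewportHeight - mouseY - 1;
}
```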

‘Heating up’ the glass
Now that we have the point at which we know the user is clicking we can determine how close each vertex is to this point and apply some function to increase heat in this area. We can then store the amount of ‘heat’ that each vertex has in the colour channels so that next iteration we can recall the value and thus retain this information. Handily we humans use colour as an indication of heat so storing this information in the colour channel makes a lot of sense.

To determine the amount of heat we will apply to each vertex we first determine the distance between the vertex and the ‘affect point’ that the user is clicking.

uniform vec3 affect;
 float diff = 0.1 / dot( vertex - affect, vertex - affect ); // i.e. 0.1/d^2

We can then use this to determine how much heat we want to apply:

  float addInt = smoothstep( 0.0, MAX_INTENSITY, diff); // max/d^2

And determine how much heat will be lost through cooling:

  float remInt = (color.r + color.g + color.b) * COOL_PERCENT;

And the final intensity for this vertex (in the range [0,3]):

  float intensity = (color.r + color.g + color.b) + (addInt - remInt) * deltaTime;

To store this intensity for next time we can modify how the colour is stored so that instead of just the previous colour it is now:

  gl_FragData[1] = clamp(vec4(intensity,intensity-1.0,intensity-2.0,1.0),0.0,1.0); // colour

which will mean that red, green, and blue will be ‘filled’ in sequence up to a white hot heat.

For the purposes of completeness: I used the values of 1.0 and 0.5 for MAX_INTENSITY and COOL_PERCENT respectively.
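Putting the heating, cooling, and colour-packing steps together, the same per-vertex update can be sketched in plain C++ (a sketch, not the shader itself: smoothstep is reimplemented here to match the GLSL built-in, the helper names are mine, and the clamp to [0,3] mirrors what the colour packing effectively enforces between frames):

```cpp
#include <cassert>
#include <algorithm>
#include <cmath>

const float MAX_INTENSITY = 1.0f;
const float COOL_PERCENT  = 0.5f;

// GLSL-style smoothstep: 0 below edge0, 1 above edge1, smooth in between.
float smoothstep(float edge0, float edge1, float x)
{
    float t = std::min(std::max((x - edge0) / (edge1 - edge0), 0.0f), 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// One heat-update step for a single vertex; distSq is the squared distance
// to the clicked 'affect' point. Returns the new intensity in [0,3].
float updateIntensity(float oldIntensity, float distSq, float deltaTime)
{
    float diff   = 0.1f / distSq;                          // falls off with squared distance
    float addInt = smoothstep(0.0f, MAX_INTENSITY, diff);  // heat added by the click
    float remInt = oldIntensity * COOL_PERCENT;            // heat lost to cooling
    float intensity = oldIntensity + (addInt - remInt) * deltaTime;
    return std::min(std::max(intensity, 0.0f), 3.0f);
}

// Pack an intensity in [0,3] into RGB so red, green, and blue 'fill' in
// sequence up to white hot, exactly as the clamped colour write does.
void packIntensity(float intensity, float rgb[3])
{
    rgb[0] = std::min(std::max(intensity,        0.0f), 1.0f);
    rgb[1] = std::min(std::max(intensity - 1.0f, 0.0f), 1.0f);
    rgb[2] = std::min(std::max(intensity - 2.0f, 0.0f), 1.0f);
}
```

Reading the intensity back next frame is then just the sum of the three channels.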

I’m melting…
After calculating the intensity in the previous step we now have enough information to advect the vertex to a new position. Step one in this process is knowing which way is ‘down’. This can be extracted from the modelview matrix: if we treat the world ‘y axis’ as vertical, then retrieving the ‘y axis’ of the current camera view gives us the direction gravity acts along. Thus gravity becomes:

float gravity[3];
// OpenGL matrices are column-major, so elements 1, 5 and 9 form the second row
gravity[0] = modelview[1];
gravity[1] = modelview[5];
gravity[2] = modelview[9];

And the amount to move the vertex by vertically can be calculated in the shader by:

 vec3 transform = -(intensity*gravity*deltaTime*MAX_MELT);

 gl_FragData[0] = vec4(vertex + transform,1.0); // vertex

where MAX_MELT is just a scale factor of 0.02.
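As a sanity check, the gravity extraction and the advection step can be exercised together in plain C++ with an identity modelview matrix (function names are mine; with the identity matrix, ‘up’ is simply (0,1,0)):

```cpp
#include <cassert>
#include <cmath>

const float MAX_MELT = 0.02f;

// Extract the vertical direction: the second row of a column-major 4x4 matrix.
void extractGravity(const double modelview[16], float gravity[3])
{
    gravity[0] = (float)modelview[1];
    gravity[1] = (float)modelview[5];
    gravity[2] = (float)modelview[9];
}

// Advect one vertex downwards by an amount proportional to its heat intensity.
void advectVertex(float vertex[3], const float gravity[3],
                  float intensity, float deltaTime)
{
    for (int i = 0; i < 3; ++i)
        vertex[i] -= intensity * gravity[i] * deltaTime * MAX_MELT;
}
```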

With this the glass will start melting when right-clicked!!

The only thing that will look a little strange is that while advecting the vertices we leave the normals as they are, which isn’t exactly correct. However, because we don’t have direct access to the surrounding vertices we can’t easily reconstruct the exact normal, and in my experiments keeping it the same works as an approximation unless large changes are made to the geometry…

So the results:

Melting Glass

And a short video:


Graphics – MGP – Rendering Geometry to an FBO

7 10 2010

The title of this post might seem a little strange both in content and intention. Hopefully by the end of it I will have explained what I mean by “rendering geometry to an FBO” and why we would want to do it.

First the why
Now that we have a reasonable approximation of a glass the next and final step is to make it melt. This involves simulating the process of heating up glass, letting gravity cause the heated (and now less viscous) glass to flow, and then letting the glass cool over time into its new shape.

This sounds pretty complicated (it isn’t) but most importantly it requires us to be able to take the position of each vertex in the glass, do something with it, and then store the new position for the next time we want to draw it. The naïve approach would be to keep all the info in main memory, do our thing, and then update the graphics card with the new positions every time. This has a few issues though:

  1. Even a fast CPU is slow at this sort of per-vertex work compared to the GPU
  2. We end up transferring lots to the graphics card
  3. It isn’t anywhere near as cool as getting the GPU to do it!

And now the how…
Back in the post on tessellation we described how we were uploading the data onto the graphics card so that each time we wanted to draw this we didn’t have to upload it again.

So the idea behind rendering the geometry to an FBO and reading it back out is that we can use the existing vertex/normal/colours buffers as ‘textures’, run calculations on the graphics card within a shader, and then read the information back out into the same buffers to continue the rendering process as we have previously.

Based on this we can break down what we have to do into 4 steps: create an FBO to render into, interpret the existing buffers as textures, run a shader over them, and copy the result back into the original buffers.

1. Create an FBO
I’m not going to post the exact code I use for setting up the FBOs as there is a bit more going on (and this post is going to be long enough) but here are the basics. This code will set up a variable number of colour attachments to take advantage of the multiple render targets that FBOs can have.

// Create a reference to an fbo
glGenFramebuffersEXT(1, (GLuint*)&FBOId);
// bind to the new fbo
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, FBOId);
// Create the textures representing the results that will be written to
// and attach each one as a colour attachment of the fbo
for( int i = 0; i < num_draw_buffers; i++ )
{
	glGenTextures(1, (GLuint*)&TexId[i]);
	glBindTexture(GL_TEXTURE_RECTANGLE_ARB, TexId[i]);
	glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB_FLOAT32_APPLE,  width, height, 0, GL_RGB, GL_FLOAT, 0x0);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT + i, GL_TEXTURE_RECTANGLE_ARB, TexId[i], 0);
}
// Check the final status of this frame buffer
int status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if( status != GL_FRAMEBUFFER_COMPLETE_EXT )
	valid = false;
else
	valid = true;
// Unbind FBO so we can continue as normal
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

The other setup we need to do is to allocate texture references that we can eventually associate with the buffers.

But first: a non-obvious issue. Buffers for holding vertex, normal and colour information are one-dimensional and essentially only limited by available memory. Textures are different, and different graphics cards/drivers impose different maximum dimensions. Thus we need to fold our 1D buffer into a 2D layout where the width is no larger than the maximum texture width supported by the card.

// calculate the width and height of the fbo we need
if( numberVerts < max_width )
{
	renderWidth = numberVerts;
	renderHeight = 1;
	renderOverrun = 0;
}
else
{
	renderWidth = max_width;
	renderHeight = (numberVerts + max_width - 1) / max_width; // round up
	renderOverrun = renderWidth * renderHeight - numberVerts;
}
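The fold from 1D to 2D is easy to get wrong at the row boundaries, so here it is as a small self-contained C++ function using a round-up division (the struct and function names are mine, and the max width of 100 used in the checks is an arbitrary example rather than a real hardware limit):

```cpp
#include <cassert>

struct RenderDims { int width, height, overrun; };

// Fold a 1D buffer of 'numberVerts' elements into a 2D layout no wider
// than 'maxWidth'; 'overrun' counts the unused texels in the final row.
RenderDims calcRenderDims(int numberVerts, int maxWidth)
{
    RenderDims d;
    if (numberVerts < maxWidth) {
        d.width = numberVerts;
        d.height = 1;
        d.overrun = 0;
    } else {
        d.width = maxWidth;
        d.height = (numberVerts + maxWidth - 1) / maxWidth; // round up
        d.overrun = d.width * d.height - numberVerts;
    }
    return d;
}
```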

And now that we have our dimensions we can allocate the required texture references. As you may notice the data portion of the call is NULL which means we aren’t actually providing data, just allowing for a reference to a texture of the given size.

We need to use GL_RGB_FLOAT32_APPLE as our internal storage format as we need reasonably high precision that would otherwise be lost. We will also be using GL_TEXTURE_RECTANGLE_ARB instead of a standard 2D texture because rectangle textures are addressed with unnormalized (integer) texel coordinates, which makes it easy to fetch exactly one texel per vertex.

glGenTextures(1, (GLuint*)&vertexTex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, vertexTex);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB_FLOAT32_APPLE,  renderWidth, renderHeight, 0, GL_RGB, GL_FLOAT, 0x0);

2. Interpret the existing Buffers as Textures
This step is one of the simplest and involves only 3 actions for each buffer: bind the texture, bind the buffer, and ‘update’ the texture from the buffer. The trick is not to provide data when updating the texture, which tells the card to use the bound buffer as the data source instead.

glBindTexture(GL_TEXTURE_RECTANGLE_ARB, vertexTex);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, vertexBufferRef);
glTexSubImage2D( GL_TEXTURE_RECTANGLE_ARB,	0, 0, 0, renderWidth, renderHeight, GL_RGB, GL_FLOAT, NULL );

3. Render the data into the FBO
Rendering the data into the FBO involves setting up one quad that covers the entire frame so that each fragment corresponds to one vertex. Thus each time the fragment shader runs it will advect one vertex (and update colours and normals). For this to happen we have to change the frame buffer we are rendering into, bind the textures we updated in Step 2, enable the shader, render our quad, and then disable everything again.

// Step 1: Change the frame buffer and set up the view port for rendering
// (a matching orthographic projection is assumed to have been set up already)
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, FBOId);
glViewport(0, 0, renderWidth, renderHeight);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);	// Clear Screen And Depth Buffer	// OUCH!! Big performance hit
glDrawBuffers(numTexId, dbuffers);

// Step 2: bind the textures (vertex shown; repeat for normal/colour on other units)
glActiveTextureARB( GL_TEXTURE0_ARB );
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, vertexTex);

// Step 3: enable the shader
glUseProgram(shaderProgram);

// Step 4: render the quad - yes it is immediate mode. yes that is bad
glBegin(GL_QUADS);
glMultiTexCoord2f(GL_TEXTURE0, 0, 0);
glVertex2f(0, 0);
glMultiTexCoord2f(GL_TEXTURE0, renderWidth, 0);
glVertex2f(renderWidth, 0);
glMultiTexCoord2f(GL_TEXTURE0, renderWidth, renderHeight);
glVertex2f(renderWidth, renderHeight);
glMultiTexCoord2f(GL_TEXTURE0, 0, renderHeight);
glVertex2f(0, renderHeight);
glEnd();

// Step 5: turn everything back to normal
glUseProgram(0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

4. Read the FBO render targets back into the Textures
This is also an easy step as we are just reversing the process taken in Step 2. The difference is that we are now binding the render target as our source (instead of the buffer) and we read into the buffer using glReadPixels instead of glTexSubImage2D. Once again using NULL for the ‘data’ portion tells it to use the attached read buffer.

// bind to the FBO so we can reference its render targets
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo->getID());
// select the render target to read from and bind the buffer as the destination
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glBindBuffer(GL_PIXEL_PACK_BUFFER, vertexBufferRef);
// read the output of the render target back into the buffer
glReadPixels(0, 0, renderWidth, renderHeight, GL_RGB, GL_FLOAT, 0);

And the final output!
If something didn’t work then the final output will be nothing at all (doh!) but if it did you will be greeted with the impressive sight of… exactly what we had when we started. But we now are ready to add the melting.


Addendum: Actually, because I haven’t specified what the shaders are for rendering into the FBO nothing will be passed on till the next rendering step so nothing will be shown… till the next post

Photo Blog 112

26 09 2010

While it is a subject that has been done to death, the Golden Gate Bridge is pretty spectacular and I don’t think anyone goes there without taking a photo. We arrived in the early afternoon and there was already the stereotypical fog rolling in off the sea, which made it all quite dramatic, if a little cold.

The Bridge (f14 45mm 1/100s)
All I’ve done is convert to Black and White using a blue filter (which makes the bridge’s red closer to black) and I really like the almost antique feel. Maybe I should add some grain or something…

Photo Blog 111

12 09 2010

As mentioned in the last photo blog I’ve been working on some more product photography. The new setup is a bit simpler than before, using just one light and the box set up in a way that is easier to manipulate. I also had the props/background that my Aunt needed for her shots, which meant that the white backdrop wasn’t necessary. This made it heaps easier to set up each individual item and cut the time required down substantially.

The setup:

The Setup

The only real issue was making sure that I was using a custom white balance that matched the lighting, as otherwise it would have been a pain in post-processing. In the end the only changes needed in post were a bit of cropping and use of the ‘heal’ brush in Photoshop to remove some artifacts near the edges.

A couple example images:

Example Shot 1 (f16 2.5s 105mm)

Example Shot 2 (f16 2.5s 105mm)

Graphics – MGP – Glass

8 09 2010

With this installment we will finally make our object look like it is made of glass. Luckily we have already been introduced to shaders in a previous post so all that remains to be done is to describe how we can incorporate our new environment/cube map into our shader and use it to fake reflection and refraction.

The first thing we are going to need for our calculations is the normal and position associated with each pixel. These can be calculated in the Vertex shader, as can the usual set of calculations needed for basic operations. As such our Vertex shader is quite simple, as shown in Figure 1.

varying vec3 normal;
varying vec3 view;

void main()
{
    // the basics that have to be done
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
    // calculate the view vector in eye-space using the ModelView matrix
    view = (gl_ModelViewMatrix * gl_Vertex).xyz;
    // calculate the normal vector in eye-space using the Normal matrix
    normal = normalize(gl_NormalMatrix * gl_Normal);
}
// Figure 1

Using the same code as in the previous post the cube map texture should automatically be available to us in the GLSL shader as a texture. The only difference is that the type of the texture is samplerCube instead of sampler2D. Thus we begin our fragment shader like this:

// our cube map texture
uniform samplerCube tex;
//variables passed in from our vertex shader
varying vec3 view;
varying vec3 normal;

void main()
{

When we want to access our cube map we do so through the use of a vector which expresses the direction we wish to sample the ‘cube’ that surrounds our object. For example, by calculating the reflection vector of the view around the normal and sampling this we can see what would be ‘reflected’ if our object was a perfect mirror.

    // create the reflection vector 
    vec3 reflect_v = normalize(reflect(view,-normal));
    // sample the cube map to find the reflected color
    vec3 colReflect = textureCube(tex,reflect_v).rgb;

GLSL provides a very useful function which we use here for performing the reflection of ‘view’ around ‘-normal’ so we don’t even have to think about it.
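For the curious, reflect itself is just a couple of operations; a plain C++ sketch of the same formula (R = I - 2*dot(N,I)*N, operating on 3-element arrays, with names of my own choosing) looks like this:

```cpp
#include <cassert>

// GLSL-style reflect: r = vIn - 2*dot(n, vIn)*n, with n assumed normalized.
void reflect3(const float vIn[3], const float n[3], float r[3])
{
    float d = vIn[0]*n[0] + vIn[1]*n[1] + vIn[2]*n[2];
    for (int i = 0; i < 3; ++i)
        r[i] = vIn[i] - 2.0f * d * n[i];
}
```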

We can do exactly the same thing to find the colour that would come from refracting through the surface. This refraction would of course depend on the refractive indices of the two materials. Because we are trying to model glass this means that we are going from a refractive index of 1.0 (air) to 1.2 (glass).

    const float eta = 1.00 / 1.20;
    // create the refraction vector 
    vec3 refract_v = normalize(refract(view,-normal,eta));
    // sample the cube map to find the refracted color
    vec3 colRefract = textureCube(tex,refract_v).rgb;

There is one major limitation to this of course. A real simulation would model the effect of coming out the other side of the glass, and perhaps even entering another material if there are other objects in the scene / concave objects. This is pretty serious but for our purposes it works ok because we limit ourselves to only one object and it kind of looks ‘good enough’.
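The refract built-in is only slightly more involved, following the standard formula from the GLSL specification and also handling total internal reflection. A plain C++ sketch of it (again on 3-element arrays, with names of my own choosing and a boolean to flag total internal reflection) looks like:

```cpp
#include <cassert>
#include <cmath>

// GLSL-style refract: vIn is the (normalized) incident vector, n the surface
// normal, eta the ratio of refractive indices. Returns false on total
// internal reflection (r is set to the zero vector in that case).
bool refract3(const float vIn[3], const float n[3], float eta, float r[3])
{
    float d = vIn[0]*n[0] + vIn[1]*n[1] + vIn[2]*n[2];
    float k = 1.0f - eta * eta * (1.0f - d * d);
    if (k < 0.0f) {
        r[0] = r[1] = r[2] = 0.0f;
        return false;
    }
    float scale = eta * d + std::sqrt(k);
    for (int i = 0; i < 3; ++i)
        r[i] = eta * vIn[i] - scale * n[i];
    return true;
}
```

At normal incidence the ray passes straight through, which is a handy quick check.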

Finally we composite the reflected and refracted components together, along with a bit of specular lighting from the Phong model discussed previously.

    // calculate the specular lighting (using the light's direction in eye-space)
    float light = pow(max(dot(normal, normalize(gl_LightSource[0].position.xyz)), 0.0), 1000.0);
    // coefficients for each of the calculated components (reflection, refraction, lighting). These DON'T have to add to 1.0
    const vec3 coeff = vec3( 0.3, 0.6, 0.5 );
    gl_FragColor = vec4(coeff.x * colReflect + coeff.y * colRefract + coeff.z * vec3(light,light,light) + gl_Color.rgb,1.0);
}

This gives us a final result that looks something like this:


Photo Blog 110

7 09 2010

Recently I got to do a bit of road trippin’ between Las Vegas and San Francisco and we took the route through Death Valley and Yosemite National Park. There is some stunning scenery through there as well as some rather interesting extremes. In 12 hours we went from -200 feet to 10,000 feet and from 46ºC to 3ºC. Luckily I was traveling with a couple of other photographers so there were plenty of opportunities for stopping and taking photos.

Obviously there are lots of photos from there that I’m proud of but one in particular would have to be a panorama I took while leaving Death Valley. So here it is:

Death Valley Panorama

I really do suggest you click on it to view it at a higher resolution as seeing it in a small frame like above doesn’t do it justice.

Graphics – MGP – Environment Mapping

6 09 2010

From the previous step we have a well illuminated cup that looks like it is made of clay, but hardly like glass. To make this a little more realistic we are going to use environment mapping. This works by defining a set of textures representing the six sides of a cube surrounding our scene. These are stored in a cube map, which can be accessed like any other texture, so at any point we can ask what colour lies in a particular direction and thus create pseudo reflection and refraction by asking what from the scene would be reflected or refracted.
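Under the hood, a cube map lookup starts by picking the face that matches the direction’s largest axis. A small C++ sketch of just that selection step (faces numbered 0-5 in the standard +X,-X,+Y,-Y,+Z,-Z order; the function name is mine) makes the idea concrete:

```cpp
#include <cassert>
#include <cmath>

// Pick the cube map face for a direction vector: 0:+X 1:-X 2:+Y 3:-Y 4:+Z 5:-Z.
int cubeMapFace(float x, float y, float z)
{
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az)
        return x >= 0.0f ? 0 : 1;
    if (ay >= az)
        return y >= 0.0f ? 2 : 3;
    return z >= 0.0f ? 4 : 5;
}
```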

Creating the Cube Map
The first step to do this is to acquire a Cube Map. The one I have to use is one I found quite a while ago of the Swedish Royal Palace at night. It has a lot of interesting colours and lighting and while it isn’t the natural habitat for a wine glass it suits well enough for our purposes. You can often find textures appropriate for this sort of thing around the internet and one good collection in particular is Emil Persson’s over here.

These images then need to be loaded into graphics memory so they can be used. This is done using the same glTexImage2D calls as usual, but with the cube map face targets shown in Figure 1 instead.

// Figure 1

Once all 6 textures are loaded we can generate the environment map needed for reflections by calling the functions outlined in Figure 2.

// Figure 2

And finally, every time we want to employ our new Environment Map all we have to do is turn it on like we would any other texture.

// Enable

// Draw

// Disable
// Figure 3

The result we now get looks more like pewter than glass, but we can already see the basic effect of environment mapping and how it might allow us to perform refraction as well as reflection.

Environment Mapping

This cube/environment map will now be available for us in the shader so we can make more customized requests of it.