So finally we come to the actual melting of the glass, building on the foundations of the previous post. To recap: in the previous post we set up a process whereby we render our geometry to an FBO, run a shader over this FBO (where each fragment corresponds to exactly one vertex of the geometry) and then use the resulting information as the new position/normal/colour for each vertex in the geometry.

**Why not just do this in a vertex shader?**

An interesting thing about this type of feedback is that changing the vertex position in one time step can be carried over to the next, something that can’t be achieved by just modifying a vertex position in a shader.

**Getting Started**

Before we can add the melting we need the most basic shader set for this type of operation in place. I was naughty and didn't include this in the previous post but should have. The vertex shader that runs when rendering to the FBO is very simple (it does nothing):

```glsl
void main()
{
    gl_Position = gl_ProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
```

The fragment shader has a little more base code because it has to make sure the position, normal and colours are all passed on to the relevant render targets:

```glsl
// references to the incoming textures (really the vertices etc)
uniform sampler2DRect colourTex;
uniform sampler2DRect vertexTex;
uniform sampler2DRect normalTex;

void main()
{
    // retrieve the specific vertex that this fragment refers to
    vec3 vertex = texture2DRect(vertexTex, gl_TexCoord[0].st).xyz;
    vec4 color  = texture2DRect(colourTex, gl_TexCoord[0].st);
    vec3 normal = texture2DRect(normalTex, gl_TexCoord[0].st).xyz;

    // write the information back out to the various render targets
    gl_FragData[0] = vec4(vertex, 1.0);
    gl_FragData[1] = color;
    gl_FragData[2] = vec4(normal, 1.0);
}
```

With this in place we should finally have the geometry appearing again!

# Melting

The user interacts with the melting by right-clicking and dragging over the surface of the glass to heat areas up. These areas then 'flow' in the direction of gravity while they cool back down to a static state.

**Translating mouse clicks**

This means the first thing we need to know is where the user is clicking, not just in the viewport but also on our geometry. Luckily GLU provides an easy-to-use function for just this purpose. With *gluUnProject* we can turn a given x and y coordinate into a position within the volume the user can see. The only other things we need to provide are the current modelview and projection matrices, the viewport, and how 'deep' into the scene we want to reference.

The usage of *gluUnProject* is thus:

```c
GLint    viewport[4];
GLdouble projection[16];
GLdouble modelview[16];
GLdouble out_point[3];

glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);

gluUnProject(x, y, z,
             modelview, projection, viewport,
             &out_point[0], &out_point[1], &out_point[2]);
```

This should populate *out_point* with the position in 3D space. The only missing piece is the 'z' component of where we want to hit, for which there are two strategies:

- The first is to use a number between 0 and 1 that approximately matches the plane we want to affect. For me the number 0.8 worked well
- The second is to read the exact depth of the first object at the selected point

If we want to take the second approach we have to make sure we have not yet cleared the depth buffer from the previous rendering and then read the depth component for the pixel in question:

```c
GLfloat z;
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z);
```

**‘Heating up’ the glass**

Now that we have the point at which we know the user is clicking we can determine how close each vertex is to this point and apply some function to increase heat in this area. We can then store the amount of ‘heat’ that each vertex has in the colour channels so that next iteration we can recall the value and thus retain this information. Handily we humans use colour as an indication of heat so storing this information in the colour channel makes a lot of sense.

To determine the amount of heat we will apply to each vertex we first determine the distance between the vertex and the ‘affect point’ that the user is clicking.

```glsl
uniform vec3 affect;
...
float diff = 0.1 / dot(vertex - affect, vertex - affect); // i.e. 0.1 / d^2
```

We can then use this to determine how much heat we want to apply:

```glsl
float addInt = smoothstep(0.0, MAX_INTENSITY, diff); // squash the falloff into [0, 1]
```

And determine how much heat will be lost through cooling:

```glsl
float remInt = (color.r + color.g + color.b) * COOL_PERCENT;
```

And the final intensity for this vertex (in the range [0, 3]):

```glsl
float intensity = (color.r + color.g + color.b) + (addInt - remInt) * deltaTime;
```

To store this intensity for next time we can modify how the colour is stored so that instead of just the previous colour it is now:

```glsl
gl_FragData[1] = clamp(vec4(intensity, intensity - 1.0, intensity - 2.0, 1.0), 0.0, 1.0); // colour
```

which means that red, green, and blue will be 'filled' in sequence up to a white-hot heat.

For the purposes of completeness: I used the values of 1.0 and 0.5 for *MAX_INTENSITY* and *COOL_PERCENT* respectively.

**I’m melting…**

After calculating the intensity in the previous step we now have enough information to advect each vertex to a new position. Step one in this process is knowing which way is 'down'. This can be extracted from the modelview matrix: if we treat the 'y axis' as vertical, then retrieving the 'y axis' of the current camera view gives us 'up', and negating it (which the shader does below) gives us gravity. Thus gravity becomes:

```c
float gravity[3];
gravity[0] = modelview[1];
gravity[1] = modelview[5];
gravity[2] = modelview[9];
```

And the amount to move the vertex by vertically can be calculated in the shader by:

```glsl
vec3 transform = -(intensity * gravity * deltaTime * MAX_MELT);
gl_FragData[0] = vec4(vertex + transform, 1.0); // vertex
```

where *MAX_MELT* is just a scale factor of 0.02.

With this the glass will start melting when right-clicked!

The only thing that will look a little strange is that while advecting the vertices we leave the normals as they are, which isn't exactly correct. However, because we don't have direct access to the surrounding vertices we can't easily reconstruct the exact normal, and in my experiments keeping it the same works as an approximation unless large changes are made to the geometry.

So the results:

And a short video: