Part 1: Intro - Part 2: Semantics - Part 3: Vector Types - Part 4: Intrinsics & Textures - Part 5: Blinn-Phong Shader
Now it's time to combine all the knowledge acquired over the last few pages into a real shader that's actually useful.
I chose a Blinn-Phong reflection shader because it's a pretty standard lighting model, and it's something that needs to be re-written every time you override TVMesh/TVMinimesh/TVActor/TVLandscape's default shader; a custom shader overrides all rendering, including basic lighting.
I originally intended to use the straight Phong reflection model, but a surprising and interesting presentation demonstrated that the Blinn-Phong shading model is actually more true-to-life than Phong after all... so why not use it instead, especially since it's less computationally expensive.
The Blinn-Phong reflection model is the standard lighting model used in both the DirectX and OpenGL rendering pipelines.
What we're going to do is mimic the fixed-function pipeline functionality, with two differences:
Apart from the colors, three vectors define how lit or unlit a point on a surface is, according to this model.
The reflection model is split into three components, but DirectX materials actually provide four, so we're going to add a fourth one. All these components are summed to get any point's reflected color: its perceptible color, the one we'll render.
Up to now, all that’s been said was also true for the Phong reflection model. The difference between the two lies with how specular highlights are calculated.
The halfway vector is the vector halfway between the light and view vectors. It is then used in conjunction with the normal vector to compute the specular highlight.
Using this vector avoids the costlier reflection equation (2 * dot(normal, light) * normal - light) that Phong reflection requires.
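To make that cost difference concrete, here's a small CPU-side sketch in Python that evaluates the specular term both ways. All the vectors and the exponent are made-up illustration values; this is not shader code, just the same math on the CPU.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]

normal = [0.0, 1.0, 0.0]             # unit surface normal
light = normalize([0.3, 1.0, 0.2])   # surface-to-light direction
view = normalize([-0.4, 1.0, 0.1])   # surface-to-camera direction

# Phong needs the full reflection vector: 2 * dot(N, L) * N - L
reflection = [2 * dot(normal, light) * n - l for n, l in zip(normal, light)]

# Blinn-Phong only needs the halfway vector: normalize(L + V)
halfway = normalize([l + v for l, v in zip(light, view)])

power = 16  # arbitrary specular exponent
phong_spec = max(dot(reflection, view), 0.0) ** power
blinn_spec = max(dot(normal, halfway), 0.0) ** power
```

Both terms land in the same ballpark for a given setup, but the halfway vector is one add and one normalize instead of the full reflection expansion.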
Enough theory for now, on to the actual shader code! Here’s the partial vertex shader :
```hlsl
// Global-Level Semantics
float4x4 matWorldViewProj : WORLDVIEWPROJECTION;

// Vertex Shader Input Structure
struct VS_INPUT
{
    float4 position : POSITION;   // Vertex position in object space
    float3 normal   : NORMAL;     // Vertex normal in object space
    float2 texCoord : TEXCOORD;   // Vertex texture coordinates
};

// Vertex Shader Output Structure
struct VS_OUTPUT
{
    float4 position : POSITION;   // Pixel position in clip space
    float2 texCoord : TEXCOORD0;  // Pixel texture coordinates
    float3 normal   : TEXCOORD1;  // Pixel normal vector
    float3 view     : TEXCOORD2;  // Pixel view vector
};

#define PS_INPUT VS_OUTPUT        // What comes out of VS goes into PS!

// Vertex Shader Function
VS_OUTPUT VS(VS_INPUT IN)
{
    VS_OUTPUT OUT;

    // Basic transformation of untransformed vertex into clip-space
    OUT.position = mul(IN.position, matWorldViewProj);

    // No scaling or translation is done, simply assign them and let the GPU interpolate
    OUT.texCoord = IN.texCoord;

    // Calculate the normal vector

    // Calculate the view vector

    return OUT;
}
```
The main problem with this vector is that the vertex shader input provides an object-space representation, while the reflection model needs a world-space one. Intuitively, we'd multiply the normal by the World matrix using the mul intrinsic, but it's not that simple...
In the example drawing, a scaling operation is performed on the surface along X. The red vector is the pre-transformation normal transformed by the scaling matrix, while the green one is clearly the actual normal of the resulting surface. This shows visually that the transformation matrix (the World matrix, in Direct3D terms) is not the answer.
I can't explain exactly why (but this guy can), but the matrix to use when transforming normals is the inverse-transpose of the World matrix. Using it gives the green vector in the drawing, and it works in all 3D situations as well.
But there's more! Translation does not affect normals: if you move an object from one place to another, its orientation stays the same. Only rotation and scaling affect a surface's normals. So we strip the 4th column of the matrix to avoid transforming the normals with the object's translation.
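The effect is easy to verify numerically. Here's a toy 2D sketch in Python (the surface and scale factor are made-up values mirroring the drawing): the naively transformed normal is no longer perpendicular to the scaled surface, while the inverse-transpose-transformed one still is.

```python
# Toy 2D example mirroring the drawing: a non-uniform scale of 2 on X.
# The surface runs along tangent (1, -1); its true normal is (1, 1).
tangent = (1.0, -1.0)
normal = (1.0, 1.0)

def apply(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

M = ((2.0, 0.0), (0.0, 1.0))      # the "World" matrix: scale X by 2
M_it = ((0.5, 0.0), (0.0, 1.0))   # its inverse-transpose (reciprocal diagonal)

new_tangent = apply(M, tangent)   # (2, -1): the scaled surface direction
naive = apply(M, normal)          # (2, 1): the red (wrong) vector
correct = apply(M_it, normal)     # (0.5, 1): the green (right) vector

print(dot(naive, new_tangent))    # 3.0 - not perpendicular to the surface
print(dot(correct, new_tangent))  # 0.0 - still perpendicular
```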
```hlsl
// An additional global-level semantic
float4x3 matWorldIT : WORLDINVERSETRANSPOSE;

[...]

    // Calculate the normal vector
    OUT.normal = mul(matWorldIT, IN.normal);
```
The view vector is somewhat tricky as well. No TV3D semantic gives it to us directly, so we need to work it out ourselves.
Since the view vector points from the current vertex's (or pixel's) position to the camera position, we can find it intuitively with a vector subtraction. It's also the only way I've found that actually works...
Thankfully, TV3D does provide the camera’s position with a semantic. All we need to do is transform the vertex position (which is object-space) to world-space, so that the vector subtraction is performed entirely in world-space.
```hlsl
// Two other global-level semantics
float4x4 matWorld : WORLD;
float3 viewPosition : VIEWPOSITION;

[...]

    // Calculate the view vector
    float3 worldPos = mul(IN.position, matWorld).xyz;
    OUT.view = viewPosition - worldPos;
```
We use swizzling to keep only the X, Y and Z components of the result, since a 4D vector multiplied by a 4×4 matrix gives a 4D vector, and we don't really care about the (rather obscure) W component here.
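As a sanity check, the same computation can be sketched on the CPU in Python. The matrix and positions are toy values, using the row-vector convention implied by mul(IN.position, matWorld):

```python
# Row-vector convention, as in HLSL's mul(IN.position, matWorld).
world = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [10.0, 0.0, 0.0, 1.0],  # translation by (10, 0, 0) in the 4th row
]

def mul_vec_mat(v, m):
    # 4D row vector times 4x4 matrix gives a 4D row vector
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

obj_pos = [1.0, 2.0, 3.0, 1.0]               # object-space position, w = 1
world_pos = mul_vec_mat(obj_pos, world)[:3]  # the .xyz swizzle: drop w
camera = [0.0, 5.0, 0.0]                     # world-space camera position
view = [c - p for c, p in zip(camera, world_pos)]
print(world_pos)  # [11.0, 2.0, 3.0]
print(view)       # [-11.0, 3.0, -3.0]
```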
When working in world-space, this vector is constant during the effect's lifetime, identical for all vertices of the model, so to speak. We can therefore use the semantic-mapped vector directly in the pixel shader, where it's needed.
The vertex shader was rather simple; that's because we decided to implement the effect per-pixel! Here is the pixel shader, partial as well:
```hlsl
// Light direction global-level semantic
float3 dirLightDir : LIGHTDIR0_DIRECTION;

// Pixel Shader Function
float4 PS(PS_INPUT IN) : COLOR
{
    // Normalize all vectors in pixel shader to get Phong shading
    // Normalizing in vertex shader would provide Gouraud shading
    float3 light = normalize(-dirLightDir);
    float3 view = normalize(IN.view);
    float3 normal = normalize(IN.normal);

    // Calculate the half vector

    // Calculate the emissive lighting
    // Calculate the ambient reflection
    // Calculate the diffuse reflection
    // Calculate the specular reflection

    // Fetch the texture coordinates
    float2 texCoord = IN.texCoord;

    // Sample the texture

    // Combine all the color components

    // Calculate the transparency

    // Return the pixel's color
}
```
One might wonder: why invert the light vector? Because in all reflection models (all that I know of, anyway), the light vector points from the surface to the light, whereas DirectX treats the light direction as pointing from the light to the surface. Even if the latter makes more sense, we must obey the model...
As explained above, the halfway vector is just the average of the light and view vectors.
```hlsl
    // Calculate the half vector
    float3 halfway = normalize(light + view);
```
Intuitively, we could have divided the sum by 2 before normalizing it, but since a vector multiplied or divided by a positive constant represents the same direction, that operation is useless; normalizing right away is more efficient.
Note that we do have to normalize it, because averaging two normalized vectors does not generally give a normalized vector, and that would throw off the specular calculation.
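A quick Python check of that claim, using two perpendicular unit vectors as an extreme case:

```python
import math

def length(v):
    return math.sqrt(sum(x * x for x in v))

# Two perpendicular unit vectors: their average is clearly not unit-length
a = (1.0, 0.0, 0.0)
b = (0.0, 1.0, 0.0)
avg = tuple((x + y) / 2 for x, y in zip(a, b))
print(length(avg))  # ~0.707, not 1 - hence the normalize(light + view)
```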
We will be using material semantics in the next couple of sections. Both emissive and ambient lighting are constant, so no vector is involved in their computation.
```hlsl
// Two first global-level material semantics
float3 materialEmissive : EMISSIVE;
float3 materialAmbient : AMBIENT;

[...]

    // Calculate the emissive lighting
    float3 emissive = materialEmissive;

    // Calculate the ambient reflection
    float3 ambient = materialAmbient;
```
There is a difference between the two, though: ambient lighting depends on the light's color. We'll make that distinction when summing the components into the final color.
As per Lambert's cosine law, and as defined in the Phong reflection model, the diffuse reflection is the dot product of the light vector and the surface's normal. We also modulate it with the material's diffuse component.
```hlsl
// Another global-level material semantic
float4 materialDiffuse : DIFFUSE;

[...]

    // Calculate the diffuse reflection
    float3 diffuse = saturate(dot(normal, light)) * materialDiffuse.rgb;
```
Note that we take all four components of the DIFFUSE semantic because the alpha component defines the opacity of the surface. This will come in handy later.
The saturate intrinsic is also used to avoid negative dot products on the faces opposing the light.
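saturate simply clamps its input to the [0, 1] range; a one-line Python stand-in shows why it guards against backfaces:

```python
def saturate(x):
    # Stand-in for HLSL's saturate intrinsic: clamp to [0, 1]
    return min(max(x, 0.0), 1.0)

# A face pointing away from the light has a negative dot(normal, light)
print(saturate(-0.4))  # 0.0 - no diffuse contribution instead of a negative one
print(saturate(0.6))   # 0.6 - lit faces pass through unchanged
```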
The Blinn-Phong model defines specular reflection as the dot product of the halfway vector and the surface's normal, raised to the nth power, where n is the specular power: the exponent that defines how shiny the surface is, i.e. how focused the specular highlights are. The material's specular component is modulated in here too.
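To see what the exponent does, here's a tiny Python sketch using an arbitrary dot(normal, halfway) value of 0.95:

```python
# Higher specular powers shrink the same raw dot product towards zero,
# which tightens the highlight around the mirror direction.
ndoth = 0.95  # arbitrary example value of dot(normal, halfway)
for power in (8, 32, 128):
    print(power, ndoth ** power)
```

A point slightly off the mirror direction still glows on a dull surface (low power) but falls to near-black on a shiny one (high power).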
```hlsl
// Two final global-level material semantics
float3 materialSpecular : SPECULAR;
float materialPower : SPECULARPOWER;

[...]

    // Calculate the specular reflection
    float3 specular = pow(saturate(dot(normal, halfway)), materialPower) * materialSpecular;
```
Again, saturate is used to prevent backface light reflection.
Up to this point, none of the components that need to be modulated by the light's color have been. That was deferred on purpose, for optimization: the light color gets multiplied in just once, when combining all the color components.
We need to add the sampler and texture declarations at global level, then simply sample the 2D texture using the tex2D intrinsic.
```hlsl
// Texture declaration, mapped to the first (and default) texture stage
texture texTexture : TEXTURE0;

// Sampler declaration, no forced filters or anything
sampler sampTexture = sampler_state
{
    Texture = (texTexture);
};

[...]

    // Sample the texture
    float4 texColor = tex2D(sampTexture, texCoord);
```
Here again, keeping all four components is mandatory if we want to use the texture's alpha channel for transparency.
Now that all the little parts are calculated, it's time to bring them together.
```hlsl
// Light color global-level semantic
float3 dirLightColor : LIGHTDIR0_COLOR;

[...]

    // Combine all the color components
    float3 color = (saturate(ambient + diffuse) * texColor.rgb + specular) * dirLightColor + emissive;

    // Calculate the transparency
    float alpha = materialDiffuse.a * texColor.a;

    // Return the pixel's color
    return float4(color, alpha);
```
Note the operation sequence in the color calculation. The ambient and diffuse reflections need to be modulated by the texture color, but not the specular; those three are then modulated by the light's color, and the emissive stands entirely apart.
A note on the saturation of ambient + diffuse: it's there to avoid over-brightening the texture color if the sum exceeds 1.
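Here's the combination step mimicked in Python on single-channel stand-ins (all values hypothetical), showing exactly what that inner saturate prevents:

```python
def saturate(x):
    # Stand-in for HLSL's saturate intrinsic: clamp to [0, 1]
    return min(max(x, 0.0), 1.0)

# Hypothetical single-channel values for one pixel
ambient, diffuse, specular, emissive = 0.2, 0.9, 0.3, 0.05
tex_color, light_color = 0.8, 1.0

with_sat = (saturate(ambient + diffuse) * tex_color + specular) * light_color + emissive
without = ((ambient + diffuse) * tex_color + specular) * light_color + emissive
print(with_sat)  # 1.15
print(without)   # 1.23 - the 1.1 lighting sum over-brightens the texture
```

The specular term can still push the result past 1 (that's the highlight), but the texture itself is never sampled brighter than it was authored.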
Transparency is defined (in this context) by the material’s opacity modulated by the texture’s alpha channel.
Then we simply combine the four RGBA components with a float4 constructor.
I always prefer to optimize for every shader model I support, so here is the full set of techniques, from SM3 down to SM2. SM1.4 is not supported because I use the normalize intrinsic in the pixel shader, which isn't defined for ps_1_4 and lower; we'd need a normalization cubemap instead, a topic that I will cover in a later article.
```hlsl
technique TSM3
{
    pass P
    {
        VertexShader = compile vs_3_0 VS();
        PixelShader = compile ps_3_0 PS();
    }
}

technique TSM2a
{
    pass P0
    {
        VertexShader = compile vs_2_0 VS();
        PixelShader = compile ps_2_a PS();
    }
}

technique TSM2b
{
    pass P0
    {
        VertexShader = compile vs_2_0 VS();
        PixelShader = compile ps_2_b PS();
    }
}

technique TSM2
{
    pass P
    {
        VertexShader = compile vs_2_0 VS();
        PixelShader = compile ps_2_0 PS();
    }
}
```