Shaders and HLSL Programming Part 2: Semantics

What are they?

A semantic is, in a general sense, a label that maps a shader variable to a specific concept. Semantics have many uses: there are input and output semantics for the vertex and the pixel shader, and there are “global” semantics, set by the surrounding framework/engine, that save you from setting a ton of parameters by hand.

HLSL has a dedicated operator to map a semantic to any variable or function return value: the colon (":"). For example, if I wanted the (required) world-view-projection matrix, the camera position, the simulation time and the material's diffuse value, I could declare the following at shader-global scope:

float4x4 wvpMatrix : WORLDVIEWPROJECTION;
float3 camPosition : CAMERAPOSITION;
float time : TIME;
float4 materialDiffuse : DIFFUSE;

In the first shader example of part 1 of this tutorial, I mapped the “COLOR” pixel shader output semantic directly to the function return. This works too, and is useful for the pixel shader output, since you will likely return that single value at all times:

float4 PS() : COLOR {
   return float4(1, 1, 1, 1);
}

To send and receive multiple semantic-mapped variables in an HLSL function, you can either put a list of semantic-mapped parameters in the function header, like this:

float4 VS(float4 inPosition : POSITION, float4 inNormal : NORMAL) : POSITION {
	...
}

Or you can use C-like structures, which is usually cleaner. They are declared as follows; here, a vertex shader input structure:

struct VS_INPUT {
	float4 inPosition : POSITION;
	float4 inNormal : NORMAL;
};

They can then be used like any other type name, as in the input of this vertex shader function:

float4 VS(VS_INPUT IN) : POSITION {
	...
}

The members are accessed using a dot operator, like in most languages.
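
For example, assuming the VS_INPUT structure above, a member is read like this (the half-unit push along the normal is purely illustrative, and the world-view-projection transform is left out):

float4 VS(VS_INPUT IN) : POSITION {
	// Read the structure's members with the dot operator
	return IN.inPosition + IN.inNormal * 0.5f;
}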

Semantic Types

Global, Shader-Level Semantics

HLSL is geared towards practical use and simplicity. In low-level 3D programming, you end up using the same constant data over and over: transformation matrices, light and camera positions and directions; things you don't want to set by hand every time you instantiate a shader.

Here is an extensive list of the semantics that can be mapped at shader-global scope when using the Truevision3D 6.5 engine:

Semantic name(s) | Datatype | Description
WORLD, WORLDMATRIX, MWORLD, MATWORLD | matrix4x4 | World matrix
VIEW, VIEWMATRIX | matrix4x4 | View matrix
PROJ, PROJECTION, PROJECTIONMATRIX, PROJMATRIX | matrix4x4 | Projection matrix
WORLDVIEW, WORLDVIEWMATRIX, WV, WVIEWMATRIX | matrix4x4 | World-view matrix
WORLDVIEWPROJ, WVP, WORLDVIEWPROJECTION, WORLDVIEWPROJECTIONMATRIX | matrix4x4 | World-view-projection matrix
VIEWPROJ, VIEWPROJECTION, VIEWPROJMATRIX, VIEWPROJECTIONMATRIX | matrix4x4 | View-projection matrix
VIEWI, VIEWINVERSE, VIEWINVERSEMATRIX, VI | matrix4x4 | Inverse view matrix
WORLDIT, WORLDINVERSETRANSPOSE, WORLDTRANSPOSEINVERSE, WIT | matrix4x4 | Transposed, inverse world matrix
VIEWIT, VIEWINVERSETRANSPOSE, VIEWTRANSPOSEINVERSE, VIT | matrix4x4 | Transposed, inverse view matrix
CAMERAPOS, CAMERAPOSITION, VIEWPOS, VIEWPOSITION | float3 | Camera position
TIME, TIMECOUNT, TICKCOUNT, SECONDS | float | Simulation time
LIGHTDIRx_DIRECTION | float3 | Directional light ‘x’ direction
LIGHTPOINTx_POSITION | float3 | Point light ‘x’ position
LIGHTDIRx_COLOR | float3 | Directional light ‘x’ diffuse color
LIGHTPOINTx_COLOR | float3 | Point light ‘x’ diffuse color
LIGHTPOINT_NUM | int | Active point light count
AMBIENT | float4 | Material ambient color
DIFFUSE | float4 | Material diffuse color
EMISSIVE | float4 | Material emissive color
SPECULAR | float4 | Material specular color
SPECULARPOWER | float | Material specular power
TEXTUREx | texture | Texture stage ‘x’
FOGSTART | float | Fog (linear) start
FOGEND | float | Fog (linear) end
FOGDENSITY | float | Fog (exp/exp²) density
FOGCOLOR | float3 | Fog color
FOGTYPE, FOG_TYPE | int | Fog type (see below)

Note - The FOGTYPE semantic contains one of the following four values:

#define FOG_TYPE_NONE    0
#define FOG_TYPE_EXP     1
#define FOG_TYPE_EXP2    2
#define FOG_TYPE_LINEAR  3 
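
As a sketch of how this could be used (the variable and function names are mine, and the formulas are the standard Direct3D linear/exp/exp² fog equations), a per-vertex fog factor could be computed from the fog semantics like this:

float fogStart   : FOGSTART;
float fogEnd     : FOGEND;
float fogDensity : FOGDENSITY;
int   fogType    : FOGTYPE;

// Returns 1 for "no fog" and 0 for "fully fogged", given a view-space distance
float ComputeFogFactor(float dist) {
	if (fogType == FOG_TYPE_LINEAR)
		return saturate((fogEnd - dist) / (fogEnd - fogStart));
	if (fogType == FOG_TYPE_EXP)
		return saturate(1 / exp(dist * fogDensity));
	if (fogType == FOG_TYPE_EXP2)
		return saturate(1 / exp((dist * fogDensity) * (dist * fogDensity)));
	// FOG_TYPE_NONE
	return 1;
}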
 

Light numbers go from 0 to 4, with 5 possible simultaneous lights in one pass.

Texture numbers go from 0 to 3; they refer to the texture stage index set with SetTextureEx.

You can already see that most semantics have alternate names, and that most matrices come with their inverse, their inverse-transpose and many combinations with the other matrices, even though all of these combinations and transformations could be computed in HLSL code. It all serves the same goal: saving time for both the programmer and the GPU. All of these values are calculated or fetched once by the engine/framework, and their multiple names make them easy to remember and use.
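
As an illustration (the variable names are mine), here is the difference between composing the matrix yourself and letting the engine provide it; both vertex shaders produce the same result, but the first one redoes two matrix-matrix multiplications for every vertex:

float4x4 matWorld : WORLD;
float4x4 matView  : VIEW;
float4x4 matProj  : PROJECTION;
float4x4 matWVP   : WORLDVIEWPROJECTION;

float4 VS_Composed(float4 pos : POSITION) : POSITION {
	// World * View * Projection rebuilt per vertex (wasteful)
	return mul(pos, mul(matWorld, mul(matView, matProj)));
}

float4 VS_Precomputed(float4 pos : POSITION) : POSITION {
	// The engine already combined the three matrices once, outside the shader
	return mul(pos, matWVP);
}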

Context-specific Semantics

But there is more to semantics than global data. Semantics are also used in HLSL to link the data flowing through the graphics pipeline to the variables used in the function bodies:

  • In the vertex shader part, the current vertex’s input information and its output screen-space position;
  • In the pixel shader part, the custom input information and its output color.

Vertex Shader Input Semantics

Here is a reference list of the most useful vertex shader input semantics:

Semantic name(s) | Datatype | Description
POSITION | float4 | Vertex object-space position
NORMAL | float4 | Vertex object-space normal
BINORMAL | float4 | Vertex object-space binormal
TANGENT | float4 | Vertex object-space tangent
COLOR | float4 | Vertex color
TEXCOORDx | float2 | Texture coordinate set ‘x’

Typically, a .X or .TVM mesh can carry two sets of texture coordinates (0 and 1). For now you don't need to know what a “vertex tangent” or a “binormal” is; they will come in very handy later for per-pixel lighting techniques.
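
As a sketch, an input structure declaring most of these semantics (the member names are mine) would look like this:

struct VS_INPUT_FULL {
	float4 position  : POSITION;
	float4 normal    : NORMAL;
	float4 binormal  : BINORMAL;
	float4 tangent   : TANGENT;
	float4 color     : COLOR;
	float2 texCoord0 : TEXCOORD0;
	float2 texCoord1 : TEXCOORD1;
};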

And just so you know, blend indices and blend weights (used for animation blending) also have semantics in this category, but I've never used them, so I can't document them.

Vertex Shader Output and Pixel Shader Input Semantics

And here is a reference list of the most useful vertex shader output/pixel shader input semantics:

Semantic name(s) | Datatype | Description
POSITION * | float4 | Vertex screen-space position
FOG | float | Vertex fog coefficient (SM1.0 and SM2.0 only)
TEXCOORDx | float, float2, float3 or float4 | User data

Starred semantics cannot be used as pixel shader inputs; they are only output from the vertex shader to the Direct3D pipeline.

In the vertex shader output, the most relevant (and mandatory) semantic is POSITION. You can also calculate per-vertex fog using FOG in Shader Models 1 and 2 – in SM3.0 the Direct3D pipeline doesn’t use it.

A more interesting VS output/PS input is the TEXCOORDx semantic. It has 9 slots (from 0 to 8) in SM2.0 and higher (only 6, so 0 to 5, in SM1.4) and its vector size can vary from float to float4.

You might wonder why so many sets of texture coordinates can be output, and with no vector size restriction. It's because TEXCOORDx semantics are the only link to the pixel shader input: if you want to send any data to the pixel shader, it goes through them. When a pixel lies between several vertices, the TEXCOORDx values of the surrounding vertices are interpolated to obtain that exact pixel's values. This works for texture coordinates, but just as well for per-pixel normals and other vector or non-vector values! That is why you will find all sorts of values in TEXCOORDx semantics, not merely what the semantic was originally made for.
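
For example, here is a sketch (structure and variable names are mine) of passing a world-space normal to the pixel shader through TEXCOORD1, alongside real texture coordinates in TEXCOORD0; the pixel shader then receives a per-pixel interpolated normal and visualizes it as a color:

float4x4 matWVP   : WORLDVIEWPROJECTION;
float4x4 matWorld : WORLD;

struct VS_OUT {
	float4 position : POSITION;
	float2 texCoord : TEXCOORD0;  // real texture coordinates
	float3 normal   : TEXCOORD1;  // a normal, hitching a ride
};

VS_OUT VS(float4 pos : POSITION, float3 nrm : NORMAL, float2 tex : TEXCOORD0) {
	VS_OUT OUT;
	OUT.position = mul(pos, matWVP);
	OUT.texCoord = tex;
	// Rotate the normal into world space (for a non-uniformly scaled world
	// matrix, the WORLDIT inverse-transpose matrix would be the correct choice)
	OUT.normal = mul(nrm, (float3x3)matWorld);
	return OUT;
}

float4 PS(VS_OUT IN) : COLOR {
	// By now, IN.normal has been interpolated between the triangle's vertices
	float3 n = normalize(IN.normal);
	// Remap from [-1, 1] to [0, 1] so the normal shows up as a color
	return float4(n * 0.5f + 0.5f, 1);
}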

Pixel Shader Output

As expected, a pixel shader's output value is its color, a float4 mapped to the “COLOR” semantic. A pixel shader can also return a depth value through the “DEPTH” semantic, but sorry, I have no idea how to use it.

Putting it together

That was a lot of stuff at once, so here's a complete, commented example that summarizes the concepts.

// The combined world-view-projection matrix
float4x4 matWorldViewProj : WORLDVIEWPROJECTION;
// The mesh's diffuse material color
float3 diffuseCol : DIFFUSE;
 
// The vertex shader input structure
struct VS_INPUT {
	float4 position : POSITION;
	float2 texCoord : TEXCOORD0;
};
// Since the VS output and PS input structures are exactly the same in this case,
// defining aliases is faster (and possible in HLSL, noteworthy!)
#define VS_OUTPUT VS_INPUT
#define PS_INPUT  VS_INPUT
 
// Vertex shader function
VS_OUTPUT VS(VS_INPUT IN) {
	// Define an "instance" of the output structure
	VS_OUTPUT OUT;
	// Set the screen-space position
	OUT.position = mul(IN.position, matWorldViewProj);
	// And the texture coordinate
	OUT.texCoord = IN.texCoord;
	
	// Return this instance
	return OUT;
}
 
// Between the vertices and the pixels, the texture coordinates will be interpolated
 
// Pixel shader function
float4 PS(PS_INPUT IN) : COLOR {
	// Construct a float3 RGB color with the texture coordinate's interpolated value
	// and multiply the result by the material's diffuse color
	float3 rgbColor = float3(IN.texCoord, 1) * diffuseCol;
	
	// Return that RGB color with fully opaque alpha (1.0f)
	return float4(rgbColor, 1);
}
 
// Simplest technique/pass section
technique TSM1 {
    pass P0 {
        VertexShader = compile vs_1_1 VS();
        PixelShader  = compile ps_1_1 PS();
    }
}

I hope that was understandable!

I forced the blue component of the color to 1, so the blue corner is where the (u, v) texture coordinates are (0, 0), and the white corner is where (u, v) = (1, 1). The material's diffuse value is set to white, so it adds no base “tint”; it was just to show that you can use global semantic-mapped variables everywhere, even in the pixel shader.


The next post will focus on array/matrix types and on member swizzling (an HLSL-specific language feature). You can find it here.

 