Shaders and HLSL Programming Pt. 4 : Intrinsics & Textures

Part 1: Introduction - Part 2: Semantics - Part 3: Vector Types - Part 4: Intrinsics and Textures - Part 5: Blinn-Phong Shader

Intrinsic functions are functions that are built into the HLSL language, and which (for the most part) map directly to GPU assembly instructions. They can be separated into two kinds : arithmetic instructions and texture lookup instructions. I’ll introduce texture declaration as well in the texture section.

There are quite a lot of functions available in HLSL, and they’re all well-documented on MSDN. So I’ll cover the functions I find myself using the most, along with a sample of their usage in a concrete context.

Arithmetic Functions

Every time I write “same type as var“, I mean “same type as var, a type for which the function’s operation on var is defined, or a type that is implicitly castable to var’s type”. Most functions are very flexible in HLSL, so they’ll accept any type without whining too much. This can be dangerous too, especially because of implicit downcasting...

General-purpose instructions

mul(m1, m2)

Available in all shader models.

Parameter Datatype
m1 any vector or matrix type
m2 same type as m1, but the # of columns in m1 must match the # of rows in m2
(return value) a matrix with the same # of rows as m1 and the same # of columns as m2

Description : Performs the matrix multiplication operation on m1 and m2; if they’re vectors, they’re treated as 1×n or m×1 matrices. I don’t think I need to stress the importance of this operation; it’s even mandatory for a working vertex shader.
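
As a minimal sketch, here’s the typical usage in a vertex shader. The worldViewProjection parameter and its semantic are assumptions on how the application supplies the matrix (see Part 2 for semantics) :

float4x4 worldViewProjection : WORLDVIEWPROJECTION;
 
float4 TransformVertex(float4 position : POSITION) : POSITION
{
	// Transform the object-space position into clip space;
	// the float4 vector is treated as a 1x4 row matrix here
	return mul(position, worldViewProjection);
}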


saturate(x)

Available in all shader models.

Parameter Datatype
x any scalar, vector or matrix
(return value) same type as x

Description : Clamps x (or all of x’s members) to the [0, 1] interval. In other words, any value below 0 will be mapped to 0, and any value larger than 1 will be mapped to 1.

Why do I like it? Simply because it does not map to an instruction, but rather to an instruction modifier. What that means is that the “_sat” modifier gets appended to the last operation that was performed on the variable. That makes it a costless intrinsic function which does not consume any instruction slot, so you can use it anytime it would be useful.

float3 normal; // the current pixel's normal (or the interpolation of all surrounding vertices' normals)
float3 lightDirection; // the direction of the light rays
 
float dp = saturate(dot(normal, lightDirection));

This is a classic example. When doing Lambertian diffuse reflection, we want to avoid negative colors in the cases where N dot L < 0. This rule also applies to Phong specular reflection, to avoid backface reflections.

Useful functions for lighting

I kind of went ahead of things with the last example. Here are some useful functions for lighting, but not exclusively reserved for lighting of course.

dot(v1, v2)

Available in all shader models.

Parameter Datatype
v1 any vector
v2 any vector, same size as v1
(return value) float

Description : Performs the dot product operation between v1 and v2. No example here, since it’s already above.


pow(a, n)

Available in all vertex shader versions, and 2.0 and higher for pixel shader.

Parameter Datatype Description
a float or int Base
n same type as a Exponent
(return value) same type as a Base to the nth power

Description : Returns a to the nth power... simple enough.

float reflectedDotV; // from the phong reflection equation
float specularPower; // the power of the specular reflection
 
float specularAmount = pow(reflectedDotV, specularPower);

In the Phong reflection model, an exponent is applied to the result of a dot product to get a glossy or shiny specular reflection. This specularPower could (or should) be mapped to the SPECULARPOWER semantic.


normalize(v)

Available in all vertex shader versions, and 2.0 and higher for pixel shader.

Parameter Datatype
v any vector
(return value) same type as v

Description : Returns a normalized instance of the v vector, so the same direction but with a unit length.

float3 normal; // the current pixel's normal
 
normal = normalize(normal);

This function is mandatory if you want to use per-pixel shading, a.k.a. Phong shading. The difference between per-vertex (a.k.a. Gouraud shading) and per-pixel shading is where you place your normalization. If the normalization is in the vertex shader, you get per-vertex lighting; if it’s in the pixel shader, you get per-pixel lighting. The reason for this : interpolating unit-length vectors across a triangle produces “shortened” vectors in the areas between vertices, whereas per-pixel normalization gives a unit-length vector for each pixel, and pixel-perfect shading.

An interesting thing to note : since this instruction is not available in early pixel shader versions, you have to emulate it some way... either by decomposing the operation with divisions (which is very slow or impossible in some shader versions), or by using a normalization cubemap. I will speak of the latter later, after texture lookup has been covered.
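
For reference, here is a minimal sketch of the decomposed version. Note that it relies on the rsqrt (reciprocal square root) intrinsic, so it won’t help in shader versions where that instruction is unavailable either :

float3 ManualNormalize(float3 v)
{
	// Scale the vector by the reciprocal of its length;
	// this is essentially what normalize() expands to
	return v * rsqrt(dot(v, v));
}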


reflect(i, n)

Available in all vertex shader versions, and 2.0 and higher for pixel shader.

Parameter Datatype Description
i any vector Incident vector
n same type as i The surface’s normal vector
(return value) same type as i The reflected vector

Description : Reflects the incident vector i around the normal n. It can be used to calculate the R vector for the Phong reflection model, or the lookup vector for cubic reflection mapping, but I don’t recommend using it for the former (see below).

float3 normal; // the current pixel's normal
float3 lightDirection; // the direction from the surface towards the light
 
// Note : reflect() expects the incident vector to point towards the
// surface, hence the negation
float3 reflected = reflect(-lightDirection, normal);
/* OR... */
reflected = 2 * dot(normal, lightDirection) * normal - lightDirection;

I’d call it “the lazy guy’s reflection function”. It’s not a bad function, but it uses more instructions than it should, provided you’ve already calculated N dot L for the Lambertian diffuse term; this function re-calculates it.
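
A sketch of what I mean, reusing the dot product instead of paying for it twice :

float nDotL = dot(normal, lightDirection); // computed once...
float diffuse = saturate(nDotL);           // ...used for the diffuse term...
 
// ...and re-used to build the reflected vector, saving a dot product
float3 reflected = 2 * nDotL * normal - lightDirection;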

Useful functions for filtering

When using floating-point textures on older or non-nVidia hardware, bilinear/trilinear/anisotropic filtering is not provided by the hardware, so you have to perform it in the pixel shader. Here’s how two intrinsics can help with bilinear filtering.

frac(x)

Available in all vertex shader versions, and 2.0 and higher for pixel shader.

Parameter Datatype
x float or any vector or matrix type
(return value) same type as x

Description : Returns the fractional part of x, in other words x - (int)x. So whether you pass 5.495f or 9574.495f, the result will be 0.495f. I will explain one of its uses in a common example for frac and lerp.


lerp(x, y, s)

Available in all vertex shader versions, and 1.4 and higher for pixel shader.

Parameter Datatype Description
x float or any vector or matrix type Origin node
y same type as x Destination node
s same type as x (but usually a float between 0 and 1) Step, or Interpolation Coefficient
(return value) same type as x Origin + Step × (Destination - Origin)

Description : Returns the linear interpolation from x to y at s percent. If x = 2, y = 6 and s = 0.25f, then the result is 2 + 0.25f × (6 - 2) = 3.

float2 textureCoord; // The texture coordinate of the current pixel
float textureSize; // This must be filled by the application, the size of the texture we're sampling
float3 bilinearTaps[4]; // An array holding the "taps", or texture samples, needed for bilinear filtering
float2 weight; // The 2D interpolation coefficient, which acts as a weight for the samples
float3 color; // The resulting, bilinear-interpolated texel
 
weight = frac(textureCoord * textureSize); // We have to multiply because texture coordinates are within [0, 1]
color = lerp(lerp(bilinearTaps[0], bilinearTaps[1], weight.x),
             lerp(bilinearTaps[2], bilinearTaps[3], weight.x), weight.y);

That was probably very incomprehensible, so I made these two diagrams :

Here you can see that weight is the fraction of the texture coordinates in the X and Y directions. The four round-corners-ish points are the bilinearTaps. What we want to do is get a weighted average of all four taps to get the interpolated color at the sample center (the circle in the middle).

To do so, LERPs are used. The contribution factor of the left texels in relation to the right texels is weight.x, and the contribution of the top texels in relation to the bottom texels is weight.y. By doing three LERPs, we can obtain the final color. That’s it!
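
One piece is still missing : where the taps come from. Here is a hypothetical sketch of fetching them with tex2D (a lookup function covered later in this part); “sampSource” is an assumed point-sampled sampler on the texture, and the half-texel centering offset is ignored for simplicity :

float texelSize = 1.0f / textureSize;
float2 baseCoord = floor(textureCoord * textureSize) * texelSize; // top-left tap
 
bilinearTaps[0] = tex2D(sampSource, baseCoord).rgb;                                  // top-left
bilinearTaps[1] = tex2D(sampSource, baseCoord + float2(texelSize, 0)).rgb;           // top-right
bilinearTaps[2] = tex2D(sampSource, baseCoord + float2(0, texelSize)).rgb;           // bottom-left
bilinearTaps[3] = tex2D(sampSource, baseCoord + float2(texelSize, texelSize)).rgb;   // bottom-right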

I’m aware that this is not the simplest use of a linear interpolation. It could just be used to interpolate between two colors or vectors based on a specific factor like the texture coordinates, the vertex’s position or time, etc. I just thought of it as an interesting one, and it ties in with the usage of frac().
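For completeness, here’s what a simpler use could look like; the parameter names and the 0-to-1 factor are hypothetical :

float3 dayColor = float3(1, 0.95f, 0.8f);
float3 nightColor = float3(0.1f, 0.1f, 0.3f);
float timeOfDay; // an assumed application-set factor; 0 = noon, 1 = midnight
 
float3 skyTint = lerp(dayColor, nightColor, timeOfDay);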

Texture and sampler declaration

Before explaining texture lookup functions, it would be good to take a look at texture and sampler declarations. These are done at the same level as the shader-global parameters, and in fact textures can be supplied via effect parameters.

Texture vs. Sampler

First let’s take a look at some code.

texture texDiffuse : TEXTURE0;
sampler sampDiffuse = sampler_state {
	Texture = (texDiffuse);
};

First there is the texture declaration, which I mapped to the TEXTURE0 semantic so that it fetches the model’s first texture stage. You can see the texture as a static surface, a color-holding structure; you can’t really access it as-is.

Then the sampler. This is the texture’s access point, which provides all the filtering and U/V/W addressing operations. It is associated with a single texture, and can take additional parameters in its sampler_state construct to control how the addressing and filtering will be performed.

Texture types

There are four types of textures in Direct3D, which have slightly different texture and sampler declarations.

Texture Type Sampler Declaration Texture Declaration
Unidimensional (1D) sampler1D texture1D
Bidimensional (2D) sampler or sampler2D texture or texture2D
Tridimensional (3D), aka Volume Texture sampler3D texture3D
Cubemap samplerCUBE textureCUBE

1D textures can be seen as 2D textures which have a height of 1; they actually are 2D textures on the file system, loaded as normal 2D textures in the application, then assigned to a texture1D/sampler1D in the shader. Even though they could be used as 2D textures as well, defining them as 1D probably gives a little speed-up and makes the code clearer and more defined. They are normally used for lookup tables, gradient-mapping or cel-shading.
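
To illustrate, here’s a hypothetical cel-shading ramp lookup (tex1D is covered later in this part); the texRamp/sampRamp names are my own :

texture texRamp;
sampler1D sampRamp = sampler_state {
	Texture = (texRamp);
};
 
// ... in the pixel shader, remap the diffuse term through the ramp
float3 shade = tex1D(sampRamp, saturate(dot(normal, lightDirection))).rgb;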

3D textures, or volume textures in DirectX slang, can be seen as several layers of 2D textures packed into a single structure. The DDS file format is the only one that can hold a volume texture, and it should be loaded in the application as a volume texture (not as a standard 2D texture). They can be used for texture animation (where the third dimension is time), volume lightmaps and more.

Cubemaps are an assemblage of six 2D textures forming a cube, with all faces pointing towards its interior. These are often used for environment mapping and skyboxes. DDS is the only format to natively support cubemaps.

A note on mip mapping

Mip mapping is a process that uses downsized versions of a texture depending on its size on-screen. Usually the downsized versions are created when loading the texture, or they are embedded in the texture file (only the DDS format supports that).

A texture can have log2(texture_size) mip levels, or downsized versions of itself. That means a 512×512 texture can have 9 mip levels because 2^9 = 512. Each of these mip levels is half the size of its predecessor, which means that the full mip chain of a 512×512 texture is : 256×256, 128×128, 64×64, 32×32, 16×16, 8×8, 4×4, 2×2 and 1×1.

Now, all of this applies to 2D textures. Abstraction to 1D textures is pretty obvious; just remove the second dimension. But what about volume (3D) textures?

Actually it’s the same thing, but it’s less intuitive. Each mip level of a volume texture is half the size of its predecessor in all three of its dimensions. That means that a 512x512x16 volume texture has a first mip level of 256x256x8, and NOT 256x256x16. It might seem obvious, but what happens when you want to use the layers as animation frames? These mip levels have half the number of frames at each level!

It’s a pretty clever LOD trick, and it keeps texture memory usage low when using mip mapping. But it’s something to keep in mind when using volume textures.

Sampler states

Here is a list of the usual parameters that can be provided in the sampler_state construct. None of the state names and values are case-sensitive, so you can capitalize them however you wish.

State Name Possible Values Description
AddressU Clamp, Mirror, MirrorOnce, Wrap, Border The U (horizontal) addressing mode
AddressV Same as AddressU The V (vertical) addressing mode
AddressW Same as AddressU The W (depth) addressing mode
MinFilter None, Point, Linear, Anisotropic The filter to use when the texture is minimized
MagFilter Same as MinFilter The filter to use when the texture is magnified
MipFilter Same as MinFilter The filter to use between mip levels
BorderColor Hexadecimal (à la HTML) The color to clamp to for the Border addressing mode

Addressing

The default addressing mode is Wrap, which makes the texture tile if the texture coordinates are larger than 1 or smaller than 0. Other modes can clamp to the border texels (Clamp), mirror the texture (Mirror), or mirror it once then clamp it (MirrorOnce). Another mode clamps everything outside the texture to a defined color : the Border addressing mode. If used, the BorderColor must be specified too; for example, 0xffffff is white.

It might look like modes other than Wrap are pretty useless, but Clamp actually is a very useful mode for lookup textures, reflect/refract rendersurfaces and more. I’ve recently used Border for shadow-mapping and it’s been a life-saver. Though I haven’t found a use for Mirror/MirrorOnce yet. :P

Filtering

The default filtering mode is supplied by the application; for instance, in TV3D it depends on the settings of TVTextureFactory. The modes use pretty standard texture filtering semantics, and you can specify which mode you want for each operation (minification, magnification and mip transitions).

Example of filtering modes :

And some code on how to apply these :

// A 1D, clamped, bilinear-filtered sampler
sampler1D sampTest = sampler_state {
	Texture = (texTest);
	MinFilter = Linear;
	MagFilter = Linear;
	MipFilter = Point;
	AddressU = Clamp;
};

Keep in mind that not all filtering modes are necessarily available with all texture types. For example, as we saw earlier, most hardware will force floating-point samplers to Point filtering, even if you set them otherwise.

Texture Sampling Functions

There are four different texture lookup functions, all of them very similar. All texture lookup functions return a float4, which is the RGBA color sampled at the specified texture coordinates.

A generic example for all the following commands :

texture tex;  // The texture
sampler samp = sampler_state {  // The associated sampler
	Texture = (tex);
};
 
float2 textureCoordinates;  // The UV texture coordinates, usually from the model
float4 color;  // The sampled color
 
color = tex2D(samp, textureCoordinates);

Basic lookup functions

tex1D(s, t)

Available in all pixel shader versions.

Parameter Datatype Description
s sampler1D Sampler
t float Texture Coordinate

Description : Samples from a 1D sampler s at (t.x).


tex2D(s, t)

Available in all pixel shader versions.

Parameter Datatype Description
s sampler2D Sampler
t float2 Texture Coordinate

Description : Samples from a 2D sampler s at (t.x, t.y).


tex3D(s, t)

Available in all pixel shader versions.

Parameter Datatype Description
s sampler3D Sampler
t float3 Texture Coordinate

Description : Samples from a 3D sampler s at (t.x, t.y, t.z).
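
As mentioned earlier, volume textures can hold animation frames in their third dimension. Here’s a hypothetical sketch of that; the sampler and time parameter names are my own :

sampler3D sampAnimation; // an assumed volume texture holding the frames
float time;              // an assumed application-set time value
 
// frac() wraps the animation around once it reaches the last layer
float4 frame = tex3D(sampAnimation, float3(textureCoordinates, frac(time)));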


texCUBE(s, t)

Available in all pixel shader versions.

Parameter Datatype Description
s samplerCUBE Sampler
t float3 Vector

Description : Samples from a cubemap sampler s along the direction (t.x, t.y, t.z), pointing from the cube’s interior centerpoint.
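
A minimal environment-mapping sketch, assuming a cubemap sampler (sampEnvironment) and a view direction supplied by the application :

samplerCUBE sampEnvironment; // an assumed cubemap sampler
float3 viewDirection;        // from the camera towards the surface
 
// Reflect the view ray around the normal and fetch the environment color
float4 environmentColor = texCUBE(sampEnvironment, reflect(viewDirection, normal));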

Some lookup function modifiers

I won’t go over all of them (since I haven’t even tried all yet), but here are two which will prove useful when doing more advanced shaders.

tex*bias(s, t)

Available since pixel shaders 2.0.

Additional Parameter Datatype Description
t float4 Augmented Texture Coordinate

Description : The fourth component of t, or t.w, contains the mipmap bias. This bias will force the hardware to use a bigger or smaller mipmap than it naturally would, to get sharper or smoother sampling. I found this to be useful with reflection cubemaps; if you want cheap “glossy” reflections, use a positive bias and it’ll fetch a smaller mipmap, resulting in a slightly blurry reflection.
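
Building on the environment-mapping sketch above, that glossy trick could look like this; the bias value of 2 is an arbitrary assumption :

// A positive W component biases towards a smaller mipmap, blurring the result
float4 glossyReflection = texCUBEbias(sampEnvironment,
	float4(reflect(viewDirection, normal), 2));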


tex*proj(s, t)

Available since pixel shaders 2.0.

Additional Parameter Datatype Description
t float4 Projective Texture Coordinate

Description : In this case, t is a projective texture coordinate, which is used to perform screen-space refraction, reflection, lightmapping and shadowmapping, among other things. I won’t go into detail on this one just yet.
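
Just to give an idea of what it does, here’s a tiny sketch using the generic samp sampler from above; projCoord is a hypothetical name :

float4 projCoord; // a projective texture coordinate, e.g. a transformed position
 
// These two lines are equivalent : tex2Dproj divides by W before sampling
float4 a = tex2Dproj(samp, projCoord);
float4 b = tex2D(samp, projCoord.xy / projCoord.w);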

And that’s it for basic intrinsics! The next part is a case study of sorts, a full Blinn-Phong reflection shader, merging all the concepts that have been presented.

 