Shaders are a way of dynamically “programming” a GPU to do all sorts of crazy little things. Usually shaders do math; that’s what the GPU is really good at, and you’ll find yourself number crunching more often than not. People use shaders for all kinds of things, like lighting and shadows, painting on a landscape, or blurring a texture. While you can perform any of these tasks on the good ol’ CPU, the biggest advantage of shaders is that math on the GPU can be a lot faster, because the GPU isn’t busy handling the message queue, running the AI, and all the other tasks tying up the CPU.
Math. That’s what a shader does, at the very core of things - which makes sense, because math is the slowest part of a graphical program (especially a 3D one, though shaders aren’t limited to 3D). You can tell a shader to add, multiply, divide, find the dot product, raise to a power, and so on. You can even do all of that on matrices, arrays, textures, etc.
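To give a taste of that math, here’s the dot product put to its most famous use - diffuse lighting. This is a CPU sketch in Python for illustration only (the vectors are made up); a real shader would do the same arithmetic per vertex or per pixel on the GPU:

```python
# Sketch of the kind of math a shader does, run here on the CPU.
# Diffuse lighting boils down to a dot product: brightness = max(0, N . L)

def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

normal = (0.0, 1.0, 0.0)       # surface faces straight up
light_dir = (0.0, 1.0, 0.0)    # light shines straight down onto it

brightness = max(0.0, dot(normal, light_dir))
print(brightness)  # 1.0 -> fully lit
```

Tilt the light away from the normal and the dot product (and thus the brightness) falls off - that one operation is the backbone of most lighting models.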
This is actually a really important concept to understand - everything you do with a shader is math. Once you get your mind around that, it’ll become easy to see how shaders are capable of everything from blurring a texture to lighting up a room to creating a complete Frogger clone.
One of the most useful features of shaders is the ability to manipulate textures. The process is simple, really - the shader (in the case of textures, a pixel shader) runs for each pixel being drawn, reads the RGBA values for that pixel, performs some kind of mathematical operation, and then outputs a new RGBA value. This is exactly what a lighting shader does - it takes the RGBA of the texture, figures out how bright that pixel should be, multiplies the RGB values by that brightness, and outputs the new value. Multiple textures can be used “per pass” (a pass is one run of the shader, usually over every vertex/pixel of the mesh/model), and textures can be used for some really creative things (such as look-up tables).
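That per-pixel lighting step can be simulated on the CPU in a few lines. This is a Python sketch of the idea, not real shader code - the pixel values and the brightness factor here are made up for illustration:

```python
# CPU simulation of the per-pixel lighting step described above:
# for each pixel, scale R, G and B by a brightness factor; alpha is untouched.

def light_pixel(rgba, brightness):
    r, g, b, a = rgba
    return (r * brightness, g * brightness, b * brightness, a)

# A hypothetical two-pixel "texture" with colours in the 0.0-1.0 range.
texture = [(1.0, 0.5, 0.25, 1.0), (0.2, 0.2, 0.2, 1.0)]

# Dim every pixel to half brightness, as a lighting shader might.
lit = [light_pixel(p, 0.5) for p in texture]
print(lit)
```

A real pixel shader does exactly this multiply, except the GPU runs it for every pixel on screen in parallel rather than looping one pixel at a time.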
Pretty much every new game that comes out these days uses shaders to one extent or another. Some engines, like Unreal Engine 3, are completely shader driven. They use shaders for heat haze, lighting, shadows, depth of field, motion blur, and more. There really is no limit to what can be done these days with a shader.
Shaders were actually in use for years before they became mainstream in realtime graphics. Large CG studios created the first shading languages (Pixar, for example, created RenderMan) as a way to give artists flexibility in creating effects like realistic lighting and fur. For a long time, though, shaders were far too expensive to be feasible for real-time applications - a single frame could take hours or even days to compute. Over time, however, 3D cards increased exponentially in power - so much so that games were able to hit the magic 100fps mark and above. Obviously, there was a lot of wasted potential just sitting there. A few bright folks realized just how useful shaders could be in realtime applications and decided to put that extra potential to good use. Shaders first appeared in DirectX around version 8, though they were extremely limited and not very useful. With the advent of DX 9+, however, shaders gained a larger instruction set, full access to textures, and the ability to deal with floating point numbers. Today shaders and hardware are so powerful that it is almost possible to reproduce, in realtime, the graphics that Pixar’s original RenderMan took hours to compute.
With the introduction of DirectX 9, two types of shaders are available: vertex shaders and pixel shaders.
Vertex shaders are used to manipulate the vertices of 3D geometry. They run for every vertex in the mesh and can manipulate vertex positions (to create, for example, a pulsing sphere). Recent uses for vertex shaders include displacement mapping (distorting a mesh’s vertices based on a height map to create detail) and animation blending (the animation is accelerated by the vertex shader, taking the load off the CPU). Vertex shaders can also manipulate the colours of each vertex, and so can be used for simple vertex lighting, fog, etc. When using shaders, there must ALWAYS be a vertex shader - it is a requirement.
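The pulsing sphere mentioned above is a nice, concrete example of what a vertex shader does. Here’s a CPU sketch of the idea in Python - push each vertex out along its normal by an amount that oscillates with time. The vertex data and function name are hypothetical; a real vertex shader would do this per vertex on the GPU:

```python
import math

# CPU sketch of a "pulsing sphere" vertex shader: each vertex is pushed
# along its normal by an amount that oscillates over time.
# (On a sphere centred at the origin, the normal points the same way
# as the position, so we can normalize the position to get it.)

def pulse_vertex(position, time, amplitude=0.1):
    x, y, z = position
    length = math.sqrt(x * x + y * y + z * z)
    nx, ny, nz = x / length, y / length, z / length  # the vertex normal
    offset = amplitude * math.sin(time)              # oscillates -0.1..0.1
    return (x + nx * offset, y + ny * offset, z + nz * offset)

# At time = pi/2 the sine peaks, so a vertex at (1, 0, 0) is pushed
# outward to roughly (1.1, 0, 0); at time = 0 it doesn't move at all.
print(pulse_vertex((1.0, 0.0, 0.0), math.pi / 2))
```

Run every frame with the current time, this makes the whole mesh swell and shrink - all without the CPU ever touching the vertex data.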
Pixel shaders are the more interesting of the two. They’re meant to manipulate textures and run once for each pixel (hence the name). A pixel shader requires input from a vertex shader in order to figure out the proper texture coordinates for each pixel. At its most basic level, a pixel shader uses math to manipulate colours - but if you’re creative, there really isn’t anything you cannot do. Bump mapping, per-pixel fog, blooming, splats, blurring - there is limitless potential in a pixel shader.
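For a flavour of that colour math, here is a greyscale effect - one of the simplest pixel shader tricks - simulated on the CPU in Python. The weights are the standard luminance coefficients; the pixel values are made up for illustration:

```python
# CPU simulation of a greyscale pixel shader: each pixel's colour is
# replaced by its luminance, a weighted dot product of R, G and B.

def greyscale_pixel(rgba):
    r, g, b, a = rgba
    lum = 0.299 * r + 0.587 * g + 0.114 * b  # standard luminance weights
    return (lum, lum, lum, a)

print(greyscale_pixel((1.0, 0.2, 0.2, 1.0)))  # a red pixel becomes dark grey
```

Note that even this “effect” is just another dot product - the same operation that drives lighting, tinted a different way. That is the recurring theme with pixel shaders.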
Because a shader is essentially a small program that can conceivably run thousands of times a frame, it needs to be short and compact; as a result, the original language used to write shaders was a stripped-down version of assembly. Fortunately, because assembly is a little obscure for the masses, both Microsoft and the maintainers of OpenGL decided it would be prudent to create higher-level languages for writing shaders. MS introduced HLSL, which has a C-like syntax, and OpenGL introduced GLSL, its own variant. Nvidia jumped on the bandwagon a little later and produced Cg, a cross-platform language similar to HLSL and GLSL.
Just to clarify, some languages only work with certain platforms (since the shader needs to be compiled down to assembly for that platform):

DirectX: Cg, HLSL, ASM
OpenGL: Cg, GLSL, ASM