Welcome! This is a place where we keep track of frequently (and not-so-frequently) used terms and their definitions, to serve as a resource for beginners to 3D programming with the TV3D engine. After all, if you can speak our language, it’s easier for us to help you, right? :D

We do our best to describe each term in plain, approachable English that anyone can understand. If you’ve got a question about something that hasn’t been explained here, or if you would like to request a term to be defined, feel free to let Fex know.

Experienced developers are highly encouraged to help the cause by explaining any unexplained terms or expanding upon the entries already here. Please keep in mind that this is a resource designed for newcomers to 3D development! If you are able to lend a hand, don’t forget to add yourself to the credits!

3D Art Software




Collision Detection

Collision Detection is the mathematical process of checking to see if objects are touching one another. Some of the most practical and common uses for Collision Detection algorithms include basic physics to make sure the player doesn’t fall through the floor or walk through walls, and firing physics to see if a bullet hits a target. Collision Detection tests can be very simple and basic, or extremely complicated, depending on what they need to be used for.

Collision Detection is usually closely tied to a game’s physics code.
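To make the idea concrete, here is a minimal sketch (in Python for readability; engines like TV3D typically provide their own collision functions, so this is purely illustrative) of one of the simplest possible tests, a sphere-vs-sphere check:

```python
import math

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Return True if two spheres overlap.

    Two spheres touch when the distance between their centers
    is no greater than the sum of their radii.
    """
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    dz = center_a[2] - center_b[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    return distance <= radius_a + radius_b

# A player sphere at the origin and an obstacle sphere 3 units away:
print(spheres_collide((0, 0, 0), 1.0, (3, 0, 0), 1.0))  # → False
print(spheres_collide((0, 0, 0), 2.0, (3, 0, 0), 1.5))  # → True
```

Real games wrap objects in simple shapes like these (spheres and boxes) precisely because the math stays this cheap, saving the expensive per-polygon tests for when the simple shapes already overlap.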

Depth of Field

See DOF.



Stands for “Dynamic-Link Library” (often loosely expanded as “Dynamic Linked Library”). A DLL is essentially a chunk of code that has been saved out into a .DLL file so that other programs can use it. For instance, if you had a chunk of code that performed important mathematical functions, and you wanted to use it in a lot of other programs, you could save that code out to MathematicalFunctions.DLL; any other program could then load that DLL file and instantly access all of the math functions.

Of course, some DLLs are much more complicated than that. For instance, the TV3D engine is all contained in DLL files. In order for your programs to use the TV3D engine, you need to store the TV3D DLL files on your computer, then reference them in your program.

The users of this forum usually save out important tools as DLL files for other people to buy or use for free. For instance, Aeon has created TVMap.DLL: a DLL file that lets you load in .MAP files and render them in your program, and I’m working on CharGen.DLL, which will let you instantly put dynamic, sculptable characters into your program and render them.
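To make the idea concrete, here is a small Python sketch using the standard ctypes module, which can load DLLs (and their Linux/macOS cousins, shared libraries) at runtime. MathematicalFunctions.DLL is the hypothetical file from the paragraph above, not a real library:

```python
import ctypes

# On Windows you would load a specific file by name, for example:
#   mathlib = ctypes.CDLL("MathematicalFunctions.DLL")  # hypothetical DLL
# To keep this example runnable as-is, we pass None instead, which on
# Linux/macOS opens the C runtime already loaded into the process:
libc = ctypes.CDLL(None)

# Call the C runtime's abs() function as if it were a Python function:
print(libc.abs(-7))  # → 7
```

The point is the same either way: the code lives in a separate compiled file, and your program borrows its functions at runtime instead of containing them itself.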


Dynamic Linked Library

See DLL.



Around here, FPS usually stands for Frames-Per-Second, although it could also stand for First Person Shooter.

A program’s Frames-Per-Second is a measurement of how many times the rendering and processing loops are run every second; in simpler terms, it measures how fast and smoothly the program is running. The higher your program’s FPS, the smoother it runs.

Different kinds of games and programs have different needs as far as FPS is concerned. Simulations, technical demonstrations, and slower games such as strategy and turn-based games can easily slide by with 25-45 FPS, but games that demand quick reflexes need higher FPS. Many modern racing games, first person shooters, and fighting games run at or above 70 FPS on standard gaming hardware.

A program’s FPS is influenced by two things: how the program is designed, and how strong the user’s computer is. Programs that render very complicated and detailed scenery or compute complicated algorithms will have a lower FPS than simple programs. Computers with strong hardware will run programs at higher FPS than weaker computers.
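A minimal sketch of how a program might measure its own FPS (Python for readability; the timing idea is the same in any language):

```python
import time

def measure_fps(render_frame, frame_count=60):
    """Run the loop `frame_count` times and report frames per second."""
    start = time.perf_counter()
    for _ in range(frame_count):
        render_frame()
    elapsed = time.perf_counter() - start
    return frame_count / elapsed

# A stand-in "frame" that just burns a little CPU time:
fps = measure_fps(lambda: sum(range(10000)))
print(round(fps), "FPS")
```

Real engines usually do the reverse as well: they measure each frame’s duration (the “delta time”) and scale movement by it, so gameplay runs at the same speed no matter what the FPS is.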

Frames Per Second

See FPS.


TV3D has a built-in shader effect that lets objects in your scene use an emissive map to simulate glowing parts. For more information about emissive maps, see Textures.

Graphical User Interface

See GUI.


Stands for “Graphical User Interface,” pronounced “gooey.” The term “GUI” is similar to the term “UI” in that it describes elements of a program that let the user and the program communicate: things like buttons, scroll bars, blinking lights and icons, health bars, and energy meters. The term “GUI” is more specific, however, in that it refers strictly to the graphical elements of the UI.

Most people associate the term “GUI" with 2d elements that are rendered along the edges and corners of a screen, in front of the 3d environment the user is actually dealing with. More and more modern games, however, are starting to include 3d GUI elements, such as spherical menus that can be rotated, or menus on 3d computer screens that a player’s character can interact with.

Some developers build their project’s GUI from scratch, but others choose to use a pre-made GUI library that already handles basic GUI elements like buttons, scroll bars, and text boxes. Many times, these GUI libraries are not developed specifically for any Engine, but can be mixed and matched with whatever engine the developer is already working with. Our Useful Links page contains links to popular GUI engines that you can use in your own projects.

For more information about the User Interface in general, visit UI.


High Level

See also: Low Level.


See Polycount.

Indoor Environments

Lookup Map

See Textures.

Lookup Texture

See Textures.

Low Level

See also: High Level.


See Polycount.


“Mesh” is just another name for a 3d object that can be rendered by games and other 3d programs. You could also call them “Models,” “Objects,” or more specific titles depending on their purpose (for instance, a Mesh that is meant to be animated like a character is often called an “Actor Mesh” or a “Pawn Mesh”).

The more complicated and detailed a Mesh is, the longer it takes for a program to render it. Meshes that are used in movies and 3d animations can be very detailed, since they are saved out to images and turned into a movie, but for real-time applications like video games, Meshes need to be simple enough for the program to render over and over very quickly in different poses and in different positions.

Meshes are created and edited in 3d editing software. This software ranges from very expensive high-end tools like 3DS MAX and Maya, to very inexpensive (and free) tools like Silo, Milkshape, and Blender. If you’re interested in learning how to make 3d Meshes, I recommend checking with a nearby high school or college to see if they offer any introductory 3D Art courses.

Meshes can be stored in many different file types, each of which stores basic information about what the mesh looks like, plus special features that vary from one format to the next. The .X file format is generic and fully featured; it is designed to be used very easily by DirectX programs and by engines built on DirectX (like TV3D).

TV3D has its own file format that can be used for Meshes, the .TVM format. It also has its own format for Actor Meshes, the .TVA format.
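Whatever the file format, most real-time engines represent a Mesh internally as a list of vertices (points in 3d space) plus a list of indices that group those vertices into triangles. A minimal sketch:

```python
# A unit square as two triangles: four shared vertices, six indices.
vertices = [
    (0.0, 0.0, 0.0),  # 0: bottom-left
    (1.0, 0.0, 0.0),  # 1: bottom-right
    (1.0, 1.0, 0.0),  # 2: top-right
    (0.0, 1.0, 0.0),  # 3: top-left
]
indices = [0, 1, 2,   # first triangle
           0, 2, 3]   # second triangle

# Every three indices form one triangle (one "polygon"):
triangle_count = len(indices) // 3
print(triangle_count)  # → 2
```

Sharing vertices through an index list is why a cube needs only 8 vertex positions instead of 36: each corner is reused by several triangles.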

Motion Blur

Some people mistakenly associate motion blur with slow motion. Although motion blur is often paired with slower motion, that pairing is done for visual impact and is NOT a requirement of the actual blurring procedure. Motion blur, to put it simply, is the effect produced when each new frame of an animation doesn’t completely erase the last one, but is simply drawn on top of it. Drawing the same moving object multiple times in slightly different positions is what causes the effect.

To amplify the effect, each time a new frame is drawn, all the previous frames are redrawn at reduced opacity, making them more transparent as time passes. The most recent frame on top is completely opaque, and the frames beneath it are increasingly transparent. The slow motion that usually accompanies motion blur exists because the effect doesn’t look believable if the object moves too far between frames: if the frames are too far apart, the blur appears choppy and discontinuous.
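This accumulation idea can be sketched in a few lines (a simplified grayscale version; real implementations blend full-color frames, usually on the graphics card):

```python
def blend_frame(accumulated, current, opacity=0.6):
    """Blend the current frame over the accumulated image.

    Each call keeps (1 - opacity) of everything drawn before,
    so older frames fade out gradually, leaving a motion trail.
    """
    return [opacity * c + (1.0 - opacity) * a
            for a, c in zip(accumulated, current)]

# Grayscale "frames" of a bright pixel moving right across 4 pixels:
frames = [[255, 0, 0, 0], [0, 255, 0, 0], [0, 0, 255, 0], [0, 0, 0, 255]]
image = [0.0, 0.0, 0.0, 0.0]
for frame in frames:
    image = blend_frame(image, frame)
print([round(p) for p in image])  # → [10, 24, 61, 153]
```

Notice how the pixel’s current position is brightest (153) and its older positions trail off (61, 24, 10): that fading trail is the blur.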

Normal Mapping


Occlusion Testing

Occlusion Testing is the mathematical process of checking to see if an object is visible to the camera. If something “fails” an Occlusion Test, then the object is either out of the camera’s view, or it is being covered up (occluded) by something else in the scene. Either way, the object doesn’t need to be rendered. By using Occlusion Tests to determine which objects need to be rendered and which can be skipped, you can dramatically improve a program’s rendering speed, which yields faster, smoother gameplay. Just like Collision Detection, Occlusion Testing can either be very simple or very complicated, depending on what it needs to be used for.
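Full occlusion testing (checking what is covered up) is too involved for a short example, but a sketch of its simplest building block, checking whether an object’s bounding sphere is on the visible side of a plane, looks like this (a camera frustum is just six such planes):

```python
def sphere_in_front_of_plane(center, radius, plane_normal, plane_d):
    """Is any part of a bounding sphere on the visible side of a plane?

    The plane is given as a unit normal and offset d, so a point's signed
    distance to it is dot(normal, point) + d.
    """
    nx, ny, nz = plane_normal
    distance = nx * center[0] + ny * center[1] + nz * center[2] + plane_d
    # Even if the center is behind the plane, part of the sphere may
    # still poke through, hence the -radius tolerance:
    return distance >= -radius

# The camera looks down +Z; its near plane is z = 0 with normal (0, 0, 1):
print(sphere_in_front_of_plane((0, 0, 5), 1.0, (0, 0, 1), 0.0))   # → True
print(sphere_in_front_of_plane((0, 0, -5), 1.0, (0, 0, 1), 0.0))  # → False
```

An object that fails this test against any frustum plane is entirely outside the camera’s view, so the renderer can skip it without drawing a single polygon.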




Although there are a lot of different things that determine the true complexity of a Mesh, the easiest and most common way to measure a Mesh’s complexity is by counting the number of polygons in the Mesh. Meshes that are extremely simple are called “Low-Poly” or “Low-Resolution,” while “Hi-Poly” or “High-Resolution” meshes are more detailed.

  • Low-Poly Example (Renders super fast. This is the kind of mesh that small video games on old or lightweight gaming systems use.) – Link Broken –
  • Mid-Poly Example (Renders fast enough to run on a modern video game system.)
  • Hi-Poly Example (Much too complex to use in most modern video games. This is the kind of mesh that is used in movies and animations.)

The preceding examples are the copyrighted property of their respective authors. I hold that their use for educational demonstration constitutes Fair Use.

In order to make less detailed meshes look more complicated, they are painted with textures. The size and complexity of these textures also affects how quickly or slowly the mesh renders.

Over the last few years, there have been lots of new developments and techniques that allow real-time applications like video games to render low-poly and mid-poly meshes that appear more detailed than they really are. The most popular of these techniques is called Normal Mapping.



The word “resolution” is usually used to describe how wide and tall a flat surface is, especially when that surface is measured in pixels. For instance, a texture’s resolution might be 512×512 pixels. Your computer monitor’s resolution might be 640×480, 800×600, 1024×768, or 1280×800 pixels, among a variety of other options.

The word “resolution” can also be used to describe how complicated or detailed an object is: especially when talking about a mesh's polycount.

Resolution is important in both contexts because the higher an object’s resolution, the longer it takes to render it.
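The cost of resolution is easy to see with a bit of arithmetic. An uncompressed 32-bit texture needs 4 bytes per pixel, so doubling both dimensions quadruples the memory (and the work the renderer does):

```python
def texture_bytes(width, height, bytes_per_pixel=4):
    """Uncompressed memory cost of a texture (4 bytes = 32-bit RGBA)."""
    return width * height * bytes_per_pixel

print(texture_bytes(512, 512))    # → 1048576  (1 MB)
print(texture_bytes(1024, 1024))  # → 4194304  (4 MB)
```

The same quadratic growth applies to screen resolutions: rendering at 1280×800 means filling roughly 3.3 times as many pixels as 640×480 every single frame.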


A shader is a set of special instructions stored in a file that tells the TV3D engine (or any other 3D engine) how to render a 3d object or an entire 3d scene. You don’t -have- to use shaders when you render a 3d object or scene: they are there for developers who want to enhance their 3d environment with special effects.

Shaders can be used for a variety of purposes. Some shaders boost a 3d object’s appearance with complicated special effects, while others perform very low-level mathematical operations or simple tasks. TV3D has a few default shaders built in, including a basic one that lets you render a 3d object with diffuse textures, specular maps, glow maps, normal maps, material colors, and basic lighting.

Shaders usually come in one of two varieties. There’s the kind you apply to an object that renders it with special effects like normal mapping, glow mapping, fancy hologram effects, a falloff map, or a bunch of other really cool stuff like that. Then there’s the kind that gets rendered over the whole scene for effects like depth-of-field, bloom, motion blur, night vision, etc.

See also: Zaknafein's Shader Tutorials


A Texture is a 2-dimensional image that is applied to a 3d Mesh (or any other 3d [or even 2d] object). Textures come in many different varieties and can serve many different purposes. Some of the most common types of textures include:

  • Diffuse Textures - Diffuse textures are essentially flat images that are painted onto a 3d object. Example Image
  • Specular Maps - Specular maps determine how “shiny” or “glossy” parts of a 3d object are. Example Image
  • Emissive Maps - Emissive maps determine what parts of a 3d object glow, and what color they glow. Example Image
  • Bump Maps - Black and white textures that are used to simulate depth on a 3d object. Nowadays, normal maps are used more often than bump maps. Example Image
  • Normal Maps - Special textures that encode surface direction (normal) information used for lighting, and are frequently used instead of bump maps because they carry more precise information and are faster to render. Example Image (More information...)
  • Lookup Maps - Special textures where each pixel contains reference data, instead of data to be rendered. For example, a character in a shooter game might have a lookup map that tells the program what parts of its body take the most damage from gunshots, and another lookup map that tells the program what sound effect to play when that part of the body is shot. Lookup maps can be designed by the programmer to do whatever he/she wants them to do.
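As a toy illustration of the lookup-map idea, here is a Python sketch of the damage example above. The colors, body parts, and damage values are all made up for the example:

```python
# Hypothetical damage lookup map: each pixel's color encodes how much
# damage that part of a character's body takes when hit.
DAMAGE_BY_COLOR = {
    (255, 0, 0): 2.0,   # red   = head, double damage
    (0, 255, 0): 1.0,   # green = torso, normal damage
    (0, 0, 255): 0.5,   # blue  = limbs, half damage
}

def damage_multiplier(lookup_map, u, v):
    """Sample the lookup map at texture coordinate (u, v), both in 0..1."""
    height = len(lookup_map)
    width = len(lookup_map[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return DAMAGE_BY_COLOR[lookup_map[y][x]]

# A tiny 1x3 lookup "texture": head | torso | limbs
lookup = [[(255, 0, 0), (0, 255, 0), (0, 0, 255)]]
print(damage_multiplier(lookup, 0.1, 0.5))  # → 2.0 (hit the red region)
print(damage_multiplier(lookup, 0.9, 0.5))  # → 0.5 (hit the blue region)
```

The key point is that the pixel colors are never shown to the player; they are data the program reads back using the same texture coordinates it would use to paint the model.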

User Interface

See UI.


Stands for “User Interface,” a pretty broad term covering anything on the front end of a program that helps the user and the program communicate back and forth, sending information or commands to one another.

The term “UI” is frequently used to describe the interactive components of a program. Usually, these are things we’re all familiar with: buttons, links, menus, scroll bars, and other common window controls. Of course, a user interface can also contain more bizarre elements: like a steering wheel that the user clicks and drags to steer a 3d ship, or a panel with gauges and switches the player can adjust while flying a plane in a flight simulation, or 3d surgical tools and body parts the user can interact with during a medical simulation.

In addition to the interactive parts of a program, the term “UI” also describes things that the program uses to inform the player: from simple things like text boxes and labels that display information or the program’s status, to blinking icons that indicate a problem with the program, to health bars and weapon icons used to let the player know about their character’s status, to audio alerts letting the player know that a building is going to explode around them!

You could probably argue that anything the player can see, feel, or hear is part of a program’s UI, and you’d probably be right. Most of the time, though, the term “UI” just describes the parts of the program that are specifically designed to let the player and the program communicate: health bars, menus, weapon icons, mini maps, and the like. When only the graphical parts are meant, the more specific term “GUI” is used instead.

A program’s UI is a vital element: a well-designed UI gives the user solid, uninterrupted control and communication, while a poorly-designed one can drive the user mad, turning an otherwise high-quality program into an impossibly difficult and frustrating experience.


The following people have supported newcomers by enhancing the Encyclopedia:

generalresources/encyclopedia.txt · Last modified: 2013/11/22 13:31