Shadows

Shadows are important for establishing 3D shape: they give hints about an object's form as well as the location of the light source.

Shadow volumes

Shadow volumes work by constructing, in 3D space, the volumes that the light cannot reach. First, all objects visible to the light source are detected. Then, from the light source, volumes are extruded outward through the triangles of those objects, producing the 3D regions of space that lie in shadow. Finally, for each fragment rendered from the camera's view, the number of volume boundaries crossed between the camera and the fragment determines its shadow status: an even count means the fragment is not in shadow, an odd count means it is.
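The even/odd test can be illustrated with a simple geometric sketch: cast a segment from the camera to the point being shaded and count how many shadow-volume boundary triangles it crosses. The sketch below is a CPU-side illustration only, and its inputs (the camera position, the surface point, and the list of boundary triangles) are hypothetical; real implementations perform this counting per fragment in the stencil buffer.

    # CPU-side sketch of the even/odd shadow-volume test. Assumes the boundary
    # triangles of the extruded shadow volumes are already available as tuples
    # of three 3D points, and that the camera is outside every shadow volume.

    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def segment_crosses_triangle(p0, p1, tri, eps=1e-9):
        """Moller-Trumbore test: does the segment p0 -> p1 pass through tri?"""
        v0, v1, v2 = tri
        d = sub(p1, p0)
        e1, e2 = sub(v1, v0), sub(v2, v0)
        h = cross(d, e2)
        a = dot(e1, h)
        if abs(a) < eps:                    # segment parallel to triangle plane
            return False
        f = 1.0 / a
        s = sub(p0, v0)
        u = f * dot(s, h)
        if u < 0.0 or u > 1.0:
            return False
        q = cross(s, e1)
        v = f * dot(d, q)
        if v < 0.0 or u + v > 1.0:
            return False
        t = f * dot(e2, q)
        return eps < t < 1.0 - eps          # crossing lies strictly between p0 and p1

    def in_shadow(camera_pos, surface_point, volume_triangles):
        """Odd number of boundary crossings between camera and surface -> shadowed."""
        crossings = sum(segment_crosses_triangle(camera_pos, surface_point, tri)
                        for tri in volume_triangles)
        return crossings % 2 == 1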

Shadow volumes work well, but they scale poorly as polygon counts grow and require a large amount of GPU compute time to render.

Shadow maps

Shadow maps create no new geometry; instead, they compare the distances of surfaces and occluders (blocking objects) from the light. From the light's position, the scene is rendered into a texture that records the distance of each visible surface from the light's view. The scene is then rendered from the normal camera view, and each point's distance from the light is compared to the distance stored in the texture map to determine whether that point is in shadow.
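The first pass can be sketched as follows, assuming a directional light with an orthographic projection and a scene given simply as a list of surface sample points. A real renderer rasterizes the scene's triangles into a GPU depth texture; all names here (light_space, to_texel, build_shadow_map, MAP_SIZE) are hypothetical.

    # Minimal sketch of pass 1: build a depth (shadow) map from the light's view.
    import math

    MAP_SIZE = 256  # shadow-map resolution (texels per side)

    def v_dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def v_cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def v_norm(a):
        length = math.sqrt(v_dot(a, a))
        return tuple(c / length for c in a)

    def light_space(point, light_pos, light_dir, light_up):
        """Transform a world-space point into the light's view: (x, y, depth)."""
        forward = v_norm(light_dir)
        right = v_norm(v_cross(forward, light_up))
        up = v_cross(right, forward)
        rel = tuple(p - l for p, l in zip(point, light_pos))
        return v_dot(rel, right), v_dot(rel, up), v_dot(rel, forward)

    def to_texel(x, y, extent):
        """Map light-space x, y in [-extent, extent] onto texel coordinates."""
        u = int((x / extent * 0.5 + 0.5) * (MAP_SIZE - 1))
        v = int((y / extent * 0.5 + 0.5) * (MAP_SIZE - 1))
        return u, v

    def build_shadow_map(points, light_pos, light_dir, light_up, extent):
        """Record, per texel, the depth of the closest surface seen by the light."""
        depth_map = [[math.inf] * MAP_SIZE for _ in range(MAP_SIZE)]
        for p in points:
            x, y, depth = light_space(p, light_pos, light_dir, light_up)
            u, v = to_texel(x, y, extent)
            if 0 <= u < MAP_SIZE and 0 <= v < MAP_SIZE:
                depth_map[v][u] = min(depth_map[v][u], depth)
        return depth_map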

When rendering the scene from the camera, every vertex is also transformed by the light's view matrices in addition to the normal 3D rendering transforms. This yields a projected point with x, y, and z values: the z value is the distance from the light to the rendered surface, and the x and y values are used to query the light's depth map, which returns the distance to the closest occluding object in the light's view. If the two distances are similar, the closest occluder is the rendered surface itself, so no shadow falls on it. If, however, the stored occluder is much closer than the surface, the surface point must be in shadow.
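Continuing the sketch above, the second pass reduces to a per-point lookup and comparison. The function below reuses the hypothetical light_space(), to_texel(), and MAP_SIZE from the previous sketch; the bias parameter, which defaults to zero here, is discussed in the next paragraph.

    # Minimal sketch of pass 2: for each point shaded by the camera, compare
    # its light-space depth against the occluder depth stored in the map.
    def point_in_shadow(point, depth_map, light_pos, light_dir, light_up, extent,
                        bias=0.0):
        """Return True if the light's depth map records a closer occluder."""
        x, y, depth = light_space(point, light_pos, light_dir, light_up)
        u, v = to_texel(x, y, extent)
        if not (0 <= u < MAP_SIZE and 0 <= v < MAP_SIZE):
            return False                    # outside the map: treat as lit
        closest = depth_map[v][u]           # nearest occluder the light can see
        return depth - bias > closest       # something closer blocks the light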

The shadow map algorithm requires careful tweaks to work well; in particular, a bias value is often subtracted from the surface distance to correct aliasing issues in which surfaces incorrectly shadow themselves.
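As a hypothetical end-to-end check of the two sketches above, the snippet below shades one ground point directly beneath a blocker (reported in shadow) and one off to the side (reported lit). The light setup, scene points, and bias value are all illustrative assumptions.

    # Hypothetical usage of build_shadow_map() and point_in_shadow() above.
    LIGHT_POS, LIGHT_DIR, LIGHT_UP = (0.0, 10.0, 0.0), (0.0, -1.0, 0.0), (0.0, 0.0, 1.0)
    EXTENT = 5.0

    blocker = (0.0, 5.0, 0.0)   # surface point between the light and the ground
    shaded = (0.0, 0.0, 0.0)    # ground point directly below the blocker
    lit = (3.0, 0.0, 0.0)       # ground point off to the side

    shadow_map = build_shadow_map([blocker, shaded, lit],
                                  LIGHT_POS, LIGHT_DIR, LIGHT_UP, EXTENT)

    # The small bias keeps a point from shadowing itself when its own depth and
    # the stored depth differ only by precision error; 0.1 is an arbitrary value.
    for p in (shaded, lit):
        print(p, point_in_shadow(p, shadow_map, LIGHT_POS, LIGHT_DIR, LIGHT_UP,
                                 EXTENT, bias=0.1))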