Just a “boast post” of the combination of the techniques I’ve been working on.
- Before your main rendering loop, create a RenderTarget that is not the default screen render target;
- Draw all the visible objects on your landscape that cast shadows. Luckily Imposters cast excellent shadows, so most of the objects can be rendered pretty cheaply.
- Pass the shadow map texture to your terrain shader.
That's the mechanics done, but what does it all do?
Generally in the game loop you keep track of the camera “eye” position and render everything from that point of view. In the real world, light from objects radiates into the camera to form your visible object field. In the game world you need to keep two cameras operating – the eye camera, and a second one representing the source of your light (i.e. the sun, or some point light source such as a lamp).
When you draw your surface features to generate your shadow map, you do so from the perspective of the light source, not the eye camera. You also don’t paint the object’s texture but a color representing the distance of the pixel being painted from the light source. You are using the texture to record distance from the light source, in the same way as a height map texture records height above the ground.
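The depth write described above can be sketched in miniature. This is a toy software version, not a real graphics API: it assumes an orthographic “light camera” at `light_pos` looking down the negative Z axis, maps each point crudely onto a texel grid, and keeps the nearest normalised distance per texel (all names here are illustrative).

```python
import numpy as np

def write_shadow_map(points, light_pos, near, far, size=8):
    """Record each point's distance from the light in a depth texture.

    A texel value of 1.0 means 'nothing seen' (the far plane); any
    smaller value is a normalised distance-from-light, exactly as a
    height map records height above ground.
    """
    depth = np.ones((size, size))
    for p in points:
        rel = np.asarray(p, dtype=float) - np.asarray(light_pos, dtype=float)
        u = int(rel[0] + size / 2)          # crude texel mapping in x
        v = int(rel[1] + size / 2)          # crude texel mapping in y
        d = (-rel[2] - near) / (far - near) # distance from light, 0..1
        if 0 <= u < size and 0 <= v < size:
            depth[v, u] = min(depth[v, u], d)  # keep nearest occluder
    return depth

# One point 50 units below a light at z=100 lands mid-range in depth.
dm = write_shadow_map([(0.0, 0.0, 50.0)], light_pos=(0.0, 0.0, 100.0),
                      near=0.0, far=100.0)
```

The `min` is the important detail: like a real depth buffer, only the occluder nearest the light survives in each texel.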
What you end up with is a texture holding a “negative” of the outline of your objects, seen from the light’s point of view.
Above is a shadow map of a set of trees on the landscape.
When you pass this texture into your landscape shader, you perform the same “depth to pixel” calculation in that shader and compare the result to the matching pixel in the shadow map. The maths of this escapes me, but it is explained in many web articles.
If both the shadow map and your landscape render agree that the pixel is at the same depth, then both the light and your camera can “see” the pixel uninterrupted by anything else. Where the pixel in the shadow map is nearer than the landscape, the light “saw” something in front of the landscape, so what your camera is “seeing” is behind that obstruction and should be in shadow.
Drawbacks and Glitches
This comes at a cost: you need to render all your shadow-casting objects twice – once for the shadow map and once for the scene. You can mitigate this a lot by accepting a much shorter view distance for the shadow pass, using a truncated view frustum when rendering your shadow casters (so long as you are using frustum culling of your object field), which means that very distant objects simply cast no shadow.
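The truncation can be as simple as a distance cull before the shadow pass. This sketch uses a radial distance check as a stand-in for full frustum truncation (the cut-off value and names are illustrative, not from the post):

```python
import numpy as np

def casters_within_shadow_range(positions, eye, max_shadow_dist=300.0):
    """Cull shadow casters beyond a shortened shadow view distance.

    positions: (N, 3) array of world positions of candidate casters.
    Only objects within max_shadow_dist of the eye get drawn into the
    shadow map; everything further away casts no shadow at all.
    """
    positions = np.asarray(positions, dtype=float)
    d2 = np.sum((positions - np.asarray(eye, dtype=float)) ** 2, axis=1)
    return positions[d2 <= max_shadow_dist ** 2]  # squared: avoids sqrt

# A nearby tree survives the cull; a distant one is skipped.
kept = casters_within_shadow_range(
    [(10.0, 0.0, 0.0), (1000.0, 0.0, 0.0)], eye=(0.0, 0.0, 0.0))
```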
The second problem is pixellation. You can see that the quality of the shadows in this clip is poor. This is a systemic problem with shadow mapping. It results from the need to check each rendered pixel against its equivalent in the shadow depth texture while the camera and the light source sit at different distances from that pixel – in this case the pixel is about 500 units from my camera but 800 units from my light source; consequently many of the pixels being rendered map to only a single pixel in the shadow map.
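The mismatch can be put into rough numbers. Assuming perspective projections for both cameras (the FOVs and resolutions below are illustrative guesses, only the 500/800-unit distances come from the post), compare the world-space footprint of one screen pixel against one shadow-map texel:

```python
import math

def world_units_per_pixel(distance, fov_deg, resolution):
    """Width of one pixel's footprint in world units at a given
    distance, for a perspective camera: frustum width / resolution."""
    return 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0) / resolution

# Eye camera: pixel 500 units away, 45 degree FOV, 1080 px tall.
screen_px = world_units_per_pixel(500.0, 45.0, 1080)

# Light camera: same spot 800 units away, wide 90 degree FOV,
# but only a 1024-texel shadow map covering the whole scene.
shadow_texel = world_units_per_pixel(800.0, 90.0, 1024)

# Screen pixels sharing one shadow texel, per axis: several of them,
# which is exactly the blocky pixellation seen in the clip.
ratio = shadow_texel / screen_px
```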
A fix for this is to render a series of different shadow maps from a stacked set of view frustums, each representing a slice through the full frustum. This technique is called Cascaded Shadow Maps and is illustrated well here.
Diagram of a set of shadow maps generated from a nested stack of view frustums.

Each shadow map covers a given range of depths, together simulating a depth precision unobtainable with normal textures, which are likely to be 24-bit. The three textures in the diagram above can be replaced by a single texture if each color channel is used separately, recognising that each channel was rendered at a different resolution. This is still three extra scene draw calls just to create the shadow map, but each one has a more truncated frustum, so with efficient culling it is no more costly than a single deep-field render call.
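Choosing where each cascade’s frustum slice ends is the tunable part. A common recipe (the “practical split scheme” from the parallel-split shadow maps literature, not something this post specifies) blends uniform and logarithmic spacing, spending most of the resolution near the eye:

```python
def cascade_splits(near, far, cascades=3, blend=0.75):
    """Far-plane distances for each cascade's frustum slice.

    blend=0 gives uniform slices, blend=1 gives logarithmic slices;
    values around 0.5-0.95 are typical tuning choices.
    """
    splits = []
    for i in range(1, cascades + 1):
        f = i / cascades
        uniform = near + (far - near) * f   # even slices
        log = near * (far / near) ** f      # geometric slices
        splits.append(blend * log + (1.0 - blend) * uniform)
    return splits

# Three cascades over a 1..1000-unit frustum: the nearest slice is
# short (fine shadow detail up close), the farthest covers the rest.
splits = cascade_splits(1.0, 1000.0, 3)
```

Each returned distance becomes the far plane of one truncated shadow-pass frustum, so the culling described above falls out naturally per cascade.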