More Grass, Denser Grass

The CodeMasters blog entry http://blog.codemasters.com/grid/10/rendering-fields-of-grass-in-grid-autosport/ made me think again about my grass rendering using a geometry shader. I had followed the suggestions from Outerra http://outerra.blogspot.co.uk/2012/05/procedural-grass-rendering.html to generate my grass, but CodeMasters suggested combining this approach with simple billboards.

Instead of each geometry shader triangle strip representing a single blade of grass, why not just output a quad with a nicely detailed, colourful texture? The textured quad might represent 5 or 10 blades of grass, rotated and scaled. This gives a massive increase in grass density, with better art, compared to the Outerra model.

With a bit of texture atlasing of various textures I could generate a very varied meadow with only some basic changes to my shader – and use fewer vertices per location as well. Although the end result is clearly more “billboard” than “geometry”, it still achieves a much higher density of foliage.
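As a sketch of what the geometry shader ends up doing – purely illustrative, with made-up structure and parameter names, assuming a horizontal four-tile atlas and one input point per grass clump:

// Illustrative only: expand one point per grass location into a single
// camera-facing quad, with u remapped into one tile of a 4-tile atlas.
float4x4 Param_ViewMatrix;           // assumed: world-to-view
float4x4 Param_ViewProjectionMatrix; // assumed: view * projection

struct GS_INPUT
{
    float3 Root   : TEXCOORD0; // world-space root of the clump
    float2 Params : TEXCOORD1; // x = atlas tile index (0..3), y = scale
};

struct GS_OUTPUT
{
    float4 Position : SV_POSITION;
    float2 TexCoord : TEXCOORD0;
};

[maxvertexcount(4)]
void GS_GrassQuad(point GS_INPUT input[1], inout TriangleStream<GS_OUTPUT> stream)
{
    float tile  = input[0].Params.x;
    float scale = input[0].Params.y;

    // Camera right axis taken from the view matrix; the quad stays upright.
    float3 right = float3(Param_ViewMatrix._11, Param_ViewMatrix._21, Param_ViewMatrix._31);
    float3 up    = float3(0.0f, 1.0f, 0.0f);

    float2 corners[4] = { float2(-1, 0), float2(1, 0), float2(-1, 1), float2(1, 1) };

    [unroll]
    for (int i = 0; i < 4; i++)
    {
        GS_OUTPUT o;
        float3 worldPos = input[0].Root
                        + right * (corners[i].x * scale)
                        + up    * (corners[i].y * scale);
        o.Position = mul(float4(worldPos, 1.0f), Param_ViewProjectionMatrix);
        // Remap local u from [0,1] into this clump's quarter of the atlas.
        float u = corners[i].x * 0.5f + 0.5f;
        o.TexCoord = float2((u + tile) * 0.25f, 1.0f - corners[i].y);
        stream.Append(o);
    }
    stream.RestartStrip();
}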

Here is the outcome with a four-texture atlas:

[Image: meadowvertical01]

[Image: densemeadow]

This is animated in the normal way, using some Perlin noise textures to generate movement (a rough sketch of that appears at the end of this section). The density of the grass is overwhelming here – it looks like a forest. Changing the texture atlas to something more “grassy”:

[Image: meadowvertical02]

[Image: densemeadow2]

Mmm. That's nice.

[Image: densemeadow3]

Meadows underplanting shadowed trees, distant ocean and mountains. 80 fps.
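As promised, the movement mentioned above could be done with something along these lines – a minimal sketch, assuming a scrolling noise texture and a per-vertex height above the root (WindNoiseTexture, Param_Time and the scale constants are made-up names, not my actual parameters):

// Illustrative wind sway: offset the tips of each quad using a value read
// from a scrolling Perlin noise texture.
Texture2D    WindNoiseTexture;
SamplerState LinearWrapSampler;
float        Param_Time;

float3 ApplyWind(float3 worldPos, float heightAboveRoot)
{
    // Scroll the noise lookup slowly over time so the whole field ripples.
    float2 noiseUV = worldPos.xz * 0.05f + Param_Time * 0.02f;
    // SampleLevel so this works from a vertex or geometry shader.
    float2 sway = WindNoiseTexture.SampleLevel(LinearWrapSampler, noiseUV, 0).xy
                * 2.0f - 1.0f; // remap [0,1] to [-1,1]
    // Tips move, roots stay planted.
    return worldPos + float3(sway.x, 0.0f, sway.y) * heightAboveRoot * 0.3f;
}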


Shadows

This is quite a difficult issue to deal with for a large landscape. The basics of shadow mapping are well documented https://msdn.microsoft.com/en-gb/library/windows/desktop/ee416324(v=vs.85).aspx and, briefly:

  1. Draw your scene in two passes. The first pass is drawn from the location of the light source, and the second from the location of the viewer.
  2. On the first (light) pass, you render to an offscreen texture and only actually draw the depth of the pixel, not the colour of the scene. The depth is calculated as output.Depth = (ps_input.Position.z / ps_input.Position.w); (a minimal depth pass is sketched just after this list).
  3. On the second (color) pass, you render your scene as normal to the viewport. However, you pass in to your shader the texture you drew in Step 2, along with the View and Projection matrices you used to draw it.
  4. In the pixel shader, read the correct depth pixel from the depth texture you generated in Step 2 and compare it with the depth you calculate for your current pixel. If the depth stored in the texture is less than the depth you have calculated, the pixel is in shadow and should be shaded darker.
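To make Step 2 concrete, here is a minimal sketch of a depth-pass shader pair. The structure and parameter names (DepthPSInput, Param_WorldMatrix and friends) are illustrative rather than my exact shaders:

// Minimal depth pass, matching Step 2 above.
float4x4 Param_WorldMatrix;
float4x4 Param_ViewMatrix;       // the light's view matrix on this pass
float4x4 Param_ProjectionMatrix; // the light's projection matrix

struct DepthPSInput
{
    float4 Position      : SV_POSITION;
    float4 LightPosition : TEXCOORD0; // copy of the clip-space position
};

DepthPSInput VS_Depth(float4 vertexPosition : POSITION)
{
    DepthPSInput output;
    output.Position = mul(vertexPosition, Param_WorldMatrix);
    output.Position = mul(output.Position, Param_ViewMatrix);
    output.Position = mul(output.Position, Param_ProjectionMatrix);
    // SV_POSITION is remapped to viewport coordinates before the pixel
    // shader sees it, so pass an untouched copy alongside.
    output.LightPosition = output.Position;
    return output;
}

float4 PS_Depth(DepthPSInput input) : SV_TARGET
{
    // Store normalised depth in the red channel of the offscreen texture.
    float depth = input.LightPosition.z / input.LightPosition.w;
    return float4(depth, 0.0f, 0.0f, 1.0f);
}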

This is fairly straightforward, but how does it actually work? How do you use the depth texture you drew in Step 2? The first step is to work out which pixel in your standard rendering pass (Step 3) is equivalent to the pixel you drew in Step 2. Since the two passes were rendered from different viewpoints (and typically with different projection matrices), the pixel being drawn in your pixel shader has no direct relationship to the one you drew in the earlier light pass.

The key is to pass into the vertex shader the View and Projection matrices you used to generate your light pass in Step 2. You then transform the vertex position with them, reproducing the value that same vertex produced in the Step 2 vertex shader.

Vertex Shader Fragment

In Step 2 (depth pass) you would have calculated:

output.Position = mul(vertexPosition, Param_WorldMatrix);
output.Position = mul(output.Position, Param_ViewMatrix);
output.Position = mul(output.Position, Param_ProjectionMatrix);

so in Step 3 (color pass) you need to calculate the same value, passing in the matrices you used in Step 2 as a new set of parameters, “Param_LightXXXXMatrix”:

output.LightViewPosition = mul(vertexPosition, Param_WorldMatrix);
output.LightViewPosition = mul(output.LightViewPosition, Param_LightViewMatrix);
output.LightViewPosition = mul(output.LightViewPosition, Param_LightProjectionMatrix);

So your pixel shader will now receive the parameter LightViewPosition as well as the Position you would normally calculate in your vertex shader for this pass. The clever part comes in the pixel shader, where you use the passed-in LightViewPosition to generate a texture coordinate that can be used to read the correct pixel from the depth map texture.

Pixel Shader Fragment

This calculation uses the LightViewPosition you calculated in the vertex shader and generates a coordinate correct for sampling the depth map texture.

float2 projectTexCoord;
// Perspective divide puts x into [-1,1]; halve and offset to reach [0,1].
projectTexCoord.x = ((LightViewPosition.x / LightViewPosition.w) / 2.0f) + 0.5f;
// y is negated because NDC y points up while texture v increases downwards.
projectTexCoord.y = ((-LightViewPosition.y / LightViewPosition.w) / 2.0f) + 0.5f;

This is called Texture Projection: the divide by w brings the position into normalised device coordinates ([-1,1] on each axis), and the halve-and-offset remaps that range into [0,1] texture space (with y flipped, since texture coordinates run top-down). This trick can be used anywhere you have a texture that was generated via a different View and Projection matrix.

Once you’ve got the texture coordinate for your depth map, you just read out the depth you recorded in Step 2 and compare it to the value you are currently about to write to your color pass.

Pixel Shader Fragment

So now we can sample the depth texture and read back the depth we calculated when we generated the same pixel from the light's position.

float realDistance = (LightViewPosition.z / LightViewPosition.w);
// Note the use of the free interpolated comparison method.
return depthMap.SampleCmp(DepthMapSampler, projectTexCoord, realDistance - depthBias);

So what's with the special “SampleCmp”? Because we used a different location and projection matrix when drawing the depth map, we expect that the pixel we sample from the depth map won't be an exact 1:1 match for the pixel we are drawing to the scene. It may be skewed or scaled such that it represents a slightly different world position. Typically you would take a PCF set of four samples around the point you have calculated and average them, which gives a nice anti-aliasing. However, the depth map does not contain colour – it contains depths, and trying to apply anti-aliasing concepts to a depth map would generate nonsense. Two pixels lying next to each other in the depth map might represent depth calculations for two objects very far apart in world space – a nearby object and a really distant one might record wildly different depth values only one pixel apart.

Luckily, in Shader Model 5 the designers gave us the new SampleCmp, which allows us to do a four-tap PCF sample in the hardware, but instead of returning a weighted average of the values it samples, it gives us a weighted average of those pixels that pass a comparison against a depth value we pass in (the third parameter). This is much more useful and gives our shadows a nice soft edge.
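One detail worth noting: SampleCmp requires the sampler to be declared as a SamplerComparisonState rather than a plain SamplerState. In effect-file syntax a suitable declaration looks something like this (with plain D3D11 you would fill in the equivalent D3D11_SAMPLER_DESC from the CPU side); the filter and address modes here are typical choices, not necessarily mine:

// Comparison sampler for SampleCmp: the hardware performs the depth test
// per tap and returns the filtered pass/fail result in [0,1].
SamplerComparisonState DepthMapSampler
{
    // Linear filtering across the comparison taps gives the soft edge.
    Filter = COMPARISON_MIN_MAG_LINEAR_MIP_POINT;
    AddressU = CLAMP;
    AddressV = CLAMP;
    // A tap passes when our reference depth is <= the stored depth,
    // i.e. the pixel is not behind the occluder and so is lit.
    ComparisonFunc = LESS_EQUAL;
};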

Shaking Shadows

aka Shadow Trembling, Shadow Shaking, etc.

This is visible when you swivel the viewpoint or move the camera through the world. It's generated by the same problem that caused us to use the SampleCmp function described previously: the shadow map does not have a 1:1 mapping between its pixels and the pixels being rendered in the color pass. Slight variations in the floating point calculations between the light projection and the camera projection matrices lead to pixels moving in and out of shadow seemingly at random around the edges of a shaded area.

This has a relatively simple workaround – don't change the light position or orientation other than in whole pixel steps. This is completely documented in the link referenced earlier. Implementing the “Stable Light Frustum” calculations has an awesome benefit: because the light matrices don't change every time the camera matrices change, you can afford to redraw your shadow map once every 10 or 20 frames (or when the camera substantially changes orientation or location). This means you can go to town on the GPU cost of calculating the shadows, bringing multiple cascaded shadow maps into play, but recalculating only the very nearest ones, and then only quite infrequently.
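A sketch of the “whole pixel steps” idea, assuming a directional light with an orthographic projection covering frustumWidth world units on a shadowMapSize-texel map (both names illustrative; a real implementation snaps in light view space, typically on the CPU when rebuilding the light matrices):

// Quantise the light's position so the shadow map only ever moves in
// whole-texel steps; the same world positions then always land on the
// same texels and the shadow edges stop shimmering.
float3 SnapLightOrigin(float3 lightOrigin, float shadowMapSize, float frustumWidth)
{
    // World-space size of one shadow-map texel.
    float texelSize = frustumWidth / shadowMapSize;
    lightOrigin.x = floor(lightOrigin.x / texelSize) * texelSize;
    lightOrigin.y = floor(lightOrigin.y / texelSize) * texelSize;
    return lightOrigin;
}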

Examples

These examples use false colour to indicate which of the three shadow maps is being used to calculate the shadows:
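The false colouring itself is trivial – something along these lines, where the split distances and tints are placeholders rather than my actual values:

// Illustrative three-cascade false colouring: pick a cascade from the
// view-space depth and tint the lit colour accordingly.
static const float  CascadeSplits[3] = { 30.0f, 100.0f, 400.0f };
static const float3 CascadeTints[3] =
{
    float3(1.0f, 0.6f, 0.6f), // cascade 0 tinted red
    float3(0.6f, 1.0f, 0.6f), // cascade 1 tinted green
    float3(0.6f, 0.6f, 1.0f)  // cascade 2 tinted blue
};

float3 DebugCascadeTint(float viewSpaceDepth, float3 litColour)
{
    int cascade = 2;
    if (viewSpaceDepth < CascadeSplits[0])      cascade = 0;
    else if (viewSpaceDepth < CascadeSplits[1]) cascade = 1;
    return litColour * CascadeTints[cascade];
}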

[Image: csm1]

Here with more natural colours:

[Image: csm2]