Post Processing Distance Blurring

Distance blurring is an effective technique for covering up inconsistencies in distant objects and for giving far-off scenery a more “real world” feel.

In order to achieve this you need to render your scene to a Texture instead of the screen (deferred rendering) and then draw that Texture with an XNA SpriteBatch (i.e. render it in 2D), with an attached pixel shader to blur the colours.

In order to blur the colours based on relative depth from the current viewpoint, the pixel shader should sample the depth buffer (recorded in another Texture generated during the deferred rendering pass and passed into the blur shader).

This technique is similar to attenuation, where colours are allowed to fade out with distance, but it affects the sharpness of edges as well: distant objects blur together, slightly out of focus.

The pixel shader pseudo-code can be found here: http://xboxforums.create.msdn.com/forums/t/7015.aspx.
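On the C# side the flow is roughly: render the scene into a render target, switch back to the back buffer, then draw that render target full-screen through the blur effect. The sketch below assumes fields named sceneTarget, depthTarget, blurEffect and spriteBatch, plus a DepthTexture parameter on the effect; the actual names depend on your own pipeline.

// A minimal sketch of the blur pass, living in the game's Draw() method.
// sceneTarget / depthTarget / blurEffect and the parameter name are assumptions.
GraphicsDevice.SetRenderTarget(sceneTarget);
GraphicsDevice.Clear(Color.CornflowerBlue);
DrawScene();    // the normal 3D pass; filling depthTarget is assumed to happen here too

GraphicsDevice.SetRenderTarget(null);    // back to the back buffer

// Let the blur pixel shader read the relative depth of each pixel.
blurEffect.Parameters["DepthTexture"].SetValue(depthTarget);

// Draw the scene texture as a full-screen 2D image with the blur shader attached.
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                  null, null, null, blurEffect);
spriteBatch.Draw(sceneTarget, GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.End();

Example C# fragment for the distance blur pass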

image

Trees softening into the distance. To get double the blur effect, just run the shader twice.
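One way to run the shader twice is to ping-pong through an intermediate render target; a rough sketch, assuming an extra screen-sized RenderTarget2D called blurTarget:

// First blur: scene texture into the intermediate target.
GraphicsDevice.SetRenderTarget(blurTarget);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null, blurEffect);
spriteBatch.Draw(sceneTarget, GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.End();

// Second blur: intermediate target onto the back buffer.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null, blurEffect);
spriteBatch.Draw(blurTarget, GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.End();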

Near Field Landscape Decoration

A challenge for large scale terrain engines is providing enough detail in the near field of view: grass, vegetation, rocks and other ground cover. In a brute force implementation these would all be represented by individual meshes (perhaps using billboards or impostors for distant objects) in a series of 2D rectangular areas stored in a quadtree, which are then rendered when inside the view frustum and within the feature-specific far clipping plane.

image

Individually placed trees in a landscape

A problem with this approach is the vast amount of storage required to record every bush, rock and grass clump; even though retrieval is fast (using the quadtree), this is still a huge data set.

A more scalable and fast method is to store the above, but with only a small spatial sample of the required decoration. This sample can be randomly scattered in a 1.0 x 1.0 rectangle at a density appropriate to your decoration type. It is then stored in the quadtree along with its area of coverage, but crucially the sample vegetation covers only a fraction of that area; the perception of continuous coverage is generated within the vertex shader.
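As a rough illustration, each quadtree entry might look something like the class below. The type and field names are illustrative only, not taken from the engine:

// Hypothetical quadtree leaf entry for one type of decoration.
public class DecorationPatch
{
    // Randomised scatter of decoration billboards/meshes, generated once inside
    // a 1.0 x 1.0 unit square at a density appropriate to the feature type.
    public VertexBuffer SampleVertices;
    public IndexBuffer SampleIndices;
    public int VertexCount;
    public int PrimitiveCount;

    // The full world-space region this patch is responsible for. The sample only
    // fills a fraction of it; the vertex shader wraps it to cover the rest.
    public BoundingBox AreaOfCoverage;

    // The feature-specific far clipping distance for this decoration type.
    public float MaximumVisibleRange;
}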

In the example image above, individual trees do need to keep fixed world positions, but smaller features, visible only at shorter ranges, can get away with a repeating wrap of a small set of meshes.

Assuming a maximum visible range for a feature of 100 world units, and a scattered sample of meshes inside a 1×1 world unit square, the engine can issue the DrawIndexedPrimitives() call whenever the camera position enters the larger area of coverage, as sketched below.
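A sketch of what that per-frame draw might look like, reusing the hypothetical DecorationPatch above and the Param_* names from the shader fragment further down; everything else (decorationEffect, cameraPosition, worldMatrix) is assumed:

// Only draw the patch when the camera is inside its (large) area of coverage.
if (patch.AreaOfCoverage.Contains(cameraPosition) != ContainmentType.Disjoint)
{
    // The vertex shader uses the camera position to wrap the sample geometry.
    decorationEffect.Parameters["Param_CameraPosition"].SetValue(cameraPosition);
    decorationEffect.Parameters["Param_WorldMatrix"].SetValue(worldMatrix);

    GraphicsDevice.SetVertexBuffer(patch.SampleVertices);
    GraphicsDevice.Indices = patch.SampleIndices;

    for (int i = 0; i < decorationEffect.CurrentTechnique.Passes.Count; i++)
    {
        decorationEffect.CurrentTechnique.Passes[i].Apply();
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
            0, 0, patch.VertexCount, 0, patch.PrimitiveCount);
    }
}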

Inside the vertex shader, the position passed in can be scaled and wrapped on a 100×100 scale to produce a repeating field of vegetation that appears fixed in space to the viewer, but is in reality being wrapped in the same way that repeating textures are wrapped by a texture sampler.

// Wrap a value into the half-open range [lower, upper), in the same way that a
// texture sampler wraps a repeating texture coordinate.
float wrap(float value, float lower, float upper)
{
  float dist = upper - lower;
  float times = floor((value - lower) / dist);

  return value - (times * dist);
}


VertexShaderOutput VertexShaderFunction_Decoration(VertexShaderInput input)
{
    VertexShaderOutput output;

    // This calculation from http://books.google.co.uk/books?id=08fx86eFQikC&pg=PA240&lpg=PA240&dq=billboard+rotation+inside+shader&source=bl&ots=0ApjfGYTyu&sig=wIGHzbjmn_B2S4koEc5nRgZIkVQ&hl=en&sa=X&ei=BtTmUPLSMK6k0AWln4HoCw&ved=0CHUQ6AEwCQ#v=onepage&q=billboard%20rotation%20inside%20shader&f=false

    // Wrap the X,Z coordinates based on the camera position, so the sample patch
    // appears fixed in world space as the camera moves.
    input.Position.x = wrap(input.Position.x - frac(Param_CameraPosition.x / 100), -0.5, 0.5);
    input.Position.z = wrap(input.Position.z - frac(Param_CameraPosition.z / 100), -0.5, 0.5);

    // Scale X,Z from the unit square up to the 100 x 100 unit coverage area.
    input.Position.x *= 100;
    input.Position.z *= 100;

    float4 worldPosition = mul(input.Position,Param_WorldMatrix);

Example vertex shader fragment for wrapped surface features

image

The view above of grass clumps continues for 1000 world space units, wrapping the visible 100 world units of billboards continuously as the camera moves, giving the impression of endless grass.

image

In the above, the trees are placed at specific locations in the landscape, but the grass is a wrapped, randomised set of billboards.

image

This shot shows more clearly that the grass coverage is thicker nearer to the camera. This is done by rendering the same vertex buffer of grass, centred on the camera, at two different X,Z scales: the first pass at a world scale of 150 and a second pass at 30, giving a 5:1 density ratio for the grass nearer the camera. This technique reuses the existing vertex buffer and effect, changing only the World matrix.
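A sketch of the two density passes, assuming a grassEffect exposing the same Param_WorldMatrix parameter and a hypothetical DrawGrassPatch() helper that issues the actual draw call:

// Both passes reuse the same vertex buffer and effect; only the World matrix changes.
Vector3 centre = new Vector3(cameraPosition.X, 0, cameraPosition.Z);

// Pass 1: the sample stretched over a 150-unit area (sparse coverage).
grassEffect.Parameters["Param_WorldMatrix"].SetValue(
    Matrix.CreateScale(150f, 1f, 150f) * Matrix.CreateTranslation(centre));
DrawGrassPatch();

// Pass 2: the same sample squeezed into a 30-unit area, so the grass close to
// the camera appears roughly five times as dense.
grassEffect.Parameters["Param_WorldMatrix"].SetValue(
    Matrix.CreateScale(30f, 1f, 30f) * Matrix.CreateTranslation(centre));
DrawGrassPatch();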

The Humble For()

In C# it is normal to use the fantastically easy foreach() construct to iterate over a collection. This comes with hidden costs:

  1. An enumerator object is created for the duration of the loop and then discarded, which is fine until you start to suffer from heap fragmentation, or discover that different .NET frameworks have different GC implementations (the Xbox's .NET Compact Framework, for instance).
  2. The iteration variable cannot easily be used in an anonymous code block within the loop, as, by the time that code executes, the variable may hold a different value from the one it had when the code block was constructed.

Reconsider our old friend for(), which is faster to execute, doesn't force heap allocations, and makes the anonymous code block problem easier to avoid (the index can be copied into a local variable before it is captured).
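As a small illustration, here is the same update loop written both ways; Enemy, GetEnemies() and Update() are placeholder names:

List<Enemy> enemies = GetEnemies();

// foreach: concise, but an enumerator is involved (and is heap allocated when the
// collection is accessed through an interface such as IEnumerable<T>), and capturing
// the loop variable in an anonymous delegate is easy to get wrong.
foreach (Enemy enemy in enemies)
{
    enemy.Update();
}

// for: no enumerator, and the index can be copied into a local before it is
// captured by any anonymous code block.
for (int i = 0; i < enemies.Count; i++)
{
    enemies[i].Update();
}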