Dynamically scaled terrain, like that generated by geoclipmapping, CLOD, or other algorithms that vary the density and position of geometry with distance from the viewer, has a nasty side effect. While it is possible to sample the underlying heightmap texture to place billboards and 3D geometrical objects on the terrain with a reasonable degree of accuracy, the placement of horizontal features is more problematic.
The error margin between the height sampled from the heightmap texture and the height of the underlying geometry (which gets coarser the further it is from the viewer) means that vertically aligned billboards and geometry will alternately float above or sink below the landscape surface as perceived by the viewer.
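The error can be seen numerically. Below is a minimal sketch, assuming a made-up 1D height profile standing in for one row of the heightmap: an object is placed at the height the full-resolution heightmap reports, while the LOD mesh only linearly interpolates between increasingly widely spaced vertices.

```python
import numpy as np

# Hypothetical 1D height profile standing in for one row of the heightmap.
def height(x):
    return np.sin(x) + 0.3 * np.sin(3.1 * x)

def lod_height(x, step):
    # Height as a coarse LOD mesh sees it: linear interpolation
    # between vertices spaced `step` world units apart.
    x0 = np.floor(x / step) * step
    t = (x - x0) / step
    return (1 - t) * height(x0) + t * height(x0 + step)

x = 2.4  # object placed here using the full-resolution heightmap
for step in (0.25, 1.0, 4.0):  # vertex spacing grows with distance
    err = abs(lod_height(x, step) - height(x))
    print(f"vertex spacing {step:>4}: placement error {err:.3f}")
```

The placement error grows with the vertex spacing: the object sits correctly on the fine mesh near the camera but visibly floats or sinks on the coarse distant mesh.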
In the diagram above, the naturalistic landscape curve allows the object to be placed on the surface without distortion. When rendered on a variable-density triangle mesh (represented by the vertical red lines), the placement is relatively unchanged. But as the mesh becomes coarser (i.e. further from the viewer), the object progressively interferes with the underlying geometry, floating above or sinking below the perceived “correct” height.
This effect would be most obvious if the cube were placed on the top of the hill, where its heightmap-sampled height would leave it hovering obviously above the red geometry.
At long distances this effect is rarely noticeable, for the same reason we can afford to approximate the underlying geometry and save render time: because the object is very far away, such distortions aren’t easy to see.
One area where the distortion is particularly problematic is horizontal surface features: rivers and roads. These features have geometry of their own (typically a triangle strip) which does not correspond to the underlying landscape geometry (which is changing dynamically).
The illustration above shows a river geometry in blue overlaid onto a consistent landscape mesh in red. If both surfaces were flat and had exactly the same slope, this would work. In all other cases horrible distortions will be seen as the river geometry dips and dives under and over the landscape geometry. Add to this the fact that an LOD geometry does not have a consistent mesh, but varies depending on the distance from the viewer, and we cannot paint horizontal features very easily.
The Expensive Solution
A solution for this is to store the linear feature as a series of textures (or stencils) which cover specific parts of the landscape. This is called Texture Splatting – the landscape is rendered once for the underlying landscape texture and then repeatedly, once for each section of the river, road etc. This allows the pixel shader to sample the river stencil and draw a texture where the river is supposed to be, or clip the pixel and allow the underlying landscape texture to show through.
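The per-pixel decision in each splat pass is simple. Here is a minimal sketch in Python for clarity (in practice this lives in the pixel shader of each extra landscape pass); `stencil_alpha`, `river_color`, and the threshold are illustrative names, not from the article:

```python
# Per-pixel splatting decision: sample the river stencil and either
# draw the river texture or "clip" so the pass underneath shows through.
def splat_pixel(stencil_alpha, river_color, threshold=0.5):
    # Below the threshold the pixel is clipped (returns None),
    # letting the underlying landscape texture show through.
    if stencil_alpha < threshold:
        return None
    return river_color

print(splat_pixel(0.9, "river"))  # inside the stencil: river drawn
print(splat_pixel(0.1, "river"))  # outside: clipped, landscape shows
```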
This method does work but has a massive performance penalty. If we look back at why we use dynamic landscape meshes in the first place, it’s because rendering large landscapes is very GPU intensive. Following this method requires us to render the landscape geometry many times – once for each river segment that is visible in the view frustum. And because linear features add a lot of value to a landscape, giving them a much shorter far plane to speed things up just creates horrible popping.
The Complicated Cheap Solution
The cheap solution is to use river segments and calculate the height at which they need to be painted with reference to the pixels that have already been drawn. The maths of this method are covered in the excellent series of articles by MJP – Position From Depth. The basic method is:
1) When drawing the expensive landscape mesh, take advantage of multiple render targets and write the depth from the camera, calculated inside your pixel shader, to a separate render target. This step is like creating a deferred depth map, but in this case in View Space (i.e. relative to the camera).
2) Create a simple cube geometry mesh that intersects the landscape for each of the river segments you want to draw. When drawing that simple geometry, pass in the depth texture you calculated in step 1. Using the location of the pixel on the surface of the cube and the original depth of the landscape at the same pixel location, you can reconstruct the exact World position of the pixel at that location (i.e. find out the World position of the pixel which corresponds to the landscape behind the cube).
3) Check to see if the World position calculated in step 2 is within the footprint of the cube you are drawing. If it is, then the pixel is in the cube footprint and you can sample your river stencil and paint the pixel blue. If it is not within the cube footprint, clip the pixel and leave the landscape showing.
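The steps above can be sketched numerically. This is a minimal illustration of the reconstruction, after MJP’s Position From Depth articles – all values (camera, terrain point, cube extents) are made up, and a real implementation does this per pixel in the shader:

```python
import numpy as np

def look_at(eye, target, up):
    """Right-handed view matrix (world space -> view space)."""
    f = target - eye
    f = f / np.linalg.norm(f)
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

eye = np.array([0.0, 10.0, 20.0])
view = look_at(eye, np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))

# Step 1: the landscape pass stored this pixel's view-space depth (-z).
terrain_world = np.array([3.0, 1.5, -5.0])   # hypothetical terrain point
v = view @ np.append(terrain_world, 1.0)
stored_depth = -v[2]                         # value held in the depth target

# Step 2: the cube pixel gives the view-space ray through it; scaling the
# ray so its depth matches the stored depth recovers the view-space
# position of the terrain behind the cube, and from that the World position.
ray = v[:3] / stored_depth                   # ray through the pixel, z = -1
view_pos = ray * stored_depth
world_pos = (np.linalg.inv(view) @ np.append(view_pos, 1.0))[:3]

# Step 3: is that World position inside the cube's X,Z footprint?
cube_min, cube_max = np.array([2.0, -6.0]), np.array([4.0, -4.0])
inside = bool(np.all((world_pos[[0, 2]] >= cube_min) &
                     (world_pos[[0, 2]] <= cube_max)))
print(world_pos, inside)
```

Here the reconstructed `world_pos` recovers the original terrain point exactly and falls inside the cube footprint, so the pixel would be painted rather than clipped.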
This shows the concept – the red circles highlight the pixels that are being rendered on the cube surface. The dotted lines illustrate where we are using the ray angle to sample the previously rendered depth texture (the purple line). The yellow circles provide the calculated World Position of the pixel directly behind the cube pixel we are rendering. If that World Position falls within the cube horizontal footprint then that pixel can be rendered using whatever texture we need (water, roadway etc) otherwise we leave it clipped so the landscape shows through.
A simpler way of calculating the World Position than that described by MJP is to record all three World Coordinates when drawing the landscape, but at the expense of using two extra screen-sized textures (for the X, Y and Z coordinates).
Here is an example where I am rendering a series of river sections (represented by the set of green wireframe cubes). The river stencils are calculated at pipeline generation time and stored in the asset database. They are passed in with the appropriate cube, and the relative offset of the pixel on the cube’s X,Z coordinates is used as the texture sample coordinates for the stencil.
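Mapping the reconstructed World position to stencil coordinates is just a normalisation against the cube footprint. A minimal sketch, with hypothetical names and extents:

```python
# Map a World-space point to stencil texture coordinates using the
# cube's X,Z footprint. Names and values are illustrative only.
def stencil_uv(world_pos, cube_min_xz, cube_max_xz):
    # Relative offset of the point inside the footprint, in 0..1.
    u = (world_pos[0] - cube_min_xz[0]) / (cube_max_xz[0] - cube_min_xz[0])
    v = (world_pos[2] - cube_min_xz[1]) / (cube_max_xz[1] - cube_min_xz[1])
    return u, v

# A point at the centre of a 2x2 footprint samples the stencil centre.
u, v = stencil_uv((3.0, 1.5, -5.0), (2.0, -6.0), (4.0, -4.0))
print(u, v)  # 0.5 0.5
```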
The cube sizes are determined arbitrarily at asset generation time and limited by two constraints: the size of the corresponding river stencil texture, and the fact that every pixel forming the surface of the cube is rendered (with the vast majority being rejected), without the hardware depth clipping you would get from solid geometry.
The important point about the river section above is that the river has no height geometry of its own and will always sit on the surface of the landscape, irrespective of how that landscape geometry is formed.