Texture Blending

Texture blending with multiple textures is dealt with in many tutorials, principally in Catalin Zima’s series of guides here. An issue with this technique is that it truly does blend two dissimilar textures together, essentially blurring one into the other as shown below.


This is unrealistic. A better approach is documented here. This approach does not blend the pixel values by combining them and dividing by a blend factor; instead it uses the blend factor to select a pixel from one or the other underlying texture. This means that for any one pixel the result comes from one texture or the other, never a blend of the two.

The result can be seen here


This gives a more realistic looking result close up, with either stones or grass showing. In the linked example, Andrey Mishkinis provides the renderer with a height map alongside the visible texture and samples the height maps of the two textures – the highest pixel wins and gets painted. This requires some complexity (and artistry) in providing extra textures just for determining height. In my example above I side-step the issue to get a slightly less accurate effect, but one which does not require an extra texture sample or further artwork.

To get the effect of depth on the textures I simply convert them to grayscale and compare the results – the darkest pixel is deemed to be the deeper, and does not get rendered. Because the conversion to grayscale is arbitrary (I could have chosen the reddest pixel as an alternative) the choice of pixel is somewhat arbitrary, but it is consistent.

The HLSL to do this is as follows (where groundCoverage and appliqueTexture are the results of tex2D samples of my two textures):

// Grayscale
float groundCoverageGrayscale = dot(groundCoverage.rgb, float3(0.30, 0.59, 0.11));
float appliqueGrayscale = dot(appliqueTexture.rgb, float3(0.30, 0.59, 0.11));

// Apply applique in preference to ground cover
if (appliqueGrayscale > groundCoverageGrayscale)
{
    textureColor = normalize(appliqueTexture);
    bump = appliqueBump;
}
else
{
    textureColor = normalize(groundCoverage);
    bump = groundCoverageBump;
}
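The same selection can be sketched on the CPU – a minimal Python version of the grayscale comparison (illustrative, not the post's actual code), assuming the two textures arrive as nested lists of (r, g, b) tuples in the 0..1 range:

```python
# Standard luminance weights, as in the HLSL above.
LUMA = (0.30, 0.59, 0.11)

def grayscale(rgb):
    """Dot product of the colour with the luminance weights."""
    return sum(c * w for c, w in zip(rgb, LUMA))

def select_blend(ground, applique):
    """For each pixel, keep whichever texture's pixel is brighter in
    grayscale -- the darker (deemed deeper) pixel is not rendered."""
    return [
        [a if grayscale(a) > grayscale(g) else g
         for g, a in zip(ground_row, applique_row)]
        for ground_row, applique_row in zip(ground, applique)
    ]

# Tiny 1x2 example: the brighter pixel wins in each position.
ground   = [[(0.2, 0.2, 0.2), (0.9, 0.9, 0.9)]]
applique = [[(0.8, 0.8, 0.8), (0.1, 0.1, 0.1)]]
result = select_blend(ground, applique)
# result[0][0] comes from the applique, result[0][1] from the ground cover
```

Note that the result at each pixel is always one source texel or the other, never an average of the two.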


Helpful Resources

Invaluable .net resources for landscape programming:

Clipper – An open-source polygon clipping library, with C#, Perl and Ruby examples.

Triangle.net – A free .net triangulation library.

Visual Studio 2013 Community Edition – a free, fully fledged .net development IDE including performance analysis tools.

XNA Game Studio – Although deprecated, the XNA libraries and examples embedded in it are a good starter – just don’t develop your application using Game Studio itself; use the examples and binaries in standard VS.net.

A Trip through the graphics pipeline – An excellent primer on the technology of graphics rendering.

Tessellating Tiles

Arranging in Distance Dependent Detail

One of the immediate problems to be solved when building a tiled landscape is the need to tessellate tiles of different scales. Assuming the tiles are repeats of the same regular mesh but with varying heights on the Y axis, and with each tile size being double the previous, the following method can be used.

  • Arrange the tiles with respect to the distance from the viewer. In this example the number denotes the size of the tile, and the viewer is assumed to be on one of the most detailed “4” tiles.


  • Record the adjacencies of each tile to the next
  • Parse the adjacencies using the following rule

If the tiles are more than one size differential split the larger tile into its constituent four smaller tiles.


Results of pass 1 – reduces a “2” tile to its constituent “3” tiles.


Results of pass 2 – reduces the bottom right “1” tile to four constituent “2” tiles


Results of pass 3 – reduces the top right “2” tile to its four constituent “3” tiles


Results of pass 4 – reduces the top left corner.

After this iterative process the tile map now has no junctions where a tile meets another tile that is more than one size differential. This is important in the next step.
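The iterative splitting above can be sketched in code – an illustrative Python model (not from the original post) where the tile map is flattened onto a fine grid, each cell recording the detail level of the tile covering it (higher = smaller, more detailed, matching the numbering in the diagrams), and splitting a tile is modelled by raising a cell's level by one:

```python
def restrict(levels):
    """Split tiles until no tile abuts a neighbour more than one level away."""
    rows, cols = len(levels), len(levels[0])
    changed = True
    while changed:
        changed = False
        for y in range(rows):
            for x in range(cols):
                # Check the four edge-adjacent cells.
                for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols:
                        if levels[ny][nx] - levels[y][x] > 1:
                            levels[y][x] += 1  # split the larger tile once
                            changed = True
    return levels
```

Running this on a map repeatedly splits the coarse tiles, just as passes 1–4 above do, until every junction differs by at most one level.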

Preventing Ripping

  • Using the adjacency map record for each tile which edge abuts another tile which has a larger number.
  • When producing the mesh, store two numbers for the Y coordinate of each odd-numbered vertex along that edge – its more accurate height, and the height interpolated between its two neighbours.


In the example above each numbered vertex will have two values recorded for its Y (height) value – the actual value obtained from a heightmap or other source and, if it is an odd-numbered vertex, the value as a linear interpolation between its two neighbours. So V1’s interpolated height would be the average of V0 and V2, and V3’s the average of V2 and V4. This step is called “vertex welding”.


From the above diagram you can see that when rendering the black, more detailed mesh, you can choose to render the numbered vertexes either as their “natural” more accurate value, or the interpolated value – and that interpolated value will precisely match the “natural” value of the adjacent larger tile, as the interpolated point V1 will fall exactly on the line A-B and V3 on the line B-C. This will render a seamless join between the two tiles.
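The welded edge can be sketched as follows – a minimal Python version (illustrative, not the post's actual code) operating on the list of edge-vertex heights of the more detailed tile:

```python
def weld_edge(heights, use_interpolated):
    """Return the edge heights to render. When use_interpolated is True,
    every odd-numbered vertex takes the average of its two even-numbered
    neighbours, so it falls exactly on the coarser tile's straight edge."""
    welded = list(heights)
    if use_interpolated:
        for i in range(1, len(heights) - 1, 2):
            welded[i] = (heights[i - 1] + heights[i + 1]) / 2.0
    return welded

# V1 lands on the line V0-V2 and V3 on the line V2-V4, matching the
# coarser neighbour's edge, so the join renders seamlessly.
welded = weld_edge([10, 14, 12, 5, 8], True)   # -> [10, 11.0, 12, 10.0, 8]
```

With use_interpolated set to False the "natural" heights render unchanged, so a tile bordering an equally detailed neighbour keeps its full accuracy.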

Tidying Up

Although no ripping will be evident, the normals calculated for each vertex in a mesh depend on the values of its neighbours, and at the edges of a mesh the vertices in the adjacent mesh are not taken into account. In addition, to prevent “jumping” or visible normal transition artefacts, the normals should be calculated with reference to an adjacent mesh at the same level of detail as the subject mesh.

Limitations of Runtime Heightmap Terrain

In previous posts I have discussed techniques for runtime generation of landscape form from fixed meshes, using heightmap textures to generate Y coordinate offsets. This technique is very fast in execution and very frugal in GPU bandwidth, but it does have some limitations:

  • The heightmap size is limited to a single texture, meaning that it is not infinitely extensible.
  • Ultimately the heightmap pixels map to a world voxel coordinate, and can only be interpolated to smaller world coordinates by using terrain-type specific noise (i.e. bumpy ground height noise etc).
  • Continual sampling of the heightmap at different resolutions using floating point can lead to sampling errors, especially near to the edge of the heightmap texture (DirectX no longer supports the margin property of the sampler so we have to include a gutter on the heightmap, further degrading its usable size).
  • Specific landform types cannot be usefully described – river channels, moraines, geological rock outcrops, and more generally any overhang, tunnel or cave.
  • The deformation of the heightmap by features such as rivers, roads, building platforms etc proved to be insurmountable – the heightmap resolution required in the near field of view was excessive, and the progressive halving of the resolution out in the medium and long field of view rendered these landscape features visibly incorrect (rivers got wider and less distinct, roads became impossible to depict).

These limitations can sometimes be overcome.

Heightmap Maximum Size

I generated a set of tessellating heightmaps to create an infinitely extensible heightfield. This approach caused the following issues:

  • In order to keep the number of heightfield textures submitted to the shader within an acceptable limit, a progression of re-sampled heightmaps was needed, each being the aggregation of four tessellating heightmaps, along with a selection algorithm to send the appropriately detailed map to the shader.
  • Each call to the shader required a minimum of four heightmaps as it was unlikely that the geometry mesh being drawn would match any given heightmap footprint.
  • Sampling errors at the junction between tessellations of different resolutions of heightmap and geometry became quite a problem.

Heightmap Minimum Resolution

As the viewer came closer to the landscape surface the resolution of the heightmap no longer gave a progressively more detailed landscape. Only the introduction of landscape-specific noise, using Perlin textures or other procedural height generation, solved this problem.
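As a sketch of what such procedural detail can look like – a hash-based value noise standing in for a Perlin texture lookup (the function names and constants here are illustrative, not from the post):

```python
import math

def value_noise(x, z, seed=1234):
    """Cheap hash-based value noise with smooth interpolation; a stand-in
    for sampling a Perlin texture at sub-heightmap resolution."""
    def hash2(ix, iz):
        # Deterministic pseudo-random value in 0..1 for an integer lattice point.
        h = (ix * 374761393 + iz * 668265263 + seed) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 0xFFFF
    ix, iz = math.floor(x), math.floor(z)
    fx, fz = x - ix, z - iz
    # Smoothstep fade for C1-continuous interpolation between lattice points.
    fx = fx * fx * (3 - 2 * fx)
    fz = fz * fz * (3 - 2 * fz)
    top = hash2(ix, iz) * (1 - fx) + hash2(ix + 1, iz) * fx
    bot = hash2(ix, iz + 1) * (1 - fx) + hash2(ix + 1, iz + 1) * fx
    return top * (1 - fz) + bot * fz

def detailed_height(coarse_height, x, z, amplitude=0.5):
    """Add small-scale noise on top of the bilinearly sampled heightmap value,
    giving detail below the heightmap's native resolution."""
    return coarse_height + amplitude * value_noise(x * 8.0, z * 8.0)
```

The amplitude and frequency would in practice be chosen per terrain type (e.g. bumpy ground versus smooth rock), as the limitations list above notes.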

Next Steps

In order to overcome this problem my next steps will be to “go back to the beginning” and examine the techniques needed to render a large scale landscape without runtime height generation, using tiled meshes unique to each landscape section. This will require a lot more meshes, but its strength is that each landscape tile is a self contained mesh, and meshes are generally cheap. With a mesh you can describe any shape with a known level of resolution (which can vary over the mesh) and accurately describe linear features within the design pipeline rather than at runtime.

Linear Surface Features on Dynamic Terrain

The Problem

Dynamically scaled terrain like that generated by Geo Clip Mapping, CLOD or other algorithms which vary the number and location of the geometry based on the distance from the viewer have a nasty side effect. Whilst it is possible to sample the underlying height map texture to place billboards and 3D geometrical objects on the terrain with a reasonable degree of accuracy, the placement of horizontal features is more problematic.

The error margin between the height sampled on the heightmap texture and the accuracy of the underlying geometry (which gets more coarse the further from the viewer) means that vertically aligned billboards and geometry will alternately float above or sink below the level of the landscape surface as perceived by the viewer.
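The float/sink error can be shown with a tiny 1D sketch (illustrative Python, assuming integer sample positions and a coarse mesh that keeps only every step-th heightmap sample):

```python
def mesh_height(true_heights, x, step):
    """Height as a coarse mesh renders it: linear interpolation between the
    vertices the mesh actually keeps (every step-th heightmap sample)."""
    x0 = (x // step) * step
    x1 = min(x0 + step, len(true_heights) - 1)
    t = (x - x0) / step if x1 != x0 else 0.0
    return true_heights[x0] * (1 - t) + true_heights[x1] * t

heights = [0, 5, 3]                    # heightmap samples; a bump at x = 1
placed = heights[1]                    # object placed from the heightmap: 5
rendered = mesh_height(heights, 1, 2)  # coarse mesh skips the bump: 1.5
# the object now floats 3.5 units above the surface the viewer actually sees
```

At full resolution (step = 1) the two heights agree exactly; the error only appears where the mesh has dropped vertices.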


In the diagram above the naturalistic landscape curve allows the object to be placed on the surface without distortion.

When rendered on a variable density triangle mesh (represented by the vertical red lines) the placement is relatively unchanged. But as the mesh becomes more coarse (i.e. further away from the viewer) the object progressively interferes with the underlying geometry, floating above or sinking below the perceived “correct” height.


This effect is most obvious if the cube were placed on the top of the hill, where its height map would make it hover obviously above the red geometry.


At a long distance this effect is rarely noticeable, for the same reason we can afford to approximate the underlying geometry and save render time: the object is very far away and such distortions aren’t easy to see.

One area where the distortion is particularly problematic is in horizontal surface features; rivers and roads. These features have a geometry of their own (typically a triangle strip) which does not correspond with the underlying landscape geometry (which is changing dynamically).


The illustration above shows a river geometry in blue overlaid onto a consistent landscape mesh in red. If both surfaces had exactly the same slope and were flat, this would work. In all other cases horrible distortions will be seen as the river geometry dips and dives under and over the landscape geometry. Add to this the fact that an LOD geometry does not have a consistent mesh, but varies depending on the distance from the viewer, and we cannot paint horizontal features very easily.

The Expensive Solution

A solution for this is to store the linear feature as a series of textures (or stencils) which cover specific parts of the landscape. This is called Texture Splatting – the landscape is rendered once for the underlying landscape texture and then repeatedly, once for each section of the river, road etc. This allows the pixel shader to sample the river stencil and draw a texture where the river is supposed to be, or clip the pixel and allow the underlying landscape texture to show through.

This method does work but has a massive performance penalty. If we look back at why we use dynamic landscape meshes in the first place, it’s because rendering large landscapes is very GPU intensive. Following this method requires us to render the landscape geometry many times – once for each river segment that is visible in the view frustum. And because linear features add a lot of value to a landscape, giving them a much shorter frustum far plane to speed things up just creates horrible popping.

The Complicated Cheap Solution

The cheap solution is to use river segments and calculate the height at which they need to be painted with reference to the pixels that have already been drawn. The maths of this method is covered in the excellent series of articles by MJP – Position From Depth. The basic method is:

1) When drawing the expensive landscape mesh, take advantage of multiple render targets and, inside your pixel shader, write your calculated depth from the camera to a separate render target. This step is like creating a deferred depth map, but in this case in View Space (i.e. relative to the camera).

2) Create a simple cube geometry mesh that intersects the landscape for each of the river segments you want to draw. When drawing that simple geometry, pass in the depth texture you calculated in step 1. Using the location of the pixel on the surface of the cube and the original depth of the landscape at the same pixel location you can reconstruct the exact World position of the pixel at that location (i.e. find out the World position of the pixel which corresponds to the landscape behind the cube).

3) Check to see if the World position of the pixel calculated in step 2 is within the footprint of the cube you are drawing. If it is, sample your river stencil and paint the pixel blue. If it is not within the cube footprint then clip the pixel and leave the landscape showing.
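Steps 2 and 3 can be sketched as follows – a simplified Python version (illustrative, not the post's actual shader code) that assumes the camera sits at the origin so view space and world space coincide:

```python
def reconstruct_world(ray_dir, stored_view_depth):
    """Given the view ray through the cube pixel and the view-space depth
    written when the landscape was drawn, recover the landscape position
    behind that pixel. ray_dir is an (x, y, z) direction with z toward
    the far plane; camera at the origin is an assumption for brevity."""
    # Scale the ray so its Z component matches the stored landscape depth.
    scale = stored_view_depth / ray_dir[2]
    return tuple(c * scale for c in ray_dir)

def in_footprint(pos, cube_min_xz, cube_max_xz):
    """Step 3: test the reconstructed position against the cube's
    horizontal (X, Z) footprint; outside it, the pixel would be clipped."""
    return (cube_min_xz[0] <= pos[0] <= cube_max_xz[0] and
            cube_min_xz[1] <= pos[2] <= cube_max_xz[1])

# A ray pointing slightly downward hits landscape at view depth 10;
# the reconstructed point lands inside a cube spanning X -2..2, Z 8..12.
pos = reconstruct_world((0.0, -0.5, 1.0), 10.0)   # -> (0.0, -5.0, 10.0)
```

In the real shader the stored depth comes from the texture written in step 1, sampled at the cube pixel's screen coordinates.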


This shows the concept – the red circles highlight the pixels that are being rendered on the cube surface. The dotted lines illustrate where we are using the ray angle to sample the previously rendered depth texture (the purple line). The yellow circles provide the calculated World Position of the pixel directly behind the cube pixel we are rendering. If that World Position falls within the cube horizontal footprint then that pixel can be rendered using whatever texture we need (water, roadway etc) otherwise we leave it clipped so the landscape shows through.

A simpler way of calculating the World Position than that described by MJP is to record all three World Coordinates when drawing the landscape, but at the expense of using two extra screen sized textures (for the X, Y and Z coordinates).

In Action


Here is an example where I am rendering a series of river sections (represented by the set of green wireframe cubes). The river stencils are calculated at pipeline generation time and stored in the asset database. They are passed in with the appropriate cube, and the relative offset of the pixel on the cube’s X,Z coordinates is used as the texture sample coordinate for the stencil.


The cube sizes are determined arbitrarily at asset generation time and limited by two constraints – the size of the corresponding river stencil texture, and the fact that every pixel forming the surface of the cube is rendered (with the vast majority being rejected) with no corresponding effect on the hardware depth clipping which you would find with a solid cube.


The important point about the river section above is that the river has no geometry and will always be on the surface of the landscape, irrespective of how that landscape geometry is formed.



Post Processing Distance Blurring

Distance blurring is an effective technique to cover up the inconsistencies of distant objects, and to present a “real world” feel to distant objects.

In order to achieve this you need to render your scene to a Texture instead of the screen (deferred rendering) and pass that Texture through to an XNA SpriteBatch (i.e. render it in 2D) with an attached pixel shader to blur the colours.

In order to blur the colours based on relative depth from the current perspective, the pixel shader should sample the depth buffer (recorded in another Texture generated during deferred rendering and passed into the blur shader).

This technique is similar to attenuation, where the colours are allowed to fade out, but it affects the sharpness of edges as well: distant objects blur together, slightly out of focus.
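A 1D sketch of the idea (illustrative Python, not the post's shader: scalar intensities stand in for colours, and a three-tap box blur stands in for the blur kernel):

```python
def depth_blur(colors, depths, near, far):
    """Blend each pixel toward the average of its neighbours, weighted by
    its normalised depth: sharp at the near plane, fully blurred at the
    far plane. Edge pixels clamp to themselves."""
    out = []
    for i, c in enumerate(colors):
        left = colors[max(i - 1, 0)]
        right = colors[min(i + 1, len(colors) - 1)]
        blurred = (left + c + right) / 3.0
        # Normalised depth in 0..1 controls how much blur is applied.
        t = min(max((depths[i] - near) / (far - near), 0.0), 1.0)
        out.append(c * (1 - t) + blurred * t)
    return out

# Near pixels keep their sharp edge; far pixels smear into their neighbours.
near_row = depth_blur([0, 1, 0], [0, 0, 0], 0, 100)       # unchanged
far_row = depth_blur([0, 1, 0], [100, 100, 100], 0, 100)  # softened
```

The real shader does the same per-pixel mix, sampling the depth texture to compute the blend factor.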

The pixel shader pseudo-code can be found here: http://xboxforums.create.msdn.com/forums/t/7015.aspx.


Trees softening into the distance. To get double the blur effect, just run the shader twice.