Mega terrain architecture (real time perlin generated planets)


The basic premise of mega terrain is no pregeneration of anything besides half a dozen base textures. As the camera moves, texture maps are added and merged. The reason for using textures instead of the perlin function directly is to allow GPU-based erosion. The entire terrain is represented by a single heightmap. Individual tiles of pre-eroded perlin textures are added, then eroded again. The smallest perlin texture is 1/65536th the size of the biggest; that's half a meter per texel if the planet is earth sized and I use 1024 x 1024 pixel textures. I use several contrived perlin functions, similar to the ones on my perlin page, to generate continents and mountain ranges. The real bugbear, as you'll see, is water and the generation of rivers in real time.
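The level-of-detail ladder implied by those numbers can be sketched quickly. This assumes five tile levels ("1" to "5", as in the heightmap figures further down), each 1/16th the world size of the one above, so the smallest is 1/16^4 = 1/65536th of the largest; the level-1 span of 32,768 km is an assumption chosen to match the ~500 m tile and half-meter texel quoted in the text.

```python
# Sketch of the tile ladder: five levels, each 1/16th the span of the
# one above, 1024x1024 texels per tile.  LEVEL1_SPAN_KM is an assumed
# value back-derived from the half-meter-per-texel figure.

LEVEL1_SPAN_KM = 32768.0   # assumption, not from the article
TEXELS = 1024

for level in range(1, 6):
    span_km = LEVEL1_SPAN_KM / 16 ** (level - 1)
    texel_m = span_km * 1000.0 / TEXELS
    print(f"level {level}: tile {span_km:10.1f} km, texel {texel_m:10.2f} m")
```

The last line printed shows level 5 at a 0.5 km tile and roughly a half-meter texel, matching the figures above.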

Note this system is not for beginners or people who are afraid of maths. If you are at the beginner end of the spectrum and want to do planets, consider a tiled sphere and diamond-square fractal terrain. A total beginner could start with pre-generated tiles, just like you would use with a 2.5D game.

Artistic theme and contrivance

Google some satellite images and compare them to weather forecasting images. Notice the satellite images have little to no discernible shading due to elevation. Bumpmapped weather images, on the other hand, commonly exaggerate the heights of mountains greatly. The colours in both are also likely to be posterised or enhanced. A true representation of a terrain from high up will be based fundamentally on dryness. On a NASA site somewhere they talk about the colours of images and how accurate they are to what the human eye would see. We don't need to know too much about that, but the point I am trying to make is that a real-world image doesn't necessarily look as colourful or understandable as the earth images we see all the time.

In practical terms, say you want to land on a planet somewhere nice and lush; from high up it's not so easy to tell where that would be. Or say you want to land somewhere mountainous; apart from the major tectonic faults, that's not so obvious either.

Real NASA images are kind of hard to interpret. Also notice how important major rivers are in the terrain, not the water itself but the silt and deltas.

So what to do: have 1000 different shades of dirt, or give the terrain a fun look? From my other work, you may have seen I like a naive, slightly cartoonised look for a terrain. Enhancing colours and terrain normals makes things easier, because we don't have the same fidelity of terrain generation as the above shots. This brings me to a crucial factor: the above images are about 800 x 800 texels, but to generate them would require at least 4 times that resolution. Why? Because the erosion follows slope, and a slope is defined by a trend of pixels. Individual hills one pixel high don't tell us which way to erode or give us a buffer for movement of terrain. This is critical because we need that water flow data to know where the deltas and silt plains are.

Take a look at this site: they talk about the exaggeration of reliefs and why it's done. This one has braille and looks quite tactile... and fun!

So assuming we want to exaggerate our heights and colours, what now? As we have a lush map, we're in good stead for texturising our terrain. We will be using a rain map too, rather than raining evenly; this will give us deserts and forests. Trying to simulate their formation any other way is too involved. Normals for heightmaps will be amplified based on the distance of the camera to the surface. The pitfall of normal amplification is interpolating small shadows into big ones. You can't just go crazy, but a bit extra makes up for the lack of other texturing detail. Lerping a perlin volume texture based on dryness is the key texturing method used.
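The dryness lerp can be sketched on the CPU as follows. The two base colours and the clamping are illustrative assumptions; in the real shader this would be a lerp between perlin volume texture samples in HLSL, driven by the rain map.

```python
# Minimal sketch of dryness-driven texturing: blend between an assumed
# lush colour and an assumed arid colour by a dryness value in [0, 1]
# sampled from the rain map.

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

LUSH = (0.15, 0.45, 0.10)   # assumed RGB for wet terrain
ARID = (0.65, 0.55, 0.35)   # assumed RGB for dry terrain

def terrain_colour(dryness):
    """dryness in [0, 1]: 0 = rainforest, 1 = desert."""
    return lerp(LUSH, ARID, max(0.0, min(1.0, dryness)))

print(terrain_colour(0.0))   # fully lush
print(terrain_colour(1.0))   # fully arid
```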

NASA's version of Africa and mine. I used a rainmap based on a real weather map (notice they don't provide sea rain in their weather data, so you have to align carefully). Normal shading is starting to appear in mine, but there are no mountain shadows in the NASA shot (not sure what angle the sun is at). I seriously lack the water data, like deltas and silt flow. I have to make up for it somehow or I'd have too many flat expanses.

GPU limitations

Using the GPU has a few drawbacks:

.It can't really pass any info back, except maybe via an occlusion query (I'm talking shader model 3.0 here).
.The CPU still needs to inspect the texture for things like collision detection (see my page on GPU-based collision detection).
.It can't transcend its dimensional reality; that is, it can only operate on the texel it is supposed to be rendering.

Vanilla erosion isn't really hampered by any of these limitations: each texel just samples its neighbouring texels and adds or subtracts colour for itself. I do flooding by using occlusion queries. For pregenerated terrain I still use the CPU. It's many times slower, but much more flexible, powerful and easy to debug. Pregenerating rivers by CPU is also much easier. The GPU erosion algorithm is a little simpler, and it needs to be more brutal so I can do fewer passes. More brutal erosion leads to more anomalies. One obvious anomaly comes from choosing texels with a preference based on the order of processing. You know what I mean: looping for x = ..., and if, for example, several texels are the same height, either the first or the last is always chosen. This can lead to regular geometric patterns forming. Use some way to avoid this; a pseudorandom function, even one really crappy for other purposes, is adequate.
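The tie-break idea can be sketched like this: instead of always scanning neighbours in the same order, a cheap per-texel hash rotates the starting index, so equal-height ties don't always resolve the same way. The hash constants and neighbour ordering here are illustrative, not the article's actual shader code.

```python
# Sketch of scan-order tie-breaking for erosion.  A deliberately crappy
# pseudorandom hash (as the text says, that's adequate) rotates which
# neighbour is examined first, so ties between equal-height texels
# don't form regular geometric patterns.

def cheap_hash(x, y):
    # only needs to break scan-order bias, not pass statistical tests
    return (x * 73856093 ^ y * 19349663) & 7

NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1),
              (-1, -1), (-1, 1), (1, -1), (1, 1)]

def lowest_neighbour(height, x, y):
    """Return (height, nx, ny) of the lowest neighbour of texel (x, y)."""
    start = cheap_hash(x, y)          # rotate the scan start per texel
    best = None
    for i in range(8):
        dx, dy = NEIGHBOURS[(start + i) % 8]
        nx, ny = x + dx, y + dy
        if 0 <= ny < len(height) and 0 <= nx < len(height[0]):
            h = height[ny][nx]
            if best is None or h < best[0]:
                best = (h, nx, ny)
    return best
```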

There are quite a few compromises we have to make in quality of erosive realism and texturising. I'll list them as I think of them.

I should also mention here: don't trust the intrinsic GPU maths functions! I've had numerous problems with very small numbers. For typical applications like shading you probably won't notice, but with my fisheyeing, reversing the fisheye is critical to my engine's success.

Having the HLSL maths blues?

Instead of                 Try
asin(x)                    2 * atan(x / (1 + sqrt(1 - x * x)))
pow(a, b)                  exp(log(a) * b)

Especially with numbers approaching 0. Also try doing the calculations in the pixel shader if you think a linear interpolation is distorting your results. It's a big call keeping the maths there; I haven't had to resort to that yet.
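Both substitutions are exact identities (the asin one follows from the half-angle formula, and the pow one holds for a > 0), which a quick check outside the GPU confirms, including for values approaching 0:

```python
# Numerical check of the two HLSL substitutions suggested above.
import math

def asin_alt(x):
    return 2.0 * math.atan(x / (1.0 + math.sqrt(1.0 - x * x)))

def pow_alt(a, b):
    return math.exp(math.log(a) * b)   # valid for a > 0

for x in (1e-7, 0.1, 0.5, 0.999):
    assert abs(asin_alt(x) - math.asin(x)) < 1e-12

for a, b in ((1e-6, 0.5), (2.0, 10.0), (0.3, -2.5)):
    assert abs(pow_alt(a, b) - a ** b) < 1e-9 * abs(a ** b) + 1e-15
```

Of course this exercises the CPU's libm, not the GPU intrinsics; the point is that the alternative forms compute the same function, so swapping them in can only change the precision behaviour, not the result.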

Texture management

You'll need a thread that loads and unloads textures (or generates them) as the camera moves around. It will need to merge them, and also convert eroded textures into height data to be used for collision detection. There's no rocket science here; spread out the erosion renders over multiple frames so you don't hitch while moving. If you're not leaving the planet's surface, you have an easy time: you don't need to fisheye the height map and can just tile it as you move around.

As you're using the GPU for this work, you'll want to look into mutex locking or some other technique to make sure you don't try to render while the window is redrawing. Either that or just do the work in the draw thread.

For those of us who want to do spaceship visitations, things get tough. I can only do 5 to 10 erosion passes per frame without slowing down the render cycle, and proper eroding requires at least 100. That's around one regen every second, and in one second in a spaceship I can fly beyond the high-LOD area of the heightmap. Increasing the high-LOD area means it will take proportionally longer to generate, because you're eroding terrain you're not travelling to as well as the new area you're entering. Pre-eroding each texture helps cut this work down. It's not technically accurate, but summing the resulting water works adequately.
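The frame-budget arithmetic above can be sketched as a trivial scheduler. The class and names are illustrative; 8 passes per frame is an assumed mid-range of the 5 to 10 quoted.

```python
# Sketch of spreading a full regen's erosion passes over frames.
PASSES_PER_REGEN = 100
PASSES_PER_FRAME = 8        # assumed mid-range of the 5-10 quoted above

class ErosionJob:
    """Runs a per-frame budget of erosion passes until the regen is done."""
    def __init__(self):
        self.done = 0

    def tick(self):
        # run this frame's budget; report whether the regen has finished
        self.done = min(PASSES_PER_REGEN, self.done + PASSES_PER_FRAME)
        return self.done >= PASSES_PER_REGEN

job = ErosionJob()
frames_needed = 1
while not job.tick():
    frames_needed += 1
print(frames_needed)   # 13 frames at 8 passes per frame
```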

I suppose I should also mention texture coordinates here. I support both wrapped textures and axis-aligned textures. I use axis-aligned for debugging, like the earth heightmap. I actually wrap the random textures aligned to Y up, Z front, so only the texture as a circle is used. This is obviously a problem because the default shader wrap is for a rectangle. Either way, if you haven't tried to texturise a sphere before, it gets a little complicated at the seams. If you texturise literally, where the corners are effectively chopped at the inscribed circle, they will join at the opposite side of the circle. If you texturise based on the normal facing the camera, you won't get this seam, because the stretching and tugging is always on the other side of the globe. Technically this is not correct, but who cares!

To clarify for beginners, axis-aligned is like wrapping a lolly: the top and bottom of the texture represent the poles. The closer to the top and bottom, the more scrunched up the texture becomes. If you fly over a pole, you will have a hard time concealing slight errors where the seams don't match perfectly. A good reason for having either glaciated or watery poles!
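The lolly wrap is just a longitude/latitude mapping of the surface normal. A minimal sketch, with the seam and pole conventions chosen arbitrarily:

```python
# Sketch of axis-aligned ("lolly wrap") UVs from a unit surface normal.
# The poles sit along +/-Y (v = 0 and v = 1, where the texture
# scrunches up), and the longitude seam falls along -X.
import math

def axis_aligned_uv(nx, ny, nz):
    u = 0.5 + math.atan2(nz, nx) / (2.0 * math.pi)   # longitude
    v = 0.5 - math.asin(ny) / math.pi                # latitude
    return u, v

print(axis_aligned_uv(0.0, 1.0, 0.0))   # north pole -> v = 0
print(axis_aligned_uv(1.0, 0.0, 0.0))   # a point on the equator
```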

Texture wrapping is like a blanket covering the sphere. The facing side is always the middle; even if the source texture is at the edge, it just wraps as a rectangle. This is not technically correct, but unless you look at the globe from two places at once you won't notice the anomalies. The highest-level texture, which represents continents, will be effectively squished into a circle, but for random terrain it doesn't really matter.

Getting high level of detail


To keep things simple, I use a single height map. To do this, I need to zoom it based on how far I can see. A camera that can see the entire planet in shot should have half the world's texture in its height map. A camera on the surface should only need a heightmap that spans the horizon. Of course, a theoretical horizon is smaller than a hilly one. A fisheye technique will allow us to see a reasonable distance when we get close to the surface.

The heightmap inset shows zooming in based on camera altitude. The fisheye effect increases until by the final shot nearby texels are only a meter or so across. In the far right shot, I have landed on a "5" which is about 500 meters across earth scale.

The smallest texture, "5", is 1/65536th the size of "1". That works out to about 500 meters across if the planet is earth size, so a 1024 x 1024 perlin texture will have a highest level of detail of half a meter. To be able to see a minimum of 100 km while standing on the surface, you'd need 0.5 meter texels in the center of the height map, fisheyeing out to 1000 times that at the edge of the heightmap. That's a steep power curve, and we only want to approach it as we get close. By the time my camera is at the surface, the close texels represent less than a meter, and the texels at the edge of the heightmap are over a kilometer across. I touched on the difficulties of correcting the fisheye above; it's really important that you achieve this, otherwise the entire system is flawed. I believe there isn't much left in terms of floating point accuracy, so I wouldn't want to have to zoom in to look at the world as an ant, or have Jupiter-sized planets.
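A quadratic fisheye happens to land very close to those figures. This is a sketch under assumed parameters (100 km view range, a 1024-texel heightmap, exponent 2), not the engine's actual curve; the key property is that the inverse mapping is exact, since, as noted above, any error in reversing the fisheye flaws the whole system.

```python
# Sketch of a power-curve fisheye for the heightmap and its exact inverse.
VIEW_RANGE_M = 100_000.0   # assumed: see ~100 km from the surface
HALF_TEXELS = 512          # half of an assumed 1024-texel heightmap
EXPONENT = 2.0             # assumed power curve

def texel_to_world(r):
    """Map normalised heightmap radius r in [0, 1] to world metres."""
    return VIEW_RANGE_M * r ** EXPONENT

def world_to_texel(d):
    """Exact inverse; error here shows up as the terrain swimming."""
    return (d / VIEW_RANGE_M) ** (1.0 / EXPONENT)

def texel_width_m(i):
    """Approximate world width of texel ring i (0 = centre)."""
    return texel_to_world((i + 1) / HALF_TEXELS) - texel_to_world(i / HALF_TEXELS)

print(texel_width_m(0))                 # ~0.38 m at the centre
print(texel_width_m(HALF_TEXELS - 1))   # ~390 m at the rim, ~1000x the centre

# the round trip must stay tight, even for tiny values near the camera
for r in (1e-6, 0.01, 0.5, 1.0):
    assert abs(world_to_texel(texel_to_world(r)) - r) < 1e-9
```

With exponent 2 the rim texel is 1023 times wider than the centre one, which is where the "1000 times" figure comes from.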

The complications don't stop there: you have to adjust normal calculations based on the size of the texels. Rather than recalculating, you might want to store the texel size as a map in itself. I remember someone saying a fetch takes about as long as 6 maths operations, so it might be worthwhile. Don't forget a texture map means you don't have to recalculate every frame when not moving. Another complication is that a lot of these calculations rely on knowing the altitude, but the planet's surface isn't at the radius; it's the radius plus the heightmap, so you need to sample the height under the camera.

Terrain mesh

The vertices of the grid, too, are moved to give level of detail. Rather than tile a sphere, I use a single grid that has small triangles close and big triangles far away. The closer the camera is to the surface, the more the vertices gravitate towards the camera.

The VTF grid vertices are gravitated towards the camera. The trick is to only describe a spherical section that can be seen and not waste vertices over the horizon. In the first shot, we see a hemisphere that is rotated to face the camera. In the second shot culling is turned off so you can see the edge of the grid just wrapping past the horizon. The funny curve in that shot is an optical illusion. In the third shot, the smallest triangles are less than a meter across world scale.

The 'grid' is actually a circle, with the close triangles smaller; the LOD effect amplifies this further. You need to watch for rolling with VTF systems like this. What that means is, as the vertices move around, they give a rolling-sea effect as they roll over any corners. It's not so noticeable for smooth terrains, but you'll see what I mean when you try to do cliffs. To avoid this, you need to snap the vertices to a whole vertex position based on the LOD size for that triangle. That's not so simple with my system, but you can reduce the effect considerably.
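The snapping itself is simple; the hard part in a fisheyed system is deciding the right lattice spacing for each ring. A minimal sketch of the snap step, with illustrative names:

```python
# Sketch of snapping a grid vertex to the whole-texel lattice for its
# LOD ring, damping the "rolling sea" artefact as the grid slides over
# the terrain.  lod_size is the world-space texel spacing for that ring.

def snap(world_x, world_z, lod_size):
    """Snap a vertex to the texel lattice of its LOD ring."""
    sx = round(world_x / lod_size) * lod_size
    sz = round(world_z / lod_size) * lod_size
    return sx, sz

print(snap(103.7, 58.2, 8.0))   # -> (104.0, 56.0)
```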

The grid shown above has 6,400 triangles in it, and it's segmented, so when you get close you can frustum cull by limiting the index range. I haven't actually implemented that in this system, but my landbound one uses it, so I know it's quite easy.


Texture variety

With pregenerated terrain, getting variety is easy: just use lots of perlin textures. We can't do that here, so we need to be careful not to create repeating patterns.

A level two texture has caused repeating island patterns. To avoid this, I use one of the colour channels in the higher level texture to shuffle the height around.

I'm particularly careful with the second-level texture because it is still big enough to create continents in its own right. If I don't allow this, coastal geography will be texelated, as the only decider is the largest texture. Instead, you need methods to shuffle the deck and chop up any repeating patterns. Alongside height, I use two more colour channels: one for water and the other for modifying the next, smaller texture. Additional variety textures can be used to seed erratics like craters and volcanoes.
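The channel-driven shuffle can be sketched like this. The mixing formula and the strength constant are assumptions for illustration; the article only specifies that a parent channel perturbs the child height.

```python
# Sketch of using the parent tile's modifier channel to shuffle the
# height of the next texture down, breaking repeating island patterns.
# The formula and strength are illustrative assumptions.

def shuffled_height(child_height, parent_modifier, strength=0.5):
    """child_height, parent_modifier in [0, 1]; returns perturbed height."""
    # centre the modifier on 0 so it can push the height both ways
    h = child_height + strength * (parent_modifier - 0.5)
    return max(0.0, min(1.0, h))

print(shuffled_height(0.6, 0.5))   # midpoint modifier leaves height alone
print(shuffled_height(0.6, 1.0))   # pushed up to 0.85
```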

Clouds and atmosphere

At the moment I'm just using a perlin texture. It doesn't look that great; I need to create fronts and storms, and stretch out some cirrus clouds. I haven't decided on the atmosphere yet; it's just a backface sphere / skydome.

Stock standard 2D perlin clouds.

Fractal Mountains

Fractals are a great way to seed mountains. You've probably noticed most terrain generators use them, and they certainly give nice craggy rocks. Unfortunately, with the fisheyeing and zooming, my system doesn't really suit them unless I apply them early, before composing the height map, and that would mean an additional series of render passes. Consequently, I see it as a dead end, because getting the terrain is only the first step. I can already do that; it's moving beyond a barren world that is hard. That's my opinion anyway; there are plenty of other sites that talk about it, so I don't want to dwell.

Of course, should NVidia provide hardware support for a diamond-square texture mipmap system, I'd go out and buy their new card today!


Rivers

The GPU flood fill is great for lakes, but much of a river's course is gently downhill. Getting the GPU to path a river and give me back the info I need to render it is a problem I am still tackling. If a river is dished, how does the pixel shader know what the water height should be in the middle of the river? It can't practically box-scan every texel checking heights, because the body of water could be large. Even if you did sample neighbouring texels, it would have to know which edges are tangential to the flow.

Instead, I render using occlusion queries and several flood fill effects. My algorithm is improving, but it still needs work. I alternate between billboard floods and occlusion query floods. Because you need to render each pixel as a vertex for occlusion queries, I am trying to keep the number of these renders down. A bit of heuristics would be good, but for now I simply flood the texture as a billboard for a set number of iterations, then do the query. If a texel is flooded and it is below the step-up height of the current water level, then a dam has overflowed, or the river is running downhill. To check this, I do an overflow render every iteration too. So that's 3 different effects alternating to flood. Even so, this process is too slow for real-time river pathing. Being able to zoom in on the area in question would be a big boon, I think, but I still haven't solved this without risking flooding out of shot.

This system has problems, so I don't want to go any further explaining it until I have a practical working solution. To give you an idea though, generating rivers for a 2048 x 2048 heightmap using the CPU could take at least half a day (I haven't tested this fully, but that's about right). An unoptimised GPU river generation routine (doing occlusion tests every operation, etc.) would take around 45 minutes. Optimised, I can do it in under 10 minutes. That sounds like a lot, but for 2048 x 2048, that's 4 million vertices rendered for each occlusion test! When I can get the time down to under a minute, I can realistically do background river generation. Don't forget, this is island/continent wide, so unless you are flying a spaceship around, this should be adequate.
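For reference, the CPU route can be sketched with a priority-queue flood (often called "priority-flood"): process cells lowest-first from the map border and raise any cell that sits below its pour point, which fills natural dams and box valleys into lakes at the correct water level. This is a generic textbook sketch, not the article's GPU algorithm, and its per-cell heap work is part of why the CPU path is so slow at 2048 x 2048.

```python
# Generic priority-flood depression filling on a small height grid.
import heapq

def fill_depressions(height):
    rows, cols = len(height), len(height[0])
    water = [row[:] for row in height]
    seen = [[False] * cols for _ in range(rows)]
    pq = []
    # seed the queue with border cells (water can drain off the map)
    for y in range(rows):
        for x in range(cols):
            if y in (0, rows - 1) or x in (0, cols - 1):
                heapq.heappush(pq, (water[y][x], x, y))
                seen[y][x] = True
    while pq:
        level, x, y = heapq.heappop(pq)
        for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nx, ny = x + dx, y + dy
            if 0 <= ny < rows and 0 <= nx < cols and not seen[ny][nx]:
                seen[ny][nx] = True
                water[ny][nx] = max(water[ny][nx], level)  # raise to pour point
                heapq.heappush(pq, (water[ny][nx], nx, ny))
    return water

pit = [[5, 5, 5],
       [5, 1, 4],
       [5, 5, 5]]
print(fill_depressions(pit)[1][1])   # the pit fills to its lowest rim: 4
```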

Of course, you could simply ignore all the natural dams in the landscape and just path perlin rivers and lakes, but that just irks me too much. How can you have box valleys everywhere without them all turning into lakes?

I'm working on a different river system. Stand by for details soon.


Flora

I have decided to only use textured flora: bump-mapped textures for bushes and shrubs, and parallax textures for trees. This will preclude ground-level camera angles, but I am thinking the compromise will be worth it. I'm working on this now and should have something up soon.

Fine tuning

Earth is a good way to fine tune things like snow from altitude versus latitude.
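A minimal sketch of that tuning, assuming a snow line that drops linearly from roughly 5000 m at the equator to sea level at the poles; both the constant and the linear falloff are assumptions to be tuned against Earth, not figures from the article.

```python
# Sketch of altitude-versus-latitude snow, tuned against Earth.
EQUATOR_SNOWLINE_M = 5000.0   # assumed tuning constant

def snow_line_m(latitude_deg):
    """Altitude above which snow appears at a given latitude."""
    t = min(abs(latitude_deg), 90.0) / 90.0
    return EQUATOR_SNOWLINE_M * (1.0 - t)

def is_snowy(altitude_m, latitude_deg):
    return altitude_m >= snow_line_m(latitude_deg)

print(snow_line_m(0))      # 5000.0 at the equator
print(is_snowy(3000, 60))  # True: the line at 60 degrees is ~1667 m
```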


I haven't solved everything. This system has proven itself workable for basic terrain though, without rivers, or with semi-sensible rivers (ignoring natural dams and arbitrarily pathing through some valleys but not others). I plan on tackling flora soon: not meshes, but parallax textures suitable for top-down viewing at, say, 100 m above ground.