Friday, October 23, 2015

Infinity: Battlescape Kickstarter

Something that might well be of interest to fellow members of the procedural content generation community is the recently launched Kickstarter for Infinity: Battlescape.





What started as the ambitious and massively impressive Infinity project many years ago has grown and evolved over time to now underpin the I-Novae technology, and it's great to see it hopefully taking a massive step closer to seeing the light of day as a fully functioning commercial production when so many other such projects wither and fizzle out.

I am happy to back their efforts and hope that by posting here I may in some tiny way help spread the word.

Sunday, May 17, 2015

Start Your Engines

Taking a little break from virtual populations, I thought it would be interesting to continue the spirit of this project and try out another new technology I've not dirtied my hands with before.  Physics (or more specifically mechanics) has been a staple of game development for about ten years now, since ground-breaking releases such as Half-Life 2 and its popular community modification Garry's Mod in 2004, but even though such technology has appeared in countless products in the last decade I've never actually had to implement it myself, so I thought it was about time.

Writing a performant and stable physics system from scratch is a complex and exacting process, but fortunately there are a variety of freely available physics solutions that can be leveraged on the PC, one of the most well known being the Bullet physics library, so that's where I started.  My end goal for the initial physics integration was to create a vehicle of some sort that could be driven around on my virtual planet, so features pertaining to that objective were the priority.  After a little research however I found that the vehicle simulation features of Bullet were quite limited, so I looked around for an alternative.  On second look I found that the NVIDIA PhysX SDK offered a more complete vehicle simulation model, so I refocussed my efforts on that.



Although physics can be complex, fortunately PhysX comes with a useful set of samples that were a great aid in getting started.  Thanks to these it wasn't too difficult to get the SDK linking in and the basic simulation objects for a vehicle up and running.  I then hit a wall however: I had originally hoped I could provide some sort of callback to perform the required ray tests for the vehicle against the terrain heightfield, but as far as I can tell PhysX simply does not offer this functionality.

What it does provide is a heightfield shape primitive that would normally be a logical fit for vehicle simulation, but as these are square and essentially flat while my planet is neither, having to make use of them made implementing my vehicle more of a challenge than I had originally intended.

In addition to the problem of mapping square physics heightfield grids to my spherical planet, the other major problem is the same one facing virtually every other system involved in planet scale production - namely that of floating point precision.  For speed, PhysX, in line with most real time physics simulation systems, operates on single precision floating point numbers by default, but these of course rapidly lose precision when faced with global distances.  PhysX can be recompiled to use double precision floating point which would probably sort out the issue, but performance would suffer with such numerically intensive algorithms, and as I would like to leave the door open for making more extensive use of physics in the future I didn't really want to go down that route.

The alternative is to establish a local origin for the physics simulation that is close enough to the viewpoint that objects moving and colliding under physics control do so smoothly and precisely even when single precision floating point is used.  The key here is that all numbers in the physics simulation are relative to this origin so they never get too big, although for this to work the origin itself needs to be updated to stay close to the viewpoint as it moves around the planet's surface.  As mentioned before, my planet is constructed from a subdivided Icosahedron so has as its basis 20 large triangular patches, each of which is subdivided eight times to give 65,536 triangles.  As the viewpoint moves closer to the surface these root patches are in turn progressively subdivided, each root patch turning into four new patches each made up of the same number of triangles but covering one quarter of the surface area, thus increasing the visual fidelity.

My planet is represented by two radii: the Inner Radius, which defines how far the lowest possible point on the terrain (the deepest underwater trench say) is from the centre of the planet, and the Outer Radius, which defines how far the highest possible point is from the centre of the planet (the top of the highest mountain say).  Between these two distances lies the planet's crust that is represented by the terrain geometry.  To produce an approximation of an Earth sized planet I am currently using an inner radius of 6360 Km and an outer radius of 6371 Km, giving a possible crust range of 11 Km or about 36,090 feet - significantly less than the corresponding distance on Earth were you to measure from the bottom of the Challenger Deep to the top of Mount Everest, but I don't have much interest in simulating deep sea environments at the moment so I'm focussing on the more humanly explorable (and visible) portions of our planet for now.

The table below shows the relative size of terrain patches and the renderable triangles they contain at each level of subdivision.  As the level increases you can see the number of patches and triangles needed to represent the entire planet grows exponentially, terminating at patch level 18 which would require 90 quadrillion triangles to be rendered!

The patch and triangle sizes and counts for each level of icosahedron subdivision
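As a quick sketch of the arithmetic behind the table (not code from the project; it assumes the icosahedron's vertices sit on the outer-radius sphere, giving the usual unit edge-length constant):

```cpp
#include <cstdio>
#include <cmath>

// Illustrative arithmetic only: 20 root patches, each splitting into 4 per
// level, with every patch containing 256x256 = 65,536 renderable triangles.
int main()
{
    const double outerRadiusKm = 6371.0;
    // Edge length of an icosahedron inscribed in a sphere of that radius.
    const double rootEdgeKm = outerRadiusKm * 4.0 / std::sqrt(10.0 + 2.0 * std::sqrt(5.0));
    for (int level = 0; level <= 18; ++level)
    {
        const double patches   = 20.0 * std::pow(4.0, level);
        const double triangles = patches * 65536.0;
        const double edgeKm    = rootEdgeKm / std::pow(2.0, level);
        std::printf("L%-2d  %14.0f patches  %22.0f tris  edge ~%.3f Km\n",
                    level, patches, triangles, edgeKm);
    }
}
```

Running this reproduces the headline numbers: level 18 works out at roughly 90 quadrillion triangles, while a level 13 patch edge comes out a little under 1 Km.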
This Icosahedron topology provides a useful basis for physics origin placement as it naturally divides the planet's surface into discrete units; all that remains is to decide which subdivision level to use.  The trade-off here is that deeper subdivision levels produce more accuracy, as they restrict the distance any physics object can be from the origin, but necessitate moving the origin more frequently because of this limited range.  It also has to be borne in mind that generating heightfield data for the physics system is not instant, so the maximum travel speed of any vehicle required to drive upon the ground needs to be restricted such that nearby data can be generated fast enough to be ready by the time the vehicle gets there.  In practice I don't think this should be a problem for any real world style vehicle, and fast moving vehicles such as planes can use a different, less detailed system.

For now I've chosen to use patch subdivision level 13 as the basis for my physics origin, the reason being that I wanted a physics heightfield resolution of approximately 1 metre per sample, which a 1024x1024 heightfield shape provides at that level (the table shows the patch length of a level 13 patch (L13) is approximately 1 Km).  When the physics system is initialised the origin is set to the centroid of the L13 patch under the viewpoint, then as the viewpoint moves about and leaves that patch the origin is teleported to the centroid of the new L13 patch the viewpoint now sits above.  Any active physics objects need to have the opposite translation applied to keep them constant in planet space despite the relocated origin.
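A minimal sketch of that relocation, assuming a PhysX 3.x style API and a hypothetical double-precision vector type for planet-space positions - the helper names here are mine, not the project's:

```cpp
#include <PxPhysicsAPI.h>
#include <vector>
using namespace physx;

// Hypothetical double-precision vector for planet-space positions.
struct Vec3d { double x, y, z; };

// Called when the viewpoint crosses into a new L13 patch.  'newOriginPS' is
// the centroid of that patch in planet space; 'oldOriginPS' the previous
// physics origin.  Every actor gets the opposite translation so it stays
// fixed in planet space while the simulation origin moves.
void RelocatePhysicsOrigin(PxScene* scene,
                           const Vec3d& oldOriginPS, const Vec3d& newOriginPS)
{
    const PxVec3 shift(float(oldOriginPS.x - newOriginPS.x),
                       float(oldOriginPS.y - newOriginPS.y),
                       float(oldOriginPS.z - newOriginPS.z));

    PxU32 count = scene->getNbActors(PxActorTypeFlag::eRIGID_DYNAMIC);
    std::vector<PxActor*> actors(count);
    scene->getActors(PxActorTypeFlag::eRIGID_DYNAMIC, actors.data(), count);

    for (PxActor* actor : actors)
    {
        PxRigidDynamic* body = actor->is<PxRigidDynamic>();
        if (!body) continue;
        PxTransform pose = body->getGlobalPose();
        pose.p += shift;                 // the opposite translation
        body->setGlobalPose(pose);
    }
}
```

Recent PhysX releases also expose PxScene::shiftOrigin, which can do much of this bookkeeping internally.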

To avoid a delay each time the viewpoint moves from one L13 patch to another while the physics heightfield for the new patch is generated, the heightfield data for the adjacent patches is pre-generated.  Rather than the eight adjacent patches you would have on a square grid, the underlying icosahedral topology results in no fewer than twelve patches adjacent to each central one, as shown below:

Colours indicating the 12 patches adjacent to a central one
This means that in addition to producing a heightfield for the patch under the viewpoint, twelve more need to be generated ready for when the viewpoint moves.  Once a move to an adjacent patch does occur, any of the twelve patches adjacent to the new central patch that have not already been generated are generated, and so on.  As all this data generation takes place on background threads the main rendering frame rate is unaffected.

Having a 1024x1024 PhysX heightfield shape for each L13 patch is one thing, but the square, essentially 2D heightfield has to be orientated properly to fit onto the spherical planet.  This is done by establishing a local co-ordinate system for each L13 patch: the unit vector from the planet centre to the centroid of the patch is defined as the Y axis, while the X and Z axes are produced by using the shortest 3D rotation that maps the planet's own Y axis onto the local patch one.  Although this does produce a varying local co-ordinate space across the surface of the planet it doesn't actually matter as long as the inverse transformation is used when populating the heightfield - this ensures the heightfields always match the global planet geometry.  One final adjustment to make sure the entire planet can be traversed is to modify the direction of gravity with each origin relocation so it always points towards the centre of the planet.
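For illustration, the shortest-arc rotation and the per-relocation gravity update might look something like this (a sketch only, using PhysX maths types for convenience):

```cpp
#include <PxPhysicsAPI.h>
using namespace physx;

// Shortest 3D rotation taking unit vector 'from' onto unit vector 'to',
// via the standard half-way vector construction (the degenerate case of
// exactly opposite vectors is omitted for brevity).
PxQuat ShortestArc(const PxVec3& from, const PxVec3& to)
{
    PxVec3 half = (from + to).getNormalized();
    PxVec3 axis = from.cross(half);
    return PxQuat(axis.x, axis.y, axis.z, from.dot(half));
}

// On each origin relocation: build the local frame for the new L13 patch
// (local Y is the planet-centre-to-centroid direction) and re-point gravity
// at the planet centre.
void OnOriginRelocated(PxScene* scene, const PxVec3& patchUp)
{
    // Orients the heightfield shape for this patch; the inverse transform
    // is applied when populating the heightfield samples.
    PxQuat patchFrame = ShortestArc(PxVec3(0.0f, 1.0f, 0.0f), patchUp);
    (void)patchFrame;

    // Gravity always points back towards the planet centre.
    scene->setGravity(patchUp * -9.81f);
}
```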

So now there is rolling generation of heightfields as the viewpoint moves around, but even though they are roughly the same size, the 1 Km square heightfields don't really fit onto the triangular patches very well purely because of their shape:

Distant view of the physics heightfields for the centre patch and its 12 neighbours.  The radical difference between the triangular mesh patches and the square physics heightfield shapes causes a huge amount of overlap
A close up of this debugging view highlights how much of the geometry is covered by two or more physics heightfields - computationally inefficient and problematic for stable collisions
This overlap of physics geometry is both inefficient and prone to producing collision artifacts due to the inexact intersections of nearly coplanar geometry.  Fortunately PhysX has the ability to mark heightfield triangles as "holes", making them play no part in collisions.  Marking any heightfield triangles that fall entirely outside of the owning L13 patch as holes produces a much better fit with minimal overlap between adjacent heightfields.
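PhysX marks holes through a sample's material index; a sketch of filling a 1024x1024 heightfield and punching out the triangles that fall outside the owning patch might look like the following, where HeightAt and TriangleOutsidePatch are placeholders for the real terrain sampling and containment tests:

```cpp
#include <PxPhysicsAPI.h>
#include <vector>
using namespace physx;

PxI16 HeightAt(PxU32 row, PxU32 col);                       // hypothetical terrain sampler
bool  TriangleOutsidePatch(PxU32 row, PxU32 col, int tri);  // hypothetical containment test

// Fill a 1024x1024 heightfield, marking any triangle entirely outside the
// owning L13 patch as a hole so it takes no part in collision.
void BuildHeightFieldSamples(std::vector<PxHeightFieldSample>& samples)
{
    const PxU32 size = 1024;
    samples.resize(size * size);
    for (PxU32 row = 0; row < size; ++row)
    {
        for (PxU32 col = 0; col < size; ++col)
        {
            PxHeightFieldSample& s = samples[row * size + col];
            s.height = HeightAt(row, col);   // quantised terrain height
            // Each sample owns two triangles; eHOLE removes one from collision.
            s.materialIndex0 = TriangleOutsidePatch(row, col, 0)
                             ? PxHeightFieldMaterial::eHOLE : 0;
            s.materialIndex1 = TriangleOutsidePatch(row, col, 1)
                             ? PxHeightFieldMaterial::eHOLE : 0;
        }
    }
}
```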

Distant view of the physics heightfields again but this time quads falling outside of the geometry patch are marked as holes and not rendered showing a much better fit to the underlying mesh topography.
A close up of this second version shows minimal overlay between adjacent physics heightfields
The final step to achieving my goal is the vehicle itself.  I'm no artist and don't have a procedural system to create vehicles at the moment so I turned to the internet and found a free 3D model for a suitable vehicle that was licensed for personal use - a Humvee seemed appropriate considering that all driving is off-road at the moment in the absence of roads.

Downloaded Humvee model in Maya. At around 26K triangles it's about right for the balance of detail and real-time performance I want.
This gave me something to render visually, but of course the physics model has to be set up appropriately to match: the wheels the right size and in the right place, the suspension travel and resistance set to reasonable values, the mass of all components specified, and the various drive-train parameters set up to give a suitable driving experience.

In total there are quite a few major parameters controlling the physics vehicle model, with many more advanced controls lurking in the background.  The all-important feel of driving the vehicle relies on the interplay of these many variables, so it is important that they can be experimented with as easily as possible to allow tweaking.  To this end I put the parameters into a JSON file which can be reloaded with the press of a key while driving around, allowing engine power, chassis mass, suspension travel and the like to be altered and the change evaluated immediately for fine tuning, as sketched after the example below.

Example of the JSON for the Humvee simulation model.  There are plenty of more advanced parameters the PhysX vehicle model exposes but for my purposes these are sufficient
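A sketch of the loading side of this, using nlohmann/json purely for illustration - the parameter names here are my own guesses rather than the actual file's:

```cpp
#include <fstream>
#include <nlohmann/json.hpp>

// Tunable vehicle parameters, reloadable at runtime with a key press.
struct VehicleTuning
{
    float chassisMass      = 1800.0f;  // kg
    float enginePeakTorque = 500.0f;   // Nm
    float suspensionTravel = 0.3f;     // m
    float wheelRadius      = 0.5f;     // m
};

bool LoadVehicleTuning(const char* path, VehicleTuning& out)
{
    std::ifstream file(path);
    if (!file) return false;
    nlohmann::json j;
    file >> j;
    // Missing keys fall back to the current values, so partial files work.
    out.chassisMass      = j.value("chassisMass",      out.chassisMass);
    out.enginePeakTorque = j.value("enginePeakTorque", out.enginePeakTorque);
    out.suspensionTravel = j.value("suspensionTravel", out.suspensionTravel);
    out.wheelRadius      = j.value("wheelRadius",      out.wheelRadius);
    return true;
}
```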
Making it data driven also allows me to easily set up different vehicle models simulating everything from fast bouncy beach buggies to lumbering military vehicles or even civilian buses or lorries, which sounds like a lot of fun.

Fortunately, for playing with all this PhysX includes a really handy visual debugger tool that lets you see exactly what the physics simulation thinks is going on - which can often be quite different to what is being rendered if you have bugs in the mix.

The PhysX Visual Debugger tool showing the simplified representation of the Humvee used in the simulation
With PhysX in and working, heightfield shapes set up to collide against and a vehicle to drive hooked up to basic keyboard inputs for steering, acceleration, brake and handbrake, I had everything I needed, but while fun to drive around I felt it looked somehow wrong... which after a little thought I realised was because the vehicle didn't cast a shadow on the terrain.  Without this it didn't really feel that it was sitting on the ground, more floating some distance above it, and it certainly wasn't clear when it caught air leaping over bumps in the ground.

To fix this I added some fairly rudimentary shadow mapping allowing the vehicle mesh to cast a shadow.  There are many techniques to be found for shadow mapping, each producing better or worse results in different situations in return for different degrees of complexity.  To get something up and running as quickly as possible I implemented plain old PCF shadows using a six-tap Poisson disc based sampling pattern.  Even though it's one of the simplest techniques out there, I was happy with the improvement it provided in making the vehicle feel far more 'grounded':

Without shadows the Humvee looks like it's floating above the terrain somewhere even though it's actually sitting upon it - an effect known as Peter-Panning in shadow mapping parlance
Casting a shadow onto the terrain makes it obvious the vehicle is sat upon the ground while self-shadowing goes a long way to help make it look more part of the scene in general
So that's it: I achieved what I wanted in making a drivable vehicle to tour around my burgeoning procedural planet.  As a next step I think some roads might be nice to drive along...

Wednesday, March 11, 2015

Population Explosion

I described in an earlier post my algorithm for placing the capital city in each of my countries; expanding on that, I wanted to gather some more information about the make-up of each country so I can make more informed decisions about further feature creation and placement.

One key metric is the population of a country, as knowing how many people live in each country is key in deciding not just how many settlements to place but also how big they should be.  Until this is known much of the rest of the planetary infrastructure cannot be generated, as so much of it is dependent upon the needs of these conurbations and their residents.

World population density by country in 2012.
(Image from Wikipedia)
A completely naive solution would be to simply choose a random number for each country, but this would lead to future decisions that are likely to be completely inappropriate for both the constitution of the country's terrain and its geographical location.  A step up from completely random would be to weight the random number by the country's area so the population was at least proportional to the size of the country, but again this ignores key factors such as the terrain within the country - a mountainous or desert country for example is likely to have a lower population density than lush lowlands.

To try to account for the constitution of the country's physical terrain rather than just use its area, I instead create a number of sample points within the country's border and test each of these against the terrain net.  As mentioned in a previous post, intersecting against the triangulated terrain net produces points with weights for up to three types of terrain, depending on the terrains present at each corner of the triangle hit.  By summing the weights for each terrain type found at each sample point's intersection I end up with a total weight for each type of terrain, giving me a picture of the terrain make-up across the entire country.  I can tell, for example, that a country is 35% desert, 45% mountains and 20% ocean.

This is of course just an estimate due to the nature of sampling but the quality of the estimate can easily be controlled by varying the distance between sample points at the expense of increased processing time.  Given a set number of samples however the quality of the estimate can be maximised by ensuring the points chosen are as evenly distributed across the country as possible.

The number of samples chosen for the country is calculated by dividing its area by a fixed global area-per-sample value to ensure the sampling is as even across the planet's surface as possible - currently I'm using one sample per 2000 square kilometres.  Once the number is known, the area of each triangle in the country's triangulation is used to see how many sample points should be placed within that triangle.  Any remainder area left over after that many samples' worth of area has been subtracted is propagated to the area of the next triangle - this also applies if the triangle is too small to have even a single sample point allocated to it, making sure its area is still accounted for.

If a triangle is big enough to have sample points allocated within it, those points are randomly positioned using barycentric co-ordinates in a similar manner to how the capital cities were placed.  There is nothing in this strategy to prevent samples from adjacent triangles falling close to each other, but in general I am happy that it produces an acceptably even distribution with a quality/performance trade-off that can easily be controlled by varying the area-per-sample value.
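In sketch form the area-carrying allocation might look like this (illustrative names throughout):

```cpp
#include <vector>

struct Triangle { double areaKm2; /* vertices elided */ };

// Allocate sample points to triangles at one sample per 'areaPerSample' Km2,
// carrying any remainder area forward so small triangles still contribute.
std::vector<int> AllocateSamples(const std::vector<Triangle>& tris,
                                 double areaPerSample = 2000.0)
{
    std::vector<int> counts(tris.size(), 0);
    double carry = 0.0;
    for (size_t i = 0; i < tris.size(); ++i)
    {
        double area = tris[i].areaKm2 + carry;
        counts[i]   = int(area / areaPerSample);
        carry       = area - counts[i] * areaPerSample; // remainder propagates
    }
    return counts;
}
```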

So given the proportion of each type of terrain making up a country, how do I turn that into a population value?  I assign a number of people per square kilometre to each type of global terrain, multiply that by the area of the country and then by the weight for that type of terrain to get the number of people living in that type of terrain in the country, then finally sum these values across terrain types to get the total number of people in the entire country.
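Written out, the calculation is just a weighted sum; a sketch with illustrative types:

```cpp
#include <map>
#include <string>

// peoplePerKm2: global density assigned to each terrain type.
// terrainWeights: normalised weights measured by the sampling pass.
double CountryPopulation(double countryAreaKm2,
                         const std::map<std::string, double>& terrainWeights,
                         const std::map<std::string, double>& peoplePerKm2)
{
    double population = 0.0;
    for (const auto& tw : terrainWeights)
    {
        auto density = peoplePerKm2.find(tw.first);
        if (density != peoplePerKm2.end())
            population += countryAreaKm2 * tw.second * density->second;
    }
    return population;
}
```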

I've produced these people-per-Km2 values largely by using the real world figures for population density found on Wikipedia as a basis.  Using figures a year or two old, the Earth has an approximate average population density of just 14 people per square kilometre when you count the total surface area of the planet, but this rises to around 50 people per square kilometre when you count just the land masses.

As shown on the infographic above however, the huge variety of factors influencing real world population density leads to some areas of the planet being very sparsely populated with only a couple of people per square kilometre while other areas have 1000+.  There isn't enough information present in my little simulation to reflect this diversity yet, so I'm currently working with a far more restricted range based around the real world mean.  The values I am currently using are:

Figures for how many "people" to create per square kilometre for each terrain type 
So given the fairly random distribution of terrain on my planet as it now stands, what sort of results drop out of this?  Currently the terrain looks like this:

The visual planetary terrain composition, currently the ocean terrain type has a slightly higher weighting so there is more water surface area than any other single terrain type  
while making 100 countries produces the figures shown here

Terrain composition for a number of the countries along with their resultant population densities by total area and by landmass area
As can be seen, basing the terrain population densities on real world values has generated a planetary population not too different to our own, but as my planet is currently 24% water rather than the 70% or so found on Earth, the actual densities are probably on the low side.  The global terrain composition is currently made up like this:

Proportion of the planet covered by each terrain type and resultant contribution to the planet's population

What is interesting is that the sampling strategy ensures the population count properly reflects the proportion of each country that is land mass - you can see that countries such as Sralmyras, which is 32% water, have a lower population density by area of just 13, while Kazilia, which is 7% water, has a density of 17 people per Km2.

With a population count in hand, my plan is now to use that in conjunction with the terrain composition profile to derive a settlement structure for each country, so I know how many villages, towns and cities to create.  Watch this space.

Saturday, March 7, 2015

Derivative Mapping

I've been somewhat unhappy with the lighting on my terrain for a while now, especially the way much of the time it looks pretty flat and uninteresting.  It's using a simple directional light for the sun at the moment, along with a secondary perpendicular fill light as a gross simulation of sky illumination, plus a little bit of ambient to eliminate the pure blacks.  Normals are generated and stored on the terrain vertices, but even where the landscape is pretty lumpy they vary too slowly to provide high frequency light and shade, and with no shadowing solution currently implemented there's not much visual relief evident, leading to the unsatisfying flatness of the shading.

While I plan to add shadowing eventually I don't want to open that can of worms quite yet, so I had a look around for a simpler technique that would add visual interest.  I thought about adding normal mapping, but the extra storage required for storing tangent and binormal information on the vertices put me off as my vertices are already fatter than I would like.  While looking into this however I came across a blog post by Rory Driscoll from 2012 discussing Derivative Mapping, itself a follow up to Morten Mikkelsen's original work on GPU bump mapping.  This technique offers a similar effect to normal mapping but perturbs the surface normal using either the screen space derivatives calculated from a single channel height map texture, or the pre-computed texture space derivatives of said height map stored as a two channel texture.

This was appealing to me not just because I hadn't used the technique before and was therefore interested just to try it out, but also because from an implementation point of view I would not have to pre-compute or store anything on the vertices to define the tangent space required for normal mapping, saving a considerable amount of memory given the density of my geometry.  It also sidesteps the problem of defining said tangent space consistently across the surface of a sphere, a non-trivial task in itself.

Thanks to the quality of the posts mentioned above it was a relatively quick and easy task to drop this in to my terrain shader.  I started with the heightfield based version as, with the tools I had available, creating a heightfield was easier than a derivative map, but while it worked, the blocky artifacts caused by the constant height derivatives where the heightmap texels were oversampled were very visible, especially as the viewpoint moved closer to the ground.  I could have increased the frequency of the mapping to reduce this, but when working at planetary scales at some point they are always going to reappear.  To get round this I moved on to the second technique described, where the height map derivatives are pre-calculated in texture space and given to the pixel shader as a two channel texture rather than a single channel heightmap.  The principle here is that interpolating the derivatives directly produces a higher quality result than interpolating heights and then computing the derivatives afterwards.  I had to write a little code to generate this derivative map as I didn't have a tool to hand that could do it, but it's pretty straightforward.
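For illustration, a sketch of that generation step, assuming an 8-bit tiling heightmap and central differences with wrap-around sampling:

```cpp
#include <cstdint>
#include <vector>

// Build a two-channel derivative map from a tiling single-channel heightmap.
// Derivatives are remapped from [-1,1] to [0,255] for storage in the red (X)
// and green (Y) channels.
void BuildDerivativeMap(const std::vector<uint8_t>& height, int w, int h,
                        float bumpScale, std::vector<uint8_t>& outRG)
{
    outRG.resize(size_t(w) * h * 2);
    auto H = [&](int x, int y) {
        return height[size_t((y + h) % h) * w + ((x + w) % w)] / 255.0f;
    };
    auto pack = [](float d) {
        d = d < -1.f ? -1.f : (d > 1.f ? 1.f : d);
        return uint8_t((d * 0.5f + 0.5f) * 255.0f + 0.5f);
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            float dx = (H(x + 1, y) - H(x - 1, y)) * 0.5f * bumpScale;
            float dy = (H(x, y + 1) - H(x, y - 1)) * 0.5f * bumpScale;
            outRG[(size_t(y) * w + x) * 2 + 0] = pack(dx);
            outRG[(size_t(y) * w + x) * 2 + 1] = pack(dy);
        }
}
```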

Although this takes twice the storage and a little more work in the shader, the results were far superior in my context, with the blocky artifacts effectively removed and the effect under magnification far better.


A desert scene as it was before I started.  With the sun overhead there is very little relief visible on the surface
The same view with derivative mapping added.  The surface close to the viewpoint looks considerably more interesting and the higher frequency surface normal allows steeper slope textures to blend in
As you can see here the difference between the two versions is marked, with the derivative mapped normals showing far more variation as you would expect.  The maps I am using look like this:


The tiling bump map I am using as my source heightfield
The derivative map produced from the heightfield.  The X derivatives are stored in the red channel and the Y in the green.


Here is another example this time in a lowlands region:


A lowlands scene before derivative mapping is applied
The same scene with derivative mapping.
Particularly evident here is the additional benefit of higher frequency normals: it is the derivative-mapped normal, not the geometric one, that is used to decide between the level ground texture (grass, sand, snow etc.) and the steep slope texture (grey/brown rock) on a per-pixel basis.  This produces the more varied surface texturing visible in the derivative mapped version of the scene above.

Finally, here are a couple more examples, one a mountainous region the other a slightly more elevated perspective on a mixed terrain:


Mountain scene before derivative mapping
The same scene with the mapping applied, the mountain in the foreground shows a particularly visible effect 
A mixed terrain scene without mapping
The same scene with the derivative mapping applied.  The boundaries between the flat and steeply sloped surface textures in particular benefit here with the transitions being far smoother
To make it work around the planet a tri-planar mapping is used, with the normal for each plane being computed then blended together exactly as for the diffuse texture.  For a relatively painless effect to implement I am very pleased with the result; the ease of implementation however is entirely down to the quality of the blog posts I referenced, so thanks definitely go to Rory and Morten for that.

Monday, March 2, 2015

In Deep Water

I've been thinking about water, particularly how the oceans of a planet affect the evolution of the people living on it, their choices for habitation, agriculture and infrastructure.  Of course there are many other factors that influence these things but oceans seem like a good place to start, and with 71% of the Earth covered by them there is certainly plenty of reference material about.

I had a couple of thoughts about how to procedurally generate the water masses for my planet.  The most straightforward, and probably most commonly used, is to simply decide upon an elevation to define as sea level, with everything generated by the noise functions under that counting as water.  You can then render the water at that elevation and the GPU's Z buffer will sort out the intersection with the land.  While I still want the simplicity and benefits of an established sea level, I wanted to have a look at making water region generation more integral to the overall planet's procedural system rather than it being treated essentially as a post process.

Eventually it would be nice to have bodies of water at different elevations so mountain lakes, tarns and similar could be created, preferably with rivers and streams connecting them to each other and to larger bodies of water, along with waterfalls, but that's all for the future.

The TL;DR version: this video shows the effect I'm going to describe here as it now stands:



To make water body creation part of the terrain system the heightfield generation itself has to be aware of the presence of water, so there needs to be a way to define where the water should go.  My first attempt at this used the country control net, as I thought the country shapes would make pretty decent seas and, by combining a couple of adjacent ones, some reasonable oceans.  By creating the countries as normal then simply flagging some of them as water such regions can be established; the heightfield generator can then ray-cast the point under consideration against the country net and, if it finds it's within a water "country", use a noise function appropriate for sea beds that will return low altitude points below the global sea level.

When I tried this however a couple of points became apparent.  Firstly, having to ray cast against the country net's triangulation slows down the heightfield generator, which is significant when it has to be run so frequently to generate the terrain, and while optimisation might mitigate some of that cost, I felt a more significant problem was the regularity of the water zones created.  With each being formed from a single country there were, for example, no islands generated, which is quite a big drawback and for me pretty much discounted this approach.

Instead of the country net then, how about using the terrain net?  By adding a terrain type of Ocean, which generates a plausible seabed heightfield lying below sea level, to the existing terrain types such as mountains, hills and desert, deep water regions can be formed using the same controllable system employed for those other types of terrain.  The turbulence function described previously that perturbs the terrain type determination will then also affect the water region borders, creating some interesting swirls and inlets.

The blending of mountainous or hilly terrain into the seabed generator around the transition areas also produces plenty of islands of various sizes:


Islands formed by perturbing the ray cast against the terrain net
There are some drawbacks to using the terrain net instead of the country one however: there is nothing, for example, to prevent an entire country from ending up under water, or, possibly more problematically, the vast majority of a country could be under water, leaving an implausibly small section remaining that simply would not exist in the real world.  On balance however I thought this was the better of the two systems, so am going with it for now.


Rendering Water

Rendering of realistic water is a long standing challenge for real time graphics and one that I've looked at myself from time to time in my various forays into terrain generation and rendering. For this project I thought a good place to start with generating the actual geometry for the water surface would be to use essentially the same system as I already use for the terrain, namely a subdivided icosahedron.

The existing triangular patch generator can easily be extended to determine whether any vertices in the patch are below sea level, and if so a second set of vertices for the ocean level surface patch is generated.  These vertices represent the triangular area of the surface of a sea-level-radius sphere centred on the centre of the planet and encompassed by the patch in question.  Although the plan is to have realistic waves on the water, these will be generated by the vertex shader displacing the vertices, so a smooth sphere is enough to start with.


Wireframe of the water surface before being perturbed in the vertex shader using the displacement of the simulated water surface
The level of detail system described previously can also be leveraged to decide which water geometry patches to render by simply running it in parallel on both the terrain patches and the water ones - a different distance scalar can also be used for the water patches, enabling lower detail geometry to be used for the water, potentially improving performance as long as the visual quality doesn't suffer too much.

Even though the identical geometric topology allows the water patches to use the same index buffer as the terrain, there is an additional and more significant optimisation opportunity here that would save a great deal of memory.  With each water patch representing what is essentially an identically shaped section of the sphere's surface at that level of detail, they could actually all be drawn using a single vertex buffer with a suitable transformation matrix to put each at the correct position and orientation.  Setting this up is a little fiddly so I'm leaving it for a future task, but should memory become an issue it's likely to be one of the first things I'll revisit.

As for the actual water visuals, the movement and appearance of water is notoriously difficult to simulate, especially as the real thing behaves so radically differently in different situations - shallow water is totally different to deep, for example, and white water rapids are completely at odds with languid meandering rivers.  To make such a difficult problem manageable I chose to focus on just one aspect, and when talking at planetary scales deep water seemed like the logical choice - the oceans created as described above will dominate any other water features I add in the future.

There are quite a few references and demos for deep water rendering, but the one I chose was the NVIDIA "Ocean" demo from their now superseded graphics SDK, which is in turn based on such classic work as Jerry Tessendorf's paper "Simulating Ocean Water".  I liked this demo as it is well documented and, being GPU based, heavily targeted at real time graphics.

This demo uses a couple of compute shaders to perform the necessary FFT, followed by a couple of pixel shaders to produce two 2D texture maps: one storing the displacement for the vertices over the square patch of water, and the other storing the 2D gradients from which surface normals can be calculated plus a 'folding' value useful for identifying the wave crests.


High LOD wireframe showing how the smooth sphere vertices have been displaced to create the waves.  Note that normally a lower LOD version is used as the majority of the shading effect comes from the normals computed in the pixel shader rather than the wave shapes produced in the geometry.
The displacement map is fed into the water geometry's vertex shader to displace the vertices, creating the waves, while the gradient/folding texture is fed into the pixel shader to allow per-pixel normals to be created for shading and wave crest effects to be added.

Using these textures as the primary inputs, there are a number of shading effects taking place in the pixel shader to give the final ocean effect.  Although the NVIDIA demo produces a nice output, I decided to largely ignore its final pixel shader as its use of a pre-generated cube map for the sky colour along with a simulated sun stripe was a bit too hard coded for my needs.  Instead I fiddled around some and came up with my own shading system.

Firstly, the gradient texture is used to compute a surface normal which is then used with the view vector to calculate the reflection vector (using the HLSL reflect function).  I then create a ray from the surface point along that reflection vector and run it through the same atmospheric scattering code that is used for rendering the skybox; this gives me the colour of the sky reflected from that point and ensures that the prevailing sky conditions at that location at that time of day are represented.  This is important to ensure effects such as sunsets are represented in the water, but even at less dramatic times of the day it gives the water some nice variegated shades of blue.
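In outline the per-pixel work amounts to this (a C++-flavoured paraphrase of the shader logic, not the shader itself; SampleAtmosphere stands in for the scattering evaluation shared with the skybox):

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Normalize(const Vec3& v) { float l = std::sqrt(Dot(v, v)); return { v.x/l, v.y/l, v.z/l }; }

// Stand-in for the atmospheric scattering evaluation used for the skybox.
Vec3 SampleAtmosphere(const Vec3& origin, const Vec3& dir);

// Sketch of the water shading logic (the real version is a pixel shader).
Vec3 ShadeWaterPixel(const Vec3& surfacePos, const Vec3& viewDir, const Vec2& gradient)
{
    // Perturb the up-facing normal by the simulated surface gradients.
    Vec3 normal = Normalize({ -gradient.x, 1.0f, -gradient.y });

    // reflect(I, N) = I - 2 * dot(N, I) * N, as per the HLSL intrinsic.
    float d = 2.0f * Dot(normal, viewDir);
    Vec3 reflected = { viewDir.x - d * normal.x,
                       viewDir.y - d * normal.y,
                       viewDir.z - d * normal.z };

    // The reflected ray is run through the same scattering code as the sky,
    // so the water picks up the prevailing sky colour at this time of day.
    return SampleAtmosphere(surfacePos, reflected);
}
```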


Bright early morning sun reflected in the water
The last traces of sunset bounce off the ocean surface, an effect that would be difficult to achieve with direct illumination
Capturing the sun's contribution in this unified manner is especially useful as its size and colour vary so much based upon the time of day; trying to represent that as a purely specular term on water can be a challenge, often ending up with a sun "stripe" that doesn't match the rendered sun - especially near the horizon.

The folding value from the gradient texture is used to add a foam effect at the top of the waves to make the water look a bit choppier.  The foam texture itself is a single channel grayscale image, with the degree of folding controlling what proportion of the grayscale is used.  A low folding value for example would cause just the brightest shades of the foam texture to be used, while a high one would cause most or all of the greyscale to be present, producing a much stronger foam component in the final effect.  A global "choppyness" value is also used to drive the water simulation, affecting how perturbed the surface normals are in addition to introducing more foam - this value can be changed dynamically to vary the ocean from a millpond to a roiling foamy mass:


A fairly low "Choppyness" value produces pleasing wave crests and moderate surface peturbation
A higher "Choppyness" produces more agitated wave movement, sharper surface relief and considerably more foam.
In addition to foam at the wave crests I also wanted foam in evidence where the water meets the land.  To accomplish this a copy of the depth buffer is bound as a pixel shader input and the depth of the pixel being rendered compared against it.  This produces a depth value representing the distance between the water surface and the terrain already rendered to that pixel.  This depth delta not only adds to the amount of foam rendered but is also used to drive the alpha component, letting the water fade out in the shallows where it meets the land.  This can be seen in both the images above where the water meets the land.
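Sketched out, the shallow-water fade is just a function of that depth difference (the constants here are illustrative, not the values I'm actually using):

```cpp
#include <algorithm>

// Fade alpha and boost foam where the water surface nears the terrain behind
// it.  'waterDepth' and 'sceneDepth' are linear eye-space depths: the water
// pixel's own depth and the value read from the bound copy of the depth buffer.
void ShorelineFade(float waterDepth, float sceneDepth,
                   float& alpha, float& foam)
{
    const float fadeRange = 2.0f;                     // metres over which the edge fades
    float delta = std::max(sceneDepth - waterDepth, 0.0f);
    float edge  = std::min(delta / fadeRange, 1.0f);  // 0 right at the shoreline
    alpha = edge;            // water fades out where it meets the land
    foam += 1.0f - edge;     // and gains foam in the same shallow band
}
```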

The benefit of using the screen space depth delta rather than a pre-computed depth value stored on the vertices is that it reacts dynamically both to the movement of the vertices driven by the water surface displacement map and to anything else that penetrates the water surface.  The latter can't be seen just yet other than where the water geometry meets the terrain as I don't have any such features, but in the future should I have ships, jetties or gas/oil rigs, the alpha/foam effect will simply work around where they intersect the water, helping them feel more grounded in-situ.


Problems of scale

As mentioned above, one of the fundamental problems with water rendering is that it behaves and appears so radically differently depending on its situation, but another problem, especially with planetary scale viewpoints, is how it appears from different distances.  The 512x512 surface simulation grid I'm using looks good close up, but simply tiling it produces unsightly repeating patterns when viewed from larger distances.



Not only does the limited simulation area become very apparent, but the higher frequency of the surface normal variation produces very visible aliasing in both the reflection vector used to compute the water colour and the wave crest effect, producing unsightly sparkling in the rendered image.

Rather than simply increase the simulation area, which would produce just a limited improvement and incur increased simulation cost, I instead vary the frequency at which I sample the simulated surface with distance.  The pixel shader uses the HLSL partial derivative instructions to determine an approximate simulation-texture-to-screen-pixel ratio, then scales the texture co-ordinates to obtain an equally approximate 1:1 mapping.  This effectively causes the simulation surface to cover increasingly large areas of the globe as the viewpoint moves further away.

This is in no way physically accurate but produces a more pleasing visual effect than the aliasing, and by blending between adjacent scales a smooth transition can be achieved to hide what would otherwise be a very visible transition line between scales - an effect very similar to a mip line when trilinear or anisotropic filtering is not being used.  Take a look at the second half of the video above to see how this looks in practice as the viewpoint moves from near the surface all the way out into space.

There is more to the effect than just scaling the simulated water surface to cover larger and larger areas though, as while this eliminates most of the tiling artifacts it also inevitably makes all water features larger, which can look increasingly unrealistic.  To alleviate this undesirable consequence, certain aspects of the effect are toned down with increasing distance from the viewpoint.  The first is the per-pixel normal calculated from the simulation's gradient texture, which has the gradient's effect scaled down to produce less variation with distance, making the resultant variations in reflected sky colour more subtle; the second is the foam wave crest effect, which is also scaled down and ultimately removed with distance:


More distant view showing how the wave crests peter out and the water adopts a smoother aspect with distance

Combining these helps make distant water a bit more appropriately indistinct.  


Making it all 3D

The final challenge with the water was taking the two dimensional result of the simulated water surface and applying it to my three dimensional planet.  There are a variety of ways to map 2D squares onto the surface of a sphere, but each has major trade-offs involving distortion of some kind.  I decided to keep it simple and use the same triplanar projection system used for texturing the terrain - i.e. using the 3D world space position of the point being shaded to sample the texture in the XZ, XY and ZY planes, then using the surface normal to blend between them.

The only trick here is to make sure the directions of the normals from the gradient map are consistent so they are oriented correctly for the plane they are being applied to; get this wrong and the sun and sky will not reflect properly in the water.
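For reference, the blend weights themselves reduce to a few absolute values (a sketch of the idea; the real work happens in the pixel shader):

```cpp
#include <cmath>

// Triplanar blend weights from a world-space surface normal: project in the
// XZ, XY and ZY planes and weight each by how squarely the normal faces it.
void TriplanarWeights(float nx, float ny, float nz,
                      float& wXZ, float& wXY, float& wZY)
{
    float ax = std::fabs(nx), ay = std::fabs(ny), az = std::fabs(nz);
    float sum = ax + ay + az;
    wXZ = ay / sum;   // top-down projection dominates on level ground
    wXY = az / sum;   // front projection dominates on Z-facing slopes
    wZY = ax / sum;   // side projection dominates on X-facing slopes
}
```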


Next Steps

I'm pretty happy with the water effect now; it still has some artifacts, but I feel I've spent enough time on it for now and the effect is good enough not to irritate me.  Next I think is integrating its placement more into the geographic terrain generator, especially in the area of trying to make interesting coastlines, along with processing its impact on the countries themselves.

Saturday, February 7, 2015

Capital Cities

Following on from Country Generation, I thought I would take a look at placing some capital cities in my newly established countries as a precursor to more general settlement placement encompassing everything from hamlets and villages up to towns and other major cities.

As a quick recap, here is my planet with the country boundaries highlighted:



So the challenge here is to pick an essentially random location within the country boundaries that is suitable for its capital city.  As I already have a triangulation of the country, used when deciding within which country a point on the planet's surface lies, I can use this to place the city.  Rather than picking random points on the planet in the hope of eventually finding one that lies within the country of interest however, it makes more sense to pick points within the triangulation to start with.

The simplest way therefore would be to just pick a random triangle from the country's triangulation then pick a random point within it.  The first is trivial, while the second can also be easily accomplished using barycentric co-ordinates to convert three random numbers in the range [0, 1] into a point within the triangle.
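One way to realise that conversion (a sketch): taking the negative log of each of the three random numbers and normalising yields barycentric weights that are uniformly distributed over the triangle, whereas normalising the raw numbers would bias points towards the centre.

```cpp
#include <cmath>
#include <random>

struct Point3 { double x, y, z; };

// Convert three random numbers in [0,1] into a uniformly distributed point
// inside triangle ABC via barycentric co-ordinates.
Point3 RandomPointInTriangle(const Point3& a, const Point3& b, const Point3& c,
                             std::mt19937& rng)
{
    std::uniform_real_distribution<double> uniform(1e-12, 1.0);
    double u = -std::log(uniform(rng));
    double v = -std::log(uniform(rng));
    double w = -std::log(uniform(rng));
    double sum = u + v + w;
    u /= sum; v /= sum; w /= sum;          // u + v + w = 1
    return { u * a.x + v * b.x + w * c.x,
             u * a.y + v * b.y + w * c.y,
             u * a.z + v * b.z + w * c.z };
}
```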

While this works, I didn't want capital city placement to be completely random; for my purposes I didn't want a capital city that was too close to the country boundaries, but with the naive approach described above there is nothing to stop a capital being right on the border.  This is purely a subjective restriction and I'm sure somewhere in the world there is a capital very close to a border, but it just didn't feel right to me.

Unfortunately determining the distance between a point within a country and its border isn't a trivial task, requiring the distance to each line segment making up the border to be calculated and the minimum taken.  Determining what is an acceptable distance is also not as easy as it might be, due to the wildly varying shapes of the countries: an acceptable distance for a generally circular country might be impossible to satisfy for a different country that is long and thin.

There isn't much that can be done to alleviate the first issue other than test as few candidate points as possible, and possibly use some sort of spatial index to reduce the number of border segments being processed - while it's a bit brute force, it's not prohibitively expensive so improving it isn't top priority.

Without any particular knowledge of a country's shape however, determining an acceptable distance from the border to place the capital city is a bit more of a challenge.  Rather than try to work out some sort of heuristic from the border vertices I chose instead to use a sampling approach: for each country a number of random points are generated using the simple algorithm above and the distance from each to the country's border calculated.  The point that is furthest from the border is chosen as the position of the capital city.



The number of points to test took a little trial and error - sample too few and you might not pick a point that's far enough away from the border to be acceptable, while sample too many and you will inevitably end up with a point pretty much in the middle of every country, which is too uniform and also unacceptable.

I ended up taking 25 sample points for each country, which I think gives a decent spread of capital positions while still not generating any too close to the borders.
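Putting the pieces together, the placement search reduces to something like this sketch, reusing the RandomPointInTriangle helper above; DistanceToBorder is the brute-force segment test described earlier (declaration only here):

```cpp
#include <random>
#include <vector>

struct Triangle3 { Point3 a, b, c; };

// Brute-force minimum distance from 'p' to the border line segments.
double DistanceToBorder(const Point3& p, const std::vector<Point3>& border);

// Generate 'numSamples' candidates and keep the one furthest from the border.
Point3 PlaceCapital(const std::vector<Triangle3>& triangulation,
                    const std::vector<Point3>& border,
                    std::mt19937& rng, int numSamples = 25)
{
    std::uniform_int_distribution<size_t> pick(0, triangulation.size() - 1);
    Point3 best{};
    double bestDist = -1.0;
    for (int i = 0; i < numSamples; ++i)
    {
        const Triangle3& t = triangulation[pick(rng)];
        Point3 candidate = RandomPointInTriangle(t.a, t.b, t.c, rng);
        double dist = DistanceToBorder(candidate, border);
        if (dist > bestDist) { bestDist = dist; best = candidate; }
    }
    return best;
}
```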

I am hoping the system can be extended to start placing secondary and tertiary settlements by adding other factors to the heuristic, such as distance from already placed settlements (scaled by the relative sizes of each settlement), along with some physical factors such as the candidate site's proximity to rivers or bodies of water to encourage settlements in coastal areas, and the general flatness of the terrain to make lowlands and valley floors more appealing than steep mountain sides.

For reference I thought it would be worth including an illustration of the triangulation generated from the country boundaries; as you can see, the simple ear-clipping algorithm used tends to produce long thin triangles.  These don't affect the functionality of the system, but at some point in the future I might be adding these triangles to some sort of spatial index to improve query performance, in which case a triangulation that was a bit more evenly distributed across the surface might be better.  It will do for now though.



Then of course there is generation of the actual cities themselves complete with road network and buildings, so there is as ever plenty to do!

Saturday, January 17, 2015

Terrain Visuals

My posts on this particular project so far have been about preparation for generating a plausible political landscape complete with infrastructure, rather than the physical landscape most of my other projects have focussed on, but before I go any further I think it necessary to have at least enough of a physical landscape to make placement and routing of the more man made features interesting... putting things on a flat coloured sphere is only going to hold my attention for so long I expect!

While I've previously worked with voxels to facilitate truly three dimensional terrain including caves and the like, with the focus of this project elsewhere the simplicity of a heightfield based terrain makes sense.  When doing this in the past I've used a distorted subdivided cube as the basis of the planet - taking each point and normalising its distance from the planet centre to produce a sphere - which has some benefits for ease of navigation around each "face" and for producing a natural mapping for 2D co-ordinates, but the stretching produced towards where the cube faces meet is unpleasant and besides, I've done that before.

Caveat: before I go any further I should point out that nothing I am doing here is rocket science - if you have ever done your own terrain projects in the past the odds are nothing here is news to you, but I'm presenting what I'm doing as it will hopefully form the basis for more interesting things to come, and if you're thinking about a terrain project for the first time perhaps it will be of use.


The Base Polyhedron

For this project therefore I've chosen to base my planet on a regular Icosahedron, a 20 sided polyhedron where each face is an equilateral triangle.  The geometry produced by subdividing this shape is more regular than the distorted cube as you are essentially working with equilateral triangles at all times.  Each progressive level of detail takes one triangle from the previous level (or one of the 20 base triangles from the Icosahedron at the first level) and turns it into four by adding new vertices at the midpoints of each edge.  Each of the new vertices is then scaled to put it back on the unit sphere:


Subdividing an equilateral triangle into four for the next level of detail produces three new vertices and four triangles to replace the original one.
This is about as simple a scheme as you can get and can be found in many terrain projects, but simplicity is what I'm after here so it's perfectly suited.  As modern graphics hardware likes to be given large numbers of triangles with each draw call rather than single terrain triangles, the basic unit I render (which I'm calling a Patch) is a triangle that's already been subdivided eight times, producing an equilateral triangle shaped chunk of geometry with 256 triangles along each edge - a total of 65,536 triangles which should feed the GPU nicely, especially as they are drawn as a single tri-strip.  Were I to draw the entire planet at my lowest level of detail I would be drawing 20 of these patches, giving around 1.3 million triangles, which empirical testing has shown my GPU eats up without pausing for breath.  Good start.
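In code one subdivision step is tiny; a sketch, with vertices held as unit-length planet-space directions:

```cpp
#include <cmath>

struct V3 { double x, y, z; };

static V3 Midpoint(const V3& a, const V3& b)
{
    // Midpoint of the edge, pushed back out onto the unit sphere.
    V3 m{ (a.x + b.x) * 0.5, (a.y + b.y) * 0.5, (a.z + b.z) * 0.5 };
    double len = std::sqrt(m.x*m.x + m.y*m.y + m.z*m.z);
    return { m.x / len, m.y / len, m.z / len };
}

// One triangle becomes four: three new vertices at the edge midpoints.
void Subdivide(const V3& a, const V3& b, const V3& c, V3 out[4][3])
{
    V3 ab = Midpoint(a, b), bc = Midpoint(b, c), ca = Midpoint(c, a);
    out[0][0] = a;  out[0][1] = ab; out[0][2] = ca;
    out[1][0] = ab; out[1][1] = b;  out[1][2] = bc;
    out[2][0] = ca; out[2][1] = bc; out[2][2] = c;
    out[3][0] = ab; out[3][1] = bc; out[3][2] = ca;   // the central triangle
}
```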


Level of Detail (LOD)

While a million triangles might not give a decent GPU pause for thought these days, what this does show is that with each level of subdivision multiplying the triangle count by four, you get to some pretty high triangle counts after just a few levels.  With planets being pretty big as well, those million triangles still don't give a terribly high fidelity rendition of the physical terrain - on an Earth sized planet the vertices are still more than 26 Km apart at this level.

As the viewpoint moves towards the planet's surface I need to subdivide further and further to get down to a usable level of detail, but by then the triangle count would probably be too much to even store in memory, never mind render, were I to try to render the entire planet at that detail, so of course some sort of level of detail scheme is required.

In keeping with my general approach the scheme I'm using is simple: when rendering I start by adding the 20 base patches mentioned above to a working set of candidates for rendering, then while that set isn't empty I take patches out of it and first test them for visibility against the view frustum, then again against the planet's horizon in case they are effectively on the back side of the planet.  If either of those tests fails I can throw the patch away, but if both pass I then calculate how far its centroid is from the viewpoint, and if that's less than a certain factor of the patch's radius I subdivide it into four patches at the next level of detail and add those into the set of patches for further consideration.  If the patch is not near enough to the viewpoint to be subdivided I add it to the set of patches to render.

The end result is a set of patches where near to the viewpoint are increasingly subdivided patches - i.e. higher LOD, each covering a smaller area at a higher fidelity - while moving away from the viewpoint are progressively lower LOD patches, each covering larger areas of the terrain at lower fidelity.  The ultimate goal is to provide a roughly uniform triangle size on screen across all rendered patches.  Note that this is intentionally a pretty basic system with no support for more advanced features such as geomorphing, so the LODs pop when they transition, but it's good enough for my purposes.
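In sketch form the selection loop looks like this, with the visibility tests left as placeholders:

```cpp
#include <vector>

struct Patch { double centroid[3]; double radius; int level; /* geometry elided */ };

bool   InsideFrustum(const Patch&);          // placeholder visibility tests
bool   AboveHorizon(const Patch&);
double DistanceToViewpoint(const Patch&);
void   SubdivideInto(const Patch&, std::vector<Patch>& out);  // four child patches

// Walk the patch set, refining until the on-screen detail is sufficient.
std::vector<Patch> SelectPatches(std::vector<Patch> workingSet, double threshold)
{
    std::vector<Patch> toRender;
    while (!workingSet.empty())
    {
        Patch p = workingSet.back();
        workingSet.pop_back();

        if (!InsideFrustum(p) || !AboveHorizon(p))
            continue;                       // invisible, discard

        if (DistanceToViewpoint(p) < threshold * p.radius)
            SubdivideInto(p, workingSet);   // too close: refine further
        else
            toRender.push_back(p);          // detailed enough: render as-is
    }
    return toRender;
}
```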


Colour-coded patches of different LODs, note how the triangle size is different in each colour with the smaller triangles closest to the viewpoint
A useful feature of the system is that by changing the threshold factor used to decide whether to subdivide a patch or not, a simple quality control can be established.  Increase the threshold and patches subdivide at greater distances, increasing the number of patches and therefore triangles rendered using higher detail representations, raising fidelity but lowering performance; decrease the threshold and the opposite happens - patches subdivide closer to the viewpoint, fewer high detail patches are rendered, and the reduced triangle count improves performance at the expense of graphical fidelity.


At low detail (a large transition distance) the geometry is relatively coarse.  900K triangles are being rendered here
Moving the transition distance towards the viewpoint gives a medium detail level with more higher detail patches being rendered making the triangles near the viewpoint smaller to improve fidelity.  1.8 Million triangles are being rendered here
Closer still and we have a high detail level with even more high detail patches being used.  5.7 Million triangles are being rendered here with the ones near the viewpoint almost a pixel in size.
In theory this value could be changed on the fly as the frame rate varies due to rendering load to try to maintain an acceptable frame rate, but I just set it to a decent fixed value for my particular PC/GPU for now.

As anyone who has worked with terrain LODs before will tell you however, this is only part of the solution: where patches of different LODs meet you can get cracks in the final image, due to there being more vertices along the edge of the higher LOD geometry than along the edge of the lower LOD geometry.  If one of these vertices is at a significantly different position to the equivalent interpolated position on the edge of the lower LOD geometry there will be a gap.  Note that as the higher LOD geometry is typically closer to the viewpoint, you only tend to see these in practice where the vertex is lower than the interpolated position; otherwise the crack faces away from the viewpoint and is hidden.
Cracks between patches of different LODs due to the additional vertices on the high detail patch not lining up with the edge they subdivide on the lower detail patch.  (Note I've zoomed in on the effect here to highlight it, whilst still visible normally the cracks are much smaller on screen)
There are various ways to tackle this problem; one I've used before is to have different geometry or vertex positions along the edges of higher LOD patches where they meet lower LOD patches so the geometry lines up, but this adds complexity to the patch generation and rendering, and in its simplest form at least limits the LOD system to only supporting a change of one level between adjacent patches.

For this project I've taken a different approach: rather than changing the connecting geometry I generate and render narrow skirts around the edges of each patch.  These are the width of one triangle step, travel down towards the centre of the planet and are normally invisible, but where a crack would exist their presence fills the gap, hiding it with pixels that, while not strictly correct, are good enough approximations to be invisible to the eye.  The pixels aren't correct as they are textured with the colours from the skirt's connecting edge on the patch, so are a stretchy smear of vertical colour bars, but like I say they are close enough, especially as the gaps are usually just a pixel or two in size when using reasonably dense geometry.
With one triangle wide skirts on the patches, the gaps are filled with pixels close enough in colour to the surface of the patches that they become invisible to the eye.
Comparing the before and after in wireframe makes the strip of new geometry along the join clear:


Wireframe without the skirts shows why the cracks appear
Wireframe when rendering the patch skirts shows the additional strip of geometry pointing straight down from the edge of the lower detail patch.  The higher detail patch also has a skirt, but here it is back facing and culled by the GPU.
This does of course add extra geometry to each patch (1536 triangles in my case, with 256 tris along each patch edge) which I always render regardless of the LOD transition situation so each patch can still be sent in a single draw call, but this is a small GPU overhead and doesn't appear to have any practical effect on performance.
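For illustration, building the skirt for one patch edge might look something like this - a minimal sketch with an assumed vertex layout, not the real code:

    #include <cmath>
    #include <vector>

    struct Vertex { float x, y, z, u, v; };

    // For each vertex along a patch edge emit a copy pushed down towards
    // the planet centre by 'depth', then stitch the two rows into a strip
    // of quads (two triangles each).  The lowered vertices keep the edge's
    // texture coordinates, which is what produces the vertical colour smear.
    void BuildSkirt(const std::vector<Vertex>& edge,
                    const float planetCentre[3], float depth,
                    std::vector<Vertex>& verts,
                    std::vector<unsigned>& indices)
    {
        const unsigned base = (unsigned)verts.size();
        for (const Vertex& v : edge)
        {
            verts.push_back(v);                  // top of the skirt
            Vertex low = v;
            float dx = planetCentre[0] - v.x;
            float dy = planetCentre[1] - v.y;
            float dz = planetCentre[2] - v.z;
            float len = std::sqrt(dx * dx + dy * dy + dz * dz);
            low.x += dx / len * depth;           // bottom of the skirt,
            low.y += dy / len * depth;           // pushed towards the
            low.z += dz / len * depth;           // centre of the planet
            verts.push_back(low);
        }
        for (unsigned i = 0; i + 3 < (unsigned)edge.size() * 2; i += 2)
        {
            indices.insert(indices.end(),
                { base + i,     base + i + 1, base + i + 2,
                  base + i + 2, base + i + 1, base + i + 3 });
        }
    }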


Floating Point Precision

Anyone who's had a go at rendering planets will almost certainly have encountered the hidden evil of floating point precision.  Our GPUs work primarily in single precision floating point, which is fine for most purposes, but when we're talking about representing something tens of thousands of kilometres across down to a resolution of a centimetre or two, the six or so significant digits on offer are simply not enough.  This typically shows up as wriggly vertices or texture co-ordinates as the viewpoint moves closer to the surface - certainly not desirable.

To avoid this I store the single precision vertices for each patch relative not to the centre of the planet but to the centre of the patch itself.  The position of this per-patch centre point is stored in double precision for accuracy, and is then used in conjunction with the double precision viewpoint position and other parameters to produce a single precision world-view-projection matrix with the viewpoint at the origin.

That final point is significant: by putting the viewpoint at the world origin for rendering I ensure the maximum precision is available in the all-important region closest to the viewer.
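As a sketch of the idea (using GLM here purely for illustration - the actual project may well use a different maths library):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // 'viewRotProj' is the view rotation and projection built with the
    // viewpoint at the origin.  All the large numbers cancel in double
    // precision before anything is truncated to float.
    glm::mat4 PatchWorldViewProjection(const glm::dvec3& patchCentre,
                                       const glm::dvec3& eye,
                                       const glm::dmat4& viewRotProj)
    {
        // Vertices are stored in single precision relative to patchCentre,
        // so the only translation needed is the patch's offset from the eye,
        // computed in double precision where it is still accurate.
        const glm::dvec3 offset = patchCentre - eye;
        const glm::dmat4 world  = glm::translate(glm::dmat4(1.0), offset);

        // Only now drop to single precision for the GPU.
        return glm::mat4(viewRotProj * world);
    }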


The Heightfield

Now that I have a reasonable way to render my planet's terrain geometry anywhere from orbital distances down to a metre or so from the ground, I need to generate a heightfield representative enough of real world terrain to make the placement of settlements and their infrastructure plausible.  I've played around with stitching together real world DEM data before but have never been too happy with the results, especially when trying to blend chunks together or hide their seams, so for this project I'm sticking to a purely noise based solution for the basic heightfield.

Relying so heavily on noise, however, made me wary of re-using my trusty home grown noise functions, so instead I've opted to use the excellent LibNoise library which produces quality results, is very flexible, is fast and, importantly for this multi-core world, is thread safe - the last being particularly important as I have a huge number of samples to generate.
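For anyone who hasn't used it, driving LibNoise is pleasantly simple.  A minimal example with illustrative parameter values (the module choice here just mirrors the "Mountainous" generator described below):

    #include <noise/noise.h>

    int main()
    {
        // A ridged multifractal module of the kind behind the
        // "Mountainous" terrain type; the frequency and octave
        // count are illustrative values only.
        noise::module::RidgedMulti mountains;
        mountains.SetFrequency(1.5);
        mountains.SetOctaveCount(6);

        // Sampling with positions on the unit sphere gives a seamless
        // planetary heightfield, and once the module graph is built
        // GetValue() can safely be called from multiple threads.
        double height = mountains.GetValue(0.30, 0.80, 0.52);
        (void)height; // e.g. feed into the patch heightfield
        return 0;
    }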

A common failing of purely noise based heightfields, however, is that they tend to look very homogeneous, with similar hills and valleys everywhere you look.  To make my planet more interesting I wanted to get away from this, so had a think about how to introduce more variety.

The solution I came up with was to implement a suite of distinct heightfield generators, each using a combination of noise modules to produce a certain type of terrain.  As a starting point I went with mountainous, hilly, lowlands, mesas and desert - but how to decide which terrain type to use where?  One option would be to use yet another noise function to drive the selection, but I thought it would be interesting to try something else, ideally a solution that gave me more control over the placement of different terrains while still providing variety.  That control would let me make general provisions for placement, such as concentrating deserts around the equator or mountains around the poles, which feels useful.  This doesn't violate my basic tenet of procedural generation - procedural isn't the same as random, remember.


An example of the "Mountainous" terrain type, generated by a ridged multi-fractal
The "Hilly" terrain type uses the absolute value of a couple of octaves of perlin noise to try to simulate dense but lower level hills
The "Mesas" terrain type again uses octaves of noise but thresholds the result to produce low lying flatlands punctuated by steep flat topped hills
Instead of noise, then, I chose to use a control net similar to the country one detailed in my previous post, but instead of country IDs this time I put a terrain type on each control point - I also use the direct triangulation of the control net rather than the dual.  Shooting a ray from the planet's surface against this terrain control net gives me three terrain types from the vertices of the intersected triangle, while the barycentric co-ordinates of the intersection point give the blending weights for combining them.


The terrain type control net.  Each vertex has a terrain type assigned to it so any point on the surface can be represented as a weighted combination of up to three terrain types.
Taking the result of each of these terrain types' heightfield generators, scaling by their blending weights and then adding them together gives the final heightfield value.  There is an obvious optimisation here too where a triangle has the same terrain type at two or more of its vertices - the heightfield generator for that terrain type can be run just once and the blending weights simply added together.
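A sketch of that blend, including the duplicate-type optimisation (the TerrainGenerator interface here is a stand-in for whatever the real generators look like):

    #include <array>

    struct TerrainGenerator
    {
        virtual double Height(double x, double y, double z) const = 0;
        virtual ~TerrainGenerator() {}
    };

    // 'gen' holds the generators for the three vertices of the intersected
    // triangle, 'weight' the barycentric co-ordinates of the hit point.
    double BlendHeight(std::array<const TerrainGenerator*, 3> gen,
                       std::array<double, 3> weight,
                       double x, double y, double z)
    {
        double height = 0.0;
        for (int i = 0; i < 3; ++i)
        {
            if (!gen[i] || weight[i] == 0.0)
                continue;

            // Fold vertices sharing the same terrain type together so
            // their generator only runs once.
            double w = weight[i];
            for (int j = i + 1; j < 3; ++j)
            {
                if (gen[j] == gen[i])
                {
                    w += weight[j];
                    weight[j] = 0.0;
                }
            }
            height += w * gen[i]->Height(x, y, z);
        }
        return height;
    }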


Here the "Hilly" terrain generator's result in the foreground is blending into the "Mountainous" terrain generator's result in the middle distance
I quite liked the idea of being able to craft each type of terrain separately, so the look of the lowlands can for example be tweaked without affecting the mountains - something that can be tricky with monolithic noise systems.  The frequency of terrain changes and their locations can also be controlled by changing the density of the terrain control net or by playing with the assignment of terrain types to the control points.  For now I'm just assigning types randomly to each point, but you could for example use the latitude and longitude of the point as an input to the choice, or assign initial types and then do some copying between adjacent points to produce more homogeneous areas where desired.


Texturing

An interesting heightfield is all well and good, and while I've stated that physical terrain is not really the focus of this project I do want it to be visually pleasing.  To this end I wanted to spend at least some time on the texturing so my desert could look different to my lowlands and my mesas different to my alpine mountains.  Past experience has shown me that one of the more problematic areas with texturing is where different types of terrain meet - producing realistic transitions without an exorbitantly expensive pixel shader can be tricky.  The texturing also needs to work on all aspects of the geometry, regardless of which side of the planet it's on or how steep the slope.

The latter problem is solved by using a triplanar texture blend: the world space co-ordinates of the point are used to sample the texture once each in the XY, ZY and XZ planes and the results are blended together using the normal.  This avoids the stretching a single planar projection often produces on heightfield textures, especially on spherical terrain, but does of course take three times as many texture sample operations.  If you look closely you can also see the blend on geometry furthest from aligning with the orthogonal planes, but as terrain textures tend to be quite noisy this isn't generally a problem - and it's certainly less objectionable than the single plane stretching.

Using a single planar mapping in the XY plane produces noticeable texture stretching on the slopes

Changing to the XZ plane fixes most of those but produces worse stretching in other places
The third choice is the ZY plane, but again we're really just moving the stretching artifacts around
The solution is to carry out all three single plane mappings and use the world space normal to weight and combine them.  Although this requires three times as many texture reads, the stretching artifacts are all but gone.
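The weighting itself is straightforward.  Here is a CPU-side sketch of the blend, with Sample() standing in for a hypothetical texture fetch:

    #include <cmath>

    struct Colour { float r, g, b; };

    // Hypothetical stand-in for a texture fetch in the given plane;
    // here it just returns a placeholder pattern from the coordinates.
    Colour Sample(int plane, float u, float v)
    {
        (void)plane;
        return { u - std::floor(u), v - std::floor(v), 0.5f };
    }

    Colour Triplanar(const float pos[3], const float normal[3])
    {
        // One sample per orthogonal plane, using world space position.
        Colour xy = Sample(0, pos[0], pos[1]);
        Colour zy = Sample(1, pos[2], pos[1]);
        Colour xz = Sample(2, pos[0], pos[2]);

        // Weight each sample by how squarely the surface faces its plane,
        // normalising so the weights sum to one.
        float wz = std::fabs(normal[2]);   // facing Z: XY plane dominates
        float wx = std::fabs(normal[0]);   // facing X: ZY plane dominates
        float wy = std::fabs(normal[1]);   // facing Y: XZ plane dominates
        const float sum = wx + wy + wz;
        wx /= sum; wy /= sum; wz /= sum;

        return { xy.r * wz + zy.r * wx + xz.r * wy,
                 xy.g * wz + zy.g * wx + xz.g * wy,
                 xy.b * wz + zy.b * wx + xz.b * wy };
    }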
As for which texture to use at each point, I've adopted something of a brute force approach.  By keeping the palette of terrain textures relatively limited I am able to store a weight for each one on each vertex - in my case eight textures, so the weights can be stored as two 32-bit values per vertex.  The textures I've chosen are designed to give me a decent spread of terrain types, so they include for example two types of grassland, snow, desert sand and rocky dirt.

Example basic textures for dirt, desert/beach sand and grass
By storing a weight for each texture independently I avoid any interpolation problems where textures meet, as every texture is essentially present everywhere.  It does of course mean that in theory the pixel shader needs to perform up to eight triplanar texture mappings, each requiring three texture reads (twice that if normal mapping is supported), but in practice most pixels are affected by only a couple of textures.  If I were doing this in a more serious production environment I would consider several variations of the pixel shader, each specialised for a certain number and combination of textures to avoid potentially expensive conditionals or redundant texture reads, but as this is just a play project one general purpose shader will suffice.
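Packing the eight weights is trivial - one byte each across two 32-bit values.  The exact layout below is an assumption, but it shows the idea:

    #include <cstdint>

    // Pack eight weights in [0, 1] into two 32-bit vertex attributes,
    // one byte per weight; the vertex shader would read these as
    // normalised unsigned bytes and unpack them back to [0, 1].
    void PackWeights(const float weights[8],
                     std::uint32_t& lo, std::uint32_t& hi)
    {
        lo = 0;
        hi = 0;
        for (int i = 0; i < 4; ++i)
        {
            lo |= std::uint32_t(weights[i]     * 255.0f + 0.5f) << (i * 8);
            hi |= std::uint32_t(weights[i + 4] * 255.0f + 0.5f) << (i * 8);
        }
    }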


Blending the terrain type textures based purely upon the terrain control net.  The blending works but the linear interpolation artifacts along the net's triangulation boundaries are obvious and unsightly
While the textures blend together here, the shape of the terrain control net is still quite obvious.  To improve matters I fell back on a technique I first trialled while playing with the country generation code: by perturbing the direction of the ray fired from the surface at the terrain control net, more variety can be introduced while still keeping the controllability of the net.

Applying a few octaves of turbulence to the ray gives a much more organic effect:
Using some turbulence to perturb the ray from the planet's surface before intersecting it with the terrain control net produces a far more satisfyingly organic and pleasing effect
I'm particularly happy with how this looks from orbital distances but it will suffice closer to the surface for now as well.
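Sketched in code, the perturbation is just a noise driven offset to the ray direction before the intersection test (the module choice and scales here are illustrative, not my exact settings):

    #include <cmath>
    #include <noise/noise.h>

    // Offset each component of the unit direction with decorrelated noise
    // samples, then renormalise before intersecting the control net.
    void PerturbRay(const noise::module::Perlin& turbulence,
                    double dir[3], double strength)
    {
        double px = turbulence.GetValue(dir[0], dir[1], dir[2]);
        double py = turbulence.GetValue(dir[1] + 31.4, dir[2], dir[0]);
        double pz = turbulence.GetValue(dir[2], dir[0] + 47.2, dir[1]);

        dir[0] += px * strength;
        dir[1] += py * strength;
        dir[2] += pz * strength;

        double len = std::sqrt(dir[0] * dir[0] +
                               dir[1] * dir[1] +
                               dir[2] * dir[2]);
        dir[0] /= len; dir[1] /= len; dir[2] /= len;
    }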


A Note on Atmospheric Scattering

You may have noticed in these screenshots that I've also added some atmospheric scattering.  For terrain rendering this is one of the most effective ways of both improving the visuals and adding a sense of scale - taking it away makes everything look pretty flat and uninteresting by comparison:


Without the atmospheric scattering the perspective and scale is unclear
With atmospheric scattering the sense of scale and distance is much improved
In a previous project I implemented the system published in Precomputed Atmospheric Scattering by Eric Bruneton and Fabrice Neyret, which produced excellent results, but in the spirit of trying new things I thought for this project I would try Accurate Atmospheric Scattering by Sean O'Neil from the book GPU Gems 2.  This looked like it might be a little lighter weight in performance terms and easier to tweak for custom effects.

I found the O'Neil paper relatively straightforward to implement, which was refreshing after some papers I've struggled through in the past, and it didn't take too long to get something up and running.  The only change I made was to move all the work into the pixel shader rather than doing some of it in the vertex shader.

There were two reasons for this.  The first is that when applied to the terrain the geometry is quite dense, so the performance cost should be minimal and working per-pixel avoids any interpolation artifacts along the triangle edges.  The second is pretty much the opposite: rather than rendering a geometric sky sphere around the planet onto which the atmosphere shader is placed, I wanted the sky rendered as part of the skybox.

The skybox is simply a cube centred on the viewpoint, rendered with a texture on each face to provide a backdrop of stars and nebulae.  As it was already filling every non-terrain pixel, adding the atmosphere to its shader seemed to make sense, but with geometry that coarse, computing part of the scattering in the vertex shader didn't seem like a good plan - visual interpolation artifacts were bound to result.

Bear in mind that this only really makes sense if the viewpoint is generally close enough to the terrain that the skybox fills the sky - i.e. within the atmosphere itself.  Were I planning to have the viewpoint spend appreciable time further away, with the planet smaller on screen, I would instead add dedicated geometry for the sky to avoid running the expensive scattering shader on pixels that are never going to be covered by the atmosphere.

To make a smooth transition as the viewpoint enters the atmosphere, a simple alpha blend fades the starfield skybox texture out as the viewpoint descends:

From high orbit the stars and nebulae on the skybox cubemap are clear

Descending into the atmosphere the scattering takes over and the stars start to fade out
Much closer to the surface the stars are only barely visible.  Keep descending and they disappear altogether, leaving a clear blue sky.
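The fade itself is about as simple as blending gets - something along these lines, with the altitude parameters being assumptions rather than my actual values:

    #include <algorithm>

    // Returns the starfield alpha: 1 in space, falling to 0 once the
    // viewpoint has sunk 'fadeBand' metres below the atmosphere top.
    float SkyboxAlpha(float altitude, float atmosphereTop, float fadeBand)
    {
        float t = (altitude - (atmosphereTop - fadeBand)) / fadeBand;
        return std::max(0.0f, std::min(1.0f, t));
    }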
I think that pretty much covers the terrain visuals - probably in more detail than I initially intended, but I think it's useful to get the fundamentals sorted out.  I reckon what I have so far should be an interesting enough basis to work forwards from.