Tuesday, November 9, 2010

A Note on Co-ordinate Systems

Before delving deeper into terrain construction I thought a brief note on co-ordinate systems would be worthwhile.  Celestial bodies come in all shapes and sizes, but as I live on planet Earth, like most people, basing my virtual planet upon this well-known baseline makes a lot of sense.

Now the Earth isn’t quite a perfect sphere so its radius varies, but its volumetric mean radius is about 6,371 km so that’s a good enough figure to go on.  Most computer graphics use single precision floating point numbers for their calculations as they are generally a bit faster for the computer to work with and are universally supported by recent graphics cards, but with only 23 bits for the significand they have only about seven significant decimal digits, which can be a real problem when representing data on a planetary scale.

Simply using a single precision floating point vector to represent the position of something on the surface of our Earth-sized planet, for example, would give an effective resolution of just a couple of metres – possibly good enough for a building but hardly useful for anything smaller, and definitely insufficient for moving around at speeds lower than hundreds of kilometres per hour.  Naively use floats for the whole pipeline and we quickly see our world jiggling and snapping around in a truly horrible manner as we navigate.

Moving to double precision floating point numbers is an obvious and easy solution to this problem: with their 52-bit significand they can easily represent positions down to millionths of a millimetre at planetary scale, which is more than enough for our needs.  With modern CPUs their use is no longer as prohibitively expensive in performance terms as it used to be either, with some rudimentary timings showing only a 10%-15% drop in speed when switching core routines from single to double precision.  The large amounts of RAM available now also make the increased memory requirement of doubles easily justified.  The problem of modern GPUs using single precision remains, however: somehow we have to pipe our double resolution data from the CPU through the single precision pipe to the GPU for rendering.
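To make the precision gap concrete, here’s a quick sketch (Python purely for illustration) that round-trips a world-space co-ordinate through IEEE-754 single precision the way a GPU would store it, and compares the spacing of representable values in floats and doubles at planetary magnitude:

```python
import struct

def to_float32(x):
    # Round-trip a Python double through IEEE-754 single precision.
    return struct.unpack('f', struct.pack('f', x))[0]

RADIUS = 6_371_000.0  # Earth's volumetric mean radius in metres

# 6,371,000 lies between 2^22 and 2^23, so adjacent single-precision
# values there are 2^(22-23) = 0.5 m apart.
a = to_float32(RADIUS)
b = to_float32(RADIUS + 0.25)
print(a == b)                    # True - a 25 cm move vanishes in floats

# Doubles have a 52-bit significand, so the spacing at the same
# magnitude is 2^(22-52) m - well below a millionth of a millimetre.
print(RADIUS + 0.25 == RADIUS)   # False - doubles resolve it easily
print(2.0 ** (22 - 52))          # the double-precision spacing, ~2.4e-10 m
```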

My solution for this is simply to have the single precision part of the process, namely the rendering, take place in a co-ordinate space centred upon the viewpoint rather than the centre of the planet.  This ensures the available resolution is used as effectively as possible: precision only falls off on distant objects, which are by nature very small on screen where the numerical resolution issues won’t be visible.

To make this relocation of origin possible, I store with each tile its centre point in world space as a double precision vector, then store the vertex positions for the tile’s geometry components as single precision floats relative to this centre.  Before each tile is rendered, the vector from the viewpoint to the centre of the tile is calculated in double precision and used to generate the single precision combined tile->world->view->projection matrix used for rendering.

In this way the single precision vertices are only ever transformed to be in the correct location relative to the viewpoint (essentially the origin for rendering) to ensure maximum precision.  The end result is that I can fly from millions of miles out in space down to being inches from the surface of the terrain without any numerical precision problems.
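The per-tile calculation above can be sketched in a few lines (Python for illustration; `render_offset` and the narrowing helper are my own hypothetical names, and in the real engine the resulting offset would feed the tile’s matrix rather than be printed):

```python
import struct

def to_float32(x):
    # Narrow a double to single precision, as happens on the way to the GPU.
    return struct.unpack('f', struct.pack('f', x))[0]

def render_offset(tile_centre, eye):
    """Per-frame offset from the viewpoint to a tile's centre.

    Both inputs are double-precision world positions; the subtraction is
    done in doubles so the huge shared magnitude cancels, and only the
    small *relative* result is narrowed to floats for rendering.
    """
    offset = [c - e for c, e in zip(tile_centre, eye)]
    return [to_float32(v) for v in offset]

# Viewpoint hovering a couple of metres from a tile centre on the surface.
eye         = [6_371_002.125, 0.0, 0.0]
tile_centre = [6_371_000.0,   0.0, 0.0]

print(render_offset(tile_centre, eye))   # [-2.125, 0.0, 0.0] - exact

# Narrowing the absolute positions first loses the 12.5 cm entirely:
naive = [to_float32(c) - to_float32(e) for c, e in zip(tile_centre, eye)]
print(naive)                             # [-2.0, 0.0, 0.0]
```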





There are of course other ways to achieve this – nested co-ordinate spaces, for example – but selective use of doubles on the CPU is both simple and relatively efficient in performance and memory costs, so it feels like the best approach.

Of course numerical precision issues are not limited to the positions of vertices; another example is easily found in the texture co-ordinates used to look up into the terrain textures.  On the one hand I want a nice continuous texture space across the planet to avoid seams in the texture tiling, but as the texture co-ordinates are only single precision floats there simply isn’t the precision for this.  Instead different texture scales have to be used at different terrain LOD levels and the results somehow blended together to avoid seams – I’m not going to go too deeply into my solution for that here as I think it’s probably worth a post of its own in due course.
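A rough back-of-the-envelope sketch shows why one continuous texture space can’t work (assuming, purely for the sake of the numbers, a single 0..1 UV range wrapped around the whole equator – not how any particular scheme actually maps it):

```python
import struct

def to_float32(x):
    # Round-trip a double through IEEE-754 single precision.
    return struct.unpack('f', struct.pack('f', x))[0]

# One metre of terrain along a ~40,030 km equator corresponds to a UV step of:
equator_m = 40_030_000.0
uv_per_metre = 1.0 / equator_m
print(uv_per_metre)              # ~2.5e-8

# But near u = 1.0 adjacent single-precision values are 2^-24 apart:
ulp_near_one = 2.0 ** -24
print(ulp_near_one)              # ~6.0e-8

# The representable step is coarser than a metre of terrain, so points
# a metre apart can snap to the same texture co-ordinate:
u = 0.75
print(to_float32(u) == to_float32(u + uv_per_metre))  # True
```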

Friday, November 5, 2010

Continuing the theme of procedurally generated planets, I’ve started a new project I’ve called Osiris (after the Egyptian god usually identified as the god of the afterlife, the underworld and the dead), a new experiment to see how far I can get having code create a living, breathing world.

Although my previous projects Isis and Geo were both also in this vein, I felt that they each had such significant limitations in their underlying technology that it was better to start again with something fresh.  The biggest difference between Geo and Osiris is that where the former used a completely arbitrary voxel based mesh structure for its terrain, Osiris uses a more conventional, essentially 2D tile based structure.  I decided to do this as I was never able to achieve a satisfactory transition effect between the levels of detail in the voxel mesh, which left ugly artifacts and, worse, visible cracks between mesh sections – both of which made the terrain look essentially broken.

After spending so much time on the underlying terrain mesh systems in Isis and Geo I also wanted to implement something a little more straightforward, so I could turn my attention more quickly to the procedural creation of planetary scale infrastructure – cities, roads, railways and the like – along with more interesting landscape features such as rivers or icebergs.  This really interests me and is immediately more appealing for experimentation as it’s not something I have attempted previously.  Although a 2D tile mesh grid system is pretty basic in the terrain representation league table, there is still a degree of complexity to representing an entire planet using any technique, so even that familiar ground should remain interesting.


The first version shown here is the basic planetary sphere rendered using mesh tiles at various LOD levels.  I’ve chosen to represent the planet essentially as a distorted cube, with each face represented by a single 32x32 tile at the lowest LOD level.  While the image below on the left may be suitable as a base for Borg-world, I think the one on the right is the basis I want to pursue...



While mapping a cube onto a sphere produces noticeable distortion as you approach the corners of each face, by generating terrain texturing and height co-ordinates from the sphere’s surface rather than the cube’s I hope to minimise how visible this distortion is.  It also feels like having what is essentially a regular 2D grid to build upon will make many of the interesting challenges to come more manageable.  The generation and storage of data in particular becomes simpler when the surface of the planet can be broken up into square patches, each of which provides a natural container for the data required to simulate and render its area.

At this lowest level of detail (LOD) each face of the planetary cube is represented by a single 32x32 polygon patch.  At this resolution each patch covers about 10,000 km of an Earth-sized planet’s equator, with each polygon within it covering about 313 km.  While that’s acceptable when viewing the planet from a reasonable distance in space, as you get closer the polygon edges start to get pretty damn obvious, so of course the patches have to be subdivided into higher detail representations.
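The arithmetic behind those figures, along with the basic cube-to-sphere projection, can be sketched as follows (the function name is mine, not the project’s; the real mapping also has to pick which cube face a point belongs to):

```python
import math

RADIUS = 6_371_000.0   # Earth's volumetric mean radius in metres
TILE_RES = 32          # polygons along one edge of a face at the lowest LOD

def cube_to_sphere(face_point):
    """Project a point on the unit cube's surface onto the sphere.

    face_point is an (x, y, z) on a cube face (components in [-1, 1],
    one of them fixed at +/-1); normalising it pushes it onto the
    sphere of radius RADIUS.
    """
    x, y, z = face_point
    length = math.sqrt(x * x + y * y + z * z)
    return (RADIUS * x / length, RADIUS * y / length, RADIUS * z / length)

# Once projected, each cube face spans a quarter of the equator:
face_arc = 2.0 * math.pi * RADIUS / 4.0
print(face_arc / 1000.0)               # ~10,008 km per face
print(face_arc / TILE_RES / 1000.0)    # ~313 km per polygon at the lowest LOD
```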

I’ve chosen to do this in pretty much the simplest way possible to keep the code simple and make a nice robust association between sections of the planet’s surface and the data required to render them.  As the view nears a patch it gets recursively divided into four smaller patches, each of which is 32x32 polygons in its own right, effectively halving the size of each polygon in world space and quadrupling the polygonal density.
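The subdivision itself only takes a few lines to sketch (the patch representation here is my own simplification – the real tiles would also carry their geometry, texturing data and double precision centre points):

```python
def split(patch):
    """Split a square patch into its four children.

    A patch is (centre_x, centre_y, size) in a face's 2D co-ordinates;
    each child covers a quarter of the parent's area, so with the same
    32x32 polygon grid per patch the world-space polygon size halves.
    """
    cx, cy, size = patch
    half = size / 2.0
    quarter = size / 4.0
    return [
        (cx - quarter, cy - quarter, half),  # bottom-left child
        (cx + quarter, cy - quarter, half),  # bottom-right child
        (cx - quarter, cy + quarter, half),  # top-left child
        (cx + quarter, cy + quarter, half),  # top-right child
    ]

root = (0.0, 0.0, 1.0)       # one whole cube face
children = split(root)
print(children[0])            # (-0.25, -0.25, 0.5)
# Polygon size per patch is size / 32, halving with each subdivision:
print(root[2] / 32, children[0][2] / 32)
```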





Here you can see four stages of the subdivision illustrated – normally of course this would be happening as the view descended towards the planet, but I’ve kept the view artificially high here to illustrate the change in the geometry.  With such a basic system there is obviously a noticeable visible ‘pop’ when a tile is split into its four children – this could be improved by geo-morphing the vertices on the child tile from their equivalent positions on the parent tile to their actual child ones, but as the texturing information is stored on the vertices there is going to be a pop as the higher frequency texturing information appears anyway.  Another option might be to render both tiles during a transition and alpha-blend between them, a system I used in the Geo project with mixed results.

LOD transitions are a classic problem in landscape systems but I don’t really want to get bogged down in that at the moment, so I’m prepared to live with this popping and look at other areas.  It’s a good solid start though I think, and with some basic camera controls set up to let me fly down to and around my planet I reckon I’m pretty well set up for future developments.