Posts Tagged ‘embodied GIS’

ARKit and Archaeology – Hougoumont Farm, Waterloo

For the last 3 years I have had the absolute privilege of being one of the archaeological directors of the current excavations of the battlefield of Waterloo. As part of the incredible project Waterloo Uncovered (http://www.waterloouncovered.com) – we have been taking wounded serving and veteran soldiers, students and professional archaeologists to the battlefield to conduct the first major systematic excavation of parts of the battlefield that shaped the history of Europe in 1815.

We only have two weeks in the field each year, which means there is not a lot of time to do anything but excavate, record and backfill (see the dig diaries and videos of all we got up to here). However, this year I managed to find the final afternoon to play with the new Apple ARKit and see what potential there is for archaeological sites.

The short answer is that there is a lot of potential! I have discussed Augmented Reality and archaeology to the nth degree on this blog and in other places (see here for a round-up) – but with the beta release of ARKit as an integrated part of iOS 11, Apple may have provided the key to making AR more accessible and easier to deploy. I tried out two experiments using some of the data we have accrued over the excavations. Sadly I didn’t have any time to finesse the apps – but hopefully they should give a hint of what could be done given more time and money (ahem, any prospective post-doc funders – my contact details are on the right).

Exploring the lost gardens of Hougoumont

The first video shows a very early experiment in visualising the lost gardens of Hougoumont. The farm and gardens at Hougoumont were famously defended by the Allied forces during the battle of Waterloo (18th June 1815). Hougoumont at the time was rather fancy, with a chateau building, large farm buildings and also a formal walled garden, laid out in the Flemish style. One of the participants this year, WO2 Rachel Willis, is currently in the process of leaving the army and studying horticulture at the Royal Horticultural Society. She was very excited to look at the garden and to see if it was possible to recreate the layout – and perhaps even at some point start replanting the garden. To that end she launched herself into the written accounts and contemporary drawings of Hougoumont, and we visited a local garden that was set out in a similar fashion. Rachel is in the process of colouring and drawing a series of Charlie Dimmock style renditions of the garden plans for us to work from – but more on that in the future.

Similar gardens at Gaasbeek Castle

Extract from Wm. Siborne’s survey of the gardens at Hougoumont

As a very first stab at seeing what we might be able to do in the future, I quickly loaded one of Rachel’s first sketches into Unity and put in a few bushes and a covered walkway. I then did some ARKit magic, mainly by following tutorials here, here, and here. Bear in mind that at the time of writing, ARKit is in beta testing, which means you need to install Xcode Beta, sign up for and install the iOS 11 beta program on the iPhone and also run the latest beta version of Unity. It is firmly at the bleeding edge and not for the faint-hearted! However, those tutorial links should get you through fine, and we should only have to wait a few months until it is publicly released. The results of the garden experiment are below:

As can be seen, ARKit makes it very simple to place objects directly into the landscape OUTSIDE – something that has previously only really been possible reliably using a marker-based AR plugin (such as Vuforia). Being able to reliably place AR objects outside (in bright sunshine) has been something of a holy grail for archaeologists, as unsurprisingly we often work outside. I decided to use a ‘portal’ approach to display the AR content, as I think for the time being it gives the impression of looking through into the past – and gives an understandable frame to the AR content. More practically, it also means it is harder to see the fudged edges where the AR content doesn’t quite line up with the real world! It needs a lot of work to tidy up and make prettier, but it is not bad for a first attempt – and the potential for using this system for archaeological reconstructions goes without saying! Of course, as it is native in iOS and there is a Unity plugin, it will fit nicely with the smell and sound aspects of the embodied GIS – see the garden, hear the bees and smell the flowers!

Visualising Old Excavation Trenches

Another problem we archaeologists have is that it is very dangerous to leave big holes open all over the place, especially in places frequented by tourists and the public, like Hougoumont. However, ARKit might be able to help us out there. This video shows this year’s backfilled trenches at Hougoumont (very neatly done, but you can still just see the slightly darker patches of the re-laid wood chip).

Using the same idea of the portal into the garden, I have overlaid the 3D model of one of our previous trenches in its correct geographic location and at the correct scale, allowing you to virtually re-excavate the trench and see the foundations of the buildings underneath, along with a culverted drain that we found in 2016. It lines up very well with the rest of the buildings in the courtyard and will certainly help with understanding the further foundation remains we uncovered in 2017. Again, it needs texturing, cleaning and a bit of lighting, but this has massive potential as a tool for archaeologists in the field, as we can now overlay any type of geolocated information onto the real world. This might be geophysical data, find scatter plots or, as I have shown, 3D models of the trenches themselves.
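Getting that geographic alignment right boils down to converting the trench model’s real-world coordinates into the local, metre-based coordinates of the AR scene. As a rough sketch of the maths (the function and the coordinates are purely illustrative, not code from the app), a simple equirectangular approximation is plenty accurate over a site the size of Hougoumont:

```python
import math

def geo_to_local(lat, lon, origin_lat, origin_lon):
    """Convert WGS84 lat/lon to metres east/north of a local scene origin
    using an equirectangular approximation (fine over a site a few
    hundred metres across)."""
    r = 6371000.0  # mean Earth radius in metres
    east = math.radians(lon - origin_lon) * r * math.cos(math.radians(origin_lat))
    north = math.radians(lat - origin_lat) * r
    return east, north

# Illustrative origin near Hougoumont; offsets are tiny fractions of a degree
east, north = geo_to_local(50.6781, 4.3945, 50.6780, 4.3940)
print(round(east, 1), round(north, 1))  # roughly 35 m east, 11 m north
```

Over many kilometres you would want a proper projected coordinate system instead, but at trench scale the error here is millimetric.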

These are just very initial experiments, but I for one am looking forward to seeing where this all goes. Watch this space!
CAAUK 2016 – Embodied GIS and applied Multi-Sensory Archaeology

I recently attended the CAAUK 2016 meeting in Leicester, a great couple of days with a few really interesting papers.

As usual, the rather excellent Doug Rocks-Macqueen was on hand to record the talks. His videos can be found here – he records all sorts of diverse archaeological conferences, so it is well worth clicking the subscribe button on his account.

In case anyone is interested, I have embedded the video of my talk below – where I discuss the Embodied GIS, using examples from my previous research including Voices Recognition and the Dead Man’s Nose.

Learning by Doing – Archaeometallurgy

This post will be a little off my normal topics, in that there will be no augmented reality and no computers (although I did make some nice 3D models that I’ll link to later). It is about technology, but mostly about prehistoric technology.

I have spent the last four days on a prehistoric metallurgy weekend, run by Fergus Milton and Dr. Simon Timberlake at Butser Ancient Farm in Hampshire. The aim of the course was to introduce us to the basics of prehistoric metallurgy and then teach us the practical skills so that we could take the process all the way from breaking the ore to casting an axe. I decided to take part in the course, not because I am focusing on the techniques of Bronze Age metallurgy, but because the site that I am looking at on Bodmin Moor was very likely to have been created to work the nearby tin sources and I wanted to know how they would have done it and what it would have felt like. I have read quite a bit around the subject, and have a good idea of the steps involved, but it wasn’t enough. As with all of my work, I am interested in the human experience of a landscape or an activity and find it is necessary to get my hands dirty to see and feel what smelting is like – something you can’t get from just reading about it.

The course was quite archaeology focused, and being at Butser Ancient Farm meant there was also a large element of experimentation – rather than just demonstration. We were encouraged to try out different ideas and set up experiments based on our own research aims. The best part for me was that we made every part of the furnace and refractories (tuyeres, crucibles, collecting pots, etc.) ourselves – we even hand-stitched our own bellows.

Hand-stitched leather bellows

Drying out the refractories

After making our refractories we set to digging the furnaces; my group decided to dig a bank furnace and a bowl furnace. As can be seen from this 3D model, the bank furnace is, unsurprisingly, dug down into a bank of earth, with a horizontal passage dug into the shaft to hold the tuyere and bellows.

In contrast, the bowl furnace is a simple bowl dug out of the ground lined with a thin layer of clay, with a slightly sloping passage to hold the bellows and tuyere.

In order to fire the furnaces up, all that is needed is a small fire in the bottom of the furnace which is slowly covered with charcoal until the furnace is entirely full. Obviously the bellows need to be continually pumped to get some oxygen into the fire under the charcoal.

Bowl furnace in action

The ore is prepared for smelting using a beneficiation mortar (in our case a granite mortar which was probably originally used for grinding flour). Essentially it is as easy as smashing a few rocks and then grinding them down to powder using a stone hammer. This, perhaps weirdly, is the part of the process I was most interested in. I believe that the Bronze Age inhabitants of Leskernick Hill were collecting and crushing cassiterite (tin-stone) on-site, and I wanted to see how hard it was to do and how long it would take. Simon had some streamed Cornish cassiterite with him and so I got to have a go at crushing it to a fine powder. It was remarkably easy and took very little time and effort to go from the rock itself to powder ready for smelting. The mortar we were using had smooth sides and so the tinstone kept skating up the sides and escaping onto the floor, but perhaps this might have been prevented if we were using a mortar with straighter sides.

As can be seen from the 3D model above, once the ore was crushed we loaded it into a hand-made crucible, ready for smelting. This crucible was filled with a mixture of cassiterite dust and malachite (copper-bearing ore) dust in an attempt to co-smelt them, creating a ‘one-step bronze’. The mortar is stained green in this case from crushing up the malachite. Unfortunately, in this experiment the hand-made crucible cracked in the furnace, so the one-step bronze leaked out and we eventually found it at the bottom of the furnace. We had also put a layer of crushed malachite directly into the furnace, which smelted away nicely and mingled with the leaked bronze to create a big lump of slightly tinned copper.

A lovely lump of smelted copper (with a tiny bit of tin)

Working my way through the entire process of metallurgy (minus the mining/collecting of the ore and the making of the charcoal) made me appreciate how surprisingly easy the whole thing is – and equally what rather unremarkable archaeological remains it produces. This is especially true of our bowl furnace, which when burnt out looked almost exactly like a hearth, complete with burnt ceramic material that one could easily mistake for simple prehistoric pottery. It makes me wonder how many smelting sites may have been misidentified as hearths. After this weekend I would be happy to build a small furnace in my back garden and smelt some copper, and I wonder if the smelting furnaces of the Bronze Age were similar small bowl furnaces in or around the family home.

We undertook a total of 5 smelts and a couple of castings over the weekend, with varying levels of success. Even with the professionals there (Simon and Fergus) things did not always go to plan (crucibles broke, furnaces didn’t heat up enough, molten metal was spilled on the ground) but this, for me, was the key to the whole experience. While the entire process was much easier than I had first imagined, there was still effort involved in smelting a relatively small amount of metal. These mistakes and accidents would have happened in antiquity as well, and so even when a whole smelt of tin vaporised to nothing due to the furnace being too hot, I didn’t really regret the 2 hours spent bellowing and in fact felt a little closer to the frustration that might have been felt by the inhabitants of Bronze Age Leskernick Hill. Although I know the chemistry behind the smelting process (just about!) I was dumbstruck by the magical process of turning rock to metal. We literally sprinkled crushed malachite into the furnace, covered it with charcoal, heated it and then found a lump of copper at the bottom. It was quite a powerful experience, and one I am sure would not have been lost on the early prehistoric smelters.

This whole weekend has made me realise that just as it is important to walk the hills of Bodmin Moor in order to really get a feeling for what it is like to inhabit the place, it is equally important to build a furnace, crush ore and smelt it to metal in order to find out what it is like to inhabit the activities as well. Of course experimental archaeologists have been doing this for years, but just one weekend of it has already changed the way I am thinking about some of my evidence and will almost certainly have a big influence on at least one chapter of my PhD.

Archaeology, GIS and Smell (and Arduinos)

I have had quite a few requests for a continuation of my how-to series on getting GIS data into an augmented reality environment and creating an embodied GIS. I promise I will get back to the how-tos very soon, but first I wanted to share something else that I have been experimenting with.

Most augmented reality applications currently on the market concentrate on visual cues for the AR experience, overlaying things on a video feed, etc. There are not many that I have found that create or play with smells – and yet smell is one of the most emotive senses. In the presentation of archaeology this has long been known, and the infamous and varied smells of the Jorvik Centre are a classic example of smell helping to create a scene. The main reason for this lack of experimentation with smells is presumably the delivery device. AR is quite easy to achieve now within the visual realm, mainly because every smartphone has a video screen and camera. However, not every phone has a smell chamber – never mind one that can create the raft of different smells that would be needed to augment an archaeological experience. As a first stab at rectifying this, then, I present the Dead Man’s Nose:

The Dead Man’s Nose

The Dead Man’s Nose (DMN) is a very early prototype of a smell delivery device that wafts certain smells gently into your nose based on your location. The hardware is built using an Arduino microcontroller and some cheap computer parts along with any scent of your choice. The software is a very simple webserver that can be accessed via WiFi and ‘fire off’ smells via the webserver’s querystring. This means that it can easily be fired by Unity3D (or any other software that can access a webpage) – so it fits very nicely into my embodied GIS setup.

How does it work?

This little ‘maker hack’ takes its inspiration from projects such as ‘My TV Stinks‘, ‘The Smell of Success‘ and Mint Foundry’s ‘Olly‘. Essentially, I followed the instructions for building an Olly (without the 3D housing) and instead of using an Ethernet shield for the Arduino, I connected it to a WiFi shield and from there joined it to an ad-hoc WiFi network created by my Macbook. With the Macbook, iPad and the DMN on the same network it is very easy to send a message to the DMN from within the Unity gaming engine. As the iPad running the Unity application knows where I am in the world (see the previous blog), it means that I can fire off smells according to coordinates (or areas) defined in a GIS layer. Therefore, if I have an accurate ‘smellscape’ modelled in GIS, I can deploy that smellscape into the real world and augment the smells in the same way that I can augment the visual elements of the GIS data. The code is very simple at both ends: I am just using a slightly adjusted version of the sample WiFi shield code on the Arduino end, and a small script on the Unity end that pings the webserver when the ‘player’ moves into a certain place in the landscape. When the webserver is pinged, it starts the fan and that wafts the smell around. From a relatively simple setup, it provides the possibility of a very rich experience when using the embodied GIS.
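To make the idea concrete, the trigger logic boils down to "if the player is within range of a smell source, ping the DMN’s webserver, scaling the fan with proximity". Here is a minimal Python sketch of that logic – the IP address and the querystring keys are my own illustrative choices, not the actual code, which simply fires whatever the Arduino webserver expects:

```python
import math
from urllib.parse import urlencode

def smell_request(player, source, radius=20.0, base_url="http://192.168.1.77"):
    """Return the URL to ping on the DMN webserver, or None if the player
    is out of range. Fan speed ramps up as the player nears the source."""
    dist = math.hypot(player[0] - source[0], player[1] - source[1])
    if dist > radius:
        return None
    speed = int(255 * (1 - dist / radius))  # 0 at the edge, 255 on top of it
    return base_url + "/?" + urlencode({"smell": "bbq", "fan": speed})

print(smell_request((5.0, 0.0), (0.0, 0.0)))   # http://192.168.1.77/?smell=bbq&fan=191
print(smell_request((30.0, 0.0), (0.0, 0.0)))  # None - too far away, fan stays off
```

In the real setup the equivalent of `smell_request` runs inside Unity each frame, and the Arduino end just reads the querystring and sets the fan speed.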

A Field Test

The first thing to do was to find the smells to actually augment using the Dead Man’s Nose. It turns out there are a lot of different places to buy scents, but luckily in this case archaeologists came to the rescue – an article in the excellent Summer 2012 edition of Love Archaeology e-zine pointed me to the website of Dale Air who have over 300 aromas ranging from the mundane (Crusty Bread) to the completely weird (Dragon’s Breath). I purchased a set of samples (Barbeque, Dirty Linen, Woodsmoke, Farmyard, among others) and was ready to go. I was quite surprised, but they do actually smell pretty much as described, especially the Dirty Linen.

As I was just experimenting, the housing for the DMN was very simple (a cardboard box) and there was only one choice of smell and that was sellotaped to the outside of the box…

The Dead Man’s Nose, in a box with a BBQ scent attached

The prototype was then loaded into a bag (in this case a simple camera bag), which was slung around my neck. I popped the top of the BBQ scent open and then whenever the fan started whirring the sweet, slightly acrid smell of Barbequing meat was gently wafted to my nostrils.

The Dead Man’s Nose in a nosebag, ready to go

Using my embodied GIS of the roundhouses on Leskernick Hill, Bodmin Moor, I set the DMN to fire off a smell of lovely Barbeque whenever I got within 20m of a roundhouse. I set the fan to run slowly at first and get faster as I got closer to the ‘source’ of the smell. The DMN performed admirably, as I walked within range of the houses I heard the tell-tale whirr of the fan and the next moment I had the lovely scent of cooking ribs. Future models will allow for more than one smell at a time (I just need a couple more computer fans) and also a better housing, a bit of 3D printing is in order!

Now I can use the iPad to view the roundhouses overlaid onto the video feed, plug in my headphones and hear 3D sounds that get louder or quieter depending on where I am in the settlement and also I can augment different smells as I walk around. Not only can I walk around the modern day Bronze Age landscape and see the augmented roundhouses, hear the Bronze Age sheep in the distance, I can also smell the fires burning and the dinner cooking as I get closer to the village….

If there is interest I can put together a how-to for creating the system, but for now I am going to carry on experimenting with it – to refine the delivery and the housing and to clean up the code a little bit.

Embodied GIS HowTo: Part 1a – Creating RTIs Using Blender (an aside)

This is a bit of an aside in the HowTo series, but nevertheless it should be a useful little tutorial, and as I was given a lot of help during the process it is only right to give something back to the community! So this HowTo shows you how to take the 3D model you created in Part 1 and create a Reflectance Transformation Imaging (RTI) image from it. If you don’t know what that is, here is the definition from the biggest advocates of the technique for archaeology, Cultural Heritage Imaging (CHI):

RTI is a computational photographic method that captures a subject’s surface shape and color and enables the interactive re-lighting of the subject from any direction.

What this means in GIS terms, basically, is that you have a fully interactive hillshade to play with and can change the angle of the light on-the-fly. No more need to create hundreds of hillshades with the sun at different angles – this is an all-in-one approach and is way more interactive. It is a really awesome technique for analysing rock-art, artefacts and even documents, and can be used to reveal tiny details that might not be obvious just by examining the object normally. It has also been used by Tom Goskar and Paul Cripps to interactively re-light some LiDAR data that Wessex Archaeology have of Stonehenge (see their paper here). RTI images are created by surrounding the subject with a dome of lights that are turned on one by one, with a photograph taken each time. Every photograph needs a shiny ball (usually a snooker ball) in it, which enables the software to record the angle of each light; some complex maths then merges all of the images together (for a fuller and probably more accurate explanation see Mudge et al. 2006).
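For the curious, the "complex maths" is less scary than it sounds: the classic fitter (the Polynomial Texture Map of Malzbender et al., which RTI Builder's PTM fitter uses) stores six coefficients per pixel, and re-lighting is just evaluating a biquadratic polynomial of the light direction. A quick sketch, with coefficients made up purely for illustration:

```python
def ptm_luminance(coeffs, lu, lv):
    """Evaluate a Polynomial Texture Map pixel: six stored coefficients
    give the luminance for any light direction, where lu and lv are the
    x/y components of the normalised light vector."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 * lu * lu + a1 * lv * lv + a2 * lu * lv + a3 * lu + a4 * lv + a5

# Made-up coefficients for one pixel; relight it from two directions
pixel = (-0.2, -0.1, 0.05, 0.3, 0.1, 0.5)
print(ptm_luminance(pixel, 0.0, 0.0))  # light directly overhead
print(ptm_luminance(pixel, 0.7, 0.0))  # raking light from one side
```

Moving the virtual light in the RTI viewer just re-runs this evaluation for every pixel, which is why the re-lighting is fast enough to be interactive.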

This technique can also be used virtually (as Tom and Paul have done) by recreating the dome of lights in a 3D modeling package and shining them on a virtual object (often a laser scan) or a chunk of LiDAR data. I am going to show you exactly the same technique that Tom and Paul used, except where they used Vue I’m going to be using blender to create the virtual dome. I have also supplied the .blend file and the python script used – so you should be able to do it all yourself.
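Before diving into blender, it is worth seeing how simple the dome geometry is in the abstract: just a set of light positions spread over a hemisphere above the subject. This little sketch generates such a set – it is purely illustrative (blender’s cut-down icosphere lays its vertices out differently), but the idea is the same:

```python
import math

def dome_lights(rings=4, per_ring=12, radius=1.0):
    """Generate light positions on a hemisphere - a rough stand-in for the
    vertices of a cut-down icosphere. Returns (x, y, z) positions on a
    sphere of the given radius, all above the horizon, plus the apex."""
    lights = [(0.0, 0.0, radius)]  # one light at the apex
    for i in range(1, rings + 1):
        elev = (math.pi / 2) * (1 - i / (rings + 1))  # elevation above horizon
        for j in range(per_ring):
            az = 2 * math.pi * j / per_ring  # azimuth around the dome
            lights.append((radius * math.cos(elev) * math.cos(az),
                           radius * math.cos(elev) * math.sin(az),
                           radius * math.sin(elev)))
    return lights

lights = dome_lights()
print(len(lights))  # prints 49
```

Each of those positions gets a light, and the subject is rendered once per light – exactly what the blender steps below automate.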

Right, first things first: open up blender and load the .blend file that you saved from Part 1 – if you haven’t got one, then you’ll need a 3D model of some description within blender. The concepts will work the same on any 3D model, but I am presuming for this tutorial that you have a chunk of a Digital Elevation Model.

  1. Luckily for us blender has a method of easily creating a dome of lights – during the 3D modelling process a light dome is often used to create a warmer, more realistic feeling for ambient lighting (see Radiosity), so we can use this to our advantage. Press Shift+A to create a new mesh and choose ‘Icosphere’. To get enough lights we’ll need to subdivide the icosphere, so change the Subdivisions (on the left-hand panel) to 3.
  2. For the purposes of this tutorial I am presuming that you have a chunk of 10kmx10km DEM – therefore in order to light it properly we need to create a dome that will cover the whole thing. Change the dimensions of your dome to be X:15km, Y:15km and Z:10km – you can change this to be as spherical as you want – these settings worked for me. You will also want to move it to the middle of your DEM – so change the location to be X:5km, Y:5km and Z:0.

    Building the icosphere

  3. Now we have a sphere (albeit squashed), we are going to want to cut the bottom off it to give us our dome. To do this, enter Edit Mode (by pressing TAB). Now change your view so that you are viewing from the front – either press 1 on your numeric keypad or use the menu View->Front. Press the ‘A’ key to clear your selection and then press the ‘B’ key to begin selecting by a border. Draw a box around the bottom of the icosphere and it should select those faces and vertices. Once selected, press the ‘X’ key and delete the vertices. Depending on the size of your sphere you may have to zoom forward a little to select and delete the faces on the far side of the icosphere. Keep doing this until you are left with a tidy dome sitting above your DEM.

    Deleting the bottom of the sphere

  4. Once you have your dome we are ready to start adding lights to it. First off, if you have any other Lamps in the scene, delete those so we don’t get confused at a later stage. Once deleted, come out of Edit Mode (TAB) and use Shift+A to add a new Lamp. I use a Sun lamp [helps with my year-round tan… ahem… sorry] – you could experiment with other types of lamps too, but the Sun seems to work well. Move the Sun to the centre of your DEM (X:5km, Y:5km, Z:0). Rotate the Sun so that its Y axis is at 180 degrees.
  5. In the little panel on the right you want to select the new Sun by clicking on it, then, holding down Shift, click the icosphere, so you should now have both selected (you can tell because their little icons light up). Now hover your mouse in the centre of the viewport, press Ctrl+P and parent the Sun to the icosphere. The Sun should now become a child of the icosphere in your objects panel (if you expand the icosphere in the panel you will see the Sun as part of its hierarchy).

    Parenting the Sun

  6. The Sun is now the son of the parent; we therefore want to multiply the number of them and set one on each vertex – blender has a great function for this (DupliVerts). Click on the Object properties of the icosphere, scroll down to Duplication, click Verts and click the Rotation checkbox. You should see a whole host of Suns appear. They should be in the right place on each vertex, but if not you can move the Sun to the centre of the DEM (by clicking on the icosphere in the hierarchy panel and then clicking on the Sun – see Step 4).

    Duplicating the Suns

  7. As we are using Suns, the direction that they are pointing doesn’t really matter – however, if you are using other types of lamp – spots for instance – you will need to make sure they are pointing in the right direction. [NOTE: if you need to do this, here is how (if you are using Suns, disregard this step): select the icosphere, enter Edit Mode (TAB) and then choose Normals: Flip Direction from the Mesh Tools panel on the left. That will ensure the lamps are pointing inside the dome. Go back into Object Mode (TAB).]
  8. Now we have a lovely dome of Suns, we need to detach them from the icosphere so we can manipulate them individually. This is pretty easy – select the icosphere and then press Ctrl+Shift+A and you should see the Suns all detach themselves into individual objects (you will see about 90 or so Suns in the hierarchy panel on the right). At this stage you are free to delete or turn off the icosphere, as we won’t be needing it anymore.

    Blinded by the light

  9. Next we need to set up our camera. Images for RTIs are normally taken by a camera set at the top of the dome, pointing directly downwards. Select your camera (there should be one by default in your scene – if not then you can add one using Shift+A). Change the camera’s location to be directly above the centre of your DEM at the apex of the dome (in my case X:5km, Y:5km, Z:10km). Blender cameras automatically point downwards – so there should be no need to add any rotation (if you have any rotation already set, change all the values to 0). Before we render out a test image, we’ll need to adjust our camera viewport and clipping range. Press 0 on the numeric keypad or use the menu View->Camera to take a look and see what the camera is seeing. You will likely just get a grey box – this is because the camera is clipping the distance it can see. Select the camera and go to the settings in the right panel – set the end Clipping range to 10km and you should see your DEM appear.

    Adjusting the camera settings

  10. Now you are going to want to adjust the Sensor size, to make sure your whole DEM is in the shot – for my 10km DEM the sensor had to be set to 70.
  11. Try a test render (press F12 or go via the menu Render->Render Image). You should be presented with a lovely render of your DEM, currently lit from all the angles.

    DEM test render

  12. Press F11 to hide the render view – at this stage you might want to increase the energy setting on your Suns, to get a bit more light on the DEM. Our Suns are all still linked together – so you can change the energy setting by clicking on the top Sun in your hierarchy, clicking the Sun object properties (the little sun icon in the object properties panel) and changing the Energy as required (I recommend energy level 5). This should change the energy of all the Suns.
  13. Once you are happy with the energy levels we can render out a test sequence, by using a small python script that turns each sun on individually and renders out an image. Change the bottom panel to be a Text Editor panel (see image).

    Selecting the text editor

  14. Click the [New+] button in the Text Editor panel and cut and paste the following code into the window:
    import bpy
    sceneKey = bpy.data.scenes.keys()[0]
    filepath = "PUT YOUR ABSOLUTE FILEPATH HERE"
    # Loop all objects and find the Lamps
    print('Looping Lamps')
    l = 0
    # first run through all of the lamps, turning them off
    for obj in bpy.data.objects:
        if obj.type == 'LAMP':
            obj.hide_render = True
            l = l + 1
    print('You have hidden ' + str(l) + ' lamps')
    
    # now we can go through and
    # individually turn them on
    # and render out a picture
    for obj in bpy.data.objects:
        if obj.type == 'LAMP':
            print(obj.name)
            obj.hide_render = False
            bpy.data.scenes[sceneKey].render.image_settings.file_format = 'JPEG'
            bpy.data.scenes[sceneKey].render.filepath = filepath + '/lamp_' + str(obj.name)
            # render the scene and write the image to disk
            bpy.ops.render.render(write_still=True)
            obj.hide_render = True
    
  15. Adjust line 3 so that the filepath fits your system. This is where it will save out the images – but beware: if the folder doesn’t exist it will go ahead and create it, so make sure you type carefully. When you are ready to go, click the Run Script button and it should happily go away and render your images for you. If you have problems when running the code, the errors should appear in the console. [NOTE FOR MAC USERS: on a Mac, getting to the console requires starting blender from a Terminal window. Save your .blend and close blender. Open Terminal.app, then change directory to the blender application by running “cd /Applications/blender.app/Contents/MacOS/” (change the path to fit where you installed blender), then run “./blender”. Any console messages will now appear in the Terminal window.]
  16. This will give us a nice set of images (one for each Sun) that we can use later to create our RTIs.
  17. You may recall from the beginning of this HowTo that in order to create an RTI image we also need a shiny snooker ball. Luckily we can create one of these with blender as well. Use Shift+A to create a Metaball -> Ball. Make the Ball dimensions 1km x 1km x 1km and move it to the centre of your view (say X:5km, Y:5km, Z:2.5km).
  18. Now we want to make the ball really shiny and black – so apply a material to the ball (using the Material button in the object properties). Set the Diffuse intensity to 0.0, the Specular intensity to 1.0 and the Hardness to its top value (511), and click the little Mirror checkbox. That should give us a nice hard shiny black ball for the RTI software to deal with.

    How to get a shiny black ball

  19. Now we want to render out a set of images with only the ball in them, so that we can ‘train’ the RTI software. You will want to turn off rendering on your DEM plane (press the little camera button next to it in your hierarchy view), so that when you output the images you will only be rendering the ball.
  20. Change the filepath in the script in your Text Editor panel so that you will be saving the ball images to a different folder (otherwise you will just overwrite your DEM images). Then hit Run Script and you should get a set of rendered images of the ball ready for importing into the RTI software.
  21. You now have the 2 sets of images ready to create your final RTI image!
  22. I am not going to go through the minute detail of the steps to create the RTI image, as Cultural Heritage Imaging have already written a detailed how-to. So the next step is to download the RTI Builder software and the reference guide from this page and go through the steps outlined within their reference manual.
  23. A couple of notes on the process: you are going to want to run the first RTI build using the ball images as the input images (put them in a folder called jpeg-exports/ within your RTI project directory). This will create an RTI of the ball – and will produce a .lp file in the assembly-files/ folder of your RTI project directory.
  24. Once you have produced the .lp file from your ball images, you can then use it to create an RTI image of the DEM itself. Start a new RTI project and choose Dome LP File (PTM Fitter) on the first page – this will direct you through and allow you to specify the .lp file from your ball project, and the images of the DEM that you rendered from blender. As we have already trained the program using the ball images, it should now just happily go through and create the RTI image from your DEM renders.

    AFTER you have run the RTI Builder through on your ball images – use this mode to specify the .lp file

  25. That’s it – here is how mine turned out (a little dark, so probably need more energetic suns)…

    The final RTI image

You can download my lightdome.blend file that has a 15km x 15km light dome in it – if you don’t want to make your own. If you used this tutorial, post some screenshots of your own RTI images in the comments – I’m interested to see what people get up to! If you have any questions or need further help, don’t hesitate to ask below. Thanks go to Tom Goskar, Paul Cripps and Grant Cox for help and advice in setting up the virtual RTI dome.

Embodied GIS HowTo: Part 1 – Loading Archaeological Landscapes into Unity3D (via Blender)

Recently I have been attempting to move closer to what I have coined embodied GIS (see this paper) – that is, the ability to use and create conventional GIS software/data and then view it in the real world, in-situ, exploring and moving through that data and feeding those experiences back. As is clear from the subject of this blog I am using Augmented Reality to achieve this aim, and therefore am using a combination of 3D modelling software (blender), gaming-engine software (Unity3D) and conventional GIS software (QGIS). Where possible I have been using Free and Open Source Software (FOSS) to keep costs low – but also to support the community and to show that pretty much anything is possible with a FOSS solution.

One of the main hurdles to overcome when trying to combine these approaches is figuring out the workflow between the 2D/2.5D GIS software, the 3D gaming-engine environment and, finally, overlaying all of that information onto the real world. There are many points during the process when data integrity can be lost, resolution of the original data can be affected, and decisions about data loss have to be made. I hope that this blog post (and the subsequent howtos on the next stages of the process) will enable people to identify those points and will step through the process so you can do it with your own data.

The first step toward embodied GIS is to move from the GIS software into the gaming engine. There are many ways to do this, but I have used QGIS, some command-line GDAL tools and then blender. Over the next few posts I will show how to import elevation data, import/place archaeological information, and then view the finished data via the web and also in the landscape itself.

This first post presumes you have at least a working knowledge of GIS software/data.

First you will need a Digital Elevation Model of your landscape. I am using Leskernick Hill on Bodmin Moor as my case study. I have the Ordnance Survey’s Landform PROFILE product, which is interpolated from contours at 1:10,000 – resulting in a DTM with a horizontal resolution of 10m. To be honest this is not really a great resolution for close-up placement of data, but it works fine as a skeleton for the basic landscape form. The data comes from the OS as a 32bit TIFF file – blender’s import process can’t deal with the floating-point nature of the 32bit TIFF, so we need to convert it to a 16bit TIFF using the GDAL tools. To install GDAL on my Mac I use the KyngChaos Mac OSX Frameworks. Binaries for other platforms are available here. Once you have GDAL installed, running the following command will convert the 32bit TIFF to a 16bit one –

gdal_translate -ot UInt16 leskernick_DTM.tif  leskernick_DTM_16.tif

This is the first stage at which we lose resolution from the original data. The conversion from a floating-point raster to an integer-based raster means our vertical values are rounded to the nearest whole number – effectively limiting us to a minimum vertical resolution of 1m. This is not too much of a problem with the PROFILE data, as the vertical values are already interpolated from contour lines at 5m and 10m intervals – although it can lead to artificial terracing, which we will tackle a bit later. It is more of a problem with higher-resolution data (such as LiDAR), where you would be losing actual recorded values rather than already-interpolated ones.
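The terracing effect is easy to demonstrate. A quick Python sketch (with made-up sample heights) shows how neighbouring float values collapse to the same whole metre after conversion:

```python
# Simulate the vertical precision lost when a float32 DEM is
# converted to UInt16: each height gets rounded to a whole metre.
heights_32bit = [312.4, 312.7, 313.1, 313.6]  # hypothetical float heights (m)
heights_16bit = [int(round(h)) for h in heights_32bit]
print(heights_16bit)  # -> [312, 313, 313, 314]
# 312.7 and 313.1 collapse to the same value – over a whole raster this
# is what produces the step-like 'terracing' artefacts mentioned above.
```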

Once the TIFF is converted you will need to set up a local grid within your GIS software. Unity doesn’t handle large game areas that well – and will start the gamespace at 0,0 – therefore when we import our data it makes things much easier if we import it relative to a 0,0 coordinate origin rather than to real-world coordinates. This is much easier than it sounds – it just involves using a false easting and northing for your data. In my case I made a simple shapefile of a 10km x 10km square that covered my study area; the bottom-left coordinates of the square (in the Ordnance Survey GB coordinate system, EPSG:27700) were 212500, 75000. This means that any data I import into Unity will need to have 212500 subtracted from its eastings and 75000 subtracted from its northings. We can either do this programmatically or ‘in our heads’ when placing objects on the Unity landscape (more on this later in the howtos). It is an advantage to have a relatively small study area and to have data in a planar/projected map projection, as the conversion will not need to take account of earth curvature (as it would in a geographic coordinate system such as lat/long).

Therefore, you can choose to reproject/spatially adjust all of your data using the false eastings and northings within your GIS software – which makes the import a little easier – or you can do it on a layer-by-layer basis as and when you import into Unity (which is what I do).

Once you have sorted out the GIS side of things, you will need to import the raster into blender and build the 3D landscape mesh. I’ll try and explain this step-by-step, but it is worth finding your way around blender a little bit first (I recommend these tutorials). Also, please bear in mind you may have a slightly different window set-up to mine, but hopefully you will be able to find your way around. Please feel free to ask any questions in the comments below.

  1. Open up blender – you should see the default cube view. Delete the cube, by selecting it in the panel to the right – then press ‘X’ and click delete
  2. Now we want to make sure our units are set to metres – do this by clicking the little scene icon in the right-hand panel and then scrolling down to the Units drop-down and click the Metric button.

    Changing units to metric

  3. Now add a plane – using Shift+A Add->Mesh->Plane (or use the Add menu). This will create a Plane of 2m x 2m. We want this Plane to be the size of our DEM (in world units), so change the dimensions to match – in my case I set X to ’10km’ and Y to ’10km’. If you don’t have the dimensions panel on the right, press the ‘N’ key to make it appear.

    Setting the Plane Dimensions

  4. You will notice that your plane has disappeared off into the distance. We need to adjust the clipping values of our viewport. Scroll down the panel with the Dimensions in it until you see the View dropdown. You will see a little section called ‘Clip:’ – change the End value from 1km to say 12km. Now if you zoom out (pinch to zoom out on a trackpad or use the mouse scroll wheel) you will see your Plane in all its very flat glory.
  5. Before we start the interesting bit of giving it some elevation, we need to make sure it is in the right place. Remember that we are using false eastings and northings, so we want the bottom corner of our Plane to be at 0,0,0. To do this, first set the 3D cursor to 0,0,0 (in the right-hand panel, just beneath where you set the viewport clip values). Now click the ‘Origin’ button in the left-hand Object Tools panel and click Origin to 3D Cursor (or use the shortcut Shift+Ctrl+Alt+C).
  6. You will also want to make sure the bottom left of the Plane is at 0,0,0. As the origin handle of the Plane is in the middle, for a 10km x 10km DEM you will need to move the X 5km and the Y 5km, by changing the location values in the right-hand properties panel. That should ensure your bottom-left corner is sitting nicely at 0,0,0.

    Setting the location

  7. Our Plane currently only has 1 face – meaning we are not going to be able to give it much depth. So now we need to subdivide the Plane to give it more faces – think of this a bit like the resolution of a raster – the more faces the more detailed the model will be (at the cost of file size!). Enter Edit Mode (by pressing Tab). You will see the menu change in the Left Panel – and it will give you a set of Mesh Tools.
  8. Click the Subdivide button – you can choose how much you want to subdivide, but I usually aim for around the same resolution as my DEM. So for a 10km square with 10m resolution we want a subdivided plane with approx 1,000,000 faces. In Blender terms the closest we can get is 1,048,576 faces. This is a BIG mesh – so I would suggest that you do one at high resolution like this, and also keep a lower-resolution one for using as the terrain [see the terrain howto – when written!].

    Subdividing the Plane
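As an aside, the face count in step 8 comes from how Subdivide works: each pass splits every face into four, so n passes over a single-face plane give 4^n faces. A quick Python sanity check, using the numbers from my 10km / 10m example:

```python
# One face per DEM cell: a 10km plane at 10m resolution needs ~1,000,000 faces.
target_faces = (10_000 / 10) ** 2

# Each Subdivide pass quadruples the face count (4**n faces after n passes),
# so find the smallest n that reaches the target.
n = 0
while 4 ** n < target_faces:
    n += 1
print(n, 4 ** n)  # -> 10 1048576, i.e. 10 passes give 1,048,576 faces
```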

  9. We now want to finally give the Plane some Z dimension. This is done using the Displace Modifier. First come out of Edit mode – by pressing TAB. Now apply a material to the Plane, by pressing the Material button on the far right panel and hitting the [+New] button.

    The Material and Texture Buttons

  10. Now add a texture to the new material by hitting the Texture button and again hitting the [+New] button. Scroll down the options and change the Type to ‘Image or Movie’. Scroll down further and change the Mapping coordinates from Generated to UV. Now click the Open icon on the panel and browse to the 16bit Tiff you made earlier. The image will be blank in the preview – but don’t worry, blender can still read it.

    Applying the Image Texture

  11. Once you have applied the texture – click the Object Modifiers button and choose the Displace Modifier from the Add Modifiers dropdown.

    Object Modifiers Button

  12. When you have the Displace Modifier options up, choose the texture you made by clicking the little cross-hatched box in the Texture section and choosing ‘Texture’ from the dropdown. First change the Midlevel value to ‘0m’. Depending on your DEM size you may start seeing some changes in your Plane already. However, you will probably need to experiment with the Strength (the amount of displacement). For my DEM the strength I needed was 65000.203. This is a bit of a weird number, but you can check the dimensions of the Plane as you change the strength (see screenshot) – you want the Z value to be as close as possible to 255m, the elevation range of my DEM, so that the full range of elevation values stored in the 16bit Tiff is used. These should map to real-world heights on import into Unity (you may want to do some checking of this later in Unity).

    Changing the Strength
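Because displacement scales linearly with the Strength value, you don’t have to find numbers like 65000.203 by pure trial and error – you can calibrate from a single trial render. This is just that linear scaling worked through in Python; the trial numbers below are hypothetical:

```python
# Displacement is linear in Strength: if a trial strength s gives the Plane
# a Z dimension of z, then strength s * target / z will hit the target.
def calibrate_strength(trial_strength, measured_z, target_z):
    """Return the Strength that makes the Plane's Z dimension equal target_z."""
    return trial_strength * target_z / measured_z

# e.g. a trial Strength of 1000 gave a Z dimension of 3.923m, and we want
# the full 255m elevation range of the DEM:
print(calibrate_strength(1000, 3.923, 255.0))  # ~65001, close to the value above
```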

  13. Hopefully by this stage your landscape should have appeared on your Plane and you can spin and zoom it around as much as you like…
  14. At this stage you are going to want to save your file! Unity can take a .blend file natively, but let’s export it as an FBX – so we can insert it into Unity (or any 3D modelling program of your choice). Go to File->Export->Autodesk FBX and save it somewhere convenient.

Well done for getting this far! The final steps in this HowTo are simply inserting the FBX into Unity. This is very easy, but I will be presuming you have a bit of knowledge of Unity.

  1. Open Unity and start a new project. Import whichever packages you like, but I would suggest that you import at least the ones I have shown here – as they will be helpful in later HowTos.

    Creating a new Unity Project

  2. Now simply drag your newly created FBX into Unity. If you have a large mesh the import will probably take quite a long time – for large meshes (greater than 65,535 vertices) you will also need a recent version of Unity (3.5.2 or later), which will auto-split the large mesh into separate meshes for you. Otherwise you will have to pre-split it within blender.
  3. Drag the newly imported FBX into your Editor View and you will see it appear – again, you can zoom and pan around, etc. Before it is in the right place, however, you will need to make sure it is the correct size and orientation. First change the scale of the import from 0.01 to 1 by adjusting the Mesh Scale Factor. Don’t forget to scroll down a little and click the Apply button. After hitting Apply you will likely have to wait a bit for Unity to make the adjustments.

    The FBX in Unity
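Unity’s 65,535 vertices-per-mesh limit (step 2) is easy to anticipate before you export. Assuming each Subdivide pass doubles the number of edge segments, an n-times subdivided plane is a grid of (2^n + 1) x (2^n + 1) vertices – so a rough Python check (my own helper, not part of Unity or blender) tells you whether the mesh will get split:

```python
# Unity (16-bit mesh indices) allows at most 65,535 vertices per mesh.
UNITY_VERTEX_LIMIT = 65535

def plane_vertices(subdivisions):
    """Vertex count of a plane after n Subdivide passes: a (2**n+1)^2 grid."""
    side = 2 ** subdivisions + 1
    return side * side

for n in (7, 8, 10):
    v = plane_vertices(n)
    print(n, v, "will be split" if v > UNITY_VERTEX_LIMIT else "fits in one mesh")
```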

  4. Finally, you will need to rotate the object by 180° on the Y axis once it is in your hierarchy (this is because Blender and Unity have different ideas of whether Z is up or forward).

    Set the Y rotation

  5. You should then have a 1:1 scale model of your DEM within Unity – the coordinates and heights should match your GIS coordinates (don’t forget to adjust for the false eastings and northings). In my case the centre of my DEM in real-world space is 217500, 80000. The adjustment for the false eastings and northings is performed as follows:

actual_coord - false_coord = unity_coord
therefore 217500 - 212500 = 5000 and 80000 - 75000 = 5000
therefore the Unity Coordinates of the centre of the area = 5000,5000

To double-check it would be worth adding an empty GameObject at a prominent location in the landscape (say the top of a hill) and then checking that the Unity coordinates match the real-world coordinates after adjustment for the false values.
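The conversion is trivial to script if you would rather not do it ‘in your head’. A minimal sketch in Python (the function names are my own):

```python
# False origin: the bottom-left of my 10km study square in EPSG:27700.
FALSE_EASTING = 212500
FALSE_NORTHING = 75000

def to_unity(easting, northing):
    """OSGB (EPSG:27700) coordinates -> Unity gamespace coordinates."""
    return easting - FALSE_EASTING, northing - FALSE_NORTHING

def to_osgb(x, z):
    """Unity gamespace coordinates -> OSGB coordinates."""
    return x + FALSE_EASTING, z + FALSE_NORTHING

print(to_unity(217500, 80000))  # -> (5000, 5000), the centre of the study area
```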

I hope that helps a few people, there are a couple of other tutorials using different 3D modelling software on this topic so it is worth checking them out too here and here and one for Blender here.

In the next HowTo I’ll be looking at the various different ways of getting vector GIS data into Unity and adding in different 3D models for different GIS layers – so stay tuned!