Archive for the ‘ Unity3D ’ Category

The ARtefactKit – Heritage Jam 2017 Winner

Somehow the Heritage Jam run by the University of York has come around again and gone. As I outlined in this post, the Heritage Jam is an opportunity for people to get together and create new heritage visualisations relating to a specific theme. The theme this year was ‘Bones of Our Past’ – as I couldn’t be there in person, I decided to go ahead and put something together for the online competition.

It turns out my entry won first place! I built something that I have wanted to experiment with for quite a while – an Augmented Reality application that allows you to take a real artefact (in this case a bone) and compare it to a virtual reference collection. Using your phone you can augment a ‘virtual lab’ onto your kitchen table and then use the app to call up a number of different bones from different animals until you find one that matches.

The AR aspect adds something more to ‘normal’ online virtual reference collections – by allowing you to augment the models at the correct scale in front of you and then twist and turn each one side by side.

In addition, as I am interested in multi-sensory things, I also added in the sounds and smells of the animals – as well as a virtual portal into a 360 degree video of a deer herd in action.

Finally, it has a link through to a set of Open Data from Open Context showing where else in the world similar types of bones have been found.

You can watch the visualisation here:

Please check out the full visualisation and explanation here: http://www.heritagejam.org/new-blog/2017/10/27/the-artefactkit-stu-eve

As with all of these ‘jam’ projects, the app is just a prototype and is quite messy in terms of overall look and feel – but I think it has potential to be quite useful. Now I just need some funding!

ARKit and Archaeology – Hougoumont Farm, Waterloo

For the last 3 years I have had the absolute privilege of being one of the archaeological directors of the current excavations of the battlefield of Waterloo. As part of the incredible Waterloo Uncovered project (http://www.waterloouncovered.com), we have been taking wounded serving and veteran soldiers, students and professional archaeologists to the battlefield to conduct the first major systematic excavation of parts of the battlefield that shaped the history of Europe in 1815.

We only have two weeks in the field each year, which means there is not a lot of time to do anything but excavate, record and backfill (see the dig diaries and videos of all we got up to here). However, this year I managed to find the final afternoon to play with the new Apple ARKit and see what potential it has for archaeological sites.

The short answer is that there is a lot of potential! I have discussed Augmented Reality and archaeology to the nth degree on this blog and in other places (see here for a round-up) – but with the beta release of ARKit as an integrated part of iOS 11, Apple may have provided the key to making AR more accessible and easier to deploy. I tried out two experiments using some of the data we have accrued over the excavations. Sadly I didn’t have any time to finesse the apps – but hopefully they should give a hint of what could be done given more time and money (ahem, any prospective post-doc funders – my contact details are on the right).

Exploring the lost gardens of Hougoumont

The first video shows a very early experiment in visualising the lost gardens of Hougoumont. The farm and gardens at Hougoumont were famously defended by the Allied forces during the battle of Waterloo (18th June 1815). Hougoumont at the time was rather fancy, with a chateau building, large farm buildings and also a formal walled garden, laid out in the Flemish style. One of this year’s participants, WO2 Rachel Willis, is currently in the process of leaving the army and studying horticulture with the Royal Horticultural Society. She was very excited to look at the garden and to see if it was possible to recreate the layout – and perhaps even at some point start replanting the garden. To that end she launched herself into the written accounts and contemporary drawings of Hougoumont, and we visited a local garden that was set out in a similar fashion. Rachel is in the process of colouring and drawing a series of Charlie Dimmock style renditions of the garden plans for us to work from – but more on that in the future.

Similar gardens at Gaasbeek Castle

Extract from Wm. Siborne’s survey of the gardens at Hougoumont

As a very first stab at seeing what we might be able to do in the future, I quickly loaded one of Rachel’s first sketches into Unity and put in a few bushes and a covered walkway. I then did some ARKit magic, mainly by following tutorials here, here, and here. Bear in mind that at the time of writing, ARKit is in beta testing, which means you need to install Xcode Beta, sign up for and install the iOS 11 beta program for the iPhone and also run the latest beta version of Unity. It is firmly at the bleeding edge and not for the faint-hearted! However, those tutorial links should get you through fine, and we should only have to wait a few months until it is publicly released. The results of the garden experiment are below:

As can be seen, ARKit makes it very simple to place objects directly into the landscape OUTSIDE – something that has previously only really been possible reliably using a marker-based AR plugin (such as Vuforia). Being able to reliably place AR objects outside (in bright sunshine) has been somewhat of a holy grail for archaeologists, as unsurprisingly we often work outside. I decided to use a ‘portal’ approach to display the AR content, as I think for the time being it gives the impression of looking through into the past – and gives an understandable frame to the AR content. More practically, it also means it is harder to see the fudged edges where the AR content doesn’t quite line up with the real world! It needs a lot of work to tidy up and prettify, but it’s not bad for a first attempt – and the potential for using this system for archaeological reconstructions goes without saying! Of course, as it is native to iOS and there is a Unity plugin, it will fit nicely with the smell and sound aspects of the embodied GIS – see the garden, hear the bees and smell the flowers!

Visualising Old Excavation Trenches

Another problem we archaeologists have is that it is very dangerous to leave big holes open all over the place, especially in places frequented by tourists and the public like Hougoumont. However, ARKit might be able to help us out there. This video shows this year’s backfilled trenches at Hougoumont (very neatly done, but you can still just see the slightly darker patches of the re-laid wood chip).

Using the same idea of the portal into the garden, I have overlaid the 3D model of one of our previous trenches in its correct geographic location and scale, allowing you to virtually re-excavate the trench and see the foundations of the buildings underneath, along with a culverted drain that we found in 2016. It lines up very well with the rest of the buildings in the courtyard and will certainly help with understanding the further foundation remains we uncovered in 2017. Again, it needs texturing, cleaning and a bit of lighting, but this has massive potential as a tool for archaeologists in the field, as we can now overlay any type of geolocated information onto the real world. This might be geophysical data, find scatter plots or, as I have shown, 3D models of the trenches themselves.

These are just very initial experiments, but I for one am looking forward to seeing where this all goes. Watch this space!


CAAUK 2016 – Embodied GIS and applied Multi-Sensory Archaeology

I recently attended the CAAUK 2016 meeting in Leicester, a great couple of days with a few really interesting papers.

As usual, the rather excellent Doug Rocks-Macqueen was on hand to record the talks. His videos can be found here – he records all sorts of diverse archaeological conferences, so it is well worth clicking the subscribe button on his account.

In case anyone is interested, I have embedded the video of my talk below – where I discuss the Embodied GIS, using examples from my previous research including Voices Recognition and the Dead Man’s Nose.

Guest Blog on ASOR

I have just submitted a guest blog post on the American Schools of Oriental Research (ASOR) blog for their ongoing special series on Archaeology in the Digital Age. It’s an introduction to Augmented Reality for Archaeology and also includes some sneak peeks of the results of some of my own AR fieldwork on Bodmin Moor. The original post can be found at http://asorblog.org/?p=4707.

Archaeology, GIS and Smell (and Arduinos)

I have had quite a few requests for a continuation of my how-to series on getting GIS data into an augmented reality environment and creating an embodied GIS. I promise I will get back to the how-tos very soon, but first I wanted to share something else that I have been experimenting with.

Most augmented reality applications currently on the market concentrate on visual cues for the AR experience, overlaying things on a video feed, etc. I have not found many that create or play with smells – and yet smell is one of the most emotive senses. This has long been known in the presentation of archaeology, and the infamous and varied smells of the Jorvik Centre are a classic example of smell helping to create a scene. The main reason for this lack of experimentation with smells is presumably the delivery device. AR is quite easy to achieve now within the visual realm, mainly because every smartphone has a video screen and camera. However, not every phone has a smell chamber – never mind one that can create the raft of different smells that would be needed to augment an archaeological experience. As a first stab at rectifying this, then, I present the Dead Man’s Nose:

The Dead Man’s Nose

The Dead Man’s Nose (DMN) is a very early prototype of a smell delivery device that wafts certain smells gently into your nose based on your location. The hardware is built using an Arduino microcontroller and some cheap computer parts along with any scent of your choice. The software is a very simple webserver that can be accessed via WiFi and ‘fire off’ smells via the webserver’s querystring. This means that it can easily be fired by Unity3D (or any other software that can access a webpage) – so it fits very nicely into my embodied GIS setup.

How does it work?

This little ‘maker hack’ takes its inspiration from projects such as ‘My TV Stinks‘, ‘The Smell of Success‘ and Mint Foundry’s ‘Olly‘. Essentially, I followed the instructions for building an Olly (without the 3D housing) and, instead of using an Ethernet shield for the Arduino, I connected it to a WiFi shield and from there joined it to an ad-hoc WiFi network created by my Macbook. With the Macbook, iPad and the DMN on the same network it is very easy to send a message to the DMN from within the Unity gaming engine. As the iPad running the Unity application knows where I am in the world (see the previous blog), I can fire off smells according to coordinates (or areas) defined in a GIS layer. Therefore, if I have an accurate ‘smellscape’ modelled in GIS, I can deploy that smellscape into the real world and augment the smells in the same way that I can augment the visual elements of the GIS data. The code is very simple at both ends: I am just using a slightly adjusted version of the sample WiFi shield code on the Arduino side, and a small script on the Unity side that pings the webserver when the ‘player’ moves into a certain place on the landscape. When the webserver is pinged, it starts the fan and that wafts the smell around. From a relatively simple setup, it provides the possibility of a very rich experience when using the embodied GIS.
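The trigger logic is essentially just a distance check followed by a web request. As a rough illustration – in Python rather than the actual Unity/C# script, and with a made-up Arduino address and querystring format (the real parameter names depend on your WiFi shield sketch) – it looks something like this:

```python
# Hypothetical sketch of the smell-trigger logic. DMN_URL and the
# querystring parameters are assumptions for illustration, not the
# actual Dead Man's Nose API.
import math
from urllib.parse import urlencode

DMN_URL = "http://192.168.2.2"   # ad-hoc WiFi address of the Arduino (assumed)
TRIGGER_RADIUS = 20.0            # fire the smell within 20m of a roundhouse

def fan_speed(distance, radius=TRIGGER_RADIUS, max_speed=255):
    """Map distance to a PWM-style fan speed: slow at the edge of the
    trigger radius, fastest at the source; 0 outside the radius."""
    if distance >= radius:
        return 0
    return int(max_speed * (1 - distance / radius))

def smell_request(player, source):
    """Build the querystring URL that would 'fire off' the smell,
    or None if the player is out of range."""
    distance = math.dist(player, source)
    speed = fan_speed(distance)
    if speed == 0:
        return None
    return DMN_URL + "/?" + urlencode({"fan": 1, "speed": speed})

print(smell_request((5.0, 0.0), (0.0, 0.0)))
```

On the Arduino end, the little webserver would then parse the speed parameter out of the querystring and set the fan spinning accordingly.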

A Field Test

The first thing to do was to find the smells to actually augment using the Dead Man’s Nose. It turns out there are a lot of different places to buy scents, but luckily in this case archaeologists came to the rescue – an article in the excellent Summer 2012 edition of Love Archaeology e-zine pointed me to the website of Dale Air who have over 300 aromas ranging from the mundane (Crusty Bread) to the completely weird (Dragon’s Breath). I purchased a set of samples (Barbeque, Dirty Linen, Woodsmoke, Farmyard, among others) and was ready to go. I was quite surprised, but they do actually smell pretty much as described, especially the Dirty Linen.

As I was just experimenting, the housing for the DMN was very simple (a cardboard box) and there was only one choice of smell and that was sellotaped to the outside of the box…

The Dead Man’s Nose, in a box with a BBQ scent attached

The prototype was then loaded into a bag (in this case a simple camera bag), which was slung around my neck. I popped the top of the BBQ scent open and then whenever the fan started whirring the sweet, slightly acrid smell of Barbequing meat was gently wafted to my nostrils.

The Dead Man’s Nose in a nosebag, ready to go

Using my embodied GIS of the roundhouses on Leskernick Hill, Bodmin Moor, I set the DMN to fire off a lovely smell of Barbeque whenever I got within 20m of a roundhouse. I set the fan to run slowly at first and get faster as I got closer to the ‘source’ of the smell. The DMN performed admirably: as I walked within range of the houses I heard the tell-tale whirr of the fan, and the next moment I had the lovely scent of cooking ribs. Future models will allow for more than one smell at a time (I just need a couple more computer fans) and also a better housing – a bit of 3D printing is in order!

Now I can use the iPad to view the roundhouses overlaid onto the video feed, plug in my headphones and hear 3D sounds that get louder or quieter depending on where I am in the settlement, and also augment different smells as I walk around. Not only can I walk around the modern-day Bronze Age landscape, see the augmented roundhouses and hear the Bronze Age sheep in the distance, I can also smell the fires burning and the dinner cooking as I get closer to the village….

If there is interest I can put together a how-to for creating the system, but for now I am going to carry on experimenting with it – to refine the delivery and the housing and to clean up the code a little bit.

Embodied GIS HowTo: Part 1 – Loading Archaeological Landscapes into Unity3D (via Blender)

Recently I have been attempting to move closer to what I have coined embodied GIS (see this paper) – that is, the ability to use and create conventional GIS software/data, view it in the real world in-situ, explore and move through that data, and feed back those experiences. As is clear from the subject of this blog I am using Augmented Reality to achieve this aim, and therefore am using a combination of 3D modelling software (blender), gaming-engine software (Unity3D) and conventional GIS software (QGIS). Where possible I have been using Free and Open Source Software (FOSS), to keep costs low – but also to support the community and to show that pretty much anything is possible with a FOSS solution.

One of the main hurdles to overcome when trying to combine these approaches is to figure out the workflow between the 2D/2.5D GIS software, the 3D gaming-engine environment and, finally, overlaying all of that information onto the real world. There are many points during the process where data integrity can be lost, resolution of the original data can be affected and decisions on data-loss have to be made. I hope that this blog post (and the subsequent howtos on the next stages of the process) will enable people to identify those points and will step you through the process so you can do it with your own data.

The first step toward embodied GIS is to move from the GIS software into the gaming engine. There are many ways to do this, but I have used QGIS, some command line GDAL tools and then blender. Over the next few posts I will show how you import elevation data, import/place archaeological information and then view the finished data via the web and also in the landscape itself.

This first post presumes you have at least a working knowledge of GIS software/data.

First you will need a Digital Elevation Model of your landscape. I am using Leskernick Hill on Bodmin Moor as my case study. I have the Ordnance Survey’s Landform PROFILE product, which is interpolated from contours at 1:10,000 – resulting in a DTM with a horizontal resolution of 10m. To be honest this is not a great resolution for close-up placement of data, but it works fine as a skeleton for the basic landscape form. The data comes from the OS as a 32-bit TIFF file – the import process can’t deal with the floating-point nature of the 32-bit TIFF, and therefore we need to convert it to a 16-bit TIFF using the GDAL tools. To install GDAL on my Mac I use the KyngChaos Mac OSX Frameworks. Binaries for other platforms are available here. Once you have GDAL installed, running the following command will convert the 32-bit TIFF to a 16-bit TIFF –

gdal_translate -ot UInt16 leskernick_DTM.tif  leskernick_DTM_16.tif

This is the first stage where we lose resolution from the original data. The conversion from a floating-point raster to an integer-based raster means our vertical values are rounded to the nearest whole number – effectively limiting us to a minimum vertical resolution of 1m. This is not too much of a problem with the PROFILE data, as the vertical values are already interpolated from contour lines at 10m and 5m intervals – however, it can lead to artificial terracing, which we will tackle a bit later. It is more of a problem with higher-resolution data (such as LiDAR), as there you will be losing actual recorded data values – with the PROFILE data we are just losing the already interpolated values from the contours.
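To make the effect concrete, here is a toy demonstration in plain Python of the whole-metre rounding (a stand-in for what the conversion does internally – I'm assuming round-to-nearest) and the terracing it produces on a gentle slope:

```python
# Toy demonstration of the precision lost in the UInt16 conversion –
# no GDAL needed, just the same round-to-integer step (assumed to be
# round-to-nearest here).
float_heights = [251.2, 251.7, 252.4, 252.6]   # metres, as in a 32-bit DEM
int_heights = [round(h) for h in float_heights]
print(int_heights)   # each cell is now at most 0.5m from its true value

# On a gentle slope (10cm rise per cell) the rounding collapses many
# cells onto the same whole-metre value – the 'artificial terracing':
slope = [250 + 0.1 * i for i in range(30)]
terraced = [round(h) for h in slope]
print(sorted(set(terraced)))   # thirty cells flatten onto a few levels
```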

Once the TIFF is converted you will need to set up a local grid within your GIS software. Unity doesn’t handle large game areas that well – and will start the gamespace at 0,0 – so when we import our data it makes things much easier if we import it relative to a 0,0 coordinate origin rather than to real-world coordinates. This is much easier than it sounds – and just involves using a false easting and northing for your data. In my case I made a simple shapefile of a 10k x 10k square that covered my study area; the bottom-left coordinates of the square (in the Ordnance Survey GB coordinate system, EPSG:27700) were 212500, 75000. This means that the coordinates of any data I import into Unity will need to have 212500 subtracted from their eastings and 75000 subtracted from their northings. We can either do this programmatically or ‘in our heads’ when placing objects on the Unity landscape (more on this later in the howtos). It is an advantage to have a relatively small study area and to have data in a planar/projected map projection – as the conversion will not need to take account of earth curvature (as it would in a geographic coordinate system such as lat/long).

Therefore, you can choose to reproject/spatially adjust all of your data using the false eastings and northings within your GIS software – which makes the import a little easier. Or you can do it on an individual layer dataset basis as and when you import into Unity (which is what I do).
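The shift itself is just subtraction. A minimal sketch (in Python for illustration – in practice this lives in your GIS, or in a small Unity script; the offsets are the bottom-left of my study square, so substitute your own):

```python
# Minimal helpers for the false-origin shift. The offsets are the
# bottom-left of my 10km study square in EPSG:27700.
FALSE_EASTING = 212500
FALSE_NORTHING = 75000

def world_to_unity(easting, northing):
    """Ordnance Survey grid coordinates -> Unity scene coordinates (m)."""
    return easting - FALSE_EASTING, northing - FALSE_NORTHING

def unity_to_world(x, z):
    """Unity scene coordinates (m) -> Ordnance Survey grid coordinates."""
    return x + FALSE_EASTING, z + FALSE_NORTHING

# The centre of the 10km square lands at 5000,5000 in Unity space:
print(world_to_unity(217500, 80000))   # -> (5000, 5000)
```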

Once you have sorted out the GIS side of things, you will need to import the raster into blender – and build the 3D landscape mesh. I’ll try and explain this step-by-step, but it is worth finding your way around blender a little bit first (I recommend these tutorials). Also, please bear in mind you may have a slightly different window set-up to mine, but hopefully you will be able to find your way around. Please feel free to ask any questions in the comments below.

  1. Open up blender – you should see the default cube view. Delete the cube, by selecting it in the panel to the right – then press ‘X’ and click delete
  2. Now we want to make sure our units are set to metres – do this by clicking the little scene icon in the right-hand panel and then scrolling down to the Units drop-down and click the Metric button.

    Changing units to metric

  3. Now add a plane – using Shift+A Add->Mesh->Plane (or use the Add menu). This will create a Plane of 2mx2m. We want this Plane to be the size of our DEM (in world units) so change the dimensions to be the same, in my case I set X to be ’10km’ and Y to be ’10km’. If you don’t have the dimensions panel on the right, click the ‘N’ key to make it appear.

    Setting the Plane Dimensions

  4. You will notice that your plane has disappeared off into the distance. We need to adjust the clipping values of our viewport. Scroll down the panel with the Dimensions in it until you see the View dropdown. You will see a little section called ‘Clip:’ – change the End value from 1km to say 12km. Now if you zoom out (pinch to zoom out on a trackpad or use the mouse scroll wheel) you will see your Plane in all its very flat glory.
  5. Before we start the interesting bit of giving it some elevation – we need to make sure it is in the right place. Remember that we are using false eastings and northings, so we want the bottom corner of our Plane to be at 0,0,0. To do this first set the 3D cursor to 0,0,0 (in the right-hand panel just beneath where you set the viewport clip values). Now click the ‘Origin’ button in the left-hand Object Tools panel, and click Origin to 3D cursor (the shortcut Shift+Ctrl+Alt+C)
  6. You will also want to make sure the bottom left of the Plane is at 0,0,0. As the origin handle of the Plane is in the middle, for a 10x10km DEM you will need to move the X 5km and the Y 5km, by changing the location values in the right-hand properties panel. That should ensure your bottom-left corner is sitting nicely at 0,0,0.

    Setting the location

  7. Our Plane currently only has 1 face – meaning we are not going to be able to give it much depth. So now we need to subdivide the Plane to give it more faces – think of this a bit like the resolution of a raster – the more faces the more detailed the model will be (at the cost of file size!). Enter Edit Mode (by pressing Tab). You will see the menu change in the Left Panel – and it will give you a set of Mesh Tools.
  8. Click the Subdivide button – you can choose how much you want to subdivide, but I usually make it around the same resolution as my DEM. So for a 10k square with 10m resolution we will want a subdivided plane with approx 1,000,000 faces. In Blender terms the closest we can get is 1,048,576 faces. This is a BIG mesh – so I would suggest that you do one at high resolution like this – and then also have a lower-resolution one for using as the terrain [see the terrain howto – when written!].

    Subdividing the Plane

  9. We now want to finally give the Plane some Z dimension. This is done using the Displace Modifier. First come out of Edit mode – by pressing TAB. Now apply a material to the Plane, by pressing the Material button on the far right panel and hitting the [+New] button.

    The Material and Texture Buttons

  10. Now add a texture to the new material by hitting the Texture button and again hitting the [+New] button. Scroll down the options and change the Type to ‘Image or Movie’. Scroll down further and change the Mapping coordinates from Generated to UV. Now click the Open icon on the panel and browse to the 16-bit TIFF you made earlier. The image will be blank in the preview – but don’t worry, blender can still read it.

    Applying the Image Texture

  11. Once you have applied the texture – click the Object Modifiers button and choose the Displace Modifier from the Add Modifiers dropdown.

    Object Modifiers Button

  12. When you have the Displace Modifier options up, choose the texture you made by clicking the little cross-hatched box in the Texture section and choosing ‘Texture’ from the dropdown. First change the Midlevel value to ‘0m’. Depending on your DEM size you may start seeing some changes in your Plane already. However, you will probably need to do some experimentation with the Strength (the amount of displacement). For my DEM the strength I needed was 65000.203. This is a bit of a weird number – but you can check the dimensions of the Plane as you change the strength (see screenshot): you want the Z value to be as close as possible to the real-world elevation range of your DEM (255m in my case), so that the 16-bit values map back to real-world heights on import into Unity. You may want to do some checking of this later when in Unity.

    Changing the Strength

  13. Hopefully by this stage your landscape should have appeared on your Plane and you can spin and zoom it around as much as you like…
  14. At this stage you are going to want to save your file! Unity can take a .blend file natively, but let’s export it as an FBX – so we can insert it into Unity (or any 3D modelling program of your choice). Go to File->Export->Autodesk FBX and save it somewhere convenient.
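A footnote on the Displace strength chosen in step 12: the relationship between the Strength value and real-world heights can be sanity-checked with a little arithmetic. This sketch assumes Blender normalises 16-bit pixel values to the 0–1 range before multiplying by the Strength (with Midlevel at 0), and the minimum/maximum heights used are illustrative, not my actual DEM statistics:

```python
# Sanity-check of the Displace strength. Assumes Blender normalises
# 16-bit pixel values to 0-1 and (with Midlevel 0) displaces by
# value * Strength; min/max heights below are made-up illustrations.
MAX_16BIT = 65535

def plane_z_extent(min_height_m, max_height_m, strength):
    """Z dimension of the displaced plane for a DEM storing whole metres."""
    return (max_height_m - min_height_m) / MAX_16BIT * strength

# A strength of 65535 maps whole-metre pixel values back to metres
# one-to-one, so the plane's Z dimension equals the elevation range:
print(round(plane_z_extent(75, 330, 65535), 6))    # a 255m elevation range
# A slightly smaller strength (like my 65000.203) undershoots a touch:
print(round(plane_z_extent(75, 330, 65000.203), 2))
```

If the check in Unity shows heights a few metres off, nudging the Strength back towards 65535 should close the gap.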

Well done for getting this far! The final steps in this HowTo are simply inserting the FBX into Unity. This is very easy, but I will be presuming you have a bit of knowledge of Unity.

  1. Open Unity and start a new project. Import whichever packages you like, but I would suggest that you import at least the ones I have shown here – as they will be helpful in later HowTos.

    Creating a new Unity Project

  2. Now simply drag your newly created FBX into Unity. If you have a large mesh the import will probably take quite a long time – for large meshes (greater than 65535 vertices) you will also need the latest version of Unity (>3.5.2), which will auto-split the large mesh into separate meshes for you. Otherwise you will have to pre-split it within blender.
  3. Drag the newly imported FBX into your Editor View and you will see it appear – again you can zoom and pan around, etc. Before it is in the right place, however, you will need to make sure it is the correct size and orientation. First change the scale of the import from 0.01 to 1 – by adjusting the Mesh Scale Factor. Don’t forget to scroll down a little bit and click the Apply button. After hitting Apply you will likely have to wait a bit for Unity to make the adjustments.

    The FBX in Unity

  4. Finally you will need to rotate the object once it is in your hierarchy on the y axis by 180 (this is because Blender and Unity have different ideas of whether Z is up or forward).

    Set the Y rotation

  5. You should then have a 1:1 scale model of your DEM within Unity – the coordinates and heights should match your GIS coordinates (don’t forget to adjust for the false eastings and northings). In my case the centre of my DEM within real-world space is 217500, 80000. The adjustment for the false eastings and northings would be performed as follows:

actual_coord - false_coord = unity_coord
therefore 217500 - 212500 = 5000 and 80000 - 75000 = 5000
therefore the Unity Coordinates of the centre of the area = 5000,5000

To double-check it would be worth adding an empty GameObject at a prominent location in the landscape (say the top of a hill) and then checking that the Unity coordinates match the real-world coordinates after adjustment for the false values.

I hope that helps a few people. There are a couple of other tutorials on this topic using different 3D modelling software, so it is worth checking them out too here and here – and one for Blender here.

In the next HowTo I’ll be looking at the various different ways of getting vector GIS data into Unity and adding in different 3D models for different GIS layers – so stay tuned!

AR and Archaeology: Opportunities, Challenges and the Trench of Disillusionment

I have just come back from giving a guest seminar to the Archaeological Computing Research Group at the University of Southampton and thought I would put up a post with the gist of it. It was really an introduction to Augmented Reality in Archaeology, but was also inspired by the recent article in Wired. In his article Clark Dever explains that AR is currently languishing in the Trough of Disillusionment.

The (Archaeological) Hype Cycle

What this means is that, according to the Gartner Hype Cycle, AR as a technology has already reached its peak of marketing, expectation and excitement and hasn’t really delivered much. Instead of providing the world with a technology to allow the seamless integration of the real and the virtual, we are left with a few applications that provide a way to overlay virtual information onto a video screen – and which are mostly used to direct us to the nearest Starbucks.

I am afraid that I have to agree with Clark Dever – I feel the same about AR. I follow a large number of AR blogs and tweeters, and all anyone seems to report on is new apps that basically overlay info onto a screen with no relationship to the real world. A good example is Falcon Gunner, a Star Wars based app which places you in the seat of a gunner on the Millennium Falcon. Whilst it is a really fun game [who doesn’t like shooting down TIE fighters!?], the ‘AR mode’ has absolutely no connection to the real world and basically overlays the game with a transparent background so that it looks like TIE fighters are flying over your sofa. While this is kind of interesting for about 5 minutes, what I really want is for the TIEs to interact with the real world – I want them to hide behind the sofa and fly out at me – or fly into a cupboard, hide and wait until I’m not looking and then attack me. I want to feel like I am part of the Star Wars galaxy and it is part of my front room.

Star Wars Arcade: Falcon Gunner (http://jhaepfenning.wordpress.com/2011/06/30/toilets-are-obsolete-a-falcon-gunner-review/)

Heritage applications are bread and butter for AR – one of the first things that comes to mind when talking about AR is how cool it would be to see what the world used to look like. Indeed, archaeological AR apps are actually some of the better apps that are trying to meld the virtual with the real. For instance, the Museum of London’s Streetmuseum app does a good job of pulling in virtual content (in their case pictures/paintings) and overlaying it in its ‘real’ place in the world.

MoL Streetmuseum (image from: http://www.bullseyehub.com/blog/2011/01/top-6-mobile-apps-for-culture-events/)

But, again, this app just overlays the image in (roughly) the right place – there is no way to enter into the image or interact with it, or have people walking around it, through it, behind it. Instead it is really the equivalent of using your GPS to query a database and get back a picture of where you are. Or indeed going to the local postcard kiosk, buying an old paper postcard of, say, St. Paul’s Cathedral and then holding it up as you walk around the cathedral grounds.

In my opinion, AR will continue to languish in the Trench of Disillusionment until we can address the following issues:

  1. The technology needs to be used intelligently. Adding an ‘AR view’ to an app that simply overlays the app on your video feed is not enough. In addition, simply putting GPS locations into a ‘3D’ space and giving them an icon is equally flawed – especially when those locations are far away and should be obscured [occluded] by the buildings in the way. It is much easier to navigate to these things using a map (saves you trying to walk through buildings) – and I am not entirely sure how much the AR mode adds. We need to think of ways that AR is going to add information or provide a new type of information, not just be a different (and less useful) way of displaying the same old information.

    Layar's 'AR View' - note the points that are on different streets (some kilometres away) and should be occluded by the buildings.

  2. The AR algorithms need to recognise the real world. Sorry to keep banging on about this, but if the AR content is not respecting the real world (i.e. being occluded by it, wrapping around it or interacting with it in some way) then you lose the point and the feel of the augmentation. We should be using the real world as a template for the AR experience, taking as much of it as possible and then gently melding the virtual world with it – not harshly slapping virtual content on top and simply making it move with the motion of the accelerometers. Advances are currently being made toward this via the use of depth cameras (such as the Kinect) and computer-vision-based algorithms (such as SLAM and SfM). Metaio, the developer of the popular Junaio AR app, are clearly making big leaps in this area, as this video shows. We are a little way off this being commercially available, but it shows that the big companies are finding ways to make the meld more seamless.
  3. AR needs to be seamless (and cheap!). The current normal delivery of AR requires either a head-mounted display (HMD) or a smartphone/tablet. Whilst an AR experience will always need some kind of mediation, these devices need to be less bulky and cheaper in order to become accessible to a normal person. In archaeology, the majority of AR apps are likely to involve tourism, or visits to archaeological/historic sites or museums, and therefore the delivery technology needs to be cheap, robust and ubiquitous enough for the AR content to be experienced. Perhaps the fabled real-life Google Goggles that have been promised by the end of the year will go some way to making this happen.
  4. We need to wrest the technology away from advertisers. Up until now, a lot of AR content has just been a way for marketeers to sell us stuff. That’s fine and it’s the way of the world – in fact it obviously drives a lot of the technological advances, because after all who is paying for all this stuff? But we need to be careful that we are also doing good research with AR that does not just have the aim of making the killer app to sell loads of stuff. As archaeologists we are in a unique position: we can advance knowledge and use AR to show people our research in situ, or use it as an aid to field practice, rather than just to present our results. As our discipline moves towards attempting to gain a more embodied experience of the past, AR is the perfect technology to aid in that embodiment and to let us experience the sights/sounds/smells of past events in the places that they happened. It can be used to help us think about the past as we are excavating it, and may even aid in (or change) our interpretations as we go along. We don’t have to be led by the nose by the technology; instead we need to bend it to our will and make use of it intelligently for our discipline. Otherwise we are simply going to end up with Matsuda’s dystopic vision of AR Advertising Hell.
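
The occlusion complaint in points 1 and 2 can be made concrete with a toy visibility test. The sketch below is plain Python of my own devising – none of these names come from Layar or any AR SDK – and it decides whether a geolocated point of interest should be drawn at all, by checking its range and a simple 2D line of sight against building footprints:

```python
import math

def segments_intersect(p1, p2, p3, p4):
    """Return True if segment p1-p2 properly crosses segment p3-p4."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def poi_visible(viewer, poi, building_walls, max_range=500.0):
    """Show a point of interest only if it is within range and no wall
    segment blocks the straight line between viewer and POI (all
    coordinates in metres, flat local grid)."""
    if math.dist(viewer, poi) > max_range:
        return False
    return not any(segments_intersect(viewer, poi, a, b)
                   for (a, b) in building_walls)
```

A real app would pull the wall segments from building footprint data (OpenStreetMap, for instance) and work on the projected positions of the POIs, but even this flat 2D check would stop icons from streets kilometres away bleeding through the facades in front of you.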

While in danger of pushing the metaphor of the Archaeological Hype Cycle to breaking point, let me sum up:

AR is like one of those archaeological excavations where you are promised the world and then, when you break ground, it doesn’t quite deliver. You see the amazing Barrow of Inflated Expectation that promises archaeological finds and fame beyond your wildest dreams; you engage the press, start a website, hit every social media site possible and get everyone (including your funders and institution) excited beyond belief. Then you cut a slot through the barrow and realise that it isn’t filled with the grave goods of a lost Bronze Age king – instead there is very little in the Trench at all. The press get bored, your website hit-rate plummets, the previously frequent on-site blogging dwindles to once a month and your institution starts worrying about your REF submission. You languish in your trench, wondering how you can rescue the project. But then you remember you have taken a whole load of environmental samples, the few scraps of wood you recovered are good enough for dendro-analysis, and when you analyse the complex stratigraphy very carefully you realise it is a unique sequence… Two or three years of careful post-excavation analysis by just a few team members follows, the hard graft of making the project really work begins to come to fruition, and you are left with a mature project that has real results and is pushing the field of archaeology forward. That is where we are with AR now. We need to get our heads down and do that hard graft: start thinking about what we can take from the hype of AR and build it into something that works, that helps us during our field practice and dissemination, and that hopefully pushes archaeological knowledge forward, rather than just being more eye-candy.

Please leave some comments if you can think of or have examples of applications for AR in archaeology or heritage studies that could get us out of the Trench, it would be great to get a discussion going. I have uploaded an HTML version of my Southampton seminar here. Please note, it was exported from Keynote, and therefore the embedded movies only seem to work when viewed in Safari.

Augmenting a Roman Fort

The following video shows something that I have been working on as a prototype for a larger landscape AR project.

As you can see, by using the Qualcomm AR SDK and Unity3D it is possible to augment some quite complex virtual objects and information onto the model Roman fort. I really like this application, as all I have done is take a book that you can buy at any heritage site (in the UK at least) and simply change the baseboard design so that the extra content can be experienced. Obviously there was quite a lot of behind-the-scenes coding and 3D modelling involved, but from a user’s point of view the AR content is very easy to see: simply print out the new baseboard, stick it on and load up the app.

For me that is one of the beautiful things about AR, you still have the real world, you still have the real fort that you have made and can play with it whether or not you have an iPad or Android tablet or what-have-you. All the AR does is augment that experience and allow you to play around with virtual soldiers or peasants or horses instead of using static model ones. It also opens up all sorts of possibilities for adding explanations of building types, a view into the day-to-day activities in a fort, or even for telling stories and acting out historical scenarios.

The relative ease of the deployment of the system (now that I have the code for the app figured out!) means this type of approach could be rolled out in all sorts of different situations. Some of my favourite things in museums, for instance, are the old-school dioramas and scale-models. The skill and craftsmanship of the original model will remain, but it could be augmented by the use of the app – and made to come alive.

The model of Housesteads fort in the Housesteads museum

The same is true of modern-day prototyping models or architectural models. As humans we are used to looking at models of things, and want to be able to touch them and move them around. Manipulating them on a computer screen just doesn’t seem quite right somehow. But the ability to combine the virtual data with the manipulation and movement of the real-life model gives us a unique and enhanced viewpoint, and can also allow us to visualise new buildings or existing buildings in new ways.

A particularly important consideration when creating AR content is to ensure that it looks as believable or ‘real’ as possible. The human eye is very good at noticing things that seem out of the ordinary or ‘don’t feel quite right’. One of the main ways to create a believable AR experience is to ensure the real world occludes the virtual objects – that is, the virtual content can be seen to move behind real-world objects (such as the soldiers walking through the model gateway). It should also be possible to interact with the real-world objects and have that affect the virtual content (such as touching one of the buildings and making the labels appear). This will become particularly important as I move from a scale-model to rolling the system out into a landscape. As I augment the real world with virtual objects, those objects have to interact with the real world as if they are part of it – otherwise too many Breaks in Presence will occur and the value of the AR content is diminished. An accurate 3D model of the real world is quite a bit harder to create than that of a paper fort, but if I can pull it off, the results promise to be quite a bit more impressive…
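
In a game engine this occlusion trick is usually done with the depth buffer: an invisible proxy model of the real scene (the paper fort, or eventually the landscape) is rendered first, writing depth but no colour, so any virtual pixel that falls behind it is discarded and the live video shows through. Stripped of engine specifics, the per-pixel rule reduces to something like the following plain-Python sketch – purely illustrative, not Unity code:

```python
def composite(video_rgb, virtual_rgb, virtual_depth, proxy_depth):
    """Per-pixel AR compositing rule: show the virtual layer only
    where it is closer to the camera than the real-world proxy
    geometry; elsewhere the live video shows through, so real
    objects appear to occlude the virtual ones. Depths are
    camera-space distances; use float('inf') for empty pixels."""
    h, w = len(video_rgb), len(video_rgb[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if virtual_depth[y][x] < proxy_depth[y][x]:
                out[y][x] = virtual_rgb[y][x]   # virtual object in front
            else:
                out[y][x] = video_rgb[y][x]     # real world occludes it
    return out
```

The hard part, of course, is not this comparison but obtaining an accurate, registered proxy model of the real world to generate `proxy_depth` in the first place – trivial for a paper fort, much harder for a landscape.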
ARK and Augmented Reality

Recently I have been working away in the Unity gaming engine, using it to make some Augmented Reality applications for the iPhone and iPad. It is surprisingly successful, and with at least three different ways of getting 3D content to overlay on the iOS video feed (Qualcomm, StringAR and UART) the workflow is more open than ever. I have been attempting to load 3D content at runtime, so that dynamic situations can be created as a result of user interaction, rather than having to have all of the resources (3D models, etc.) pre-loaded into the app. This not only reduces the file size of the app, it also means that the app can pull in real-time information and data that can be changed by many people at once. However, in order to do that I needed some kind of back-end database…
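
One way to structure that runtime loading is to have the app pull a manifest from the back-end and cache model files locally, fetching only what is missing. The sketch below is plain Python rather than Unity code, and the manifest format is entirely hypothetical; it just illustrates the fetch-if-missing pattern:

```python
import json
import os
import urllib.request

CACHE_DIR = "model_cache"

def sync_models(manifest_url, fetch=urllib.request.urlopen):
    """Download a JSON manifest of 3D models from the back-end and
    fetch only the files not already cached locally, so the app can
    ship with no models baked in. The manifest format here is made
    up for illustration: [{"name": ..., "url": ...}, ...]."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    manifest = json.load(fetch(manifest_url))
    loaded = []
    for entry in manifest:
        path = os.path.join(CACHE_DIR, entry["name"])
        if not os.path.exists(path):
            # Model not cached yet: pull it down and store it
            with open(path, "wb") as f:
                f.write(fetch(entry["url"]).read())
        loaded.append(path)
    return loaded
```

Because the client only ever sees the manifest, the content can be edited on the server (or in the database) by many people at once and every copy of the app picks up the changes on its next sync.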

For those of you that know me, you will know that as well as doing my PhD I work on the development of the open-source archaeological database system known as the Archaeological Recording Kit (ARK). It seemed like a logical step to combine these two projects and use ARK as the back-end database. So that is what I did, creating at the same time a rudimentary AR interface to ARK. The preliminary results can be seen in the video below:

This example uses the Qualcomm AR API, and ARK v1.0. Obviously at the moment it is marker-based AR (or at least image recognition based), the next task is to incorporate the iDevices’ gyroscope to enable the AR experience to continue even when the QR code is not visible.
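
Keeping the augmentation alive once the marker drops out of view essentially means dead-reckoning: each frame, the gyroscope’s angular-velocity reading is integrated into the camera’s current orientation. Here is a minimal sketch of that integration step in plain Python (not the iOS or Unity API; the quaternion layout is (w, x, y, z) and gyro drift correction is omitted):

```python
import math

def integrate_gyro(q, omega, dt):
    """Advance orientation quaternion q = (w, x, y, z) by angular
    velocity omega = (wx, wy, wz) in rad/s over dt seconds, so a
    virtual camera can keep tracking between sightings of the
    image target."""
    wx, wy, wz = omega
    angle = math.sqrt(wx * wx + wy * wy + wz * wz) * dt
    if angle == 0.0:
        return q  # no rotation this frame
    # Axis-angle for this frame's rotation, as a delta quaternion
    ax, ay, az = wx * dt / angle, wy * dt / angle, wz * dt / angle
    s, c = math.sin(angle / 2), math.cos(angle / 2)
    w2, x2, y2, z2 = c, ax * s, ay * s, az * s
    # Hamilton product q * dq applies the delta in the body frame
    w1, x1, y1, z1 = q
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)
```

On its own this drifts within seconds (gyros measure rate, not absolute attitude), which is exactly why the image target is still needed: every time the marker is re-acquired, the drifted estimate can be snapped back to the vision-based pose.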