Future Fauna (Nya Djur) is an interactive Augmented Reality art installation at the Swedish Museum of Natural History in Stockholm. Visitors interact with the taxidermy specimens in the showcases and see the long-dead animals return to life.
Visitors can play with and feed the animals and, if they treat them well, breed them. Virtual animals are decoupled from the strict laws of nature and can interbreed across species. Strange beings populate the exhibition rooms: owls with antlers, foxes with eagle wings, wolves with moose bodies.
What role do human perceptions of beauty play in our interaction with non-human beings? What happens to nature in virtual worlds outside human control? What cryptozoological potential comes with a reality freed from the implacable laws of nature?
You can use the app without visiting the museum. The first info prompt gives you the option to start with 9 animals. You breed the animals by cuddling with them.
This is also an exploration of the possibilities of using Augmented Reality in museums. What happens when visitors can interact with otherwise static installations? AR opens up a deeper engagement with the exhibitions.
The 3D models are created by scanning specimens in the archive of the museum.
I used an Occipital Structure Sensor for the 3D mesh and AgiSoft PhotoScan for the texture, cleaned the models in 3D-Coat, and rigged and weighted them in Cinema 4D. The animations were made by Phillip Guerrero and retargeted to the scanned animals.
The project is built in Unity and uses ARKit and ARCore.
The basement of the museum is filled with taxidermy animals, a large dusty archive of life: beautiful African antelopes and towering bears, as well as ragged old cats and a reindeer broken in half. Nothing is to be thrown away in case it contains valuable genetic material, so that in the future we can recreate the beings we are now making extinct.
The mixed animals are made by a genetic algorithm that randomly selects genes from the two parents and creates a new being with the corresponding limbs. The users themselves choose which animals to breed, but cannot choose how they are combined.
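The project's source isn't public, but the limb-mixing step can be sketched in a few lines. A minimal illustration in Python, with made-up body-part names standing in for the actual genes; each part is inherited wholesale from one parent, which is why the combinations feel surreal but still coherent:

```python
import random

# Hypothetical sketch of the limb-mixing step: each body part is a "gene"
# inherited wholesale from one parent or the other. Part names are invented.
BODY_PARTS = ["body", "head", "legs", "tail", "wings_or_antlers"]

def breed(parent_a: dict, parent_b: dict) -> dict:
    """Create offspring by picking each body part at random from one parent."""
    return {part: random.choice([parent_a, parent_b])[part] for part in BODY_PARTS}

owl = {"body": "owl_body", "head": "owl_head", "legs": "owl_legs",
       "tail": "owl_tail", "wings_or_antlers": "owl_wings"}
deer = {"body": "deer_body", "head": "deer_head", "legs": "deer_legs",
        "tail": "deer_tail", "wings_or_antlers": "deer_antlers"}

print(breed(owl, deer))  # e.g. an owl head on a deer body with antlers
```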
Our initial aim was to use a neural network for this process, in collaboration with computer engineer Hiroharu Kato, creator of the world's first Neural Renderer. Unfortunately, for several reasons, this method was not viable and we had to resort to a simpler one. This is an initial test of a tiger reimagined by Hiroharu's neural network.
The animals follow a NavMesh based on the floor plan of the room. This forces the animals to avoid the same obstacles as you, creating the illusion of a shared space. The NavMesh is re-instantiated at each showcase, which counters drift and misalignment: the virtual space is realigned as soon as the experience reaches its next step, so the experience stays seamless.
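In the project this pathfinding is Unity's NavMesh; as a toy stand-in for the idea, here is a grid-plus-BFS sketch in Python where the animals route around the same floor-plan obstacles as the visitors (the floor plan and cell coordinates are invented):

```python
# Toy stand-in (grid + BFS, not Unity's actual NavMesh) for animals that
# path-plan over the room's floor plan. '#' marks a showcase or wall,
# '.' is walkable floor.
from collections import deque

FLOOR_PLAN = ["..........",
              "..####....",
              "..........",
              "....####..",
              ".........."]

def path(start, goal):
    """Shortest walkable path between two cells, or None if blocked off."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(FLOOR_PLAN) and 0 <= nx < len(FLOOR_PLAN[0])
                    and FLOOR_PLAN[ny][nx] == "." and (nx, ny) not in came_from):
                came_from[(nx, ny)] = cell
                frontier.append((nx, ny))
    if goal not in came_from:
        return None
    node, route = goal, []
    while node is not None:
        route.append(node)
        node = came_from[node]
    return route[::-1]

print(path((0, 0), (9, 4)))  # a route that walks around both showcases
```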
The exhibition has 9 active showcases, so the virtual world is made up of 9 different virtual spaces, each with its own origin as its point of instantiation, to be aligned with the real space. When one virtual space is swapped for another, the animals are translated to the corresponding point in the new space so that they do not suddenly reappear across the room.
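In the actual app this is presumably done with Unity transforms; a minimal 2D numpy sketch of the idea, with invented origins and rotations, shows how a position expressed in one showcase's space is re-expressed in another's so the animal keeps its physical spot in the room:

```python
# Minimal 2D sketch (numpy) of moving an animal between two virtual spaces
# that share the real room but have different origins. All pose values are
# made up for illustration.
import numpy as np

def to_world(local_pos, origin, yaw):
    """Local showcase space -> shared room coordinates."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    return origin + rot @ local_pos

def to_local(world_pos, origin, yaw):
    """Shared room coordinates -> local showcase space (inverse of to_world)."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    return rot.T @ (world_pos - origin)

# Showcases A and B have different origins/orientations in the room.
origin_a, yaw_a = np.array([0.0, 0.0]), 0.0
origin_b, yaw_b = np.array([4.0, 2.0]), np.pi / 2

fox_in_a = np.array([1.0, 0.5])                 # position relative to showcase A
room_pos = to_world(fox_in_a, origin_a, yaw_a)  # where the fox stands in the room
fox_in_b = to_local(room_pos, origin_b, yaw_b)  # same spot, expressed in B's space
```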
A printed marker mounted on the side of each showcase constitutes the key between the virtual and the real space. The computer recognises the marker as the instantiation point of the virtual room.
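In Unity this kind of marker tracking would typically go through an image-tracking API such as ARKit's image anchors rather than hand-rolled vision code; purely to illustrate the principle, here is a toy OpenCV sketch in Python that locates a printed marker in a camera frame via feature matching:

```python
# Toy illustration (OpenCV, not the project's actual Unity pipeline) of how
# a printed marker can anchor a virtual space: find the marker in the camera
# frame and take its pose as the origin of the room's coordinate system.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def find_marker(marker_img, frame):
    """Return the homography mapping the marker image into the camera frame."""
    kp_m, des_m = orb.detectAndCompute(marker_img, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)
    if des_m is None or des_f is None:
        return None
    matches = sorted(matcher.match(des_m, des_f), key=lambda m: m.distance)[:50]
    if len(matches) < 10:
        return None  # too few correspondences: marker not visible
    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # with the camera intrinsics, H can be decomposed into a 3D pose
```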
The markers are designed to be as easily readable by the computer as possible, with plenty of detail, a broad histogram, and no repetition, while also being easily identifiable by humans and aesthetically fitting within the exhibition space and the general graphic profile of the museum.
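These criteria can be approximated numerically. As a rough screening heuristic of my own (not the project's method), gray-level entropy serves as a proxy for a broad histogram, and ORB keypoint count as a proxy for detail:

```python
# My own rough heuristic for screening candidate marker designs: higher
# entropy ~ broader histogram, more ORB keypoints ~ more trackable detail.
import cv2
import numpy as np

def marker_score(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # in bits, max 8
    keypoints = cv2.ORB_create(nfeatures=2000).detect(img, None)
    return entropy, len(keypoints)
```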