Since the Moverio BT-300 AR glasses run Android 5.1 on an Atom processor, it is possible to run Unity projects on them. The starting point is the instructions here on setting up Unity for the Android platform. One problem is that the android command is apparently no longer included with Android Studio, so Unity builds will fail. To get Unity builds for Android to work, it is therefore necessary to download and unzip the command line tools from the bottom of this page. This creates a directory tree that includes a tools directory, which should be used to replace the original tools directory in the Android Studio install, usually found at:
Incidentally, that is also the path that Unity needs to know in order to perform its builds.
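The directory swap can be sketched like this (all the paths here are temporary stand-ins created purely for illustration, not the real SDK locations; substitute your actual Android Studio SDK path and the location of the unzipped download):

```shell
# Sketch of replacing Android Studio's bundled tools directory with the
# standalone command-line tools. The temp directories simulate the real layout.
set -e
SDK_ROOT="$(mktemp -d)"                     # stand-in for the Android SDK root
mkdir -p "$SDK_ROOT/tools"
echo "bundled" > "$SDK_ROOT/tools/origin"   # pretend Android Studio tools

NEW_TOOLS="$(mktemp -d)/tools"              # stand-in for the unzipped download
mkdir -p "$NEW_TOOLS"
echo "command-line" > "$NEW_TOOLS/origin"

mv "$SDK_ROOT/tools" "$SDK_ROOT/tools.bak"  # keep the original, just in case
cp -r "$NEW_TOOLS" "$SDK_ROOT/tools"
cat "$SDK_ROOT/tools/origin"
```

Keeping the original tools directory as a backup makes it easy to revert if a later Android Studio update expects the bundled version.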
There is a Unity plugin that provides support for 3D on the BT-300. For instructions on how to use the plugin, read:
Assets > MoverioBT300UnityPlugin > MoverioController > README
The plugin includes a scene called MoverioTutorial that can be used as a starting point. It demonstrates some of the features of the plugin.
After the package name has been set in Player > Other Settings, it should then be possible to build, deploy and run on the BT-300 directly from Unity. I had a few problems with the tutorial with regard to SDK functionality but the Unity part seemed to work well (although I had to set 3D mode and disable the 2D camera manually sometimes). I am sure that I am doing something wrong – I’ll update the post when I work out what is happening.
Since the BT-300 is an Android device, I thought it would be fun to try out WebRTC on it using the Android WebRTC sample here. Right now it’s WebRTC everything chez richard so this was a natural thing to test on the BT-300.
It worked just fine. The screen capture above shows the browser display from a desktop running Chrome. I am looking at the display hence the funky effect. The photo below is an attempt to show what it looks like in the BT-300.
In this case, the desktop camera is watching me desperately trying to juggle my camera and the BT-300 to get this image which is the left eye display from the BT-300. I must stress that the real quality of the image on the BT-300 is vastly superior to how it looks here – it’s really tricky to get any sort of photo in fact.
Building the Android sample takes a long time – in fact, downloading the code takes forever as it is about 16 GB! One extra point is that, after the gsync line, it is necessary to enter:
in order to get all the prerequisites installed. Apart from that, everything was straightforward.
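For context, the overall checkout flow looked roughly like this sketch. The command and script names are my assumptions based on the standard depot_tools workflow, so check the current WebRTC build instructions before relying on them; the DRY_RUN wrapper just prints the commands instead of executing them:

```shell
# Rough shape of the WebRTC Android checkout (command names are assumptions
# from the depot_tools workflow and may have changed since).
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run fetch --nohooks webrtc_android          # initial checkout, ~16 GB
run gclient sync                            # pull all the dependencies
run ./build/install-build-deps-android.sh   # install build prerequisites
```

Setting DRY_RUN=0 would execute the commands for real, which obviously requires depot_tools on the PATH and a lot of disk space.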
The arrival of a pair of BT-300 Smart Glasses gave me an opportunity to take another daft photo of myself wearing a wearable. My eyes don’t really look like that – that’s just where the (presumably) semi-silvered mirror surface is for each eye. Projectors at the sides generate images that are combined with the real light coming in to form a composite AR image.
Following on from an earlier post on Enhanced Reality, it occurred to me that separating the stereo cameras (and microphones) from the ER headset creates a new way of achieving telepresent remote participation – Telepresent Enhanced Reality or TER. I was actually trying out a simpler version a while back when I had a camera on a pan/tilt platform slaved to an Oculus DK2 VR headset. A real TER setup would require stereo cameras and multiple microphones on a pan/tilt/roll mount. The user would have a VR headset and the pose of the pan/tilt/roll mount would mirror movements of the user’s head.
An interesting use would be for conferences where some of the participants are in a conventional conference room but wearing AR/MR/ER headsets (eg HoloLens). Each position in the room reserved for a remote participant would have a stereo camera/microphone unit. The local participants would obviously be able to see each other but, instead of the camera/microphone hardware, they would see avatars representing the remote users. These avatars could be as sophisticated or as simple as desired. Remote participants would see (via the stereo cameras) the conference room and the local participants, and would also see the avatars of the other remote participants in place of the physical camera/microphone hardware at those locations. Alternatively, these units could be suitably equipped telepresence robots (or even cameras mounted on small drones), which would also allow movement around the room. Really, anything that has the essential hardware (stereo cameras, microphones, pan/tilt/roll capability) could be used.
Given that everyone has AR/MR capability in this setup, something like a conventional projected presentation could still be done except that the whole thing would be virtual – a virtual screen would be placed on a suitable wall and everyone could look at it. Interaction could be with simulated laser pointers and the like. Equally, every position could have its own simulated monitor that displays the presentation. Virtual objects visible to everyone could be placed on the table (or somewhere in the room) for discussion, annotation or modification.
Obviously everyone could be remote and use a VR headset and everything could then be virtual with no need for hardware. However, the scheme described preserves some of the advantages of real meetings while at the same time allowing remote participants to feel like they are really there too.
I’ve been working through some of the HoloLens tutorials and thought that the Holograms 230 tutorial was pretty amusing. The screen capture shows a solar system being projected in space. The spatial mapping mesh can be seen conforming to objects in view. The poster just to the left of the sun isn’t real – it’s one of the things that you can place on a wall to demonstrate this capability.
Just ordered a Phab 2 Pro smartphone so that I can experiment with its Tango-enabled capabilities. I firmly believe that AR/MR/ER devices will soon become as important as conventional smartphones are today, touching almost every aspect of life.
Collaboration using AR is a fascinating area with many potential applications. The HoloToolkit is a very handy resource in general and includes the HoloToolkit.Sharing library to assist with collaboration. The HoloToolkit-Unity actually contains built versions of the Server, SessionManager and Profiler but it seemed like a good idea to build from scratch.
There are a few prerequisites:
- Windows SDK 10.0.10240
- Windows SDK 10.0.10586
- Common Tools for Visual C++
- Windows 8.1 SDK and Universal CRT SDK
- Java 8 SDK
An easy way to get the Windows SDKs is to run the BuildAll.bat script, which will exit with an error if something is missing. Then open the solution file for the component that failed in VS2015, which will install the missing components. The Java SDK needs to be installed manually and requires two environment variables: JAVA_BIN, pointing to the JDK bin directory, and JAVA_INCLUDE, pointing to the JDK include directory. The BuildAll.bat script should then complete successfully.
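As a sketch, the environment variable setup looks like this. The JDK paths are placeholders for wherever Java 8 is actually installed, and it is written as POSIX shell for illustration even though the build itself runs on Windows (where setx or the System Properties dialog would be used instead):

```shell
# Point the build at the JDK. The paths below are placeholders; substitute
# the bin and include directories of your actual JDK 8 install.
export JAVA_BIN="${JAVA_BIN:-/c/Program Files/Java/jdk1.8.0/bin}"
export JAVA_INCLUDE="${JAVA_INCLUDE:-/c/Program Files/Java/jdk1.8.0/include}"

# Fail early with a clear message if either variable is missing, since
# BuildAll.bat will otherwise fail partway through.
for v in JAVA_BIN JAVA_INCLUDE; do
  eval "val=\${$v}"
  if [ -z "$val" ]; then
    echo "$v is not set" >&2
    exit 1
  fi
  echo "$v=$val"
done
```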
The Server is run using SharingService.exe; the user needs administrator permission to install it as a service. One way to do this is to open a command window in administrator mode and run the command from there. It is actually useful to run the server with the -local flag (as a command line program), as it is then easy to see status and error messages. The SessionManager displays the current server state, including connected clients.
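The two invocation styles can be sketched as below. The -install flag is my assumption for service registration (check the help output of SharingService.exe for the actual flag), and the dry-run wrapper stands in for the real Windows binary so the commands are merely printed:

```shell
# Two ways to run the sharing server. SharingService.exe is a Windows binary,
# so a dry-run wrapper is used here just to show the command lines.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Install as a Windows service from an administrator command prompt.
# NOTE: the -install flag is an assumption; verify against the tool's help.
run SharingService.exe -install

# Run as an ordinary console program so status and errors are visible.
run SharingService.exe -local
```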