Category Archives: Augmented reality

How many XR headsets is too many?

Probably this many.

Speeding up ARKit development with Unity ARKit Remote

Anything that speeds up the development cycle is interesting, and the Unity ARKit Remote avoids having to go through an Xcode build every time around the loop. Provided the app can run in the Editor, any changes to objects or scripts can be tested very quickly. The iPhone (in this case) runs a special remote app that passes ARKit data back to the app running in the Editor. You don't see any of the Unity content on the phone itself, just the camera feed; the composite frames are shown in the Editor window.

Using ARKit with ExpoKit, React Native, three.js and soon (hopefully) WebRTC


This rather unimpressive test scene, captured from an iPhone, is actually quite interesting. It is derived from a simple test app using Expo, which makes it easy to combine React Native, ARKit and three.js in native iOS apps (and Android apps too, although not with ARKit, of course). Expo provides a nice environment in which a standard client app supports rapid development of JavaScript apps on top of the underlying native support. This test app was working well in Expo within a very short space of time.
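
For reference, the three.js side of a scene like this is tiny. What follows is a minimal sketch (not the actual app code) of the kind of scene in the capture: one small cube composited over the camera feed. The three.js calls are standard; the glue that supplies the GL context and overwrites the camera pose from ARKit each frame is Expo-specific and only hinted at in the comments.

import * as THREE from 'three';

// Viewport size; in the Expo case these come from the GL context.
const width = 375;
const height = 667;

// Standard three.js scene: one small cube half a metre in front of the camera.
const scene = new THREE.Scene();
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(0.1, 0.1, 0.1),            // ARKit works in metres
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
cube.position.set(0, 0, -0.5);
scene.add(cube);

// In a browser this camera would be static; in the ARKit case its projection
// and view matrices are replaced from the device pose on every frame.
const camera = new THREE.PerspectiveCamera(70, width / height, 0.01, 1000);

// Render loop body: the AR binding draws the camera feed first, then the
// virtual scene is composited over it.
function renderFrame(renderer: THREE.WebGLRenderer): void {
  renderer.render(scene, camera);
}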

The only problem is that I also want to support WebRTC in the app. There is a React Native WebRTC implementation but, as far as I can tell, it requires that the app be detached from Expo to ExpoKit so that the native code can be included in an Xcode project. Unfortunately, that didn't work out of the box: AR support didn't seem to be included in the automatically generated project.
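
For what it's worth, the WebRTC side I want to add would look something like the sketch below. It is hedged against react-native-webrtc's stream-based API as I understand it (the name startCall is just illustrative, and the signalling channel that carries the offer to the remote peer is left out because it is app-specific).

import {
  RTCPeerConnection,
  RTCSessionDescription,
  mediaDevices,
} from 'react-native-webrtc';

// A public STUN server; a real app would supply its own ICE configuration.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

async function startCall(): Promise<void> {
  // Capture the local camera and microphone.
  const stream = await mediaDevices.getUserMedia({ audio: true, video: true });
  pc.addStream(stream); // stream-based API of this era; later versions use addTrack

  // Create an offer and set it as the local description. The offer then goes
  // to the remote peer over the app's signalling channel (not shown).
  const offer = await pc.createOffer();
  await pc.setLocalDescription(new RTCSessionDescription(offer));
}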

Including ARKit support requires modifying the Podfile in the project's ios directory. The first section should look like this:

source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '9.0'
target 'test' do
  pod 'ExpoKit',
   :git => "http://github.com/expo/expo.git",
   :tag => "ios/2.0.3",
   :subspecs => [
     "Core",
     "CPP",
     "AR"
   ]
...

Basically, "AR" is added as an extra subspec; after re-running pod install, ARKit seems to work quite happily with ExpoKit.

ZenFone AR – Tango and Daydream together

The ZenFone AR is a potentially very interesting device, combining Tango for spatial mapping with Daydream capability for VR headset use, all in one package. This is a step up from the older Phab 2 Pro Tango phone in that it can also be used with Daydream (and looks like a neater package). Adding Tango to Daydream means that it is possible to do inside-out spatial tracking in a completely untethered VR device. It should also be a step up from ARKit in its current form which, from what I understand, relies on inertial and VSLAM tracking alone. Still, the ability for ARKit to be used with existing devices is a massive advantage.

Maybe in the end the XR market will divide into applications that don't need tight spatial locking (where standard devices can be used) and those that do (demanding some form of inside-out tracking).

Mixed reality: does latency matter and is it immersive anyway?

I had a brief discussion last night about latency and its impact on augmented reality (AR) versus virtual reality (VR). It came up in the context of tethered versus untethered HMDs. An untethered HMD either has to have the entire processing system in the HMD (as in the HoloLens) or else use a wireless connection to a separate processing system. There's a lot to be said for not putting the entire system in the HMD – weight, heat and so on. However, having a separate box and requiring two separate battery systems is annoying, though it certainly has precedent (the iPhone and Apple Watch, for example).

The question is whether the extra latency introduced by a wireless connection is noticeable and, if so, whether it is a problem for AR and MR applications (there's no argument for VR – latency wants to be as close to zero as possible).

Just for the record, my definitions of virtual, augmented and mixed reality are:

  • Virtual reality. HMD-based, with no sense of the outside world and, ideally, the entire visual field covered by the display.
  • Augmented reality. This could be via an HMD (e.g. Google Glass) or via a tablet or phone (e.g. the Phab 2 Pro). I am going to define AR as the case where virtual objects are overlaid on the real world scene with no or only partial spatial locking, and no support for occlusion (where a virtual object correctly goes behind a real object in the scene). Field of view is typically small for AR but doesn't have to be.
  • Mixed reality. HMD-based with see-through capability (either optical or camera-based) and the ability to accurately spatially lock virtual objects in the real world scene. Field of view should ideally be as large as possible but doesn't have to be. Real-time occlusion support is highly desirable to maintain the apparent reality of virtual objects.

Back to latency and immersion. VR is the most immersive of the three and is extremely sensitive to latency. This is because any disagreement between what the body's sensors report and what the eyes are seeing (sensory inconsistency) is pretty unpleasant, rapidly leading to motion sickness. Personally, I can't stand using the DK2 for any length of time because there is always something, or some mode, that causes a sensory inconsistency.

AR is practically insensitive to latency since virtual objects may not be locked to the real world at all. Plus, the ability to maintain sight of the real world seems to override any transient problems. It's also only marginally immersive in any meaningful sense – there is very little telepresence effect.

MR is virtually the same as AR when it comes to latency sensitivity and is actually the least immersive of all three modes when done correctly. Immersion implies that a person's sense of presence is transported somewhere other than the real space. Mixed reality, by contrast, wants to cement the connection to the real space by locking virtual objects down to it. It's the opposite of immersion.

Real world experience with the HoloLens tends to support the idea that latency is not a terrible problem for MR. Even running code in debug mode with lots of messages being printed (which can reduce the frame rate to a handful of frames per second) isn't completely awful. With MR, latency breaks the reality of virtual objects because they may not remain perfectly fixed in place when the user's head is moving fast. But this doesn't generate motion sickness – not for me, anyway.

There is a pretty nasty mode of the HoloLens, though. If the spatial sensors get covered up, usually because the device has been placed on a table with things blocking them, the HoloLens can get very confused and virtual objects display horrendous jittering for a while until tracking settles down again. That can be extremely disorientating (I have seen holograms rotated through 90 degrees and bouncing rapidly from side to side – very unpleasant!).

On balance though, it may be that untethered, lightweight HMDs with separate processor boxes will be the most desirable design for MR devices. The ultimate goal is to be able to wear MR devices all day, and this may be the only realistic way to reach that goal.