Lenovo just announced the Mirage Solo VR headset with Google’s WorldSense inside-out tracking capability. The result is an untethered VR headset that presumably has spatial mapping capabilities, allowing spatial maps to be saved and shared. If so, this would be a massive advance over ARKit- and ARCore-based AR, which makes persistence and collaboration all but impossible (the post here goes into a lot of detail about the various issues related to persistence and collaboration with current technology). The lack of a tether also gives it an edge over Microsoft’s (so-called) Mixed Reality headsets.
Google’s previous Tango system (that’s a Lenovo Phab 2 Pro running it above) had much more interesting capabilities than ARCore but has fallen by the wayside. In particular, Tango had an area learning capability that is missing from ARCore. I am very much hoping that something like this will exist in WorldSense, so that virtual objects can be placed persistently in spaces and spatial maps can be shared, letting multiple headsets see exactly the same virtual objects in exactly the same place in the real space. Of course this isn’t all that helpful when used with a VR headset – but maybe someone will manage a pass-through or see-through mixed reality headset using WorldSense that will enable persistent spatial augmentation, hopefully at a reasonable enough cost for ubiquitous use. If it were also able to perform real-time occlusion (where virtual objects can get occluded by real objects), that would be even better!
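The area learning idea can be sketched in a few lines: one device saves a named area map along with object poses expressed relative to that map, and a second device that localizes against the same map resolves those poses in its own frame. To be clear, everything below (file format, function names, translation-only math) is invented for illustration – neither Tango’s area description files nor WorldSense’s internals look like this.

```python
import json

# Hypothetical sketch of shared spatial anchors: poses are stored relative
# to a named area map, so any device that localizes against the same map
# resolves the anchors to the same physical spot.

def save_area(filename, area_id, anchors):
    """Serialize an area map ID plus map-relative anchor poses (metres)."""
    with open(filename, "w") as f:
        json.dump({"area_id": area_id, "anchors": anchors}, f)

def load_area(filename):
    with open(filename) as f:
        return json.load(f)

def resolve(anchor, device_origin_in_map):
    """Convert a map-relative anchor into this device's local frame
    (translation only -- a real system would use a full 6-DoF transform)."""
    return [a - o for a, o in zip(anchor["pos"], device_origin_in_map)]

# Device A places a virtual lamp 2 m forward of the map origin and shares it.
save_area("office.map", "office-3f", [{"name": "lamp", "pos": [2.0, 0.0, 0.0]}])

# Device B, localized 1 m behind the map origin, loads the same file...
area = load_area("office.map")
lamp_local = resolve(area["anchors"][0], [-1.0, 0.0, 0.0])
print(lamp_local)  # -> [3.0, 0.0, 0.0]: both devices agree on the lamp's spot
```

The key point is that persistence and sharing fall out of anchoring to the map rather than to any one device’s session origin – exactly what per-session systems like today’s ARCore cannot do.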
An interesting complement to this is the Lenovo Mirage stereo camera. This is capable of taking 180-degree videos and stills suitable for use with stereoscopic 3D displays, such as the Mirage headset. It suddenly occurred to me that this might be a way of hacking a pass-through AR capability for the Mirage before someone does it for real :-). This is kind of what Stereolabs are doing for existing VR headsets with their ZED mini, except that the ZED mini is a tethered solution. The nice thing would be to do this in an untethered way.
This Samsung Odyssey Windows MR headset just arrived and it is really quite good. The earlier HP developer headset didn’t have the motion controllers, so a HoloLens clicker (or Xbox controller) had to be repurposed for meaningful interaction. The motion controllers are really kind of fun and it’s totally spooky to watch the virtual joysticks move all by themselves when you adjust the real joysticks. The built-in sound is another great advantage. It makes the headset somewhat bulky, but the benefit is great spatial sound. The images are pretty good too, although you do have to get the headset positioned correctly for optimum quality. Once you do, there’s not too much chromatic aberration in a fairly reasonable central area. The distance between the lenses is also adjustable, which is another assist in getting good visual quality. The display certainly has the usual screen door effect but it isn’t really very offensive, and resolution seems very acceptable. On the negative side, the display does not flip up (well, it does once if you push hard enough 🙂 ), which is a bit of a negative when developing software, where it is sometimes handy to go back and forth to a desktop display.
It’s kind of fun to open up the desktop and look at the MR Portal there so you can get the classic video feedback effect. I tried watching some movie trailers – not too bad. I then tried a game called Rock and Rails. Yes, well, that didn’t last too long. Instant vertigo and motion sensitivity – these things are just too immersive!
Anyway, a worthy addition to the growing pile of headsets here.
Probably this many. The pile consists of:
Yes, I am drinking a beer right now – it has been a long day. Mostly I seemed to spend it nursing Windows through its upgrade to the latest Insider Preview (16257) and begging the Insider Preview website to allow me to download the Insider Preview SDK which seemed to require all kinds of things done right and the wind blowing in the right direction at the same time.
The somewhat bizarre screen capture above is from a scene I created in the default room. The hologram figures are animated, incidentally. What I mostly failed to do was to get existing HoloLens apps to run on the MR headset, as Unity kept on reporting errors when generating the Visual Studio project for the apps, after having performed every other stage of the build process correctly. Very odd. I did manage to get a very simple scene with a single cube working ok, however.
Then I went back to the production version of Windows (15063) and tried things there. Ironically, my HoloLens app worked (apart from interaction) on the MR headset using Unity 5.6.2.
Clearly this particular Rome wasn’t built in a day – a lot more investigation is needed.
Just got my hands on an HP Windows Mixed Reality headset. Now setting up my Windows dev machine to dual boot so that I can have a standard production Windows version for normal work and an Insider Program fast-ring version to work with this headset. Based on experience, setting up the Insider Preview could take a while.
Fascinating and thought-provoking article here about iRobot’s reported plan to monetize the spatial maps created by Roombas. Time and time again in my career (including right now) there has been a need for accurate spatial maps. Once they were only accessible to high-end robots outfitted with lidars; now almost anything that moves is capable of generating and refining spatial maps.
This fits very nicely with the idea that mixed reality glasses will become ubiquitous. Imagine walking into a new space and getting a spatial map automatically downloaded from the cloud. No need to ask where the restrooms are any more! This kind of capability would be of benefit to almost any enterprise. For example, check into a hotel and the spatial map with directions to your room gets downloaded to your glasses.
There are three parts to this puzzle – mapping, storage and delivery. Once all these become ubiquitous, not having access to this data or MR glasses will seem very odd indeed. Of course, selling data about private houses is not something that should be allowed without the owner’s explicit permission but making the data available to the owner would have tremendous value. There’s going to be a whole new type of specialist – the virtual interior designer. Unless you need to interact with something physically, why bother having the real object rather than a virtual version of it?
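The storage-and-delivery half of that puzzle can be sketched as a cloud index keyed by venue: walk into a space, fetch its map, and the glasses can route you to any named point of interest. The venue IDs, coordinates, and function names below are all made up for illustration – no such service exists yet.

```python
# Hypothetical cloud index: venue ID -> spatial map with named points of
# interest (POIs). Entering a venue triggers a lookup, after which the
# glasses can direct the wearer without anyone having to ask.

MAP_STORE = {
    "hotel-atrium-42": {
        "pois": {"restroom": (12.0, 0.0, -3.5), "room-114": (40.0, 3.0, 8.0)},
    },
}

def fetch_map(venue_id):
    """Delivery step: download the stored map for the venue (or None)."""
    return MAP_STORE.get(venue_id)

def direction_to(poi_name, venue_map, my_pos):
    """Straight-line offset from the wearer to a named POI, in metres."""
    target = venue_map["pois"][poi_name]
    return tuple(t - p for t, p in zip(target, my_pos))

m = fetch_map("hotel-atrium-42")
print(direction_to("restroom", m, (10.0, 0.0, 0.0)))  # -> (2.0, 0.0, -3.5)
```

The mapping step (building `MAP_STORE` entries in the first place) is exactly what a fleet of Roombas, or of MR headsets, would crowdsource.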
Of course there’s always the chance that some company gets the data and has some software that can detect if your floor plan has space for one of their products. In some kind of bizarre world the product could appear virtually in the space with a link to where you could buy it. This would be real/virtual product placement! What a ghastly prospect :-(.
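That “does your floor plan have space for our product” check is, at bottom, just a rectangle-fit search over an occupancy grid. A toy version, with the grid and footprint sizes invented purely for illustration:

```python
# Toy floor plan as an occupancy grid: 1 = occupied (walls, furniture),
# 0 = free floor. A real map from a Roomba would be far larger and noisier.
FLOOR = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],
]

def fits(grid, w, h):
    """Return True if a w x h rectangle of free cells exists anywhere."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows - h + 1):
        for c in range(cols - w + 1):
            if all(grid[r + dr][c + dc] == 0
                   for dr in range(h) for dc in range(w)):
                return True
    return False

print(fits(FLOOR, 3, 2))  # -> True: a 3x2 patch of free floor exists
print(fits(FLOOR, 4, 2))  # -> False: no 4-cell-wide free strip anywhere
```

Which is precisely why the data is so commercially tempting – and why it shouldn’t leave the house without the owner’s say-so.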
Some information from Microsoft here about the next generation of HoloLens. I am a great fan of only using the cloud to enhance functionality when there’s no other choice. This is especially relevant to MR devices where internet connectivity might be dodgy at best or entirely non-existent depending on the location. Putting some AI inference capability right on the device means that it can be far more capable in stand-alone mode.
There seems to be the start of a movement towards putting serious but low-power AI capability in wearable devices. The Movidius VPU is a good example of this kind of technology, and probably every CPU manufacturer is on a path to include inference engines in future generations.
While the HoloLens could certainly use updating in many areas (WiFi capability, adding cellular communications, more general purpose processing power, supporting real-time occlusion), adding an inference engine is certainly extremely interesting.