Fascinating and thought-provoking article here about iRobot’s reported plan to monetize the spatial maps created by Roombas. Time and time again in my career (including right now) there has been a need for accurate spatial maps. Once the preserve of high-end robots outfitted with lidars, spatial maps can now be generated and refined by almost anything that moves.
This fits very nicely with the idea that mixed reality (MR) glasses will become ubiquitous. Imagine walking into a new space and getting a spatial map automatically downloaded from the cloud. No need to ask where the restrooms are anymore! This kind of capability would benefit almost any enterprise. For example, check into a hotel and the spatial map with directions to your room gets downloaded to your glasses.
There are three parts to this puzzle – mapping, storage and delivery. Once all these become ubiquitous, not having access to this data or MR glasses will seem very odd indeed. Of course, selling data about private houses is not something that should be allowed without the owner’s explicit permission, but making the data available to the owner would have tremendous value. There’s going to be a whole new type of specialist – the virtual interior designer. Unless you need to interact with something physically, why bother having the real object rather than a virtual version of it?
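The mapping–storage–delivery pipeline could be sketched as a toy flow. Everything below is a hypothetical illustration – the `SpatialMap` structure, the in-memory "cloud" registry and the space IDs are stand-ins, not any real service’s API.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the mapping -> storage -> delivery pipeline.
# None of these names correspond to a real product or service.

@dataclass
class SpatialMap:
    """A minimal spatial map: named points of interest with 3D coordinates."""
    space_id: str
    points_of_interest: dict = field(default_factory=dict)  # name -> (x, y, z)

# "Storage": a stand-in for a cloud map registry, keyed by space ID.
CLOUD_REGISTRY = {}

def upload_map(spatial_map):
    """Mapping: a device (robot, phone, glasses) publishes its refined map."""
    CLOUD_REGISTRY[spatial_map.space_id] = spatial_map

def download_map(space_id):
    """Delivery: MR glasses fetch the map on entering a known space."""
    return CLOUD_REGISTRY.get(space_id)

# A robot maps a hotel and uploads the result...
lobby = SpatialMap("hotel-lobby-1")
lobby.points_of_interest["restrooms"] = (12.0, 0.0, -3.5)
lobby.points_of_interest["room-214"] = (40.2, 4.0, 7.1)
upload_map(lobby)

# ...and a guest's glasses later retrieve it to show directions.
guest_map = download_map("hotel-lobby-1")
print(guest_map.points_of_interest["restrooms"])  # → (12.0, 0.0, -3.5)
```

In a real system the registry would of course be a geo-indexed cloud service with access control (which is exactly where the privacy question above bites), but the three roles stay the same.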
Of course, there’s always the chance that some company gets the data and has software that can detect whether your floor plan has space for one of their products. In some kind of bizarre world the product could appear virtually in the space with a link to where you could buy it. This would be real/virtual product placement! What a ghastly prospect :-(.
This really is a fascinating piece of technology. Check out the last 20 seconds of the video if nothing else.
A Movidius Neural Compute Stick just turned up in a delightfully retro-style box. Unfortunately I won’t have time to do anything with it until the weekend, but I’m very interested to see what it can do. It’s another enabler of the movement to add inference to low-power mobile devices without relying on a cloud server.
Some information from Microsoft here about the next generation of HoloLens. I am a great fan of only using the cloud to enhance functionality when there’s no other choice. This is especially relevant to MR devices where internet connectivity might be dodgy at best or entirely non-existent depending on the location. Putting some AI inference capability right on the device means that it can be far more capable in stand-alone mode.
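The "cloud only when there’s no other choice" idea boils down to a simple fallback pattern, sketched below. The function names and the stub classifiers are hypothetical stand-ins, not anything from the actual HoloLens SDK or the Movidius API.

```python
# Hypothetical sketch: on-device inference first, cloud only as enhancement.
# The classifiers are stubs, not a real HoloLens or Movidius interface.

def local_inference(image):
    """Always-available on-device model (e.g. a VPU inference engine)."""
    return {"label": "chair", "confidence": 0.71}

def cloud_inference(image):
    """Larger cloud model; may fail where connectivity is dodgy or absent."""
    raise ConnectionError("no network")

def classify(image, cloud_available=True):
    """Prefer on-device inference; use the cloud only to refine the answer."""
    result = local_inference(image)           # device stays capable stand-alone
    if cloud_available:
        try:
            refined = cloud_inference(image)  # optional enhancement only
            if refined["confidence"] > result["confidence"]:
                result = refined
        except ConnectionError:
            pass                              # degrade gracefully to local
    return result

print(classify(b"fake-image-bytes")["label"])  # → chair (cloud unreachable)
```

The point is that the cloud path is strictly additive: if it fails, the device still produces a usable answer from its own inference engine.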
There seems to be the start of a movement towards putting serious but power-efficient AI capability in wearable devices. The Movidius VPU is a good example of this kind of technology, and probably every CPU manufacturer is on a path to include inference engines in future generations.
While the HoloLens could certainly use updating in many areas (WiFi capability, adding cellular communications, more general-purpose processing power, supporting real-time occlusion), adding an inference engine is particularly interesting.
Very cool video sequence.
The ZenFone AR is a potentially very interesting device, combining Tango for spatial mapping and Daydream capability for VR headset use in one package. This is a step up from the older Phab 2 Pro Tango phone in that it can also be used with Daydream (and it looks like a neater package). Adding Tango to Daydream means that it is possible to do inside-out spatial tracking in a completely untethered VR device. It should also be a step up from ARKit in its current form, which, from what I understand, relies on just inertial and VSLAM tracking. Still, the ability for ARKit to be used with existing devices is a massive advantage.
Maybe in the end the XR market will divide up into those applications that don’t need tight spatial locking (where standard devices can be used) and those that do require tight spatial locking (demanding some form of inside-out tracking).
Fascinating video of a HoloLens being used in a real back surgery – presumably the video was mostly shot using Spectator View or something similar. I have seen other systems where mocap-type technology is used to get more precision in the pose of the HoloLens, but this system doesn’t seem to do that. Not that I am a surgeon, but I doubt that the HoloLens can replace the usual fluoroscope, since that gives real-time feedback on the location of things like needles with respect to the body (yes, I have been on the literal sharp end of this!). However, if the spatial stability of the hologram is good enough, I am sure that it greatly helps with visualization.
As one of the many people with dodgy backs, I am always interested in anything that can improve outcomes and minimize risk and side-effects. If the HoloLens can do that – brilliant!