I just loved this piece in IEEE Spectrum. There has never been a better application for TensorFlow than sorting Lego bricks.
Nice catch! We are having some trees removed from our property – this machine is just amazing.
Yes, I am drinking a beer right now – it has been a long day. I mostly spent it nursing Windows through its upgrade to the latest Insider Preview (16257) and begging the Insider Preview website to let me download the Insider Preview SDK, which seemed to require all kinds of things being done right and the wind blowing in the right direction at the same time.
The somewhat bizarre screen capture above is from a scene I created in the default room. Incidentally, the hologram figures are animated. What I mostly failed to do was get existing HoloLens apps to run on the MR headset: Unity kept reporting errors when generating the Visual Studio project for the apps, after having performed every other stage of the build process correctly. Very odd. I did manage to get a very simple scene with a single cube working OK, however.
Then I went back to the production version of Windows (15063) and tried things there. Ironically, my HoloLens app worked (apart from interaction) on the MR headset using Unity 5.6.2.
Clearly this particular Rome wasn’t built in a day – a lot more investigation is needed.
Just got my hands on an HP Windows Mixed Reality headset. Now setting up my Windows dev machine to dual boot so that I can have a standard production Windows version for normal work and an Insider Program fast-ring version to work with this headset. Based on experience, setting up the Insider Preview could take a while.
Found this very useful tutorial on GANs. I like the idea of getting away from fully supervised training with enormous labeled data sets. The ultimate goal would be to give systems the ability to obtain a generalized understanding of objects and concepts from just a very small set of labeled samples.
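The core GAN idea – a generator and a discriminator trained against each other – can be shown in a few lines. Below is a minimal, hypothetical sketch in plain NumPy (my own toy example, not from the tutorial): a linear generator learns to imitate samples from a 1-D Gaussian while a logistic discriminator tries to tell real samples from fakes. Real GANs use deep networks in a framework such as TensorFlow, but the adversarial training loop has the same shape.

```python
import numpy as np

# Toy GAN sketch (assumed example, not the tutorial's code).
# Generator: x_fake = a * z + b, with noise z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w * x + c), real -> 1, fake -> 0.

rng = np.random.default_rng(0)

REAL_MU, REAL_SIGMA = 4.0, 1.25   # real data distribution to imitate
a, b = 1.0, 0.0                    # generator parameters
w, c = 0.1, 0.0                    # discriminator parameters

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: push D(real) -> 1, D(fake) -> 0 ---
    real = rng.normal(REAL_MU, REAL_SIGMA, batch)
    z = rng.normal(size=batch)
    fake = a * z + b
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and c.
    gw = np.mean(-(1 - s_real) * real + s_fake * fake)
    gc = np.mean(-(1 - s_real) + s_fake)
    w -= lr * gw
    c -= lr * gc

    # --- Generator update: push D(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(size=batch)
    fake = a * z + b
    s_fake = sigmoid(w * fake + c)
    # Gradients of -log D(fake) w.r.t. generator parameters a and b.
    ga = np.mean(-(1 - s_fake) * w * z)
    gb = np.mean(-(1 - s_fake) * w)
    a -= lr * ga
    b -= lr * gb

samples = a * rng.normal(size=1000) + b
print(f"generated mean ~ {samples.mean():.2f} (target {REAL_MU})")
```

Note that neither network ever sees a label describing the real distribution – the generator improves purely from the discriminator's feedback, which is exactly the appeal over fully supervised training.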
Check out Rodney Brooks’ Q&A on human-level AI here. Anyone who can use the word “qualming” in a valid sentence is a genius.
Fascinating and thought-provoking article here about iRobot’s reported plan to monetize the spatial maps created by Roombas. Time and time again in my career (including right now) there has been a need for accurate spatial maps. Once, only high-end robots outfitted with lidars could create them; now almost anything that moves is capable of generating and refining spatial maps.
This fits very nicely with the idea that mixed reality glasses will become ubiquitous. Imagine walking into a new space and getting a spatial map automatically downloaded from the cloud. No need to ask where the restrooms are any more! This kind of capability would be of benefit to almost any enterprise. For example, check into a hotel and the spatial map with directions to your room gets downloaded to your glasses.
There are three parts to this puzzle – mapping, storage and delivery. Once all these become ubiquitous, not having access to this data or MR glasses will seem very odd indeed. Of course, selling data about private houses is not something that should be allowed without the owner’s explicit permission but making the data available to the owner would have tremendous value. There’s going to be a whole new type of specialist – the virtual interior designer. Unless you need to interact with something physically, why bother having the real object rather than a virtual version of it?
Of course there’s always the chance that some company gets the data and has some software that can detect if your floor plan has space for one of their products. In some kind of bizarre world the product could appear virtually in the space with a link to where you could buy it. This would be real/virtual product placement! What a ghastly prospect :-(.