ZenFone AR – Tango and Daydream together

The ZenFone AR is a potentially very interesting device, combining Tango for spatial mapping and Daydream capability for VR headset use in one package. This is a step up from the older Phab 2 Pro Tango phone in that it can also be used with Daydream (and it looks like a neater package). Adding Tango to Daydream means that inside-out spatial tracking becomes possible in a completely untethered VR device. It should be a step up from ARKit in its current form, which, as I understand it, relies on just inertial and VSLAM tracking. Still, the ability for ARKit to be used with existing devices is a massive advantage.

Maybe in the end the XR market will divide up into those applications that don’t need tight spatial locking (where standard devices can be used) and those that do require tight spatial locking (demanding some form of inside-out tracking).

Using the HoloLens to aid back surgery

Fascinating video of a HoloLens being used in a real back surgery – presumably the video was mostly shot using Spectator View or something similar. I have seen other systems where mocap-type technology is used to get more precision in the pose of the HoloLens, but this system doesn't seem to do that. Not that I am a surgeon, but I doubt that the HoloLens can replace the usual fluoroscope, since that gives real-time feedback on the location of things like needles with respect to the body (yes, I have been on the literal sharp end of this!). However, if the spatial stability of the hologram is good enough, I am sure that it greatly helps with visualization.

As one of the many people with dodgy backs, I am always interested in anything that can improve outcomes and minimize risk and side-effects. If the HoloLens can do that – brilliant!

Raspberry Pi based outdoor camera

It was time to replace one of my old outdoor Panasonic network cameras. They get damaged by the sun – the plastic bubble over the lens gets really nasty and the video ends up looking like there is perma-fog outside.

I have a few of these wide-angle webcams around for other projects and it seemed like they might be ideal for this purpose. Since they are not weatherproof, I needed a suitable housing. This one from Monoprice was the smallest that I could find. It's still ludicrously large for this, but never mind. I decided just to put the webcam in the housing (attached with hot glue – my favorite engineering material) and pass the USB cable through to the inside of the building to connect to a Raspberry Pi. This eliminates the need to ever open the housing again and reduces the number of wires from two to one (power and Ethernet versus USB).

The result is not bad at all considering the cost and it certainly looks like a serious piece of equipment!

The image quality in the corners isn't spectacular, but it at least gives good coverage. The housing does intrude at the top left and right, but that's not a major problem.

Yes, there are all kinds of outdoor cameras that you can buy at very reasonable cost. However, I certainly don’t want anything that relies on or involves a cloud service – I want my video data to remain on site at all times. Plus, this is a fully open system so that I can do whatever I want with the data without having to battle proprietary SDKs (even if they are available). For example, I do custom motion detection and multiple resolution stream generation in the camera itself which fits nicely with all the other components of my system.
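To give a concrete flavor of that in-camera processing, here is a minimal motion-detection sketch using frame differencing with OpenCV. It is purely illustrative: the camera index, thresholds and resolutions are assumptions, not the actual code running on my Pi.

    # Minimal sketch of in-camera motion detection by frame differencing.
    # Assumptions: OpenCV installed (pip install opencv-python), webcam at
    # index 0, thresholds picked arbitrarily for illustration.
    import cv2

    cap = cv2.VideoCapture(0)      # the USB webcam on the Pi
    prev_gray = None
    PIXEL_THRESHOLD = 25           # per-pixel difference to count as "changed"
    MIN_CHANGED_PIXELS = 500       # changed pixels needed to flag motion

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Derive a second, lower-resolution stream from the same frame
        small = cv2.resize(frame, (640, 360))

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)    # suppress sensor noise
        if prev_gray is not None:
            diff = cv2.absdiff(prev_gray, gray)
            _, mask = cv2.threshold(diff, PIXEL_THRESHOLD, 255,
                                    cv2.THRESH_BINARY)
            if cv2.countNonZero(mask) > MIN_CHANGED_PIXELS:
                print("motion detected")
        prev_gray = gray

    cap.release()

In a real system, the motion events and the extra low-resolution stream would feed whatever recording and streaming components sit downstream.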

Mixed Reality and the missing fourth dimension

The screen capture above is a scene from a HoloLens via mixed reality capture (MRC), showing four virtual rings with different levels of brightness. The top left is 100% red, the bottom right is black, and the other two are at intermediate levels of brightness.

The photograph above was shot through a HoloLens and is a reasonable representation of what the wearer actually sees. Unsurprisingly, since all see-through MR headsets work by overlaying light on the real scene, the black ring has vanished and the intermediate-brightness rings have become transparent to a degree that depends on their brightness relative to the real-world scene.

This is a considerable obstacle for inserting realistic virtual objects into the real world – if they are dark, they will be almost transparent. And while indoors it is possible to control ambient lighting, the same is certainly not true outdoors.

What is needed is not just support for RGB but RGBA where A is the fourth dimension of color in this case. The A (alpha) value specifies the required transparency. The Unity app running on the HoloLens does of course understand transparency and can generate the required data but the HoloLens has no way to enforce it. One way to do this would be to supplement the display with an LCD that acts as a controllable matte. The LCD controls the extent to which the real world is visible at each display pixel while the existing display controls the color and intensity of the virtual object. No doubt there are significant challenges to implementation but this may be the only way to make see-through MR headsets work properly outdoors.
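A toy numerical model makes the problem concrete. The sketch below (with made-up linear RGB values) compares what a purely additive see-through display produces with the alpha compositing that an LCD matte would allow:

    # Toy model of see-through MR compositing (linear RGB values, 0..1).
    # Assumption: today's headsets can only ADD light to the real scene.

    def additive(real, virtual):
        """What current see-through displays do: light is only added."""
        return tuple(min(r + v, 1.0) for r, v in zip(real, virtual))

    def alpha_composite(real, virtual, alpha):
        """What an LCD matte would allow: attenuate the real scene first."""
        return tuple(min((1.0 - alpha) * r + alpha * v, 1.0)
                     for r, v in zip(real, virtual))

    real_scene = (0.8, 0.8, 0.8)   # bright outdoor background
    black_ring = (0.0, 0.0, 0.0)   # the virtual black object

    print(additive(real_scene, black_ring))              # (0.8, 0.8, 0.8): invisible
    print(alpha_composite(real_scene, black_ring, 1.0))  # (0.0, 0.0, 0.0): opaque black

With the matte, an alpha of 1 blacks out the corresponding real-world pixels entirely, which is exactly what a purely additive display cannot do.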

Resources for learning about quantum computing

It’s not too hard to understand the basics of quantum computing in qualitative, overview terms but I have decided that a fun project would be to actually understand the practicalities of quantum computers and the algorithms that run on them.

There don't seem to be any interesting online courses around at the moment, so I have been working through this book, which does indeed provide a nice progressive path through the subject. In addition, I came across this course, with complete course notes, which also seems to provide a pretty good explanation of things.

There's really no need to understand everything about quantum mechanics – just the few key principles that are relevant. It's a bit like not needing to understand exactly how electrons and holes flow in a transistor in order to put together logic gates or write code executed by a processor. Follow the rules and everything should just work.

My goal is to be able to do something interesting with one of the various quantum computer simulators that are available – there is a list of them here. An especially interesting example is the Quantum Computing Playground, which uses WebGL to provide a GPU-enhanced simulation.
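To give a flavor of what these simulators do under the hood, here is a minimal single-qubit state-vector sketch using NumPy. It is a toy of my own, not the API of any of the simulators mentioned above.

    # Minimal single-qubit state-vector simulation with NumPy.
    import numpy as np

    # |0> as a state vector
    state = np.array([1.0, 0.0], dtype=complex)

    # Hadamard gate: puts |0> into an equal superposition of |0> and |1>
    H = np.array([[1, 1],
                  [1, -1]], dtype=complex) / np.sqrt(2)

    state = H @ state
    probabilities = np.abs(state) ** 2   # Born rule: |amplitude|^2
    print(probabilities)                 # [0.5 0.5] - equal chance of 0 or 1

    # Simulate a measurement
    outcome = np.random.choice([0, 1], p=probabilities)
    print("measured:", outcome)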


Running Ubuntu 16.04 LTS on Windows 10 desktop

I was intrigued to discover that the Windows Store has an Ubuntu app and decided to try it out to see how functional it is. Very, as it turns out. It took a while to get started, as I had to set up the PC for the Insider Preview Fast Ring to get the required Windows build level. Once that was done, it was all pretty straightforward.

The app is a command-line version of Ubuntu. I tried installing ubuntu-desktop but that didn't really work. However, lots of other things did work, including network servers (which need to listen on ports for connections) and the like. The Windows 10 desktop's drives are visible at /mnt/x, where x is the (lower-case) Windows drive letter. The app runs .bashrc and .profile scripts at startup, so it is easy to get things to run automatically. As you can see from the screen capture, it has access to all of the CPU resources, unlike a virtual machine where you can partition the cores (and RAM, for that matter) – I was able to get all 12 cores running compilations simultaneously. In most cases, though, the apparent inability to control the number of cores used by the Ubuntu app is probably unimportant (I didn't find any way of changing settings).
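One quick way to confirm that the app really does see every core is to saturate them from Python. This is just an illustrative load test, not how the compilations were run:

    # Spin up one busy worker per visible CPU core to confirm the Ubuntu app
    # is not partitioned like a VM. Watch the load in Task Manager or htop.
    import multiprocessing as mp

    def burn(_):
        x = 0
        for i in range(50_000_000):   # pure CPU work
            x += i
        return x

    if __name__ == "__main__":
        cores = mp.cpu_count()
        print(f"visible cores: {cores}")
        with mp.Pool(cores) as pool:
            pool.map(burn, range(cores))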

The Ubuntu app appears on the same network adapter as the Windows desktop, so they share an IP address and port space. I was unable to ssh into the app, for example. However, I was able to run webservers on non-standard ports and that worked just fine.
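For instance, Python's built-in web server runs happily on a non-standard port (8080 below is an arbitrary choice), and is then reachable from the Windows side at http://localhost:8080/:

    # Minimal web server on a non-standard port inside the Ubuntu app.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    PORT = 8080  # arbitrary non-standard port
    print(f"serving current directory on port {PORT}")
    HTTPServer(("", PORT), SimpleHTTPRequestHandler).serve_forever()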

There is no /dev directory and therefore no way to access USB devices such as webcams. USB flash drives do not appear either – only the internal drives are visible to the Ubuntu app as far as I can tell.

All in all, there are some caveats, but it is a very useful app for situations where it is important to be able to run Linux programs on a Windows machine without the overhead of a virtual machine.


The trouble with temperature sensors

Working with the Bosch XDK reminded me that temperature sensing seems like such an obvious concept, but it is actually very tough to do while getting correct results. The prototype above was something I tried to build at a startup a few years ago, back when this kind of thing was all the rage. It combined motion sensing with the usual environmental sensors, including air quality, and could have a webcam attached if you wanted video coverage of the space as well.

In this photo of the interior you can see my attempt at getting reasonable results from the temperature sensor by keeping the power and ground planes away from the sensor – the small black chip on the right of the photo. The trouble is, the PCB's FR-4 still conducts heat, as do the remaining copper traces to the chip. Various other attempts followed, including cutting a slot through the FR-4 and isolating the air above the rest of the circuit board from the sensor. This is an example:

SensorBoard1.jpeg

And this is a thermistor design (with some additional wireless hardware):

SensorBoard2.jpeg

In the end, the only solution was to use a thermistor attached by wires so that it could be kept some distance from the main circuitry, or to have all of the very low-power sensors completely removed from the processor.

The Raspberry Pi Sense HAT suffers from this problem, as it sits right above the Pi's processor, and so does the Bosch XDK itself. I am not aware of any other really good solution apart from using a cable to separate the sensor board completely from the processor controlling it (which might work for the Sense HAT, although I have not tried that).
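For reference, turning a thermistor reading into a temperature is straightforward with the beta-parameter model. The part values below (a 10k NTC thermistor with a beta of 3950 in a divider with a 10k fixed resistor) are typical datasheet numbers, not the ones from my design:

    # Convert an NTC thermistor reading to temperature (beta-parameter model).
    # Assumed parts: 10k thermistor (R0) with beta = 3950, in a divider with
    # a 10k fixed resistor - typical values, not the ones in my design.
    import math

    R0 = 10_000.0        # thermistor resistance at T0 (ohms)
    T0 = 298.15          # 25 C in kelvin
    BETA = 3950.0        # thermistor beta constant (from the datasheet)
    R_FIXED = 10_000.0   # fixed divider resistor (ohms)

    def adc_to_celsius(adc_value, adc_max=1023):
        """adc_value: raw reading across the thermistor in a divider to Vcc."""
        ratio = adc_value / adc_max                # fraction of Vcc
        r_therm = R_FIXED * ratio / (1.0 - ratio)  # solve the divider for R
        inv_t = 1.0 / T0 + math.log(r_therm / R0) / BETA
        return 1.0 / inv_t - 273.15

    print(adc_to_celsius(512))   # roughly 25 C at mid-scale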