Manifold: getting ready for the new <timestamp, object> store

Previously, Manifold did not directly deal with timestamps. Instead, rtndf included a timestamp in the JSON part of the JSON-plus-binary messages that are passed around over the Manifold. However, I am working on a <timestamp, object> store (i.e. a storage system that is indexed by timestamp) and it makes sense to implement this as a Manifold node since it is general purpose and not specific to rtndf over Manifold. Consequently, the latest version of Manifold has been modified to include a timestamp in the MANIFOLD_EHEAD data structure at the start of every Manifold message (it’s Manifold’s equivalent of the IP header in some ways).
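As a rough illustration only (the field layout here is hypothetical, not the actual MANIFOLD_EHEAD definition), carrying the timestamp as a binary double at the front of every message might look like this:

```python
import struct
import time

# Hypothetical header layout: little-endian double timestamp followed by a
# 4-byte payload length. The real MANIFOLD_EHEAD has other fields as well.
EHEAD_FORMAT = "<dI"
EHEAD_SIZE = struct.calcsize(EHEAD_FORMAT)  # 12 bytes

def pack_message(payload, timestamp=None):
    """Prepend a binary header containing the timestamp to the payload."""
    if timestamp is None:
        timestamp = time.time()
    return struct.pack(EHEAD_FORMAT, timestamp, len(payload)) + payload

def unpack_message(message):
    """Recover the (timestamp, payload) pair from a packed message."""
    timestamp, length = struct.unpack_from(EHEAD_FORMAT, message)
    return timestamp, message[EHEAD_SIZE:EHEAD_SIZE + length]
```

Because the timestamp travels as a fixed-offset double rather than inside the JSON, a node (or the store) can read it without parsing the message body at all.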

This has had a knock-on effect of changing APIs since the timestamp now has to be passed explicitly to Manifold library functions. This means that the C++ and Python core libraries needed small modifications. It’s always a pain to have to go back and tweak all of the Manifold and rtndf C++ and Python nodes but it was worth it in this case. The <timestamp, object> storage system needs to support very high throughputs for both writes and reads, so passing the timestamp around as a double in binary form makes a lot of sense. The idea is that all data flowing through the Manifold can be captured by the <timestamp, object> store, which can then be accessed by other nodes at some other time. The store can be searched by absolute or nearest timestamp, allowing easy correlation of data across multiple sources.
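On the lookup side, nearest-timestamp search over a time-ordered index reduces to a binary search. This is a simplified in-memory stand-in, not the store’s actual index implementation:

```python
import bisect

def nearest(timestamps, target):
    """Return the index of the stored timestamp closest to target.

    timestamps must be sorted ascending, as a time-ordered store's
    index naturally is.
    """
    if not timestamps:
        raise ValueError("empty index")
    i = bisect.bisect_left(timestamps, target)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbour is closer to the target.
    return i if timestamps[i] - target < target - timestamps[i - 1] else i - 1
```

Absolute-timestamp lookup is the special case where the returned entry’s timestamp equals the target exactly; nearest lookup is what makes cross-source correlation easy when streams are not sampled at identical instants.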

Mysterious black area – Schmorl’s Node (SN)?

The instant radiographers chez richardstechnotes have decided that the mysterious black area on my recent MRI is in fact a Schmorl’s node. Training by Dr Wikipedia suggested that not very much is dark in both T1 and T2 MRI modes, which supports the SN diagnosis and rules out all kinds of nasties. It may or may not be a factor in anything, according to this paper, since most SNs are asymptomatic. We’ll have to see what the professionals say…

MRIs over 9 years

I have a recurrent back problem (sigh) so I have had three MRIs over the last 9 years. The one on the top left was from 2007, the one on the top right was from 2011 and the one to the left was from today. There definitely seems to be a great improvement in resolution between the three (these are just sample images from the usual slice sequences). My intervertebral discs seem to have gone in the opposite direction, however, and there seems to be something new (or at least enhanced) – that black circle on the L3 (?) vertebra in the 2016 image. Oh well. Not sure if it is a blessing or a curse that they provide a disk with viewing software so that any idiot (i.e. me) can look at the images without professional interpretation. There’s definitely some sort of parallel here to the handling of unfiltered genetic information…


The pyramid – an rtn data flow point of presence

The pyramid was originally put together for another project but has received a new lease of life as an rtn data flow point of presence. It uses a Logitech C920 webcam for video and audio and has powered speakers for text to speech or direct audio output. The top of the pyramid has an LED panel that indicates the current state of the pyramid:

  • Idle – waiting for wakeup phrase.
  • Listening – collecting input.
  • Processing – performing speech recognition and processing.
  • Speaking – indicates that the pyramid is generating sound.

The pyramid has a Raspberry Pi 2 internally along with a USB-connected Teensy 3.1 with an OctoWS2811 to run the LED panel. The powered speakers came out of some old Dell PC speakers and the case was 3D printed.
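The actual command protocol between the Pi and the Teensy isn’t described here, but a minimal sketch of the Pi side might map each of the four states above to a byte and push it over the Teensy’s USB serial link (which is what the tty node is for). The single-byte encoding below is purely hypothetical:

```python
# Hypothetical single-byte command protocol for the Teensy LED panel;
# the real tty-node protocol may differ.
STATE_COMMANDS = {
    "idle": b"I",        # waiting for wakeup phrase
    "listening": b"L",   # collecting input
    "processing": b"P",  # speech recognition and processing
    "speaking": b"S",    # generating sound
}

def led_command(state):
    """Map a pyramid state name to the byte sent to the Teensy."""
    try:
        return STATE_COMMANDS[state]
    except KeyError:
        raise ValueError("unknown state: %s" % state)

# Sending it would then use pyserial over the Teensy's USB serial port, e.g.:
#   import serial
#   with serial.Serial("/dev/ttyACM0", 115200) as port:
#       port.write(led_command("listening"))
```

The Teensy/OctoWS2811 side would just switch LED patterns on the received byte, keeping all the animation logic off the Pi.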

It runs these rtndf/Manifold nodes:

  • uvccam – generates a 1280 x 720 video stream at 30fps.
  • audio – generates a PCM audio stream suitable for speech recognition.
  • tts – text to speech node to convert text to speech.
  • tty – a serial interface used to communicate with the Teensy 3.1.

Speech recognition is performed by the speechdecode node that runs on a server, as is object recognition (recognize), motion detection (modet) and face recognition (facerec).

The old project had an intelligent agent that took the output of the various stream processors and generated the messages to control the pyramid. This has yet to be moved over to rtndf.

Containerizing of Manifold and rtndf (almost) complete

I’ve certainly been learning a fair bit about Docker lately. I didn’t realize that it is reasonably easy to containerize GUI nodes as well as console mode nodes, so rtnDocker now contains scripts to build and run almost every rtndf and Manifold node. There are only a few that haven’t been successfully moved yet. imuview, which is an OpenGL node to view data from IMUs, doesn’t work for some reason. The audio capture node (audio) and the audio part of avview (the video and audio viewer node) also don’t work as there’s something wrong with mapping the audio devices. It’s still possible to run these outside of a container so it isn’t the end of the world, but it is definitely a TODO.

Settings files for relevant containerized nodes are persisted at the same locations as the un-containerized versions making it very easy to switch between the two.

rtnDocker has an all script that builds all of the containers locally. Alternatively, Docker Hub has prebuilt images for many of the components, which saves time when pulling is easier than building locally. These include:

  • manifoldcore. This is the base Manifold core built on Ubuntu 16.04.
  • manifoldcoretf. This uses the TensorFlow container as the base instead of raw Ubuntu.
  • manifoldcoretfgpu. This uses the TensorFlow GPU-enabled container as the base.
  • manifoldnexus. This is the core node that constructs the Manifold.
  • manifoldmanager. A management tool for Manifold nodes.
  • rtndfcore. The core rtn data flow container built on manifoldcore.
  • rtndfcoretf. The core rtn data flow container built on manifoldcoretf.
  • rtndfcoretfgpu. The core rtn data flow container built on manifoldcoretfgpu.

There are also a couple of rtndf core versions that can only be built locally: they are so big, and take so long to build, that they are not really suited to Docker Hub:

  • rtndfcoretfcv2. The core rtn data flow container built on rtndfcoretf and adding OpenCV V3.0.0.
  • rtndfcoretfgpucv2. The core rtn data flow container built on rtndfcoretfgpu and adding OpenCV V3.0.0.

The last two are good bases to use for anything combining machine learning and image processing in an rtn data flow PPE. The OpenCV build instructions were based on the very helpful example here. For example, the recognize PPE node, an encapsulation of Inception-v3, is based on rtndfcoretfgpucv2. The easiest way to build these is to use the scripts in the rtnDocker repo.

manifoldcore and rtndfcore on Docker Hub

Decided in the end that it made a lot of sense to have the core Manifold and rtndf builds on Docker Hub so there are now versions of manifoldcore and rtndfcore in standard, TensorFlow and TensorFlow GPU builds, along with ManifoldNexus.

It’s unfortunate to have three builds but it makes life a bit easier in some ways. The TensorFlow containers already include Ubuntu 14.04 so it’s easy to use that as a base and add Manifold and rtndf. Plus the GPU build is 1GB so it’s nice to have the much smaller standard build for nodes that don’t need TensorFlow or TensorFlow with GPU.

So installing and running ManifoldNexus is as easy as:

docker pull rtndocker/manifoldnexus
docker run --net=host --name manifoldnexus -v /root/.config/Manifold rtndocker/manifoldnexus

The “-v” option makes that directory a Docker volume so that the configuration ManifoldNexus stores there persists across restarts of the container. Configuration changes can be made using ManifoldManager.
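If you want to see where Docker actually put that configuration data, standard Docker commands will show the volume it created for the container (the container name matches the docker run line above):

```shell
# Show the host path backing each mount in the manifoldnexus container
docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ end }}' manifoldnexus

# List all volumes, including the anonymous one backing /root/.config/Manifold
docker volume ls
```

Note that a bare "-v /path" creates an anonymous volume tied to that container, so the settings survive stopping and starting the named container but not removing it and its volume.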