Category Archives: Manifold

Using Unity and Manifold with Android devices to visualize sentient spaces

This may not look impressive to you (or my wife, as it turns out) but it has a lot of promise for the future. Following on from 3DView, there’s now an Android version called (shockingly) AndroidView that is essentially the same thing running on an Android phone. The screen capture above shows the current basic setup displaying sensor data. Since Unity is basically portable across platforms, the main challenge was integrating with Manifold to get the sensor data generated by ZeroSensors in an rt-ai Edge stream processing network.

I did actually have a Java implementation of a Manifold client from previous work – the challenge was integrating it with Unity. This meant building the client into an aar file and then using Unity’s AndroidJavaObject to pass data across the interface. Now that I understand how that works, it really is quite powerful and I was able to do everything needed for this application.

There are going to be more versions of the viewer. For example, in the works is rtXRView, which is designed to run on Windows MR headsets. The way I like to structure this is to have separate Unity projects for each target and then move common code via Unity’s package system. With a bit of discipline, this works quite well. The individual projects can then have any special libraries (such as MixedRealityToolkit), special cameras, input processing, etc. without getting too cute.

Once the basic platform work is done, it’s back to sorting out modeling of the sentient space and positioning of virtual objects within that space. Multi-user collaboration and persistent sentient space configuration are going to require a new Manifold app to be called SpaceServer. Manifold is ideal for coordinating real-time changes using its natural multicast capability. For Unity reasons, I may integrate a webserver into SpaceServer so that assets can be dynamically loaded using standard Unity functions. This supports the idea that a new user walking into a sentient space can download all necessary assets and configuration using a standard app. Still, that’s all a bit in the future.
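Just to sketch the webserver idea (SpaceServer doesn’t exist yet, so this is purely a sketch of the concept and the directory name and port are placeholders), something as simple as Python’s built-in HTTP server would be enough to hand pre-built Unity asset bundles to a client entering the space:

```python
# Hypothetical sketch only: serve a directory of pre-built Unity asset
# bundles over plain HTTP so a standard client app can fetch them on entry.
import functools
import http.server
import socketserver

ASSET_DIR = "assets"   # assumed directory of asset bundles and configuration
PORT = 8080            # arbitrary port for the sketch

handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory=ASSET_DIR)

with socketserver.TCPServer(("", PORT), handler) as server:
    print(f"Serving {ASSET_DIR} on port {PORT}")
    server.serve_forever()
```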

3DView: visualizing environmental data for sentient spaces

The 3DView app I mentioned in a previous post is moving forward nicely. The screen capture shows the app displaying real-time data from four ZeroSensors, with the data coming from an rt-ai Edge stream processing network via Manifold. The app creates a video window and sensor display panel for each physical device and then updates them whenever new messages are received from the ZeroSensor.
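The app itself does this in C# inside Unity, but the dispatch pattern is simple enough to sketch in a few lines of Python: each incoming message identifies its source device, which is used to look up (or create) the panel for that device and push in the new values. The field names below are assumptions, not the real message format.

```python
# Sketch of the per-device dispatch used by 3DView (field names are
# hypothetical; the real app does this in C# inside Unity).
import json

panels = {}  # deviceId -> latest sensor values shown on that device's panel

def on_manifold_message(json_part: bytes, binary_part: bytes) -> None:
    """Called whenever a new message arrives from the Manifold."""
    msg = json.loads(json_part)
    device_id = msg["deviceId"]           # assumed field identifying the ZeroSensor
    panel = panels.setdefault(device_id, {})
    panel.update(msg.get("sensors", {}))  # e.g. temperature, humidity, light
    # binary_part would carry the latest video frame for the video window
```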

This is the rt-ai Edge part of the design. All the blocks are synth modules to speed up design replication. The four ZeroManifoldSynth modules each contain two PutManifold stream processing elements (SPEs) to inject the video and sensor streams into the Manifold. The ZeroSynth modules contain the video and sensor capture SPEs. The ZeroManifoldSynth modules all run on the default node while the ZeroSynth modules run directly on the ZeroSensors themselves. As always with rt-ai Edge, deployment of new designs or design changes is a one-click action, making this kind of distributed system development much more pleasant.

The Unity graphics elements are basic as I take the standard programmer’s view: they can always be upgraded later by somebody with artistic talent, but the key is the underlying functionality. The next step is to hang these displays (and other much more interesting elements) on the walls of a 3D model of the sentient space. Ultimately the idea is that people can walk through the sentient space using AR headsets and see the elements persistently positioned within it. In addition, users of the sentient space will be able to instantiate and position elements themselves and also interact with them.

Even more interesting than that is the ability for the sentient space to autonomously instantiate elements in the space based on perceived user actions. This is really the goal of the sentient space concept – to have the sentient space work with the occupants in a natural way (apart from needing an AR headset of course!).

For the moment, I am going to develop this in VR rather than AR. The HoloLens is the only available AR device that can support the level of persistence required but I’d rather wait for the rumored HoloLens 2 or the Magic Leap One (assuming it has the required multi-room persistence capability).

On the road to sentient spaces: using Unity to visualize rt-ai Edge streams via Manifold

It’s only a step on the road to the ultimate goal of AR headset support for sentient spaces but it is a start at least. As mentioned in an earlier post, passing data from rt-ai Edge into Manifold allows any number of ad-hoc uses of real time and historic data. One of the intended uses is to support a number of AR headset-wearing occupants in a sentient space – the rt-ai Edge to Manifold connection makes this relatively straightforward. Almost every AR headset supports Unity so it seemed like a natural step to develop a Manifold connection for Unity apps. The result, an app called 3DView, is shown in the screen capture above. The simple scene consists of a couple of video walls displaying MJPEG video feeds captured from the rt-ai Edge network.
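Each video message is the usual Manifold combination of a JSON part and a binary part, with the binary part holding one JPEG frame. Decoding a frame is straightforward; the sketch below uses OpenCV from Python rather than Unity’s C# texture handling, and the metadata fields are assumed:

```python
# Decode one MJPEG frame from a Manifold video message (sketch; in the Unity
# app the JPEG bytes are loaded into a texture instead).
import json
import numpy as np
import cv2

def decode_frame(json_part: bytes, binary_part: bytes):
    meta = json.loads(json_part)                  # e.g. width/height/timestamp (assumed fields)
    jpeg = np.frombuffer(binary_part, dtype=np.uint8)
    frame = cv2.imdecode(jpeg, cv2.IMREAD_COLOR)  # BGR image array
    return meta, frame
```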

The test design (shown above) to generate the data is trivial but demonstrates that any rt-ai Edge stream can be piped out into the Manifold allowing access by appropriate Manifold apps. Although not yet fully implemented, Manifold apps will be able to feed data back into the rt-ai Edge design via a new SPE to be called GetManifold.

The next step for the 3DView Unity app is to provide visualization for all ZeroSensor streams, correctly positioned within a 3D model of the sentient space. Right now I am using a SpaceMouse to navigate within the space but ultimately this should work with any VR headset and an appropriate controller. AR headsets will use their spatial mapping capability to overlay visualizations on the real space so they won’t need a separate controller for navigation.

Adding a schemaless, timestamp searchable data store to rt-ai Edge using Manifold

The MQTT-based heart of rt-ai Edge is ideal for constructing stream processing networks (SPNs) that are intended to run continuously. rt-ai Edge tools (such as rtaiDesigner) make it easy to modify and re-deploy SPNs across multiple nodes during the design phase but, once in full-time operation, these SPNs just run by themselves. An existing stream processing element (SPE), PutNiFi, allows data from an rt-ai Edge network to be stored and processed by big data tools such as Elasticsearch. However, these types of big data tools aren’t always appropriate, especially when low-latency access is required, as Java garbage collection can cause unpredictable delays.

For many applications, much simpler but reliably low-latency storage is desirable. The Manifold system already has a storage app, ManifoldStore, that is optimized for timestamp-based searches of historical data. A new SPE called PutManifold allows data from an SPN to flow into a Manifold networking surface. The SPN screen capture above shows two instances of the PutManifold SPE used to transfer audio and video data out of the SPN. ManifoldStore grabs passing data and stores it using the timestamp as the key. Manifold applications can then access historical data flows using streamId/timestamp pairs. It is particularly simple to coordinate access across multiple data streams, which is very useful when trying to correlate events across multiple data sources at a particular point or window in time.
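The underlying idea is just to keep each stream’s records ordered by timestamp so that the same time window can be pulled from every stream at once. The sketch below shows that lookup pattern in plain Python; it is not the ManifoldStore API, just the principle it relies on:

```python
# Time-window lookup across multiple streams, keyed by timestamp (sketch of
# the pattern ManifoldStore supports, not its actual API).
import bisect

# store[streamId] = list of (timestamp, record) tuples kept sorted by timestamp
store = {"video0": [], "audio0": []}

def window(stream_id, t_start, t_end):
    """Return all records for a stream with t_start <= timestamp <= t_end."""
    records = store[stream_id]
    timestamps = [ts for ts, _ in records]
    lo = bisect.bisect_left(timestamps, t_start)
    hi = bisect.bisect_right(timestamps, t_end)
    return records[lo:hi]

def correlate(stream_ids, t_start, t_end):
    """Pull the same time window from several streams for comparison."""
    return {sid: window(sid, t_start, t_end) for sid in stream_ids}
```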

ManifoldStore is intrinsically schemaless in that it can store anything that consists of a JSON part and a binary data part, as used in rt-ai Edge. A new application called rtaiView is a universal viewer that allows multiple streams of all types to be displayed in a traditional split-screen monitoring format. It uses ManifoldStore for its underlying storage and provides a window into the operation of the SPN.

Manifold is designed to be very flexible, with various features that reduce configuration for ad-hoc uses. This makes it very easy to perform offline processing of stored data as and when required, which is ideal for offline machine learning applications.

Manifold: getting ready for the new <timestamp, object> store

Previously, Manifold did not directly deal with timestamps. Instead, rtndf included a timestamp in the JSON part of the JSON-plus-binary messages that are passed around over the Manifold. However, I am working on a <timestamp, object> store (i.e. a storage system that is indexed via a timestamp) and it makes sense to include this as a Manifold node since it is general purpose and not specific to rtndf over Manifold. Consequently, the latest version of Manifold has been modified to include a timestamp in the MANIFOLD_EHEAD data structure that is at the start of every Manifold message (it’s Manifold’s equivalent of the IP header in some ways).
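To make that concrete, the header now carries the timestamp as a binary double ahead of the message payload. The layout below is only a stand-in (the real MANIFOLD_EHEAD layout isn’t reproduced here); it just shows the timestamp travelling in the fixed header rather than in the JSON part:

```python
# Hypothetical packing of a Manifold message header carrying a double
# timestamp (the real MANIFOLD_EHEAD field layout is assumed, not known).
import struct
import time

# "<dII" = little-endian double timestamp, JSON length, binary length (assumed fields)
EHEAD_FORMAT = "<dII"

def pack_message(json_part: bytes, binary_part: bytes) -> bytes:
    header = struct.pack(EHEAD_FORMAT, time.time(),
                         len(json_part), len(binary_part))
    return header + json_part + binary_part
```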

This has had the knock-on effect of changing APIs, since the timestamp now has to be passed explicitly to Manifold library functions. This means that the C++ and Python core libraries needed small modifications. It’s always a pain to have to go back and tweak all of the Manifold and rtndf C++ and Python nodes but it was worth it in this case. The <timestamp, object> storage system needs to support very high throughputs for both writes and reads, so passing the timestamp around as a double in binary form makes a lot of sense. The idea is that all data flowing through the Manifold can be captured by the <timestamp, object> store which can then be accessed by other nodes at some other time. The store can be searched by absolute or nearest timestamp, allowing easy correlation of data across multiple sources.
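Searching by absolute or nearest timestamp then comes down to a binary search over the sorted keys. A minimal sketch of the nearest-match case, independent of the actual store implementation:

```python
# Nearest-timestamp lookup over a list of (timestamp, object) entries kept
# sorted by timestamp (sketch of the search the store needs to support).
import bisect

def nearest(entries, t):
    """Return the entry whose timestamp is closest to t."""
    timestamps = [ts for ts, _ in entries]
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return entries[0]
    if i == len(entries):
        return entries[-1]
    before, after = entries[i - 1], entries[i]
    return after if (after[0] - t) < (t - before[0]) else before
```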

The pyramid – an rtn data flow point of presence

The pyramid was originally put together for another project but has received a new lease of life as an rtn data flow point of presence. It uses a Logitech C920 webcam for video and audio and has powered speakers for text to speech or direct audio output. The top of the pyramid has an LED panel that indicates the current state of the pyramid:

  • Idle – waiting for wakeup phrase.
  • Listening – collecting input.
  • Processing – performing speech recognition and processing.
  • Speaking – indicates that the pyramid is generating sound.

The pyramid has a Raspberry Pi 2 internally along with a USB-connected Teensy 3.1 with an OctoWS2811 to run the LED panel. The powered speakers came out of some old Dell PC speakers and the case was 3D printed.
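The Raspberry Pi tells the Teensy which state to display over that USB serial link. The actual command protocol isn’t described here, so the sketch below simply assumes a single-byte state code sent with pyserial:

```python
# Hypothetical sketch of driving the pyramid's LED panel state over the
# Teensy's USB serial link (the command codes and port are assumptions).
import serial

STATE_CODES = {"idle": b"\x00", "listening": b"\x01",
               "processing": b"\x02", "speaking": b"\x03"}

def set_panel_state(state: str, port: str = "/dev/ttyACM0") -> None:
    with serial.Serial(port, 115200, timeout=1) as link:
        link.write(STATE_CODES[state])
```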

It runs these rtndf/Manifold nodes:

  • uvccam – generates a 1280 x 720 video stream at 30fps.
  • audio – generates a PCM audio stream suitable for speech recognition.
  • tts – converts text to speech for output through the powered speakers.
  • tty – a serial interface used to communicate with the Teensy 3.1.

Speech recognition is performed by the speechdecode node running on a server, as are object recognition (recognize), motion detection (modet) and face recognition (facerec).

The old project had an intelligent agent that took the output of the various stream processors and generated the messages to control the pyramid. This has yet to be moved over to rtndf.

Containerization of Manifold and rtndf (almost) complete

I’ve certainly been learning a fair bit about Docker lately. I didn’t realize that it is reasonably easy to containerize GUI nodes as well as console mode nodes, so now rtnDocker contains scripts to build and run almost every rtndf and Manifold node. There are only a few that haven’t been successfully moved yet. imuview, which is an OpenGL node to view data from IMUs, doesn’t work for some reason. The audio capture node (audio) and the audio part of avview (the video and audio viewer node) also don’t work as there’s something wrong with mapping the audio devices. It’s still possible to run these outside of a container so it isn’t the end of the world but it is definitely a TODO.

Settings files for relevant containerized nodes are persisted at the same locations as the un-containerized versions making it very easy to switch between the two.

rtnDocker has an all script that builds all of the containers locally. These include:

  • manifoldcore. This is the base Manifold core built on Ubuntu 16.04.
  • manifoldcoretf. This uses the TensorFlow container as the base instead of raw Ubuntu.
  • manifoldcoretfgpu. This uses the TensorFlow GPU-enabled container as the base.
  • manifoldnexus. This is the core node that constructs the Manifold.
  • manifoldmanager. A management tool for Manifold nodes.
  • rtndfcore. The core rtn data flow container built on manifoldcore.
  • rtndfcoretf. The core rtn data flow container built on manifoldcoretf.
  • rtndfcoretfgpu. The core rtn data flow container built on manifoldcoretfgpu.
  • rtndfcoretfcv2. The core rtn data flow container built on rtndfcoretf and adding OpenCV V3.0.0.
  • rtndfcoretfgpucv2. The core rtn data flow container built on rtndfcoretfgpu and adding OpenCV V3.0.0.

The last two are good bases to use for anything combining machine learning and image processing in an rtn data flow PPE. The OpenCV build instructions were based on the very helpful example here. For example, the recognize PPE node, an encapsulation of Inception-v3, is based on rtndfcoretfgpucv2. The easiest way to build these is to use the scripts in the rtnDocker repo.