
Containerization of Manifold and rtndf (almost) complete

I’ve certainly been learning a fair bit about Docker lately. I didn’t realize that it is reasonably easy to containerize GUI nodes as well as console mode nodes, so rtnDocker now contains scripts to build and run almost every rtndf and Manifold node. Only a few haven’t been successfully moved yet. imuview, an OpenGL node for viewing data from IMUs, doesn’t work for some reason. The audio capture node (audio) and the audio part of avview (the video and audio viewer node) also don’t work, as there’s something wrong with mapping the audio devices. It’s still possible to run these outside of a container, so it isn’t the end of the world, but it is definitely a TODO.

Settings files for the relevant containerized nodes are persisted at the same locations as for the un-containerized versions, making it very easy to switch between the two.

rtnDocker has a script called all that builds all of the containers locally. These include:

  • manifoldcore. This is the base Manifold core built on Ubuntu 16.04.
  • manifoldcoretf. This uses the TensorFlow container as the base instead of raw Ubuntu.
  • manifoldcoretfgpu. This uses the TensorFlow GPU-enabled container as the base.
  • manifoldnexus. This is the core node that constructs the Manifold.
  • manifoldmanager. A management tool for Manifold nodes.
  • rtndfcore. The core rtn data flow container built on manifoldcore.
  • rtndfcoretf. The core rtn data flow container built on manifoldcoretf.
  • rtndfcoretfgpu. The core rtn data flow container built on manifoldcoretfgpu.
  • rtndfcoretfcv2. The core rtn data flow container built on rtndfcoretf and adding OpenCV V3.0.0.
  • rtndfcoretfgpucv2. The core rtn data flow container built on rtndfcoretfgpu and adding OpenCV V3.0.0.

The last two are good bases to use for anything combining machine learning and image processing in an rtn data flow pipeline processing element (PPE). The OpenCV build instructions were based on the very helpful example here. For example, the recognize PPE node, an encapsulation of Inception-v3, is based on rtndfcoretfgpucv2. The easiest way to build these is to use the scripts in the rtnDocker repo.

Motion detection pipeline processor using Python and OpenCV

[Screenshot: MotionDetect]

I found this interesting tutorial describing ways to use OpenCV to implement motion detection. I thought that this might form the basis of a nice pipeline processing element for rtnDataFlow. Pipeline processing elements receive a stream from an MQTT topic, process it in some way and then output the modified stream on a new MQTT topic, usually in the same form but with appropriate changes. The new script is called modet.py and it takes a JPEG-over-MQTT video stream and performs motion detection using OpenCV’s BackgroundSubtractorMOG2. The output stream consists of the input frames annotated with boxes around the objects in motion in each frame. The screenshot shows an example. The small box is actually where the code has detected a moving screen saver on the monitor.
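For anyone curious about the general shape of a pipeline processing element, here’s a minimal sketch of the receive/process/publish pattern using paho-mqtt and OpenCV. It is not modet.py itself: the topic names, the broker address and the assumption that the MQTT payload is a bare JPEG are all my own, and the real rtndf stream format carries extra framing that the rtndf libraries deal with.

    import cv2
    import numpy as np
    import paho.mqtt.client as mqtt

    IN_TOPIC = "rtndf/camera"   # hypothetical input topic name
    OUT_TOPIC = "rtndf/modet"   # hypothetical output topic name

    def process(frame):
        # placeholder for the per-frame processing step (motion detection,
        # image processing etc.) - returns the annotated frame
        return frame

    def on_message(client, userdata, msg):
        # assumes the payload is a bare JPEG; the real rtndf format adds metadata
        data = np.frombuffer(msg.payload, dtype=np.uint8)
        frame = cv2.imdecode(data, cv2.IMREAD_COLOR)
        if frame is None:
            return
        frame = process(frame)
        ok, jpg = cv2.imencode(".jpg", frame)
        if ok:
            client.publish(OUT_TOPIC, jpg.tobytes())

    client = mqtt.Client()              # paho-mqtt 1.x-style client setup
    client.on_message = on_message
    client.connect("localhost", 1883)   # broker address is an assumption
    client.subscribe(IN_TOPIC)
    client.loop_forever()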

It can be tricky to get stable, large boxes rather than a whole bunch of smaller ones that percolate around. The code contains seven tunable parameters that can be modified as required – comments are in the code. Some will be dependent on frame size, some on frame rate. I tuned these parameters for 1280 x 720 frames at 30 frames per second, the default for the uvccam script.
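As an illustration of where those tunables live, here is a minimal MOG2-based detector of the kind the tutorial describes. The parameter names and values below are my own guesses for 1280 x 720 at 30 frames per second, not the actual seven parameters in modet.py, which are documented in the script’s comments.

    import cv2

    # Tunable parameters (illustrative values, not the ones from modet.py)
    HISTORY = 300           # frames of background history - scales with frame rate
    VAR_THRESHOLD = 25      # MOG2 sensitivity to changes
    BLUR_KERNEL = (21, 21)  # pre-blur to suppress pixel noise - scales with frame size
    DILATE_ITERATIONS = 8   # grow foreground blobs so nearby boxes merge
    MIN_AREA = 2000         # ignore contours smaller than this (pixels)

    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=HISTORY, varThreshold=VAR_THRESHOLD, detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    def annotate(frame):
        # blur, subtract the background, clean up the mask and box the blobs
        blurred = cv2.GaussianBlur(frame, BLUR_KERNEL, 0)
        mask = subtractor.apply(blurred)
        # drop shadow pixels (value 127) and keep only solid foreground
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, kernel, iterations=DILATE_ITERATIONS)
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        for c in contours:
            if cv2.contourArea(c) < MIN_AREA:
                continue
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        return frame

Dropping something like annotate() into the process() hook of the earlier sketch gives a rough approximation of what modet does: a bigger blur kernel and more dilation tend to produce fewer, larger boxes at the cost of precision.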

The pipeline I was using for this test looked like this:

uvccam -> modet -> avview

I also tried it with the imageproc pipeline processor just for fun:

uvccam -> imageproc -> modet -> avview

This actually works pretty well too.