Category Archives: Sensing

How rt-ai Edge will enable Sentient Spaces

The idea of creating spaces that understand the needs of the people moving within them – Sentient Spaces – has been a long-term personal goal. Our ability today to generate sensor data (video, audio, environmental, etc.) is incredible. Our ability to make practical use of this enormous body of data is minimal. The question is: how can ubiquitous sensing in a space be harnessed to make the space more functional for the people within it?

rt-ai Edge could be the basis of an answer to this question. It is designed to receive large volumes of multi-sensor data, extract meaningful information and then take control actions as necessary. This closes the local loop without requiring external cloud server interaction. This is important because creating a space with ubiquitous sensing raises all kinds of privacy issues. Because rt-ai Edge keeps all raw data (such as video and audio) within the space, privacy is much less of a concern.
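To make the shape of that local loop concrete, here is a minimal Python sketch. Every name in it (read_sensors, extract_events and so on) is a hypothetical placeholder rather than the actual rt-ai Edge API; the point is simply that raw data stays inside the space and only derived events drive actions.

```python
# A minimal sketch of the closed local loop described above. All names
# here are hypothetical placeholders, not the rt-ai Edge API.

def edge_loop(read_sensors, extract_events, decide_actions, apply_action):
    """One pass of the sense -> extract -> act loop, entirely local."""
    raw = read_sensors()          # video frames, audio buffers, env readings
    events = extract_events(raw)  # meaningful information only
    for action in decide_actions(events):
        apply_action(action)      # e.g. switch a light, raise an alert
    # raw is discarded here - no raw data ever leaves the space

# Trivial wiring to show the shape of the loop:
edge_loop(
    read_sensors=lambda: {"motion": True, "lux": 2},
    extract_events=lambda raw: ["person_in_dark"] if raw["motion"] and raw["lux"] < 5 else [],
    decide_actions=lambda events: ["lights_on_dim"] if "person_in_dark" in events else [],
    apply_action=print,
)
```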

I believe that a key to making a space sentient is to harness artificial intelligence concepts such as online learning of event sequences and anomaly detection. It is not practical for anyone to sit down and program a system to correctly recognize normal behavior in a space and what actions might be helpful as a result. Instead, the system needs to learn what is normal and develop strategies that might be helpful. Reinforcement via user feedback can be used to refine responses.

A trivial example would be someone moving through a dark space at night. It might be helpful to provide light at a suitable intensity to safely help the person navigate the space. The system could deduce this by having observed other people moving through the space, turning lights on and off as they go. Meanwhile, face recognition could be employed to see if the person is known to the space and, if not, an assessment could be made as to whether an alert needs to be generated. Finally, a video record of the person moving through the space could be assembled from clips from all relevant cameras and stored (on-site) for a time in case it proves useful.

Well, that’s a trivial example to describe but not at all trivial to implement. However, my goal is to see if AI techniques can be used to approach this level of functionality. In practical terms, this means developing a series of rt-ai modules using TensorFlow to perform feature extraction, anomaly detection and sequence prediction, glued together with sensor and control modules to form a complete system that requires minimal supervised training to perform useful functions.
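As a rough illustration of the anomaly-detection piece, here is a minimal TensorFlow sketch: a tiny autoencoder trained on feature vectors extracted from "normal" activity, flagging inputs that reconstruct poorly. The feature dimension, network size and threshold are all assumptions for illustration, not the design of the actual rt-ai modules.

```python
# Sketch: autoencoder-based anomaly detection over feature vectors.
# Sizes and threshold are illustrative assumptions only.
import numpy as np
import tensorflow as tf

FEATURE_DIM = 32  # assumed size of an extracted feature vector

def build_autoencoder():
    # Compress features to a small code and reconstruct; high
    # reconstruction error suggests a previously unseen pattern.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(FEATURE_DIM,)),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(FEATURE_DIM),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_autoencoder()
normal = np.random.rand(1000, FEATURE_DIM).astype("float32")  # stand-in data
model.fit(normal, normal, epochs=5, batch_size=32, verbose=0)

def is_anomalous(features, threshold=0.1):
    # In practice the threshold would be set from the reconstruction
    # error distribution observed on normal data.
    recon = model.predict(features[np.newaxis, :], verbose=0)
    return float(np.mean((recon - features) ** 2)) > threshold
```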


The trouble with temperature sensors

Working with the Bosch XDK reminded me that temperature sensing seems like such an obvious concept, but it is actually very tough to do well and get correct results. The prototype above was something I tried to build in a startup a few years ago, back when this kind of thing was all the rage. It combined motion sensing with the usual environmental sensors, including air quality, and could have a webcam attached if you wanted video coverage of the space as well.

In this photo of the interior you can see my attempt at getting reasonable results from the temperature sensor by keeping the power and ground planes away from the sensor – the small black chip on the right of the photo. The trouble is, the PCB’s FR-4 still conducts heat, as do the remaining copper traces to the chip. Various other attempts followed, including cutting a slot through the FR-4 and isolating the air above the rest of the circuit board from the sensor. This is an example:

SensorBoard1.jpeg

And this is a thermistor design (with some additional wireless hardware):


In the end, the only solution was to use a thermistor attached by wires so that it could be kept some distance from the main circuitry, or to remove all of the very low-power sensors from the processor board entirely.
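For reference, converting a raw ADC reading from a wired-off thermistor into a temperature is straightforward with the standard beta-parameter equation. This sketch assumes a common 10k NTC part in a simple voltage divider; the component values and ADC range are illustrative assumptions, not values from the boards above.

```python
# Sketch: thermistor reading -> degrees Celsius via the beta equation.
# Divider resistor, beta constant and ADC range are assumed values.
import math

R_FIXED = 10_000.0   # assumed fixed divider resistor (ohms)
R0 = 10_000.0        # thermistor resistance at 25 C (ohms)
BETA = 3950.0        # assumed beta constant for the part
T0 = 298.15          # 25 C in kelvin
ADC_MAX = 1023       # assumed 10-bit ADC

def thermistor_celsius(adc_reading):
    # Thermistor on the low side of the divider:
    # V = Vcc * Rt / (Rt + R_FIXED), so Rt = R_FIXED * ratio / (1 - ratio)
    ratio = adc_reading / ADC_MAX
    r_therm = R_FIXED * ratio / (1.0 - ratio)
    # Beta equation: 1/T = 1/T0 + (1/B) * ln(Rt / R0)
    inv_t = 1.0 / T0 + math.log(r_therm / R0) / BETA
    return 1.0 / inv_t - 273.15

print(thermistor_celsius(512))  # roughly 25 C at mid-scale with these values
```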

The Raspberry Pi Sense HAT suffers from this problem as it sits right above the Pi’s processor, as does the Bosch XDK itself. In fact, I am not aware of any really good solution apart from using a cable to separate the sensor board completely from the processor controlling it (which might work for the Sense HAT, although I have not tried it).

Project Soli

Very interesting project from Google ATAP which uses a miniaturized radar device to detect hand gestures. Because it doesn’t use structured light, it has the potential to work outdoors and in difficult environments. Experience shows that structured light has a lot of limitations, so radar technology like this is potentially a big step forward.

It has been around for a while – it’ll be interesting to see if it does turn into a real thing.

Raspberry Pi Sense HAT and other sensors added to rtndf so that it’s a bit more IoT-like

rtndf now has Python PPEs that support streaming data from a variety of environmental sensors. The sensehat PPE streams data from all of the sensors on the Raspberry Pi Sense HAT. The sensors PPE streams data from a variety of common environmental sensors:

  • ADXL345 accelerometer
  • BMP180 pressure/temperature sensor
  • HTU21D humidity sensor
  • MCP9808 temperature sensor
  • TMP102 temperature sensor
  • TSL2561 light sensor

The specific sensors in use can be selected by commenting out the lines for any unused sensors in the sensors Python script.
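Conceptually, each of these sensor PPEs boils down to reading a device and publishing the values into the pipeline. The sketch below shows that shape using paho-mqtt; the topic name and the read_tmp102() helper are hypothetical stand-ins, and the real PPEs handle configuration and framing that this omits.

```python
# Sketch: read a sensor periodically and publish JSON over MQTT.
# Topic name and read_tmp102() are hypothetical placeholders.
import json
import time
import paho.mqtt.client as mqtt

def read_tmp102():
    # Placeholder for an actual I2C read of the TMP102 temperature sensor.
    return 22.5

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.connect("localhost", 1883)
client.loop_start()

while True:
    payload = json.dumps({"temperature": read_tmp102(), "timestamp": time.time()})
    client.publish("rtndf/sensors", payload)  # hypothetical topic
    time.sleep(1)
```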

sensorview is another new PPE that can display the sensor streams generated by sensehat and sensors. The screenshot above shows the data from a sensehat as an example.

imu and imuview – adding IMU sensing to rtndf data flow pipelines

Up to now, the only data sources in rtndf were video and audio. imu is a new Python PPE that can be used to stream IMU data (fused pose, sensor readings, etc.) into an rtndf data flow pipeline. Another new PPE is imuview, this time a C++ PPE, that can display the resulting stream. The screen capture above shows the data being streamed from a Raspberry Pi Sense HAT, which is a full 11-dof sensor.

One of the nice things about using a pub/sub system like MQTT is that it is possible to hook into any of the pipeline links to see what data is flowing. To this end, a future PPE will be a generic viewer. The user just gives it the topic and it determines the type of data and displays it appropriately. A very handy debugging tool!
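As a sketch of what that generic viewer might look like at its simplest, the snippet below subscribes to a wildcard topic and dumps whatever arrives. The broker address and topic are assumptions; a real viewer would inspect each payload to decide how to render video, audio or sensor data appropriately.

```python
# Sketch: hook into any pipeline link by subscribing to its MQTT topic.
# Broker address and the "rtndf/#" wildcard topic are assumptions.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Show the first bytes of each message; a real viewer would decode
    # the payload and display it according to its type.
    print(f"{msg.topic}: {msg.payload[:80]}")

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("rtndf/#")
client.loop_forever()
```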