Category Archives: Sentient spaces

Automatic license plate recognition with OpenALPR and rt-ai Edge

I came across OpenALPR a little while ago when thinking about the general problem of enhancing the value of video feeds. It has an easy-to-use Python binding, so it didn’t take very long to create an rt-ai Edge stream processing element (SPE). Actually, the OpenALPR part of it is one line of code – it takes a jpeg from the video stream and adds any recognized plate info as metadata to the output message. The trivial stream processing network in the screen capture above shows its operation as an inline semantic enhancer of a video stream. The OpenALPR SPE only outputs a video frame if the frame either already contains metadata or the OpenALPR SPE itself has added metadata. In this way, multiple recognizers can be applied to the same frame using a pipeline of SPEs.
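To give a feel for how simple this is, here’s a sketch of the SPE’s core logic. The rt-ai Edge SPE framework isn’t public, so process_frame() and the metadata shapes are hypothetical stand-ins, but the Alpr calls are the real OpenALPR Python binding.

```python
from openalpr import Alpr

# Paths and country code are assumptions; adjust for your OpenALPR install.
alpr = Alpr("us", "/etc/openalpr/openalpr.conf", "/usr/share/openalpr/runtime_data")
if not alpr.is_loaded():
    raise RuntimeError("Error loading OpenALPR")

def process_frame(jpeg_bytes, metadata):
    """Recognize plates in one jpeg frame and merge the results into its metadata.

    Returns the metadata to forward with the frame, or None to drop the
    frame, mirroring the SPE's rule of only outputting frames that some
    recognizer has annotated.
    """
    # The "one line of code" OpenALPR part: recognize plates in the jpeg.
    results = alpr.recognize_array(jpeg_bytes)

    plates = [{"plate": r["plate"], "confidence": r["confidence"]}
              for r in results["results"]]
    if plates:
        metadata = dict(metadata or {}, plates=plates)

    # Forward only if the frame arrived with metadata or we just added some.
    return metadata if metadata else None
```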

While I can now see a few private houses starting to sprout specialized license plate reading cameras (optimized for this purpose, especially for night operation), I don’t have anything set up as yet, so I had to make do with printing car images and waving them in front of a webcam. This seemed to work fine, but it would be nice to have a proper setup.

Recognized license plate metadata then becomes another feature that can be used for machine learning and inference within the edge environment – another step on the path to sentient spaces perhaps.

How rt-ai Edge will enable Sentient Spaces

The idea of creating spaces that understand the needs of the people moving within them – Sentient Spaces – has been a long-term personal goal. Our ability today to create sensor data (video, audio, environmental, etc.) is incredible. Our ability to make practical use of this enormous body of data is minimal. The question is: how can ubiquitous sensing in a space be harnessed to make the space more functional for the people within it?

rt-ai Edge could be the basis of an answer to this question. It is designed to receive large volumes of multi-sensor data, extract meaningful information and then take control actions as necessary. This closes the local loop without requiring external cloud server interaction. This is important because creating a space with ubiquitous sensing raises all kinds of privacy issues. Because rt-ai Edge keeps all raw data (such as video and audio) within the space, privacy is much less of a concern.
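Sketched as code, that local loop is very simple. Every name here is a hypothetical placeholder rather than a real rt-ai Edge API; the point is just the shape of the loop, with no cloud call anywhere in it.

```python
# A hypothetical sketch of the local sense -> extract -> act loop described
# above. Raw sensor data is consumed and discarded inside the space; only
# extracted information flows onward.
def run_local_loop(sensors, extract, decide, actuators):
    while True:
        raw = sensors.read()          # video/audio/environmental samples
        info = extract(raw)           # meaningful information; raw data goes no further
        for action in decide(info):   # control actions, if any are warranted
            actuators.apply(action)
```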

I believe that a key to making a space sentient is to harness artificial intelligence concepts such as online learning of event sequences and anomaly detection. It is not practical for anyone to sit down and program a system to correctly recognize normal behavior in a space and what actions might be helpful as a result. Instead, the system needs to learn what is normal and develop strategies that might be helpful. Reinforcement via user feedback can be used to refine responses.
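To make that concrete, here is a minimal online-learning sketch of my own (not an rt-ai Edge module): it maintains running statistics over a stream of scalar event features using Welford’s algorithm and flags observations far outside what it has seen so far. The three-sigma threshold is an arbitrary assumption.

```python
import math

class OnlineAnomalyDetector:
    """Learn what is "normal" for one scalar feature, one observation at a time."""

    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold  # flag values this many std devs from the mean

    def update(self, x):
        """Fold one observation into the running stats; return True if anomalous."""
        anomalous = False
        if self.n > 1:
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(x - self.mean) > self.threshold * std
        # Welford's online update of mean and sum of squared deviations.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

# Example: feed per-minute motion event counts and react to the flags.
# detector = OnlineAnomalyDetector()
# for count in event_counts:
#     if detector.update(count):
#         print("unusual activity level:", count)
```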

A trivial example would be someone moving through a dark space at night. It might be helpful to provide light at a suitable intensity to help the person navigate the space safely. The system could deduce this by having observed other people moving through the space, turning lights on and off as they go. Meanwhile, face recognition could be employed to see if the person is known to the space and, if not, an assessment could be made as to whether an alert needs to be generated. Finally, a video record of the person moving through the space could be assembled from clips from all relevant cameras and stored (on-site) for a time in case it is useful.

Well, that’s a trivial example to describe but not at all trivial to implement. However, my goal is to see whether AI techniques can be used to approach this level of functionality. In practical terms, this means developing a series of rt-ai modules using TensorFlow to perform feature extraction, anomaly detection and sequence prediction, glued together with sensor and control modules to form a complete system that requires minimal supervised training to perform useful functions.
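For a flavor of what a sequence prediction module might contain, here is a minimal tf.keras sketch of my own. The window size, feature dimension and layer sizes are placeholder assumptions, not settled design choices.

```python
import tensorflow as tf

WINDOW = 16     # past events per training sample (assumption)
FEATURES = 32   # size of each event's feature vector (assumption)

# Predict the next event's feature vector from a window of recent ones.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(FEATURES),
])
model.compile(optimizer="adam", loss="mse")

# Prediction error can double as an anomaly score: event sequences the
# model has never seen before predict poorly.
```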