Why not just use NiFi and MiNiFi instead of rt-ai Edge?

Any time I start a project I always wonder if I am just reinventing the wheel. After all, there is so much software out there (on GitHub and elsewhere) that almost everything already exists in some form. The most obvious analog to rt-ai Edge is Apache NiFi and Apache MiNiFi. NiFi provides a very rich environment of processor blocks and great tools for joining them together to create stream processing pipelines. However, there are some characteristics of NiFi that I don’t particularly like. One is the reliance on the JVM and the consequent garbage collection pauses that mess up latency guarantees. Tuning a NiFi installation can be a bit tricky – check here for example. However, many of these things are the price that is inevitably paid for having such a rich environment.

rt-ai Edge was designed to be a much simpler and lower overhead way of creating flexible stream processing pipelines in edge processors, with low latency connections and no garbage collection issues. That isn’t to say that an rt-ai Edge pipeline module could not be written in a managed-memory language if desired (it certainly could), just that the infrastructure itself does not suffer from this problem.

In fact, rt-ai Edge and NiFi can play together extremely well. rt-ai Edge is ideal at the edge, NiFi is ideal at the core. While MiNiFi is the NiFi solution for embedded and edge processors, rt-ai Edge can either replace or work with MiNiFi to feed into a NiFi core. So maybe it’s not a case of reinventing the wheel so much as making the wheel more effective.

rt-ai: real time stream processing and inference at the edge enables intelligent IoT

For a change, the “rt” part of rt-ai doesn’t just stand for “richardstech”: it also stands for “real-time”. Real-time inference at the edge will allow decision making in the local loop with low latency and no dependence on the cloud. rt-ai includes a flexible and intuitive infrastructure for joining together stream processing pipelines in distributed environments with restricted processing power. It is very easy for anyone to add new pipeline elements that fully integrate with rt-ai pipelines. This leverages some of the concepts originally prototyped in rtndf, while other parts of the rt-ai infrastructure have been in 24/7 use for several years, proving their intrinsic reliability.
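As a purely illustrative sketch (the class and method names here are hypothetical, not rt-ai’s actual API), a pipeline element boils down to a receive, process, emit loop:

# Hypothetical sketch of a stream processing pipeline element.
# None of these names come from rt-ai itself; they just illustrate
# the receive -> process -> emit pattern described above.

import queue
import threading


class PipelineElement:
    """Consumes messages from an input queue, transforms them and
    pushes the results to an output queue."""

    def __init__(self, in_queue, out_queue):
        self.in_queue = in_queue
        self.out_queue = out_queue
        self._running = False

    def process(self, message):
        # A real element would override this; here we just annotate the message.
        message["processed"] = True
        return message

    def run(self):
        self._running = True
        while self._running:
            try:
                message = self.in_queue.get(timeout=0.1)
            except queue.Empty:
                continue
            result = self.process(message)
            if result is not None:          # elements may also filter messages out
                self.out_queue.put(result)

    def stop(self):
        self._running = False


if __name__ == "__main__":
    inp, out = queue.Queue(), queue.Queue()
    element = PipelineElement(inp, out)
    threading.Thread(target=element.run, daemon=True).start()

    inp.put({"sensor": "camera0", "value": 42})
    print(out.get(timeout=1))    # {'sensor': 'camera0', 'value': 42, 'processed': True}
    element.stop()

Chaining elements is then just a matter of feeding one element’s output queue into the next element’s input; in a real distributed deployment the in-process queues would be replaced by network transports between edge processors.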

Edge processing and control is essential if there is to be scalable use of intelligent IoT. I believe that dumb IoT, where everything has to be sent to a cloud service for processing, is a broken and unscalable model. The bandwidth requirements alone of sending all the data back to a central point will rapidly become unworkable, and latency guarantees are difficult to impossible in this model. Two advantages of rt-ai are the keys to scalable intelligent IoT: keeping raw data at the edge where it belongs and upstreaming only salient information to the cloud, and minimizing the CPU cycles required in power-constrained environments.

rt-ai Edge

rt-ai Edge is a new concept in edge processing that makes it easy for anyone to build AI and ML enhanced stream processing pipelines in order to close the local loop and offload communications networks and the cloud. Semantic extraction of meaningful data from raw data feeds at the edge ensures that the core only has to deal with actionable information, not noise. rt-ai Edge leverages hardware acceleration within embedded devices to filter raw data into highly salient messages for higher level processing.
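As a purely illustrative example of what that kind of filtering might look like (the threshold and frame size below are arbitrary assumptions, nothing defined by rt-ai Edge), an edge element could forward a frame only when it differs significantly from the last frame it emitted:

# Illustrative only: forward a frame downstream only when it changes
# significantly, so the core sees salient events rather than the raw feed.
# The threshold and frame dimensions are arbitrary assumptions.

import numpy as np


def salient_frames(frames, threshold=12.0):
    """Yield (index, frame) pairs for frames that differ markedly
    from the previously emitted frame."""
    last = None
    for i, frame in enumerate(frames):
        if last is None or np.mean(np.abs(frame.astype(np.int16) -
                                          last.astype(np.int16))) > threshold:
            last = frame
            yield i, frame


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake 8-bit grayscale frames: mostly static, with one burst of change.
    frames = [np.full((120, 160), 100, dtype=np.uint8) for _ in range(10)]
    frames[5] = rng.integers(0, 255, (120, 160), dtype=np.uint8)
    print("frames forwarded:", [i for i, _ in salient_frames(frames)])    # [0, 5, 6]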

rt-ai Edge is in active development right now.

DroNet – flying a drone using data from cars and bikes

Fascinating video about a system that teaches a drone to fly around urban environments using driving data from cars and bikes for training. There’s a paper here and code here. It’s a great example of leveraging CNNs in embedded environments. I believe that moving AI and ML to the edge and ultimately into devices such as IoT sensors is going to be very important. Having dumb sensor networks and edge devices just means that an enormous amount of worthless data has to be transferred into the cloud for processing. Instead, if the edge devices can perform extensive semantic mining of the raw data, only highly salient information needs to be communicated back to the core, massively reducing bandwidth requirements and also allowing low latency decision making at the edge.
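As a rough sketch of the idea (this is not the paper’s actual ResNet-8 architecture, just an illustration of the two-headed output it describes), a DroNet-style network maps a single camera frame to a steering angle and a collision probability:

# Rough PyTorch sketch of the DroNet idea: one small CNN that maps a
# camera frame to a steering angle and a collision probability.
# This is NOT the paper's ResNet-8, just an illustration of the concept.

import torch
import torch.nn as nn


class TinyDroNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.steering_head = nn.Linear(64, 1)     # regression: steering angle
        self.collision_head = nn.Linear(64, 1)    # classification: collision risk

    def forward(self, x):
        h = self.features(x)
        steering = self.steering_head(h)                      # unbounded angle
        collision = torch.sigmoid(self.collision_head(h))     # probability in [0, 1]
        return steering, collision


if __name__ == "__main__":
    model = TinyDroNet()
    frame = torch.randn(1, 1, 200, 200)     # a single grayscale frame
    angle, p_collision = model(frame)
    print(angle.item(), p_collision.item())

The appeal of keeping the network small is that inference can plausibly run on edge hardware rather than in the cloud, which is exactly the theme above.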

Take as a trivial example a system of cameras that read vehicle license plates. One solution would be to send the raw video back to somewhere for license number extraction. Alternatively, if the cameras themselves could extract the data, then only the recognized numbers and letters need to be transferred, along with perhaps an image of the plate. That’s a massive bandwidth saving over sending constant compressed video. Even more interesting would be edge systems that can perform unsupervised learning to optimize performance, all moving towards eliminating noise and recognizing what’s important without extensive human oversight.
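To put some rough numbers on that claim (the bitrate, event size and read rate below are assumptions for illustration, not measurements):

# Back-of-the-envelope comparison of upstream bandwidth for the license
# plate example. All figures are illustrative assumptions, not measurements.

VIDEO_BITRATE_BPS = 2_000_000     # assume ~2 Mbps of compressed video per camera
PLATE_EVENT_BYTES = 50_000        # assume plate text plus a small JPEG crop
PLATES_PER_HOUR = 600             # assume one read every six seconds

video_bytes_per_hour = VIDEO_BITRATE_BPS / 8 * 3600
event_bytes_per_hour = PLATE_EVENT_BYTES * PLATES_PER_HOUR

print(f"raw video:    {video_bytes_per_hour / 1e6:7.1f} MB/hour")
print(f"plate events: {event_bytes_per_hour / 1e6:7.1f} MB/hour")
print(f"reduction:    {video_bytes_per_hour / event_bytes_per_hour:7.1f}x")

Under these assumptions a single camera drops from roughly 900 MB/hour of video to about 30 MB/hour of plate events, a 30x saving, and the saving only grows if the image crop is omitted or the read rate is lower.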

The end of the taxiway for the 747 (in the US at least)

Nice photos and story here about the final 747 flight by a US airline. This brought back memories because, during the 90s, I spent a lot of time on Virgin Atlantic 747s between LHR and JFK (and occasionally BOS). I remember some of the old Virgin aircraft names – Spirit of Sir Freddie, Ruby Tuesday (above) and Lady Penelope, for example. One of the things I would do to alleviate the boredom was to try to get off the aircraft first. If you were sitting in the correct seat on the upper deck and managed to get down the stairs before anyone else, there was always a good chance of that!

The best time was when I managed to get in the cockpit jump seat for a Virgin Atlantic 747 landing at SFO (yes, this was most certainly pre-9/11). It was great to see the crew handle the aircraft and air traffic control, and it just confirmed something that I already knew: that this kind of stuff was best left to professionals (I was a terrible pilot!).

Virgin Atlantic gradually replaced the 747s with A340s, which were just not the same at all, but by then I had mostly stopped flying across the Atlantic on a regular basis.