[EETimes] AI and Vision at the Edge

By Jeff Bier, 2020-09-06

Source: https://www.eetimes.com/dont-be-misled-by-accuracy-3-insights-for-more-useful-ai-models/

Things move fast. Not that long ago, AI and computer vision seemed like the stuff of science fiction — and now suddenly they’re everywhere, from Alexa and Siri to kitchen appliances that can recognize the food you’re making and help you cook it perfectly.

But things are shifting again. Increasingly, intelligence and visual processing are happening at the edge. That is, the computation is occurring locally rather than in the cloud. And it's happening across a huge range of systems: mobile phones, household appliances, cars, industrial robots, cameras, and servers in building closets. The common theme is that processing is taking place closer to the sensor than ever. Why is this, and what are the implications of this trend?

At the Edge AI and Vision Alliance, we’ve observed five factors pushing AI to the edge, which we’ve turned into a slightly awkward acronym: BLERP.  It stands for bandwidth, latency, economics, reliability, and privacy.  Let’s take a quick look at each.

Bandwidth: If you’ve got a commercial greenhouse, casino or retail space with hundreds of cameras in it, there’s just no way to send that information to the cloud for processing — it will swamp whatever kind of Internet connection you have.  You’ve simply got to handle it locally.
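
To put a rough number on "swamping" an Internet connection, here's a quick back-of-the-envelope sketch in Python; the camera count and per-stream bitrate are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope uplink math for a camera-heavy site.
# Camera count and per-stream bitrate are illustrative assumptions;
# compressed 1080p streams typically land in the low single-digit Mbps.

num_cameras = 300          # e.g., a casino or large retail floor
mbps_per_camera = 4.0      # assumed 1080p H.264 stream

total_mbps = num_cameras * mbps_per_camera
print(f"Aggregate uplink needed: {total_mbps:.0f} Mbps "
      f"(~{total_mbps / 1000:.1f} Gbps)")
# ~1200 Mbps, around the clock -- more than most commercial internet
# connections can sustain, which is why the video gets analyzed locally
# and only metadata (counts, alerts, short clips) goes upstream.
```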

Latency: By latency I mean the time between when a system takes in a sensory input and when it responds to it. Think about a self-driving car: if a pedestrian suddenly appears in the crosswalk ahead, the car's computer might have only a few hundred milliseconds to make a decision, which is not enough time to send images to the cloud and wait for a response.
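
Here's a sketch of that latency budget; every component time below is an illustrative assumption, chosen only to show how quickly a cloud round trip eats a few hundred milliseconds:

```python
# Rough latency budget for the pedestrian-in-the-crosswalk example.
# Every number below is an illustrative assumption, not a measurement.

reaction_budget_ms = 300                 # time available to decide

# Cloud path: capture/encode + network round trip + cloud inference + handling
cloud_path_ms = 30 + 100 + 50 + 10

# Edge path: capture + on-device inference
edge_path_ms = 30 + 25

print(f"Cloud path ~{cloud_path_ms} ms, edge path ~{edge_path_ms} ms, "
      f"budget {reaction_budget_ms} ms")
# The cloud path burns most of the budget before a decision is even made,
# and that assumes the connection is up and uncongested in the first place.
```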

Economics: Cloud computing and communications are getting better and cheaper all the time, but they still cost money — potentially a lot of money, especially where video data is concerned. Edge computing reduces the amount of data that has to be sent to the cloud, as well as the amount of work that has to be done once it’s there, lowering costs.

Reliability: Think of a home security system with facial recognition — you’d like it to be able to let your family members into the house even if the Internet is down, right?  Local processing makes this possible, and makes systems more fault tolerant.

Privacy: The proliferation of audio and visual sensors at the edge creates serious privacy concerns, and sending this information to the cloud increases these concerns dramatically. The more information can be processed and consumed locally, the less chance there is for abuse.  To paraphrase Las Vegas’s unofficial motto, “What happens at the edge stays at the edge.”

If those are the drivers shifting AI to the edge, the thing that’s enabling that shift to succeed is faster, more efficient processors. Computer vision and deep learning are seemingly magical, enabling us to extract meaning from millions of pixels or audio samples.  But that magic comes at a cost: many billions, even trillions of operations per second are required to do AI processing in real time. So an essential requirement of edge AI is processors that can deliver that kind of performance at a price, power consumption, and size compatible with edge or embedded devices.
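
A quick worked example shows where those numbers come from; the model size and frame rate here are illustrative assumptions, not figures for any particular network:

```python
# Why "billions, even trillions of operations per second" is no exaggeration.
# Model size and frame rate are illustrative assumptions.

macs_per_frame = 20e9                # assumed ~20 GMACs for a detector at HD input
ops_per_frame = 2 * macs_per_frame   # count each multiply-accumulate as 2 ops
fps = 30                             # real-time video

ops_per_second = ops_per_frame * fps
print(f"Sustained compute needed: {ops_per_second / 1e12:.1f} TOPS")
# ~1.2 TOPS for one camera stream, delivered within the power, cost, and
# size budget of a phone, appliance, or vehicle ECU.
```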


Fortunately, deep learning algorithms are repetitive and fairly simple; it's just the amount of computation and data that's massive. And because of that repetitive, predictable nature, it's possible to create processors that are tuned for these algorithms. They can easily deliver 10x, 100x or even greater performance and efficiency on these tasks compared to general-purpose processors. This fact, combined with the widely held view that there will soon be billions of AI-enabled edge devices, has set off a Cambrian explosion of processor architectures for high-performance AI over the last few years.
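
To see why the workload is such a good fit for specialized hardware, consider what the inner loop of inference actually looks like. This is a minimal illustrative sketch, not any particular library's implementation:

```python
# The computation that dominates deep learning inference: the same
# multiply-accumulate, repeated billions of times per frame. A minimal
# sketch of one convolution output value, written as plain loops.

def conv_output_value(patch, kernel):
    """patch and kernel are matching 3-D lists: [channel][row][col]."""
    acc = 0.0
    for c in range(len(kernel)):
        for y in range(len(kernel[0])):
            for x in range(len(kernel[0][0])):
                acc += patch[c][y][x] * kernel[c][y][x]   # one MAC
    return acc

# 3 channels of a 3x3 kernel of ones over a patch of ones -> 27 MACs -> 27.0
patch = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
kernel = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
print(conv_output_value(patch, kernel))
```

A specialized AI processor replaces that serial loop with wide arrays of MAC units and keeps weights and activations in local memory, which is where those 10x-100x gains over general-purpose processors come from.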

A good way to get a handle on these recent advances is to see the presentations at this year’s Embedded Vision Summit, coming up next week.  The keynote by David Patterson (co-inventor of RISC and a contributor to Google’s TPU architecture) frames this trend perfectly: “A New Golden Age for Computer Architecture: Processor Innovation to Enable Ubiquitous AI.”

In the presentation program, start-ups and established leaders including CEVA, Cadence, Hailo, Intel, Lattice, Nvidia, Perceive, Qualcomm and Xilinx will be presenting their latest edge AI processors as well as tools and techniques to enable efficient mapping of deep neural networks onto these processors. In addition, system design experts will share insights gained from implementing edge AI in diverse applications ranging from self-piloting drones (Skydio) to farm equipment (John Deere) to floor cleaning robots (Trifo).

And exhibitors at the virtual tradeshow, from Arrow to Xperi, will give you an opportunity to see the latest in processors, development tools and software for edge AI.

Borrowing from Professor Patterson’s talk title, we’re entering a golden age of practical edge AI and computer vision.  If ever there were a time to go out and make something — specifically, something that does cool things involving AI at the edge — this is it!

Jeff Bier is the President of consulting firm BDTI, founder of the Edge AI and Vision Alliance, and the General Chair of the Embedded Vision Summit — the premier event for innovators adding vision and AI to products — which will be held online September 15-25, 2020.