In this webinar, our CEO, Steve Teig, will introduce novel strategies for compressing activations, reducing memory requirements and allowing larger neural networks to be executed on space-constrained hardware.
Deep learning seems to touch every discipline these days, but behind its startling magic tricks, it is surprisingly primitive. It is concerning to note the extent to which today’s deep learning relies on folklore: on recipes and anecdotes, rather than on scientific principles and explanatory mathematics. Think of how much more trustworthy, robust, compact, and … Continued
In this keynote address, Perceive CEO Steve Teig will discuss how deep learning’s reliance on inefficient models based on recipes and anecdotes, rather than scientific principles and explanatory mathematics, leads to unsustainable compromises in power-efficiency, privacy, and user experience. By focusing instead on using information theory to guide the development of more rigorous, scalable machine … Continued
In this panel, Perceive CEO Steve Teig joins Ryad Benosman, James Marshall, and Sally Ward-Foxton to discuss neuromorphic vision, which has long been in development and has promised great improvements in latency and power usage for a variety of uses, such as edge applications. The panel will consider whether these technologies could be viable … Continued
Perceive is proud to sponsor the Women in Vision networking reception May 17 at the Embedded Vision Summit. Join us there for the opportunity to meet and network with other women who share professional interests in computer vision and edge AI. Please visit this link for more information about the event.
Perceive CEO Steve Teig will discuss the current industry focus on compressing weights to lower the memory requirements of neural networks – and why it's necessary to investigate means of reducing activation memory as well. Although there are many schemes used to reduce weights, they often require compromises such as lower precision or smaller … Continued
Today, TinyML focuses primarily on shoehorning neural networks onto microcontrollers or small CPUs but misses the opportunity to transform all of ML because of two unfortunate assumptions: first, that tiny models must make significant performance and accuracy compromises to fit inside edge devices, and second, that tiny models should run on CPUs or microcontrollers. Regarding … Continued
Today’s face recognition networks identify white men correctly more often than white women or non-white people. The use of these models can manifest racism, sexism, and other troubling forms of discrimination. There are also publications suggesting that compressed models have greater bias than uncompressed ones. Remarkably, poor statistical reasoning bears as much responsibility for the … Continued
Explosive data growth and real-time data analysis drive the need for increased performance and power efficiency in edge devices such as home security cameras, wearables, mobile phones, and smart appliances. Purpose-built AI accelerators handle these challenges, significantly speeding up AI applications such as inferencing at the edge by enabling local processing. Learn how purpose-built AI … Continued
The opportunity for edge AI solutions at scale is massive—and is expected to eclipse cloud-based approaches in the next few years. But to realize this potential, we must find ways to simplify and democratize the development and deployment of edge AI systems. What are the most critical gaps that must be filled to streamline edge … Continued
Machine learning aims to construct models that are predictive: accurate even on data not used during training. But how should we assess accuracy? (Hint: simply computing the average error on a pre-determined test set, while nearly universal, is frequently a bad strategy.) How can we avoid catastrophic errors due to black swans—rare, highly atypical events? … Continued