Accelerating AI with analogue computing


Image: ioat/shutterstock.com

Fresh from Embedded World, we speak to Tim Vehling at Mythic on the benefits of AI accelerators for industrial vision

Chip companies dedicated to AI acceleration have been springing up over the last few years as AI processing becomes ever more important.

At Embedded World, AI accelerator chips were out in force, as running AI on an edge device at a high enough performance can still be challenging. French firm Dolphin Design picked up the award for best start-up at Embedded World for its edge AI accelerator designed for sound and vision processing.

In industrial vision, the constraints of pure edge processing are sometimes less apparent, but firms using AI for inspection or robot guidance still have to reach high levels of performance, and AI accelerators can offer benefits.

Tim Vehling, senior vice president at Mythic, a US-based AI accelerator chip start-up, listed industrial vision and robotics among Mythic's customers and potential customers when he spoke to Imaging and Machine Vision Europe after Embedded World.

One of the cornerstones of Mythic's technology is its Analogue Compute Engine (ACE). It uses embedded flash both to store data and run computation concurrently – computation happens directly in the memory array. This means data doesn't have to move back and forth from memory, which reduces power and bandwidth requirements in the system.

Because the information is represented in analogue form, the approach is also much denser: a single flash element is about 50 times smaller than the equivalent SRAM cell.
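The principle behind analogue compute-in-memory can be illustrated with a short sketch. This is a conceptual model only, not Mythic's actual design: each stored weight behaves like a conductance, an input activation is applied as a voltage, and the currents that sum on a shared bit line deliver a dot product without any weight ever leaving the array.

```python
def analogue_matvec(weights, activations):
    """Conceptual model of one in-memory matrix-vector multiply.

    weights     -- list of rows; each value models a cell conductance G
                   stored permanently in the flash array
    activations -- input vector; each value models an applied voltage V
    Returns the per-row bit-line currents, i.e. the dot products.
    """
    outputs = []
    for row in weights:
        # Ohm's law per cell (I = G * V); Kirchhoff's current law sums
        # the cell currents on the bit line -- the "computation" happens
        # where the weights are stored, so no weight data is moved.
        outputs.append(sum(g * v for g, v in zip(row, activations)))
    return outputs


# A 2x3 weight array (stored in place) multiplied by a 3-element input:
W = [[1.0, 0.5, -0.25],
     [0.0, 2.0, 1.0]]
x = [4.0, 2.0, 8.0]
print(analogue_matvec(W, x))  # [3.0, 12.0]
```

In a real analogue array the result would also carry quantisation and circuit noise; the sketch ignores those effects and only shows why keeping weights stationary removes the memory-bandwidth cost of the multiply.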

The M1076 Analogue Matrix Processor, an array of 76 tiles, offers 25 TOPS performance, with a capacity for up to 80M on-chip weights. It provides low-latency deterministic execution of DNN models, with a typical power consumption running complex models of less than 3W.

The company also uses mature 40nm CMOS, which it says is far more cost-effective and, combined with its analogue approach, more storage-efficient than leading-edge nodes.

Vehling said that pairing the chip with an NXP processor gives a ten-times-or-more increase in performance, while it improves the performance of an Nvidia Jetson Xavier NX by two to three times.

He added that most industrial architectures are based on an Intel system, and from a hardware point of view it's just a case of plugging in one of Mythic's cards.

'Most of the customers we deal with are not doing AI from scratch,' Vehling said. 'They've already got some level of AI expertise, and what they're running into is system limitations.

'Two or three years ago, it was less known about how to deploy AI in the industrial sector,' he added. 'Now people know how to deploy it; now it's what is the best solution to get the best results.'

Vehling recalled that some of the customers Mythic met at Embedded World were very clear about what they were looking for: typically, they weren't getting the performance they needed when running a certain model on a certain platform, which is why they wanted an accelerator. 'The knowledge of what people are looking for has advanced quite a bit,' he said.

Mythic has raised $165m in venture funding and now has 150 employees worldwide. Vehling said the company's focus is on finalising designs so it can start shipping production chips next year. Mythic is also working on its next-generation architecture, from which Vehling expects significant performance improvements, of the order of eight to ten times.
