Stacked sensor with parallel processing built for fast, edge imaging

Greg Blackman reports on CEA Leti's new image sensor, shown at Photonics West, which contains onboard processing and is able to image at 5,500 frames per second

CEA Leti has presented a 3D-stacked, backside-illuminated intelligent image sensor, capable of imaging at 5,500 frames per second, at SPIE Photonics West in San Francisco.

The sensor’s architecture, designed together with CEA List, is far more energy efficient than that of conventional CMOS sensors, which have a serial output, because pixels send their data directly to onboard processors in parallel.

The sensor is formed of photodiodes stacked on top of processing elements and memory. Each 16 x 16-pixel array – the photodiodes are backside illuminated, giving higher sensitivity than frontside illumination – has its own processing element (PE), with all PEs connected in parallel. Data travels over short connections to each PE, which keeps power consumption to a minimum. In addition, the processors run at low frequency, which reduces the voltage needed and improves energy efficiency.
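The tiled layout – one PE per 16 x 16-pixel block – can be mimicked in software. The sketch below is a rough model, not CEA Leti’s actual firmware: the mean-intensity kernel is an illustrative assumption standing in for whatever code each PE runs. It splits a 1,024 x 768 frame into the 16 x 16 tiles that each PE would receive:

```python
import numpy as np

TILE = 16  # each 16 x 16 pixel tile feeds one processing element (PE)

def split_into_tiles(frame: np.ndarray) -> np.ndarray:
    """Reshape a (H, W) frame into a (rows, cols, TILE, TILE) grid of tiles."""
    h, w = frame.shape
    assert h % TILE == 0 and w % TILE == 0
    return frame.reshape(h // TILE, TILE, w // TILE, TILE).swapaxes(1, 2)

def pe_kernel(tile: np.ndarray) -> float:
    """Hypothetical per-PE computation: here, just the tile's mean intensity."""
    return float(tile.mean())

frame = np.random.randint(0, 255, size=(768, 1024), dtype=np.uint8)
tiles = split_into_tiles(frame)  # shape (48, 64, 16, 16): 3,072 tiles
# On the chip each tile is handled by its own PE in parallel;
# here we simply map the kernel over the tile grid.
results = np.array([[pe_kernel(t) for t in row] for row in tiles])
```

The 48 x 64 tile grid gives 3,072 tiles in total, which is consistent with the article’s figure of 192 processors of 16 PEs each.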

The excellent efficiency of the sensor, its low latency, and its high frame rate – achieved thanks to the parallel access to processors – make the sensor suitable for monitoring fast-moving factory production lines, according to Marc Duranton at CEA, speaking to Imaging and Machine Vision Europe. Robot guidance and augmented reality systems are two other potential applications.

CEA Leti and List’s demonstrator chip has a resolution of 1,024 x 768 pixels with 192 processors of 16 PEs each. The sensor and processor dies are surface-connected using a Cu-Cu interconnect 3D stacking technique. The vision sensor has been integrated on an evaluation board with two FPGAs, USB and HDMI interfaces, and GPIOs.

Because the processors can run different code, the sensor is able to operate at a higher refresh rate in areas of the scene where a lot is happening, and at lower refresh rates in areas where there is no movement. A static background, for instance, doesn’t need to be imaged at 60fps; doing so only generates redundant data.

The sensor, therefore, can be programmed to trigger and record important events according to what is happening in the scene, rather than analysing data continuously to locate the trigger. ‘There is a lot of communication and a lot of useless information [in a classical system] because you always have to scan data and to transfer it to an external processor,’ Duranton noted, ‘whereas, here the scanning and object detection is done in the sensor itself.’
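In-sensor triggering of this kind could be as simple as per-tile change detection. The sketch below is an assumption about how such a trigger might work, not CEA Leti’s algorithm; the threshold value and the mean-absolute-difference metric are illustrative. Each tile compares its current content against the previous frame, and only tiles that changed are flagged for fast readout:

```python
import numpy as np

TILE = 16
THRESHOLD = 10.0  # hypothetical per-tile mean absolute difference trigger

def active_tiles(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Return a boolean (rows, cols) map of tiles whose content changed."""
    h, w = curr.shape
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    tiles = diff.reshape(h // TILE, TILE, w // TILE, TILE).swapaxes(1, 2)
    return tiles.mean(axis=(2, 3)) > THRESHOLD

prev = np.zeros((768, 1024), dtype=np.uint8)
curr = prev.copy()
curr[0:16, 0:16] = 200  # movement appears in the top-left tile only
mask = active_tiles(prev, curr)
# Only the one changed tile would need a fast refresh or an event report;
# the other 3,071 tiles could idle, saving bandwidth and power.
```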

Image analysis takes place directly inside the sensor onboard the 192 processors – the architecture is such that each set of 16 x 16 pixels could be processed independently. This opens up the potential for intelligence at ‘the edge’, rather than sending images for analysis to a separate machine. ‘You don’t need to have high bandwidth communication with the sensor, because the sensor is able to take the decision locally,’ Duranton explained. ‘[Avoiding] continuously sending video lowers bandwidth and power consumption.’

The architecture is also scalable, remarked Duranton, in the sense that the number of processors and the interconnect with the sensors increases in line with the number of pixels. ‘It’s not like a conventional sensor, where when you double the number of pixels, due to the serialisation, you need to double the frequency of the output data,’ he explained.
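The scaling argument can be made concrete with some simple arithmetic (a back-of-envelope comparison, not figures from CEA Leti). With a single serialised output, every pixel crosses one interface, so the required pixel clock grows linearly with resolution; with one PE per tile, each processor only ever sees its own 256 pixels, regardless of total resolution:

```python
def serial_pixel_clock(pixels: int, fps: int) -> int:
    """Serialised readout: every pixel crosses one output interface."""
    return pixels * fps

def per_pe_pixel_rate(pixels: int, fps: int, tile: int = 16) -> int:
    """Parallel readout: each PE handles only its own tile's pixels."""
    return tile * tile * fps

base = 1024 * 768
for pixels in (base, 2 * base, 4 * base):
    print(pixels, serial_pixel_clock(pixels, 5500), per_pe_pixel_rate(pixels, 5500))
```

Doubling the pixel count doubles the serial output clock, while the per-PE rate stays constant; only the number of PEs grows.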

Copyright: S. Pajaniradja / CEA - 2018

Duranton said the sensor can scale to 4k or 8k pixel resolution. Compromises have to be made on the size of the system, in that imaging at high frame rates requires large photodiodes to capture enough electrons. At the same time, the processors take up space. However, even while running at 1,000fps the chip can execute about 1,000 instructions per processor. ‘We can do a lot even at a high frame rate,’ Duranton commented.
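A quick sanity check on those figures (our own arithmetic, not CEA Leti’s): at 1,000fps each frame leaves a 1ms budget, so 1,000 instructions per processor per frame implies only about one million instructions per second per processor, which is consistent with the low clock frequencies the architecture relies on for its energy efficiency:

```python
fps = 1000
frame_budget_s = 1 / fps                  # 1 ms of compute time per frame
instrs_per_processor = 1000               # quoted figure per frame
required_rate = instrs_per_processor / frame_budget_s  # instr/s per processor
aggregate = required_rate * 192           # all 192 processors combined
print(frame_budget_s, required_rate, aggregate)
```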

Duranton noted that some other image sensors are able to analyse data directly in the sensor, but the processing is dedicated to a particular algorithm such as white balance correction. ‘We really have a set of general-purpose processors,’ he said.

The next step, according to Duranton, is to develop the software library to improve functionality, as well as implementing neural networks for higher level recognition.

CEA Leti has also developed a curved image sensor – it was demonstrating this as well at Photonics West – that lowers the cost of optics in imaging systems. ‘We could also try to curve this [stacked sensor], so we could really have a sensor that’s like a human retina,’ Duranton said – ‘to have a low-cost optical system with low power and with the processing taking place directly in the sensor.

‘That’s more long term,’ he concluded. ‘We [CEA Leti] have the technology to curve the sensor and we have this stacked retina and, at some point, we would like to combine them.’

