Building embedded IR imaging devices: what to consider

[Image: MiniZed with Flir Lepton]

Ahead of his presentation at the Embedded World conference in Nuremberg, Germany on 28 February, Adam Taylor from consultancy firm Adiuvo Engineering and Training outlines what engineers need to consider when building an embedded infrared vision system for IoT and IIoT.

Imaging within the infrared domain provides a significant benefit in many applications that make use of the Internet of Things (IoT) and its industrial counterpart the Industrial Internet of Things (IIoT). The creation of an imaging system based on an uncooled thermal imager presents several challenges in interfacing, security, power efficiency and performance.

A heterogeneous system-on-chip allows the creation of a solution that is flexible, secure and power efficient. Prototyping is key to developing a system that enables the challenges to be addressed and time-to-market reduced.

Infrared imagers fall into two categories: cooled and uncooled. Cooled thermal cameras use image sensors based on HgCdTe or InSb semiconductors that need to be cooled to between 70 and 100 Kelvin. This is required to reduce the thermal noise generated by the device. A cooled sensor therefore brings with it increased complexity, cost and weight; the system also takes time – several minutes – to reach operating temperature.

Uncooled infrared sensors can operate at room temperature and use microbolometers in place of an HgCdTe or InSb sensor. Microbolometer-based thermal imagers typically have lower resolution than cooled cameras, but they make thermal imaging systems simpler, lighter and less costly to build.

The Flir Lepton is an uncooled thermal imager operating in the longwave infrared spectrum. It is a self-contained camera module with a resolution of 80 x 60 pixels (Lepton 2) or 160 x 120 pixels (Lepton 3). The module is configured via an I2C bus, while the video is streamed over SPI using a video-over-SPI (VoSPI) protocol. These simple interfaces make the module ideal for many embedded infrared imaging systems.
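To give a flavour of the VoSPI stream the receiving logic must handle, the sketch below shows the discard-packet check and packet numbering for the Lepton 2. The constants follow Flir's published packet format; the function names are illustrative, not part of any vendor API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Lepton 2 VoSPI packet layout: a 4-byte header (16-bit ID, 16-bit CRC)
 * followed by a 160-byte payload of 80 Raw14 pixels; 60 packets make up
 * one 80 x 60 frame. */
#define VOSPI_PACKET_BYTES       164
#define VOSPI_PAYLOAD_BYTES      160
#define VOSPI_PACKETS_PER_FRAME   60

/* A packet whose ID satisfies (id & 0x0F00) == 0x0F00 is a discard
 * packet and carries no image data; the receiver simply drops it. */
static bool vospi_is_discard(uint16_t packet_id)
{
    return (packet_id & 0x0F00) == 0x0F00;
}

/* The low 12 bits of the ID give the packet (line) number within the
 * current frame, 0 to 59 for the Lepton 2. */
static uint16_t vospi_packet_number(uint16_t packet_id)
{
    return packet_id & 0x0FFF;
}
```

In the architecture discussed later, this check would sit in the programmable logic, so that only valid image lines are written to memory.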

Prototype imaging in the infrared domain

Creating an IoT or IIoT solution that works within the infrared domain faces the following high-level challenges and needs:

  • High-performance processing: to implement image processing algorithms, communication and application security;
  • Security: the ability to implement secure configuration, access authentication, secure communication and anti-tamper features to prevent unauthorised access;
  • Flexible interfacing: the system has to interface with the infrared module and local displays, along with wired and wireless communication using both industry-standard and proprietary interfaces; and
  • Power efficiency: the solution must not only reduce power consumption depending upon the operating mode, but also offer the most power-efficient implementation.

One class of device that addresses all of the above challenges is the heterogeneous system-on-chip. These devices combine high-performance ARM processor cores with programmable logic.

A flexible prototyping solution is usually preferred when developing embedded infrared imaging devices. One platform that addresses each of these challenges in a compact form factor is the MiniZed. Users can combine the Flir Lepton, or another imager, with the Xilinx Zynq-7000 device (XC7Z007S) mounted on the MiniZed development board. As the MiniZed supports WiFi and Bluetooth, it is possible to prototype IoT and IIoT applications, as well as traditional imaging solutions with a local display.

To create a tightly integrated solution, the engineer can configure the Lepton from the processing system of the Zynq over the I2C bus. The Zynq processing system can also handle the IoT or IIoT stacks and security, running an operating system such as Linux or FreeRTOS. The programmable logic is used to receive the VoSPI stream and transfer the images into DDR memory, where the operating system can access them. The Zynq processing system also outputs video for a local display.
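On the I2C side, the Lepton is controlled through its command and control interface (CCI), in which each command word combines a module identifier, a command base and an operation type. The sketch below builds such a command word; the register addresses and encoding follow Flir's public interface description document, but should be verified against the datasheet for the module revision in use, and the helper name is illustrative.

```c
#include <stdint.h>

/* Lepton CCI: 16-bit big-endian registers, camera at I2C address 0x2A.
 * Addresses below are taken from Flir's public interface description
 * and should be checked against the current datasheet. */
#define LEPTON_I2C_ADDR   0x2A
#define CCI_REG_STATUS    0x0002
#define CCI_REG_COMMAND   0x0004
#define CCI_REG_DATA_LEN  0x0006

/* Operation type occupies the low two bits of the command word. */
enum cci_op { CCI_GET = 0x0, CCI_SET = 0x1, CCI_RUN = 0x2 };

/* Build a CCI command word: module ID in the high byte, command base
 * (a multiple of 4) in the low byte, operation type in bits 1:0. */
static uint16_t cci_command(uint8_t module_id, uint8_t command_base,
                            enum cci_op op)
{
    return (uint16_t)((module_id << 8) | (command_base & 0xFC) | op);
}
```

For example, the SYS module (module ID 0x02) with command base 0x0C and a GET operation yields the command word 0x020C, which would be written to `CCI_REG_COMMAND` after the status register reports the camera is not busy.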

The application software must also provide the required power management, powering down elements of the design such as programmable logic when the system is not in use. The high-level architectural concept of the approach is demonstrated in the figure below.

[Figure: High-level architecture]
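The power-down policy described above can be expressed as a simple idle-time state machine in the application software. The states, thresholds and function name below are hypothetical placeholders for illustration, not Zynq-specific APIs.

```c
#include <stdint.h>

/* Hypothetical power policy: step down through power states as the
 * time since the last frame request grows. In a real design, entering
 * PWR_SLEEP would power down the programmable logic and peripherals. */
typedef enum { PWR_ACTIVE, PWR_IDLE, PWR_SLEEP } power_state_t;

#define IDLE_AFTER_MS   2000u   /* clock-gate the imaging pipeline  */
#define SLEEP_AFTER_MS 30000u   /* power down PL and peripherals    */

static power_state_t select_power_state(uint32_t ms_since_last_frame)
{
    if (ms_since_last_frame >= SLEEP_AFTER_MS)
        return PWR_SLEEP;
    if (ms_since_last_frame >= IDLE_AFTER_MS)
        return PWR_IDLE;
    return PWR_ACTIVE;
}
```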

If required, the engineer can generate custom image processing functions using high-level synthesis, or use pre-existing IP blocks such as the image enhancement core, which provides noise filtering, edge enhancement and halo suppression.
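As a flavour of what such a function might look like before high-level synthesis, the sketch below implements a plain 3x3 box filter over a Raw14 frame in C. It is a generic noise-smoothing kernel under assumed conventions (row-major frame, borders left unfiltered), not the vendor's image enhancement core.

```c
#include <stdint.h>

/* 3x3 box (mean) filter over a row-major Raw14 frame stored as
 * 16-bit pixels. Border pixels are copied through unfiltered for
 * brevity; an HLS implementation would add line buffers and pragmas. */
static void box_filter_3x3(const uint16_t *in, uint16_t *out,
                           int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
                out[y * width + x] = in[y * width + x];
                continue;
            }
            uint32_t sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += in[(y + dy) * width + (x + dx)];
            out[y * width + x] = (uint16_t)(sum / 9);
        }
    }
}
```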
