Spectral sorting fruit and veg
Until relatively recently, hyperspectral imaging was more commonly associated with NASA, military or geoscience imaging programmes; however, the technology is now starting to be adopted for agricultural use. For the majority of applications it is still in its infancy, but it is already promising impressive results in areas such as identifying contamination in food – physical or bacterial – or sorting fruit and vegetables according to ripeness and quality.
Hyperspectral imaging, like other spectral imaging, collects and processes information from across the electromagnetic spectrum. The goal is to obtain the spectrum for each pixel in the image of a scene, with the aim of finding foreign objects or identifying the chemical composition of food.
Whereas the human eye sees colour of visible light mostly in three bands – red, green, and blue – spectral imaging divides the spectrum into many more bands. This mode of dividing images into bands can be extended beyond the visible. In hyperspectral imaging, the recorded spectra have fine wavelength resolution and cover a wide range of wavelengths.
The output is a 3D hyperspectral data cube (x and y spatial dimensions and a λ spectral dimension). Several ways of creating the data cube exist, one of the most common being spectral scanning, in which light is diffracted through a spectrograph onto a 2D sensor. This effectively converts the 2D sensor into a line scan camera, with data recorded in the x and λ planes. The y dimension is built up as the camera – or, in the case of food processing, the conveyor beneath it – moves.
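The scanning scheme described above can be sketched in a few lines. All dimensions below are illustrative assumptions, not the specifications of any particular camera, and the frame reader is a synthetic stand-in for a real sensor readout:

```python
import numpy as np

# Sketch of how a spectral line-scan system assembles the data cube.
# Dimensions are illustrative assumptions, not real camera specs.
N_SPATIAL = 640   # x: pixels across the conveyor
N_BANDS = 224     # lambda: spectral bands recorded per pixel
N_LINES = 480     # y: lines captured as the conveyor advances

def read_line_frame():
    """Stand-in for one sensor readout: a single (x, lambda) frame."""
    return np.random.rand(N_SPATIAL, N_BANDS)

# Stack successive (x, lambda) frames along y to form the cube.
cube = np.stack([read_line_frame() for _ in range(N_LINES)], axis=0)
print(cube.shape)      # (y, x, lambda)

# The full spectrum of any one pixel is a single lambda-vector:
spectrum = cube[100, 320, :]
print(spectrum.shape)
```

Each pass of the conveyor therefore adds one (x, λ) slice, and the per-pixel spectrum that classification algorithms work on is simply one λ-vector of the finished cube.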
From a regulatory perspective, very few industries depend on rigorous inspection more than food processing. Contamination scandals have been front page news in recent years and have serious consequences for all involved – in the UK, the 2013 horsemeat contamination scandal is estimated to have wiped £300 million from one supermarket’s market value.
Naturally, therefore, the ability to detect contamination also has major consequences for the farming and food industry, and hyperspectral imaging is beginning to play a key role here – the technology got the US Department of Agriculture’s stamp of approval in 2012.
As a direct result of the horsemeat scandal, swathes of research have been undertaken, including a paper by Mohammed Kamruzzaman et al, published in Food and Bioprocess Technology, which reported on one of the first investigations of hyperspectral imaging in detecting horsemeat-contaminated beef. The investigation used four key wavelengths of light, three visible and one infrared (515nm, 595nm, 650nm, and 880nm), and demonstrated that hyperspectral imaging ‘coupled with multivariate analysis could indeed be successfully applied as a rapid screening technique for adulterant detection in minced meat’.
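The general approach – a handful of key wavelengths fed into a multivariate model – can be illustrated with a toy sketch. The reflectance values, sample counts and nearest-centroid classifier below are synthetic stand-ins, not the authors’ data or method:

```python
import numpy as np

# Toy illustration only: the band reflectances and the classifier are
# invented stand-ins for the multivariate analysis in the paper.
BANDS_NM = [515, 595, 650, 880]  # the four key wavelengths reported

rng = np.random.default_rng(0)
# Fabricated mean reflectances at the four bands for each class.
pure = rng.normal([0.30, 0.35, 0.40, 0.60], 0.02, size=(50, 4))
adulterated = rng.normal([0.34, 0.38, 0.36, 0.52], 0.02, size=(50, 4))

# Nearest-centroid rule: a minimal stand-in for multivariate analysis.
centroids = np.array([pure.mean(axis=0), adulterated.mean(axis=0)])
labels = ["pure", "adulterated"]

def classify(spectrum):
    """Assign a 4-band spectrum to the nearest class centroid."""
    return labels[int(np.argmin(np.linalg.norm(centroids - spectrum, axis=1)))]

print(classify(adulterated[0]))
```

The point of the sketch is that once the discriminating wavelengths are known, each pixel reduces to a short feature vector, which is what makes rapid screening on a production line feasible.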
The adoption of hyperspectral imaging using near-infrared (NIR) imaging spectroscopy has also been suggested in papers for more than a decade as a way to prevent prion diseases in livestock – such as bovine spongiform encephalopathy (BSE) – a 2001 outbreak of which in the UK was caused by cow feed contaminated with BSE-infected beef. A 2004 Journal of Chemometrics paper by JA Fernandez Pierna et al, suggested that focal plane array NIR imaging spectroscopy could be a cost-effective measure for identifying contaminated feed and preventing future outbreaks.
Danish firm Newtec Engineering is among the companies developing hyperspectral systems for food inspection. Its core market is potato analysis, but it is also looking at other vegetables as well as meat and fish. Its head of research, Bjarke Jørgensen, told Imaging and Machine Vision Europe that it is looking to use the technology for a range of applications, from chemometrics – analysing the starch or sugar content of potatoes, for instance – to detecting foreign objects, such as rust, plastics or metal below the surface of the crop.
The company is also in the early stages of bacterial analysis with hyperspectral imaging. According to Jørgensen, tests are being run at the University of Southern Denmark, with initial studies focusing on non-pathogenic bacteria. ‘Not only can we detect the bacteria... we can actually distinguish between [different types of] bacteria, so we can tell if it’s E. coli or Listeria,’ he said. Jørgensen added that the tests show hyperspectral imaging can differentiate between beneficial bacteria, such as the Lactobacillus found in milk, and pathogenic ones.
Newtec works with cameras from its sister company Q Technologies, which use 2D CMOS sensors able to detect light between 400nm and 1,000nm. The company is also investigating indium gallium arsenide (InGaAs) sensors, which would allow hyperspectral analysis up to 2,000nm.
This is echoed by Raf Vandersmissen, CEO of the Xenics-owned sInfraRed, which develops InGaAs sensors and cameras for a wide range of hyperspectral applications. ‘In the sorting industry you’re not only looking at one wavelength; you’re looking at hyperspectral imaging, so at the shortwave infrared along with visible,’ he said.
Vandersmissen highlighted its customer EVK and its hyperspectral sorter for blueberries: ‘They’re looking at the quality of the blueberry itself. With hyperspectral imaging, you’re going to look at the shape, but you’re also going to look at chemical properties. So, you will be able to see the difference between a nice round blueberry and one that is still nice and round but has spoiled or is rotten.’ Companies sorting blueberries are also inspecting for objects like leaves and stems among the fruit.
For the blueberry sorter, EVK uses a 1/4 VGA (256 pixels spatial x 320 pixels spectral) InGaAs sensor running at speeds of up to 330 frames per second.
Vandersmissen stated that frame rate is a very important aspect of hyperspectral imaging – especially where food is moving fast on a conveyor belt or even falling in the sorting machine between layers. The company’s full-VGA InGaAs sensors (512 pixels spatial x 640 pixels spectral) have what it believes to be the industry’s fastest frame rates at 1,700fps.
The InGaAs material sInfraRed works with covers the range between 900nm and 1,700nm. The company also has an extended-range version that starts in the visible, covering 400nm to 1,700nm. Vandersmissen said that systems based on this sensor might also use a visible camera, as this delivers better results in the visible spectrum; combining the two eliminates the possibility of a gap at around 900nm.
‘We also have a new type of camera that, while not yet used in the food industry, is based on InGaAs, but with extra layers added in the structure, a type II superlattice,’ Vandersmissen explained. ‘In that way we can extend the range to 2,350nm, starting at 900nm.’
These large light ranges present a challenge in themselves, however. As Jørgensen explained: ‘[Lighting] is the major obstacle... Halogen gives the smoothest spectrum, but [uses] a lot of power, [and gives out] a lot of heat... and [has] a very short lifespan. The alternative is LED [lighting], but this causes issues for the wavelength.’ There are low-cost LEDs that cover the visible spectrum, as well as LEDs emitting between 800nm and 1,000nm, but as Jørgensen pointed out, ‘that is very expensive’.
Another key challenge that hyperspectral imaging presents is the sheer volume of data created. An RGB image, even with millions of pixels across three colour channels, might be in the region of 1MB in size; for hyperspectral images, however, the image file could be at least an order of magnitude larger. This naturally creates problems for storing and transmitting data.
Jørgensen said that each image created by Newtec’s system – one data cube per potato – is in the order of 3.7GB. He estimated that, based on the typical throughput of the company’s potato grader, the data output of the camera is around 60TB per hour. ‘You would never get that through an Ethernet connection,’ he commented.
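Figures of that magnitude follow from simple arithmetic. The cube dimensions and throughput below are assumptions chosen to land near the quoted numbers, not Newtec’s actual specifications:

```python
# Back-of-envelope check on hyperspectral data volumes. All figures
# here are illustrative assumptions, not Newtec's real specifications.
x_pixels = 1024        # spatial pixels per scan line
y_lines = 1800         # scan lines per potato
bands = 1000           # spectral bands per pixel
bytes_per_sample = 2   # e.g. 12-16-bit raw samples

bytes_per_cube = x_pixels * y_lines * bands * bytes_per_sample
print(f"{bytes_per_cube / 1e9:.2f} GB per cube")   # ~3.69 GB

potatoes_per_hour = 16_000   # assumed grader throughput
tb_per_hour = bytes_per_cube * potatoes_per_hour / 1e12
print(f"{tb_per_hour:.0f} TB per hour")            # ~59 TB
```

Even generous assumptions about compression leave a data rate far beyond what a standard Ethernet link can carry, which is why processing has to happen at or near the sensor.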
For this reason, all processing is carried out onboard the camera in Newtec’s systems to reduce the amount of information transmitted. This need to perform image analysis before data is transmitted is echoed by Vandersmissen: ‘[Hyperspectral] applications require high frame rate cameras, which adds to the file size… We try to make smart cameras, so all the processing is done on the camera’s FPGA.’
As to the interface required to export the data, Vandersmissen said: ‘GigE Vision is fine for medium speed cameras. But for the higher speed cameras we need to switch to Camera Link [which uses two data transmission cables]. And for the highest speed cameras... we have to use a double camera link, with four Camera Link connectors at the output of the cameras… [To do this] you have two frame grabbers, or a dedicated frame grabber card that can handle the input of four cables. The reconstruction is also complicated as your image is spread over four cables.
‘That’s indeed a complicated way, but when [Xenics] makes new cameras, it will look at new interfaces like CoaXPress.’
One possible solution may come from a collaboration between researchers based at North Carolina State University and the University of Delaware, who have recently announced an algorithm capable of reconstructing hyperspectral images using less data.
The higher quality of the image reconstruction means that fewer measurements need to be acquired, resulting in smaller image sizes and easier transmission and storage. It also brings a speed benefit: writing in the IEEE Journal of Selected Topics in Signal Processing in March of this year, the authors specified that they ‘were able to reconstruct image quality in 100 seconds of computation that other algorithms couldn’t match in 450 seconds… And we’re confident that we can bring that computational time down even further.’
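The paper’s specific algorithm is not reproduced here, but the underlying idea of reconstructing a signal from fewer measurements than unknowns can be sketched with a standard compressed-sensing routine – iterative soft thresholding (ISTA) on a synthetic sparse signal. Everything below is a generic illustration, not the researchers’ method:

```python
import numpy as np

# Generic compressed-sensing sketch: recover a sparse signal from
# fewer measurements than unknowns using ISTA. Synthetic data only.
rng = np.random.default_rng(1)
n, m = 200, 80                 # signal length, measurements (m < n)
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = rng.normal(0, 1, 6)  # sparse

A = rng.normal(0, 1, (m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                              # the reduced measurement set

# ISTA: gradient step on ||y - Ax||^2, then soft-threshold for sparsity.
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of gradient
lam = 0.01                                  # sparsity penalty
x = np.zeros(n)
for _ in range(500):
    x = x + A.T @ (y - A @ x) / L                        # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # soft threshold

print(np.linalg.norm(y - A @ x))            # residual shrinks toward zero
```

The practical appeal for food inspection is the same as in the paper: if fewer measurements suffice for a faithful reconstruction, the camera captures, stores and transmits less data per item.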
It is still at the experimental stage, however, and the authors say the next step ‘is to run the algorithm in a real world system to gain insights into how the algorithm functions and identify potential room for improvement’.
Another challenge of hyperspectral imaging is the cost of the infrared sensors. While InGaAs is certainly less expensive than substrates such as mercury cadmium telluride or indium antimonide – and requires little cooling to stabilise the detector’s temperature – the substrate is still significantly more expensive than visible detectors that use CMOS.
Newtec has investigated using the InGaAs sensors to deliver images up to 2,000nm, but Jørgensen said that, for Newtec, ‘this detector is so expensive today that it would be hopeless to build a business case based on this sensor’.
The company is undertaking research into how graphene could be used and is hoping to demonstrate that graphene sensors can be developed not only to detect between 1,000nm and 2,000nm, but to do so significantly less expensively than InGaAs sensors.
Jørgensen also said this would come with one other cost benefit: ‘A lot of the optics that go in front of a camera are there to make the image focus onto a flat sensor.’ He believes that it should be possible to create curved sensors using graphene, adding that ‘if the sensor could be curved and therefore following where the image is focused you could eliminate a lot of these optics’.