Lettuce imaging is tip of the iceberg

Researchers at Earlham Institute have developed a machine learning platform to categorise lettuce crops using computer vision and aerial images.

The platform, called AirSurf-Lettuce, is capable of scoring iceberg lettuces with an accuracy of greater than 98 per cent, according to the study.

The researchers conducted field trials at G's Growers, the second largest vegetable grower in the UK, based in Ely.

Aerial imagery is used by crop researchers, growers and farmers to monitor crops during the growing season.

To extract meaningful information from large-scale aerial images collected from the field, high-throughput phenotypic analysis solutions are required, which not only produce high-quality measures of key crop traits, but also help farmers make prompt and reliable crop management decisions.

The software measures crop quantity and size, and pinpoints location, helping farmers harvest with precision and get the crop to market in the most efficient way. Importantly, the technology can be applied to other crops, widening the scope for positive impact across the food chain.

Lettuce is big business, especially in East Anglia, with 122,000 tonnes produced in the UK each year. Up to 30 per cent of yield can be lost to inefficiencies in the growing process and harvest strategies; recovering that loss could provide a significant economic boost.

It's very important that farmers and growers understand precisely when crops will become harvest-ready, so that they can plan the logistics, trading and marketing of their produce further along the chain.

Traditionally, however, measuring crops in fields has been time-consuming and labour intensive, as well as prone to error. Novel AI solutions based on aerial images can therefore provide a much more robust and effective method.

Another barrier to efficiency is inclement weather, which has become more frequent in recent years and can throw off harvesting times significantly, as crops take different lengths of time to mature.

The AirSurf technology – developed by members of the Earlham Institute’s Zhou Group, including the paper’s first authors, Alan Bauer and Aaron Bostrom – uses deep learning combined with ultra-wide-scale imaging analysis to measure iceberg lettuce in a high-throughput mode. The system identifies the precise quantity and location of lettuce plants, with the additional advantage of recognising crop quality.
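As an illustration only – not the published AirSurf-Lettuce pipeline – a high-throughput counting approach of this kind might slide a small convolutional classifier over tiles of an aerial image and keep the positions that score above a threshold. The tile size, stride, network architecture and threshold below are all assumptions made for the sketch.

```python
# Illustrative sketch: sliding-window CNN counting of lettuce heads in a
# large aerial image. All sizes, thresholds and the model architecture are
# assumptions, not the published AirSurf-Lettuce method.
import numpy as np
import tensorflow as tf

TILE = 20     # assumed tile size in pixels, roughly one lettuce head
STRIDE = 10   # assumed sliding-window stride

def build_classifier():
    """A small binary CNN: tile -> probability that it contains a lettuce."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(TILE, TILE, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def count_lettuces(image, model, threshold=0.9):
    """Slide the classifier over the image; return candidate positions."""
    h, w, _ = image.shape
    tiles, positions = [], []
    for y in range(0, h - TILE + 1, STRIDE):
        for x in range(0, w - TILE + 1, STRIDE):
            tiles.append(image[y:y + TILE, x:x + TILE])
            positions.append((x, y))
    scores = model.predict(np.array(tiles, dtype=np.float32) / 255.0,
                           verbose=0).ravel()
    return [pos for pos, s in zip(positions, scores) if s >= threshold]

if __name__ == "__main__":
    model = build_classifier()   # untrained placeholder; a real system would load trained weights
    aerial = np.random.randint(0, 255, (200, 300, 3), dtype=np.uint8)
    detections = count_lettuces(aerial, model)
    print(f"{len(detections)} candidate lettuce positions")
```

In practice overlapping detections would also need to be merged (for example with non-maximum suppression) before reporting a plant count.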

Combining this system with GPS allows farmers to track the size distribution of lettuce in their fields, which can only help to increase the precision and effectiveness of farming practice, including the timing of harvest.
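A minimal sketch of how that might work is shown below: pixel detections are geo-referenced and then binned into a size distribution. It assumes a north-aligned image whose corner latitude/longitude values are known and uses made-up bin edges; real georeferencing would rely on the drone’s camera model and orthorectified imagery.

```python
# Illustrative sketch: map pixel detections to approximate GPS coordinates
# and summarise estimated head diameters as a size distribution.
# Corner coordinates, bin edges and detections are assumptions for the example.
from collections import Counter

def pixel_to_gps(x, y, img_w, img_h, lat_top, lat_bottom, lon_left, lon_right):
    """Linearly interpolate a pixel position to latitude/longitude."""
    lat = lat_top + (lat_bottom - lat_top) * (y / img_h)
    lon = lon_left + (lon_right - lon_left) * (x / img_w)
    return lat, lon

def size_distribution(diameters_cm):
    """Bucket lettuce head diameters into small / medium / large classes."""
    def bucket(d):
        if d < 15:
            return "small"
        if d < 25:
            return "medium"
        return "large"
    return Counter(bucket(d) for d in diameters_cm)

# Example usage with made-up detections: (pixel position, estimated diameter in cm).
detections = [((40, 60), 14.2), ((120, 80), 22.5), ((210, 150), 27.0)]
for (x, y), diameter in detections:
    lat, lon = pixel_to_gps(x, y, 300, 200, 52.40, 52.39, 0.26, 0.27)
    print(f"lettuce at ({lat:.5f}, {lon:.5f}), diameter {diameter} cm")
print(size_distribution(d for _, d in detections))
```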

First author Alan Bauer, of the Earlham Institute, said: ‘This cross-disciplinary collaboration integrates computer vision and machine learning with the lettuce growing business to demonstrate how we can improve crop yields using machine learning.’

Jacob Kirwan, innovation manager at industry partner G's Growers, added: ‘Farming at a large scale means that precision is essential when ensuring that we are producing crops in an environmentally and economically sustainable way. Using technology like AirSurf means that growers are able to understand the variability in their fields and crops at a much higher level of detail than was previously possible.

‘The decisions that can then be taken from this information – such as varying applications of inputs and irrigation, changing harvest strategies and planning the optimum time to sell crop – will all contribute towards increasing on-farm yields and improving farm productivity.’
