The big stories of 2016


This year has been a busy one for the vision sector, with numerous acquisitions, embedded computing becoming more prominent, and greater interest in 3D and hyperspectral imaging. Greg Blackman looks back at a packed 2016

Buying big

The big acquisitions occurred towards the end of the year, with Flir purchasing Point Grey for $253 million in October and Teledyne Technologies buying e2v for £620 million earlier this month. But the company buy-outs began in January, when European private equity firm Ambienta acquired high-speed camera manufacturer Mikrotron. Ambienta plans to combine Mikrotron with Tattile, which it purchased in 2012, into a new machine vision company, LakeSight Technologies.

On the systems side, Hexagon bought Aicon 3D Systems in April, while lighting firm Gardasoft was acquired by Optex of Japan in May, and Cognex bought EnShape in November to bolster its 3D imaging capabilities.

North America struggles

The North American vision market had a tough start to the year, contracting 11 per cent in the first quarter of 2016, according to the AIA. It recovered towards the end of the year, up seven per cent in Q3. The European market, by contrast, grew eight per cent in 2016, according to figures from VDMA Machine Vision.

Testing time-of-flight

There was a flurry of development around time-of-flight (ToF) imaging, a technique not considered accurate enough for machine vision until fairly recently. Odos Imaging's Vision Award win in 2014 marked the point at which the machine vision sector started to recognise the technology. In January 2016, Odos Imaging was involved in a project to build a prototype ToF subsea camera designed to monitor pollution on the seabed.
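As background to the technique, a ToF camera measures depth from the round-trip time of emitted light. The sketch below illustrates only the underlying ranging principle; commercial ToF cameras such as those mentioned here typically infer the delay per pixel from the phase shift of modulated light rather than timing pulses directly.

```python
# Time-of-flight ranging principle: depth = (speed of light * round-trip time) / 2.

C = 299_792_458.0  # speed of light in a vacuum, m/s


def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the target, given the measured round-trip time of a light pulse."""
    return C * round_trip_seconds / 2.0


# A 20-nanosecond round trip corresponds to a target roughly 3 metres away.
print(round(tof_distance(20e-9), 3))  # → 2.998
```

The halving accounts for the light travelling to the target and back; the nanosecond scale of the delays is why per-pixel ToF measurement was long considered too imprecise for industrial accuracy requirements.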

Basler showed its new ToF camera at the Control trade fair in April, while Pmd won a Frost and Sullivan award in May for its ToF technology.

Basler's time-of-flight camera.

Embedded processing

Jeff Bier, founder of the Embedded Vision Alliance, commented in an article in the August/September issue of Imaging and Machine Vision Europe that the low cost of embedded hardware is set to change the face of machine vision. This was the general view from the Vision show in November, with companies including Basler, Sick, MVTec, Imago Technologies, and many others presenting solutions in this space.

In January, a new version of the GenICam standard was released with functionality for embedded vision. Both Basler and FPGA provider Xilinx have launched online platforms for engineers to develop embedded vision systems while, in October, a €4 million Horizon 2020 project – TULIPP – began, aiming to increase the peak performance per watt of image processing applications fourfold.

Putting the hype into hyperspectral

There is now much more interest in imaging at wavelengths outside the visible spectrum. Hyperspectral imaging used to be the realm of pure science, but is now being employed far more frequently in industry. Vision distributor Stemmer Imaging added software from Perception Park, designed specifically with industrial imaging in mind, to its product portfolio.

In March, the European Helicoid project deployed hyperspectral imaging technology in a surgical trial to detect cancer tissue in the brain. The technique has also been used for the early detection of Alzheimer's disease, to harvest cauliflowers, and to analyse ancient manuscripts.

This technology could also find its way into mobile phones; earlier this month, scientists from the VTT Technical Research Centre of Finland were trialling a hyperspectral camera for smart phones.

VTT Technical Research Centre of Finland has trialled a hyperspectral camera for smart phones.

Deep learning

There has been a lot of hype around deep learning algorithms for image processing, a technology being developed by internet firms such as Google and Facebook that could influence machine vision in the future. The latest version of Halcon, released in November, includes an OCR algorithm based on deep learning. Machine learning is also now being used in security and medical imaging.

Parrot goggles

And finally, 2016 was the year when NASA captured images of a sonic boom while working towards quieter supersonic aircraft; an electronics shop in Germany employed a service robot equipped with vision to greet its customers; thermal imaging was used to keep racehorses from getting injured; and scientists at Stanford University made laser goggles for a parrot and filmed it flying through a laser sheet to study flight.

Using four cameras running at 1,000fps, a high-speed laser and a willing slow-flying parrot equipped with custom 3D printed laser goggles, researchers at Stanford University captured images of the wingtip vortices.
