This year has been a busy one for the vision sector, with numerous acquisitions, embedded computing becoming more prominent, and greater interest in 3D and hyperspectral imaging. Greg Blackman looks back at a packed 2016
The big acquisitions occurred towards the end of the year, with Flir purchasing Point Grey for $253 million in October and Teledyne Technologies buying e2v for £620 million earlier this month. But the company buy-outs began in January, when European private equity firm Ambienta acquired high-speed camera manufacturer Mikrotron. Ambienta plans to establish a new machine vision company, LakeSight Technologies, combining Mikrotron with Tattile, which it purchased in 2012.
On the systems side, Hexagon bought Aicon 3D Systems in April, while lighting firm Gardasoft was acquired by Optex of Japan in May, and Cognex bought EnShape in November to bolster its 3D imaging capabilities.
North America struggles
The North American vision market had a tough start to the year, contracting 11 per cent in the first quarter of 2016, according to the AIA. It recovered towards the end of the year, up seven per cent in Q3. The European market, by contrast, grew eight per cent in 2016, according to figures from VDMA Machine Vision.
There was a flurry of development around time-of-flight (ToF) imaging, a technique not considered accurate enough for machine vision until fairly recently; Odos Imaging's Vision Award win in 2014 marked the point at which the sector began to take the technology seriously. In January 2016, Odos Imaging was involved in a project to build a prototype ToF subsea camera designed to monitor pollution on the seabed.
Basler's time-of-flight camera.
Jeff Bier, founder of the Embedded Vision Alliance, commented in an article in the August/September issue of Imaging and Machine Vision Europe that the low cost of embedded hardware is set to change the face of machine vision. This was the general view from the Vision show in November, with companies including Basler, Sick, MVTec, Imago Technologies, and many others presenting solutions in this space.
In January, a new version of the GenICam standard was released with functionality for embedded vision. Both Basler and FPGA provider Xilinx have launched online platforms for engineers to develop embedded vision systems while, in October, a €4 million Horizon 2020 project – TULIPP – began, aiming to increase the peak performance per watt of image processing applications fourfold.
Putting the hype into hyperspectral
There is now much more interest in imaging at wavelengths outside the visible spectrum. Hyperspectral imaging used to be the realm of pure science, but is now being employed far more frequently in industry. Vision distributor Stemmer Imaging added software from Perception Park, designed specifically with industrial imaging in mind, to its product portfolio.
In March, the European Helicoid project deployed hyperspectral imaging technology in a surgical trial to detect cancer tissue in the brain. The technique has also been used for the early detection of Alzheimer's disease, to harvest cauliflowers, and to analyse ancient manuscripts.
This technology could also find its way into mobile phones; earlier this month, scientists from the VTT Technical Research Centre of Finland were trialling a hyperspectral camera for smart phones.
VTT Technical Research Centre of Finland has trialled a hyperspectral camera for smart phones.
There has been a lot of hype around deep learning algorithms for image processing, technology that’s being developed by internet firms like Google and Facebook, but which could influence machine vision in the future. The latest version of Halcon, released in November, includes an OCR algorithm based on deep learning. Machine learning is also now being used in security and medical imaging.
And finally, 2016 was the year when NASA captured images of a sonic boom while working towards quieter supersonic aircraft; an electronics shop in Germany employed a service robot equipped with vision to greet its customers; thermal imaging was used to keep racehorses from getting injured; and scientists at Stanford University made laser goggles for a parrot and filmed it flying through a laser sheet to study flight.
Using four cameras running at 1,000fps, a high-speed laser, and a willing slow-flying parrot equipped with custom 3D-printed laser goggles, researchers at Stanford University captured images of the bird's wingtip vortices.