Machine vision firms target embedded market

Greg Blackman reports from the Embedded Vision Summit in Santa Clara, where Allied Vision launched its new camera platform

Allied Vision has launched a €99 camera with an onboard ASIC processor specifically for the embedded vision market. The camera aims to bridge the high-performance, costly, low-volume industrial vision market and the higher-volume, lower-cost embedded market.

It was launched at the Embedded Vision Summit, a computer vision conference organised by the Embedded Vision Alliance and held in Santa Clara, California from 1 to 3 May.

Andreas Gerk, Allied Vision’s CTO, said during the show that typical cameras in the embedded market are not as feature-rich as those in the machine vision sector. Allied Vision hopes to offer some of the functionality found in machine vision to embedded vision developers through its new product line. Gerk added that the new camera platform is ‘totally different to what we have done before’.

Embedded vision is currently a hot topic in the machine vision sector: the VDMA organised a panel discussion on it at the Vision show in Stuttgart last year, and Basler has introduced Imaginghub, an online community for those building embedded vision solutions. Mark Hebbel, head of new business development at Basler, gave a tutorial at the Embedded Vision Summit on choosing time-of-flight sensors for embedded applications.

Other notable machine vision names that were exhibiting at the event in Santa Clara included Ximea, MVTec, Euresys, and Vision Components.

Jeff Bier, the founder of the Embedded Vision Alliance, said during the conference that embedded vision can mean many things: it can be an industrial camera with a processor inside; an embedded system with an integrated camera or with an external camera; or even a system sending images to the cloud.

Neural networks

Half of the technical insight presentations at the conference focused on deep learning and neural networks: algorithms that are trained to recognise objects in a scene from large amounts of labelled data, as opposed to the traditional method of writing an algorithm by hand for a specific task. Bier said that 70 per cent of vision developers surveyed by the Alliance were using neural networks, a huge shift compared with the 2014 summit only three years ago, when hardly anyone was using them.
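The contrast between the two approaches can be sketched with a toy example (illustrative only, not from the article): a hand-written detection rule versus a single artificial neuron that learns an equivalent rule from labelled data. Real deep networks stack millions of such units, but the principle is the same.

```python
import math

def handwritten_detector(brightness):
    """Traditional machine vision: an engineer codes the rule explicitly."""
    return 1 if brightness > 0.5 else 0

def train_neuron(samples, labels, lr=0.5, epochs=200):
    """Minimal 'neural network': one weight and one bias fitted to
    labelled data by gradient descent, instead of a hand-tuned threshold."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid output
            w += lr * (y - p) * x                     # gradient step
            b += lr * (y - p)
    return lambda x: 1 if w * x + b > 0 else 0

# Labelled training examples: mean patch brightness -> 'object present'.
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
learned_detector = train_neuron(xs, ys)
```

The behaviour of `learned_detector` is fitted from the data rather than specified by hand; with more data and more units, the same training loop scales to recognising objects no engineer could describe with explicit rules.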

Bier gave a presentation at the conference predicting that the cost and power consumption of the computation required for vision will fall by a factor of 1,000 over the next three years, much of it thanks to neural networks.

Bier explained that the 1,000-times figure comes from compounding three 10-times improvements: in the efficiency of the neural networks themselves, which have so far been developed largely for accuracy rather than efficiency; in the processors that run them; and in the software that mediates between the processors and the algorithms.
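The three gains compound multiplicatively rather than additively, which is how three modest-sounding 10-times improvements produce a 1,000-times result (a quick illustration of Bier's arithmetic, not from the article):

```python
# Bier's projection: three independent 10x efficiency gains, compounded.
algorithm_gain = 10   # more efficient neural network designs
processor_gain = 10   # processors specialised for neural network workloads
software_gain = 10    # better mapping of algorithms onto processors

# Multiplied, not summed: 10 * 10 * 10 = 1,000.
total_gain = algorithm_gain * processor_gain * software_gain
```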

In an article on the Embedded Vision Alliance’s website, Bier noted five computer vision trends that are likely to have a big impact on society in general: huge amounts of image data; deep learning; 3D sensing; simultaneous localisation and mapping (SLAM), used in robotics; and edge computing, a term that means processing on the device itself rather than on a server or in the cloud.

The advances in computer vision are opening up all kinds of new ways of using vision technology, from the embedded vision inside Microsoft’s Hololens augmented reality headset – Marc Pollefeys, director of science at Microsoft and a professor at ETH Zurich, gave a keynote presentation about Hololens – to cameras generating analytics for retail. Embedded vision also has the potential to disrupt more traditional markets, such as surveillance, the subject of a presentation by Michael Tusch at ARM.

Rudy Burger of Woodside Capital Partners, though, made the point that there have not actually been many large-scale embedded vision products; he mentioned Kinect and Mobileye, which Intel agreed to acquire earlier this year, as two examples. ‘We’re just at the very beginning,’ he said.

Turning back to machine vision, Arun Chhabra of 3D surface inspection company 8tree gave a presentation on building an embedded 3D vision system for mapping dents on aircraft, a task that has traditionally been extremely rudimentary and labour-intensive. 8tree’s system is a 3D scanner that works by pattern projection and can annotate the area of the plane being inspected to measure any dents.

Embedded vision is not just a new market for machine vision companies; the way vision in general is being deployed is changing, which in turn could affect machine vision itself. Allied Vision, Basler and others are starting to provide embedded vision products to cater for these new ways of using vision technology, so that if a customer asks whether a system can run on an ARM chip, for instance, they are able to deliver it.
