Machine vision firms target embedded market

Greg Blackman reports from the Embedded Vision Summit in Santa Clara, where Allied Vision launched its new camera platform

Allied Vision has launched a €99 camera with an onboard ASIC processor aimed specifically at the embedded vision market. The camera is intended to bridge the high-performance, costly, low-volume industrial vision market and the higher-volume, lower-cost embedded market.

It was launched at the Embedded Vision Summit, a computer vision conference organised by the Embedded Vision Alliance and held in Santa Clara, California from 1 to 3 May.

Andreas Gerk, Allied Vision’s CTO, said during the show that typical cameras in the embedded market are not as feature-rich as those in the machine vision sector. Allied Vision hopes to offer some of the functionality found in machine vision to embedded vision developers through its new product line. Gerk added that the new camera platform is ‘totally different to what we have done before’.

Embedded vision is a hot topic in the machine vision sector at the moment, with the VDMA organising a panel discussion at the Vision show in Stuttgart last year, and Basler introducing its online vision community, Imaginghub, for those building embedded vision solutions. Mark Hebbel, head of new business development at Basler, gave a tutorial at the summit on choosing time-of-flight sensors for embedded applications.

Other notable machine vision names that were exhibiting at the event in Santa Clara included Ximea, MVTec, Euresys, and Vision Components.

Jeff Bier, the founder of the Embedded Vision Alliance, said during the conference that embedded vision can mean many things: it can be an industrial camera with a processor inside; an embedded system with an integrated camera or with an external camera; or even a system sending images to the cloud.

Neural networks

Half of the technical insight presentations at the conference focused on deep learning and neural networks. These are algorithms that can be trained to recognise objects in a scene using lots of data, as opposed to the traditional method of writing an algorithm for a specific task. Bier said that 70 per cent of vision developers surveyed by the Alliance were using neural networks, a huge shift compared to only three years ago at the 2014 summit when hardly anyone was using them.
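The contrast Bier's survey captures can be illustrated with even the simplest learnable model. The sketch below is a toy example, not anything presented at the summit: it trains a single perceptron to recover a labelling rule purely from examples, where the traditional approach would be to hand-code that rule directly.

```python
import random

random.seed(0)

# Toy training set: 2D points labelled 1 if x + y > 1, else 0.
# Points very close to the boundary are skipped to keep a clear margin.
points = []
while len(points) < 200:
    x, y = random.random(), random.random()
    if abs(x + y - 1) > 0.1:
        points.append((x, y))
labels = [1 if x + y > 1 else 0 for x, y in points]

# A single perceptron: weights and a bias adjusted from examples,
# rather than a rule written by hand for this specific task.
w = [0.0, 0.0]
b = 0.0
lr = 0.1

for _ in range(100):  # training epochs
    for (x, y), target in zip(points, labels):
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        err = target - pred
        w[0] += lr * err * x
        w[1] += lr * err * y
        b += lr * err

# Check how well the learned rule matches the true labelling.
correct = sum(
    (1 if w[0] * x + w[1] * y + b > 0 else 0) == t
    for (x, y), t in zip(points, labels)
)
print(f"{correct}/{len(points)} training points classified correctly")
```

Real embedded vision systems use deep networks with millions of parameters, but the principle is the same: the behaviour is learned from data rather than programmed explicitly.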

Bier gave a presentation at the conference predicting that the cost and power consumption for the computation required for vision will decrease by 1,000 times over the next three years, much of this thanks to neural networks.

Bier clarified that the 1,000-fold figure comes from three compounding 10-times improvements: firstly, in the efficiency of the neural networks themselves, which have largely been developed for accuracy rather than efficiency; secondly, in the efficiency of the processors running the networks; and thirdly, in the software that mediates between the processors and the algorithms.
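Bier's arithmetic can be checked directly, since independent efficiency gains compound multiplicatively. The factor labels below paraphrase his breakdown:

```python
# Bier's three compounding 10x gains (labels paraphrased from his talk)
network_gain = 10    # neural networks redesigned for efficiency, not just accuracy
processor_gain = 10  # processors specialised for running neural networks
software_gain = 10   # software mapping the algorithms onto the processors

total = network_gain * processor_gain * software_gain
print(f"compounded improvement: {total}x")  # 10 * 10 * 10 = 1000x
```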

In an article on the Embedded Vision Alliance’s website, Bier noted five computer vision trends that are likely to have a big impact on society in general: huge amounts of image data; deep learning; 3D sensing; simultaneous localisation and mapping (SLAM), used in robotics; and computing on the edge, a term that means doing processing on the device rather than on a server or in the cloud.

The advances in computer vision are opening up all kinds of new ways of using vision technology, from the embedded vision inside Microsoft’s HoloLens augmented reality headset to cameras generating analytics for retail. Marc Pollefeys, director of science at Microsoft and a professor at ETH Zurich, gave a keynote presentation about HoloLens. Embedded vision also has the potential to disrupt more traditional markets, such as surveillance, a topic Michael Tusch at ARM addressed in his presentation.

Rudy Burger at Woodside Capital Partners made the point, though, that there have not actually been many large-scale embedded vision products; he mentioned Kinect and Mobileye, which Intel acquired earlier this year, as two examples. ‘We’re just at the very beginning,’ he said.

Turning back to machine vision, Arun Chhabra at 3D surface inspection company 8tree gave a presentation about making an embedded 3D vision system for mapping dents on aircraft, a job that is traditionally extremely rudimentary and labour-intensive. 8tree’s system is a 3D scanner that operates by pattern projection and can annotate the area of the plane being inspected with measurements of any dents.
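8tree has not detailed its algorithm here, but pattern-projection (structured-light) scanners generally recover depth by triangulation: a projected feature that appears displaced in the camera image encodes its distance. A generic sketch of that textbook relationship follows; all parameter values are illustrative and are not 8tree's.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Textbook triangulation: depth = focal length * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: a 1400 px focal length, a 0.1 m
# projector-to-camera baseline, and a feature shifted by 70 px.
z = depth_from_disparity(1400, 0.1, 70)
print(f"depth = {z:.2f} m")  # 1400 * 0.1 / 70 = 2.00 m
```

In a dent-mapping application, depths recovered this way across the surface would then be compared against the expected surface shape to locate and measure deviations.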

The embedded vision sector is not just a new market for machine vision companies; the way vision in general is deployed is changing, which in turn could affect machine vision. Allied Vision, Basler and others are starting to provide embedded vision products to cater for these new ways of using vision technology, so that if a customer asks whether a system can run on an ARM chip, for instance, they are able to support that.
