UKIVA Machine Vision Conference and Exhibition 2018

16 May 2018
Milton Keynes, UK

Autonomous vehicles take conference centre stage

By Paul Wilson, UKIVA chairman

The 2018 UKIVA Machine Vision Conference and Exhibition combines an exciting conference programme across seven presentation theatres with an exhibition from some of the world’s leading machine vision companies. The two keynote presentations will examine the use of vision in two very different types of autonomous vehicles.

Professor Tony Pipe, deputy director at Bristol Robotics Laboratory, UWE, will talk about the Venturer project in the first keynote. The Venturer consortium, led by SNC-Lavalin’s Atkins, brings together ten public, private and academic partners, including BAE Systems, Williams and Bristol Robotics Laboratory. This wide-reaching project considers the responses of passengers and other road users to driverless cars, as well as the enabling technology and the insurance and legal implications of increased vehicle autonomy. The consortium’s autonomous vehicle, BAE Systems’ Wildcat, is equipped with a situational awareness system featuring radar and cameras.

In the second keynote, Henry Harris-Burland, VP of marketing at Starship Technologies, will discuss the development of advanced, self-driving personal delivery robots that can carry food or shopping within a 2-mile radius, using pavements to make their deliveries. Each robot is equipped with nine or ten cameras, radar, and ultrasonic sensors that create an ‘awareness bubble’, allowing it to detect and avoid obstacles. A robot will be in action at the event.

Vision threads

The session dedicated to deep learning provides background information on this hot topic, its benefits and its applications. Presentations on embedded vision will review the differences between embedded vision and traditional vision, as well as looking at potential new machine vision architectures.

3D vision highlights technological innovations such as: a new pattern projection method; the combined use of 3D and 2D data for improved performance; and combining multiple 3D sensors. The practical use of 3D vision systems, especially related to use with robots and cobots, will also be covered.

Camera technology talks will cover interfaces such as 5GigE and 10GigE, and how to migrate towards them. Other talks will look at new sensors and ways of getting more out of existing sensors, including TDI and WDR, and the use of multi-sensor systems.

Optics and illumination explores the common pitfalls and solutions when sourcing optics. Other subjects include hyperspectral imaging, multi-spectrum lighting, ways of standardising LED lighting schemes, and optimising lighting for production lines. The importance of precision lighting control in a variety of machine vision applications will be highlighted.

Vision-guided robotics also features strongly in the vision systems thread, along with tools to improve automation efficiency. These include an IoT tool for capturing slow-motion video of machine faults, and an integrated control solution that provides real-time imaging feedback for self-optimising production processes. A new low-profile 3D camera will also be reviewed.

The understanding vision technology programme includes the CoaXPress standard, EMVA and ISO procedures for measuring camera sensitivity, choosing optics for the latest CMOS and InGaAs sensors, and lighting for machine vision-robot applications. There is also a chance to see how to avoid failures with vision systems, and how different vision techniques can benefit a wide range of industries.
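
For readers new to the topic, the EMVA 1288 sensitivity measurements referenced here are built on a simple linear camera model; as background (a sketch of the model, not a summary of the talk), the mean digital output of a pixel is μy = K(η·μp + μd), where μp is the mean number of photons arriving at the pixel, η the quantum efficiency, K the overall system gain in DN per electron and μd the dark signal in electrons. The resulting signal-to-noise ratio is SNR = η·μp / √(σd² + σq²/K² + η·μp), with σd² the temporal dark noise and σq² = 1/12 DN² the quantisation noise; the standard’s procedure essentially measures η, K and σd over a range of exposures.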

Vision innovation covers current and future 3D and 2D technologies, developments in LWIR technology, and multispectral imaging. There will be a look at pixel level polarisation filters for removing reflection effects directly on the CMOS image sensor, and how choosing the correct cable assembly can solve application-based challenges.

More than 50 exhibitors will be at the show, including vision component manufacturers, vision component and system distributors, and systems integrators. The exhibition offers a great opportunity to see some of the latest vision products and talk to experts about any aspect of machine vision.

Exhibitors

Acrovision

Acrovision will show how vision applications increasingly need item handling to optimise the image acquisition process. The company will also demonstrate vision technology for collaborative robots.

Vision can be used to guide a robot, provide component position data for pick-and-place applications, or inspect a product by moving a camera on a robot arm. Acrovision can now supply a family of robots for different requirements. The key advantage of using collaborative robots is that they have inherent safety features that enable them to work alongside humans.
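
To give a flavour of the coordinate hand-off this involves, the short sketch below maps a detected part position from image pixels into robot coordinates using a planar homography. It is a generic, hypothetical illustration using OpenCV and NumPy rather than any Acrovision product, and the calibration points and values are invented for the example.

import cv2
import numpy as np

# Four or more reference points measured in both image pixels and robot XY (mm).
# These values are hypothetical placeholders for a real calibration.
pixel_pts = np.array([[312, 108], [1604, 121], [1590, 972], [326, 958]], dtype=np.float32)
robot_pts = np.array([[0.0, 0.0], [400.0, 0.0], [400.0, 260.0], [0.0, 260.0]], dtype=np.float32)

# Fit a planar homography mapping image coordinates onto the robot's work plane
H, _ = cv2.findHomography(pixel_pts, robot_pts, cv2.RANSAC)

def pixel_to_robot(u, v):
    """Convert a detected part centroid (u, v) in pixels to robot XY in mm."""
    pt = cv2.perspectiveTransform(np.array([[[u, v]]], dtype=np.float32), H)
    return float(pt[0, 0, 0]), float(pt[0, 0, 1])

# Example: an inspection step reports a part at pixel (845, 512)
x_mm, y_mm = pixel_to_robot(845, 512)
print(f"Pick target for the robot: X={x_mm:.1f} mm, Y={y_mm:.1f} mm")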

The guide-to-teach function of cobots enables quick and easy programming, and they can be integrated easily into existing production systems or re-deployed to another application.

Active Silicon

Active Silicon will be showcasing its USB3 Vision Processing Unit, demonstrating simultaneous acquisition and display from four USB3 Vision cameras. The USB3 VPU has been designed for industrial and medical use. With standard PC interfaces available, this embedded system can be readily adapted for many applications.

The company will also display a new line of low-cost high-performance frame grabbers – both Camera Link and CoaXPress. In addition to embedded systems and frame grabbers, it will be demonstrating its latest HD/3G-SDI solution for Tamron HD block cameras.

Active Silicon will be giving a presentation at the show entitled: ‘High-speed image acquisition with real-time GPU processing’.

CVR Lighting

CVR Lighting will be exhibiting its Matrix LED lighting system, which offers tuneable light in wavelengths from 440nm to 970nm. This is an excellent lighting system for hyperspectral imaging.

The ability to image an object in colour adds a new dimension to the data space. Alongside the spatial representation of an object, the intensity of its features in the image is key for analysis, and this intensity varies with the wavelength of the incident light. Distinguishing between the signals coming from different wavelengths is a complex undertaking.

The Matrix LED lighting system is ideal for applications needing colour or hyperspectral imaging. Matrix illumination is available in configurations offering between 3 and 16 channels, all controllable individually via an RS-485 interface.
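
Purely as an illustration of how such a multi-channel light might be used, the sketch below steps through each wavelength channel and grabs one frame per channel to build a multispectral image cube. The set_channel and grab_frame helpers are hypothetical placeholders standing in for the RS-485 commands and camera API, which are not described here.

import numpy as np

NUM_CHANNELS = 8           # e.g. a mid-range configuration (3 to 16 channels are available)
HEIGHT, WIDTH = 1024, 1280

def set_channel(channel, intensity):
    """Hypothetical helper: enable a single LED channel over the RS-485 interface."""
    ...

def grab_frame():
    """Hypothetical helper: trigger the camera and return one monochrome frame."""
    return np.zeros((HEIGHT, WIDTH), dtype=np.uint16)

# Build a multispectral cube with one image plane per wavelength channel
cube = np.empty((NUM_CHANNELS, HEIGHT, WIDTH), dtype=np.uint16)
for ch in range(NUM_CHANNELS):
    set_channel(ch, intensity=100)
    cube[ch] = grab_frame()

# Each pixel now carries a spectral signature across the illuminated wavelengths
signature = cube[:, 512, 640]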

Effilux

Effilux designs, manufactures and distributes LED lighting for machine vision. During the show, Effilux will introduce its catalogue of flexible and modular products: Effi-Flex, Effi-Backlight, Effi-Lase, and Effi-Ring. Effilux will also exhibit some custom products to show its capability to make bespoke solutions.

Effilux helps its customers to select the right lighting system for a given application, offering technical feasibility studies in its Birmingham laboratory as well as free equipment loans.

Effilux is a French company, founded in 2009, and has just opened an office in Birmingham, UK.

Euresys

Euresys will show the Coaxlink Octo, an eight-connection CXP-6 frame grabber. It offers a camera data transfer rate of 5GB/s over a PCIe Gen 3 x8 bus with a peak delivery bandwidth of 7.8GB/s and an effective delivery bandwidth of 6.7GB/s. The device is also compatible with the Memento Event Logging Tool.
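
Those headline figures are consistent with the underlying link arithmetic (a back-of-the-envelope check rather than Euresys’ own breakdown): each CXP-6 connection runs at 6.25Gbit/s with 8b/10b encoding, leaving roughly 5Gbit/s of payload, so eight connections carry about 8 × 5Gbit/s = 40Gbit/s, or 5GB/s of camera data. On the host side, a PCIe Gen 3 x8 slot provides 8 lanes × 8GT/s with 128b/130b encoding, or just under 7.9GB/s, in line with the quoted peak; protocol overhead accounts for the lower effective figure.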

The Coaxlink Octo is designed for multi-camera applications, with support for up to eight cameras on a single frame grabber. Successful Coaxlink applications include 3D automated optical inspection, flat panel display inspection, printing inspection, and in-vehicle video transfer.

Framos

Framos will be presenting its full imaging portfolio with a focus on cameras and industrial solutions including Intel’s RealSense technology.

The company will show examples of the latest advances in machine vision, 3D technology and embedded vision, and provide practical solutions to industry-specific challenges. In addition, based on its long-term partnership with sensor manufacturer Sony, Framos will showcase the latest CMOS sensors for vision applications.

Users can explore easy-to-integrate 3D vision on demo stations. The booth will also include cameras from Emergent Vision, Smartek Vision and ISVI, customised sensor solutions from Pyxalis, and lighting components from Effilux and Falcon Illumination.

Gardasoft

Jools Hudson from Gardasoft will give a presentation illustrating how variations in LED output intensity can cause poor results in machine vision. The talk will also cover the ability of dedicated lighting controllers to reduce hardware costs and enable new application opportunities in both area scan and line scan imaging.

There will also be the opportunity to see multiple lighting schemes from Gardasoft in operation at the exhibition. The CC320 Trigger Timing Controller will be on show, linked to multiple lights and a single camera. The controller can be configured to send trigger outputs to each light to turn them on and off in sequence. The controller also triggers the camera for each lighting pulse, allowing the acquisition of a sequence of different images from the same object using a single camera.

This approach can also be used in line scan applications where the information from different illumination sources can be captured on sequential lines on a single line scan camera, and individual images for each illumination source extracted using image processing software.
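
A minimal sketch of that extraction step is shown below. It assumes the lights are pulsed in a fixed repeating order, so each illumination source simply owns every Nth line of the raw image; the code is a generic NumPy illustration rather than Gardasoft software.

import numpy as np

def split_interleaved_lines(raw_image, num_lights):
    """Separate a line scan image whose rows were captured under a repeating
    sequence of illumination sources into one image per source."""
    # Drop any trailing rows from an incomplete lighting cycle
    usable = (raw_image.shape[0] // num_lights) * num_lights
    raw_image = raw_image[:usable]
    # Row k was captured under light (k % num_lights), so strided slicing recovers each image
    return [raw_image[i::num_lights] for i in range(num_lights)]

# Example: three lights (say bright field, dark field and backlight) cycled line by line
raw = np.random.randint(0, 256, size=(3000, 2048), dtype=np.uint8)  # stand-in for camera data
bright, dark, back = split_interleaved_lines(raw, 3)
print(bright.shape)  # (1000, 2048): one image per illumination source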

IDS

IDS will show the new NXT vision app-based cameras and sensors for the first time in the UK. Also on show will be the Ensenso X series of 3D stereo cameras featuring 5-megapixel Sony IMX264 CMOS sensors and GPU-based image processing.

IDS NXT Vegas is the first product of a new device family. With an integrated 1.3-megapixel CMOS sensor, a liquid lens with autofocus, LED illumination and a time-of-flight sensor for distance measurement, it is fully equipped for many different image processing tasks.

The Ensenso X 3D GigE camera system provides up to 20 per cent wider field of view, 35 per cent greater lateral resolution, and almost 30 per cent lower noise compared to earlier versions. With the new Ensenso SDK 2.2, stereo matching can now be supported by a graphics card. Using the GPU accelerates the processing by about five times, depending on the parameterisation.

At the conference, Peter Dietrich will present on 3D vision and how 3D data can be combined with 2D information. A second talk will cover how standards define procedures for measuring key performance parameters of a camera or camera system.

LMI Technologies

LMI Technologies will exhibit its flagship Gocator 3D smart sensors for inline inspection, including live demos of a laser line profiler and a snapshot sensor. The event is an opportunity for industry professionals to experience Gocator’s 3D inspection capabilities while immersing themselves in LMI’s FactorySmart approach to inline automation, inspection and optimisation. This approach goes beyond the simple data acquisition of standard sensors to provide a more intelligent solution to the real-world challenges of manufacturing.

Additionally, LMI will give a presentation at 3pm as part of the 3D vision thread, entitled: ‘Multi-3D sensor set-ups and acceleration capabilities made easy’. The talk will explore how to sync multiple sensors to inspect complex parts, and how a PC can be used to accelerate data acquisition and processing speed.

Matrix Vision

Matrix Vision will present camera models based on Sony Pregius sensors with up to 12 megapixel resolution, and Gigabit Ethernet, Dual Gigabit Ethernet and USB 3.0 interfaces.

Furthermore, the 3D/6D modular perception camera, the mvBlueSIRIUS, will be on display. Inspired by human vision, the camera determines the position, location and movement of known objects in space. Logistics and automotive users will benefit from this reliable and fast system.

In addition, the company will present the intuitive smart camera mvBlueGEMINI. This camera makes it possible for end users and system integrators to implement applications more efficiently. Applications can be solved without programming because the camera covers all basic image processing functions.

Finally, Matrix Vision will introduce a standalone PC version of the mvIMPACT Configuration Studio (mvIMPACT-CS), the software core of the mvBlueGEMINI.

Matrox Imaging

Matrox Imaging is proud to be exhibiting with its UK representative, ClearView Imaging. Visitors will see the latest advances in Matrox Imaging machine vision software, including the new classification tool in the Matrox Imaging Library (MIL), which uses deep learning. The latest update to MIL also adds a photometric stereo tool to bring out hard-to-spot surface anomalies, and a dedicated tool for locating rectangular features.
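
For context, photometric stereo recovers per-pixel surface orientation from several images of the same scene taken under different, known light directions, which is why shallow surface anomalies that are invisible in any single image stand out in the result. The sketch below is a generic NumPy illustration of the classic Lambertian formulation, not the MIL tool itself; the image stack and light directions are assumed inputs.

import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from K images taken under
    K known directional lights (classic Lambertian photometric stereo).

    images: array of shape (K, H, W); light_dirs: array of shape (K, 3)."""
    K, H, W = images.shape
    I = images.reshape(K, -1).astype(np.float64)        # K x (H*W) intensity matrix
    # Solve light_dirs @ G = I in the least-squares sense; G holds albedo-scaled normals
    G, *_ = np.linalg.lstsq(np.asarray(light_dirs, dtype=np.float64), I, rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-6)              # unit normals, 3 x (H*W)
    return normals.reshape(3, H, W), albedo.reshape(H, W)

# Usage: normals, albedo = photometric_stereo(np.stack(image_list), np.array(light_list))
# Dents and scratches appear as local disturbances in the normal map even when
# they are barely visible in the individual images.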

Also on display will be the latest version of the company's Design Assistant flowchart-based development environment, its Iris GTR smart camera and its 4Sight GPm vision controller. Sales manager Jason MacDonald will present ‘Demystifying machine learning for machine vision’ at the conference. His presentation will provide an overview of the technology, discuss its applicability and limits for machine vision, and touch on the benefits and challenges of applying it effectively.

Multipix Imaging

Multipix Imaging will return to MVC 2018 with the latest machine vision components from leading manufacturers. Specialising in the complete vision chain, Multipix has been supplying and supporting OEMs, system integrators and machine builders within the UK and Ireland for more than 20 years.

Embedded vision and deep learning are among the hot topic areas Multipix will cover with representatives on hand from Basler, Euresys and MVTec. Multipix will also set the scene for 3D imaging technologies, currently an exciting growth area within vision solutions. This will include the introduction of a new and accurate 3D projection method that is transforming camera integration with robots and cobots, helping with the Industry 4.0 drive. Simon Hickman, sales director at Multipix, will present on 3D vision at 11am.

Sick

Sick will highlight its AppSpace programming environment and show how the concept is starting to bear fruit for automation and robotics development.

AppSpace is an open software platform that allows customisation of applications on Sick programmable sensors and devices. Rather than being restricted to proprietary software, AppSpace enables system integrators, OEMs and end-users to develop their own solutions.

Visitors to the UKIVA show will learn about applications being developed in AppSpace that are now available for wider industry use. The first is the Sick LabelChecker, a label reading and verification solution based on Sick’s InspectorP vision sensor. Sick engineers perfected the application working closely with a European chocolate manufacturer. The solution is based on the InspectorP camera for reading and verifying text, numbers, barcodes and 2D codes, as well as inspecting label design and print quality.

Sick will also promote its Inspector PIM60 URCap, an entry-level vision-guided robot solution for pick-and-place, inspection and measurement. It integrates Inspector PIM60 2D vision sensors with Universal Robots’ UR3, UR5 and UR10 robots. The Inspector PIM60 URCap is a powerful toolkit for creating a vision-guided robot task with minimum time and effort.

The company will introduce its Trispector P Beltpick solution for enhanced picking of products on a conveyor through integrated 3D vision robot guidance. The solution offers support for ABB PickMaster and Universal Robots, and provides users with access to AppSpace apps for belt picking applications.

Stemmer Imaging

Stemmer Imaging will be presenting a technical seminar covering machine learning at the UKIVA Machine Vision Conference. Equipment on show at the exhibition highlights 3D imaging capabilities. New 3D tools within Common Vision Blox and two LMI Gocator 2340 all-in-one 3D smart sensors operating in dual sensor mode will be demonstrated.

With deep learning such a hot topic in industrial machine vision, Dr Jon Vickers will give a presentation at 11am on 16 May covering machine learning, of which deep learning is a part. The talk will show the kinds of applications that typically succeed and, crucially, where machine learning is not the correct choice.

The latest 3D tools within Common Vision Blox (CVB 2018) will be demonstrated in conjunction with a C2-2040HS 3D GigE Vision camera from Automation Technology and an M18 laser stripe generator from Z-Laser. The CVB 2018 Image Manager gains core 3D functionality for handling and displaying 3D data, including calibration data. Two new tools have also been added: Metric3D can create calibrated 3D data, while Match3D allows an acquired point cloud to be matched to a trained point cloud. This enables differences between the trained and test data to be found.
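
As a rough illustration of what such point cloud matching involves, a test cloud can be aligned to a trained reference and the per-point deviations inspected afterwards. The sketch below is a generic example using the open-source Open3D library under assumed file names and tolerances, not the Match3D API itself.

import numpy as np
import open3d as o3d

# Load a trained (reference) cloud and an acquired (test) cloud; file names are hypothetical
reference = o3d.io.read_point_cloud("trained_part.ply")
test = o3d.io.read_point_cloud("scanned_part.ply")

# Align the test cloud to the reference with point-to-point ICP (2.0 is an assumed search radius)
result = o3d.pipelines.registration.registration_icp(test, reference, 2.0)
test.transform(result.transformation)

# Per-point distances from the aligned test cloud to the reference highlight differences
distances = np.asarray(test.compute_point_cloud_distance(reference))
print(f"Mean deviation: {distances.mean():.3f}, worst: {distances.max():.3f}")
print(f"Points deviating by more than 0.5 units: {int((distances > 0.5).sum())}")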

LMI Gocator 2340 3D profile sensors can collect cross-sectional profile scans to form 3D point clouds representing whole parts for volumetric measurements. The Gocator will be shown working in dual sensor mode, where it automatically recognises a second Gocator, known as a ‘buddy’. Dual sensor mode combines profile data from both sensors as if they were one, and uses a single GUI to configure, measure, make decisions, and show results.
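
To give a sense of the volumetric step, the sketch below stacks successive cross-sectional profiles, taken at a known transport spacing, into a height map and integrates it. The resolutions and profile data are assumed values, and the code is a generic NumPy illustration rather than anything running on the sensor.

import numpy as np

# Assumed acquisition geometry (hypothetical values)
x_resolution_mm = 0.1    # spacing between points within one profile
y_resolution_mm = 0.2    # travel between successive profiles, from the encoder

# Each scan yields one profile of heights (mm) across the part; stacking profiles
# along the direction of travel forms a height map of the whole part.
x = np.arange(2000)
profiles = [np.clip(5.0 - 1e-5 * (x - 1000.0) ** 2, 0.0, None) for _ in range(500)]  # stand-in data
height_map = np.vstack(profiles)                 # shape: (profiles, points per profile)

# Volume above the reference plane: sum of heights times the footprint of each cell
cell_area_mm2 = x_resolution_mm * y_resolution_mm
volume_mm3 = float(height_map.sum() * cell_area_mm2)
print(f"Estimated part volume: {volume_mm3 / 1000.0:.1f} cm^3")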
