
Programming choices: library or open source?

Since the mid-1990s, developing machine-vision software for an industrial vision application has followed a predictable path: you invest in a professional machine-vision software package from an established supplier, pick the most appropriate solution within that package for your general requirements, and then develop your software with a little hand-holding from your supplier’s technical support team. But today, if you intend to develop a machine vision application, there is a new option: go open source.

Looking to develop an imaging application? You can access a range of free solutions through open-source libraries such as AForge.NET, the Open Source Computer Vision Library (OpenCV), or Point Cloud Library. Aiming to execute targeted deep learning in a machine vision application? Investigate open-source frameworks such as TensorFlow, Caffe, or PyTorch.

These general-purpose libraries contain advanced interoperable software shared by developers at the bleeding edge of vision research, and anyone can access and use them for free. This includes the full source code, enabling you to tinker at the fundamental level, perform extensive software testing and implement new features. What’s more, these libraries are supported by a large, global community of experts, keen to contribute new software, share ideas and updates, and collaborate in building new innovative solutions.

An example of the quality of free software on offer is PatchCore, an automated visual anomaly detection method freely available on GitHub – the largest source code host in the world – to anyone who wants to try to implement it on a GPU. Developed by University of Tübingen PhD student Karsten Roth during an internship, alongside expert collaborators at Amazon, PatchCore addresses the cold-start problem – where the software has to identify anomalies without having been given access to any negative examples or defects. It detects and localises these anomalies using a maximally representative memory bank of normal feature sets (images of non-defective items) and an outlier detection model.
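The core idea can be sketched, greatly simplified, in a few lines of Python: score a test sample by its distance to the nearest entry in a memory bank built only from normal examples, so anything far from everything normal is flagged as anomalous. In this sketch, random vectors stand in for the patch-level deep features and coreset subsampling that PatchCore actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Memory bank": feature vectors extracted from normal (non-defective) images.
# Stand-in random vectors; PatchCore uses coreset-subsampled deep features.
memory_bank = rng.normal(size=(500, 128))

def anomaly_score(feature, bank):
    """Distance to the nearest normal feature in the bank."""
    dists = np.linalg.norm(bank - feature, axis=1)
    return dists.min()

# A feature close to a known-normal one, and one far from everything normal
normal_like = memory_bank[0] + rng.normal(scale=0.01, size=128)
odd = rng.normal(loc=5.0, size=128)

print(anomaly_score(normal_like, memory_bank))  # small score: looks normal
print(anomaly_score(odd, memory_bank))          # large score: anomalous
```

Note that no defective examples are needed to build the model, which is exactly what makes the approach suitable for the cold-start setting described above.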

The method has already been used in practice for anomaly detection on solar cell electroluminescence images, and a number of other real-world applications. “Intel’s open-source OpenVINO toolkit has this anomaly detection library that has reimplemented various collections of anomaly detection methods,” says Roth. “Within this library, PatchCore is the best performing method on average, even though this library has more recent methods included as well.”

Talking more generally about how open-source machine vision software is starting to make a mark, Roth says: “Even individuals can start to build their own neural network-based applications – there is very much a growing interest in expanding and building on top of these open-source applications.”

Why then, are most machine vision and automation innovators still paying for professional libraries? “There is a pretty significant overhead that comes with developing your own set of tools that are specifically targeted to your problem at hand,” says Roth. “Obviously, if you have the money and you have the people to research and figure out which method to select and train for your specific problem, then – of course – you can do it, but there are those applications and use cases where it's hard to replace professional libraries.”

The value of libraries

Having been deeply involved in the industry since its inception, Pierantonio Boriero, Director of Product Management at Zebra Matrox Imaging, has a good handle on why clients still turn to Zebra’s portfolio of professional machine-vision software development tools: “Our users value working with software that's been rigorously professionally validated; knowing that they can have access to technical support and maintenance when they truly need it; and also that they have a certain peace of mind when it comes to intellectual property rights (there's no risk of hidden royalties, which would affect the cost basis for their particular product).”

Zebra Technologies, which acquired Matrox Imaging earlier in the year for $875 million, offers the popular Matrox Imaging Library X (MIL X). MIL was first released in 1993 and initially focused on 2D algorithms and tools that work on monochrome images. Iterations over the years have increased MIL’s capabilities, allowing users to perform colour analysis in images, work on 3D data and, since 2018, leverage deep learning for inspection. It features functions for image capture, processing, analysis, annotation, display and archiving, and also includes MIL CoPilot, an interactive environment for experimentation, prototyping and code generation.

“We're continuing to add deep learning models to our library to provide users with more options to perform automated visual inspection using deep learning,” expands Boriero. Today, MIL X represents a comprehensive collection of ready-made tools or functions for developing machine vision, image analysis and medical imaging applications. “In MIL, we provide the tool, the environment needed to train the deep learning model for a specific use case and, obviously, perform the inference or prediction using that model,” says Boriero.

Alongside MIL X, which is designed for users with expert programmers on staff capable of developing machine-vision applications, Zebra also offers Matrox Design Assistant X. “MIL is really, first and foremost, aimed at original equipment manufacturers, whereas Design Assistant is aimed at systems integrators,” explains Boriero. “System integrators need an environment to accelerate their application development.” Design Assistant is an integrated development environment for Microsoft Windows in which vision applications are created by constructing an intuitive flowchart instead of writing traditional program code, and in which graphical, web-based operator interfaces for those applications are designed.

Similarly, MVTec offers two products targeted at users with differing in-house machine-vision software development expertise: Halcon and Merlic (alongside MVTec’s Deep Learning Tool for easy image data labelling, offered for free). First developed by researchers from the Technical University of Munich and adapted for industry by spin-off MVTec, Halcon was first released in 1996. Today, Halcon is a powerful toolkit featuring an integrated development environment (HDevelop), numerous interfaces and a range of deep-learning technologies. It serves all industries, with the library used across all areas of imaging, including blob analysis, morphology, matching, measuring and identification.

“It's a great, comprehensive product nowadays, with more than 2,100 operators,” says Maximilian Lückenhaus, MVTec Software Director Marketing and Business Development. “And it’s used in tens of thousands of applications worldwide.” One unusual application example was in a NASA robot for the International Space Station called Robonaut, where Halcon was wielded to allow the robot to track and grasp objects in zero gravity.

“Halcon is meant for programmers that want to have full control of everything,” explains Lückenhaus. “But we learned that there are also other customers that don't have the time to program the whole solution for themselves, or customers that don't have knowledge about machine-vision technologies.” For these customers, MVTec built Merlic. Merlic provides access to the same library, but only contains preconfigured tools for typical tasks accessible through a graphical user interface, as Lückenhaus describes: “You take the tools, and drag and drop, and you can stitch them together – you need no programming knowledge.” As well as integrated PLC communication and image acquisition based on industry standards, all standard machine-vision tools, such as calibration, measuring, counting, checking, reading and position determination – as well as 3D vision with height images – are included.

MVTec's Maximilian Lückenhaus (left) and Zebra Matrox Imaging's Pierantonio Boriero

With open-source software starting to build up basic technical support services and training to ease the development process, Lückenhaus sees the likes of OpenCV as competitors. However, he highlights two key drawbacks for customers thinking about opting for open-source development. “In many cases, it is not quite clear what the patent situation behind some of the algorithms is,” he says. “And, as long as you have a programming community that pushes some specific subjects, you have parts that are quite up-to-date and you have parts that are quite old. This is different for our library – we must keep all important parts of our library up-to-date because our industry customers demand it.

“A lot of machine vision and automation still is more a niche product, and you won't find so many open-source programmers for it,” he continues. “Smaller companies that are really quite closely working together with customers is where professional libraries will remain important from our point of view.”

Going with the flow

Another company offering specialised libraries of algorithms that can be used by developers to implement machine-vision applications is MathWorks. Its Deep Learning Toolbox is also a child of the 1990s (originally called the Neural Network Toolbox) and consists of deep-learning algorithms, techniques and models that provide a framework for designing and implementing deep neural networks, including those that can be used in industrial vision contexts. Users can also access several apps that help them through the network design, testing and evaluation process.

The Deep Learning Toolbox has been applied in a huge range of industries, from the development of a deep-learning system for real-time object detection at sea by maritime technology company Drass to improving automated visual inspection of sheet-shaped products on the production line for Mitsui Chemicals.

Much like MVTec, Zebra and others offering professional libraries, MathWorks has recently made it easier to use the various tools within the Deep Learning Toolbox, releasing higher-level packages for non-experts “who are looking to solve a specific problem, whether it’s using deep learning or some other technique under the hood”, such as the Medical Imaging Toolbox. MathWorks also provides lower-level packages if users wish to customise further, such as the Computer Vision Toolbox and Lidar Toolbox.

Where MathWorks differs is in fully embracing the various vision software development platforms that exist today, including those that are open source. For David Willingham, Principal Deep Learning Product Manager at MathWorks, it is important the company acknowledges the reality that product developers have their preferences and don’t necessarily want to be tied to a particular operating system, coding language or brand – interoperability is key.

“A few years ago, our community and the open-source community for deep learning had the same problem, and that was ‘how do we all coexist together, and how can we share the models that we've created with other platforms as freely as possible?’,” says Willingham. “By having different interoperability techniques in the Deep Learning Toolbox, it enables users to mix and match which tools they might want to use in different parts of their workflow.”

In Zebra’s MIL X and MVTec’s Halcon, users can choose from supplied, pre-defined deep-neural network architectures or import a compatible third-party open-source neural network model stored in the widely used Open Neural Network Exchange (ONNX) format. The Deep Learning Toolbox takes this a step further, providing import and export functions to ONNX, as well as popular, free and open-source platforms TensorFlow and PyTorch.

On top of this, MathWorks offers Matlab Coder and GPU Coder to expedite the deployment of imported networks – a unique offering for downstream product development. What this means is that users who have created a deep-learning model in, for example, TensorFlow can import it into Matlab to automatically generate native embedded code for deployment on an FPGA, simplifying and accelerating the process of integrating deep learning into a given product.

“While, in many cases, the latest research comes out in open source, it's very difficult to take that research and get that onto a chip in a product that might go to mass market – it's a long lag time,” says Willingham. “We're focused on enabling engineers to take these techniques and make products out of them. That's why I say there's a bright future for professional libraries – there's always a need to help people do things quickly and easily and understand the value for their business.”


