
State of play

Mark Williamson, director of corporate market development, Stemmer Imaging

All aspects of machine vision technology have come a long way in a very short space of time, and camera technology in general – and smart camera technology in particular – is no exception. Smart camera performance has increased significantly in recent years to accommodate increased image resolution and more complex, faster processing and analysis requirements. The development and mass manufacture of low-power processors for smartphones means that it is now possible for smart cameras to offer image processing capabilities that were previously only available on PC-based systems.

In recent years we have also seen a number of camera companies attempt to strip out cost and functionality and adopt a commodity business model. This has led to a significant drop in prices with an expected increase in quantities. The winner here is the customer – but, for the manufacturer, especially in these slow economic times, the increase in quantities is struggling to compensate for the reduction in margin. This pressure is forcing all camera companies to change their business model, move up market or fade away. These pressures on margins, together with a slowdown in the product innovation cycle, are likely to lead to a period of consolidation among camera manufacturers in 2014, as there is not space for all manufacturers to survive. Darwin’s theory of natural selection will mean fewer, larger camera suppliers in the coming years.

One area of camera technology that has attracted a surge of interest in the past year, however, has been the 3D smart camera, and it will be interesting to see whether this increased level of interest translates into increased sales in 2014. The emergence of easier-to-use 3D smart cameras with an integrated laser source and optics for 3D triangulation has been one of the most striking developments in smart camera technology and has the potential to move this previously complex 3D market into the mainstream. While 3D machine vision can solve many problems, it adds more data and more complexity to the system – complexity that these new solutions simplify. End users and vision integrators realise that many 3D applications can still be handled by 2D systems, so 3D smart cameras have historically only been used where they are really needed. However, with this new ease of use there is significant interest from the automotive, automation, electronics, rubber and tyre, metal and wood industries, so there is every chance that 2014 will prove to be the ‘year of the 3D smart camera’.

Chris Brown, European business development manager, Flir Systems

Manufacturers are increasingly considering machine vision. For many, systems based on infrared technology are proving to be superior. They are enabling manufacturers to validate and increase product quality and throughput, minimise waste and improve profitability.

While traditional machine vision can see a production problem, it can’t detect thermal irregularity. Infrared vision gives much more information. It can also detect heat energy in the presence of smoke and steam. For non-contact precision temperature measurement there is nothing to rival infrared.

Another advantage is that it requires no additional lighting to illuminate the target scene; it achieves the same quality image day and night. For traditional systems external lighting is a prerequisite and adds to the overall cost. Furthermore, establishing the correct lighting conditions can be time-consuming and the end result is generally temperamental.

Flir Systems is an established provider of fixed-mounted thermal imaging cameras, models of which are small enough to be installed anywhere on the production line. The company’s A-Series cameras are proving popular for automatically checking the thermal performance of products such as electronic resistors and car windscreen heating elements.

In the manufacture of plywood and veneer, logs are softened for further processing and infrared imaging ensures the critical temperature is reached. The food and beverage industries are also enthusiastically adopting the technology for a variety of applications.

These are just a few examples, but the application scope of thermal imaging for quality assurance and process control is huge.

Sam Lopez, senior manager of sales and marketing, Matrox Imaging

The 1970s heralded the arrival of the third industrial revolution. Based on automation and driven by the microchip, computers and globalisation, this revolution succeeded in bringing wide-scale production to ever higher levels of efficiency. It is now reaching maturity and a fourth industrial revolution, Industry 4.0, is taking hold. Driven by digitisation, Industry 4.0 looks to transform manufacturing with smarter machines and software, robots that offer greater ease-of-use, new processes like 3D printing, and networked manufacturing processes. All of this will result in the flexibility to produce smaller batches locally that cost less and require less labour.

As a supplier of machine vision hardware and software, we see that flexible manufacturing affects how our factory-floor customers perceive and use Matrox Imaging products. Manufacturers look for machine vision systems that can interface easily to robots and PLCs. Vision platforms must be versatile and support a wide range of applications and lend themselves to various tasks and product lines without requiring costly re-tooling. Factories need tools that let them re-configure on-the-fly and accommodate new inspections and parts, while following the trend towards using simpler, smaller robots for many different applications.

Manufacturers of vision systems want general-purpose hardware – for example smart cameras, which let them perform a variety of tasks like code reading, character recognition, measurement, etc., instead of function-specific sensors, like barcode readers, designed for one task only. Vision hardware must be computationally fast, compact and rugged enough to meet the demands of today’s factories.

Software also plays a critical role in flexible manufacturing as it is used to reprogram and reconfigure the vision and production tasks. Manufacturers look for machine vision software that is simple to use, yet still demand algorithms that are sophisticated and robust enough to handle image variations and work across a variety of hardware platforms.

Tim Losik, CEO, ProPhotonix

This year has seen several LED and laser diode advancements. However, one of the most important technology changes affecting machine vision is the introduction of the new direct-emitting 520nm laser diode from Osram Opto Semiconductors. This technology makes green structured light lasers a much more attractive solution for OEMs and system integrators compared with traditional technologies.

Structured light lasers have been available in green for some time using DPSS lasers, which are typically bulky, expensive and lack temperature stability. There are a number of reasons why OEMs and system integrators choose a green machine vision laser. The user is often restricted in absolute power level by laser safety classifications. Just as the human eye is more sensitive to green than to red, the same is typically true of cameras: a green laser will appear brighter than a red one of the same power output. Machine vision equipment fabricators can now safely use direct-emission green structured light lasers without the high cost and onerous space requirements of previous technology, and obtain faster throughput without sacrificing inspection integrity.

Kyle Voosen, marketing director, National Instruments, UK and Ireland

If you live in the UK, ‘cameras everywhere’ is hardly an eye-catching topic. The British Security Industry Association says there are between 4 million and 5.9 million CCTV cameras installed in the UK alone.

While this number may seem large, it’s tiny when compared to what’s coming. A major transformation will take cameras off the street corner and machine vision systems off the factory floor and embed them into nearly every aspect of our lives – keeping us informed, productive and even safe.

Leading the transformation is the emergence of very powerful yet low-cost and low-power embedded processors, which are making it possible to incorporate computer vision technology into all kinds of embedded systems. One of the first beneficiaries of embedded vision is the automobile, where computer vision will soon help you park, keep you in your lane and even warn you if you appear drowsy behind the wheel.

A bit closer to the typical machine vision industry is the revolutionary work taking place at Rethink Robotics. With the help of embedded vision, the firm is developing robots intended to work side by side with people on factory floors, in laboratories and, someday, even in your home.

Embedded vision is also helping the visually impaired to see. At the University of Oxford, researchers led by Dr Stephen Hicks are integrating cameras and LED arrays into normal-looking eyewear. With the help of a hip-mounted computer, the system can continuously analyse its surroundings and overlay bright, high-contrast information on the lenses in a way that compensates for the wearer’s visual limitations.

Of course, the most obvious embedded system to make use of computer vision is likely to be in your pocket. The business intelligence firm, Strategy Analytics, estimates that within three years there will be two billion smart mobile devices in circulation, most of them with built-in cameras.

Mike Gibbons, director of sales and marketing, Point Grey

We’ve seen significant adoption of USB 3.0 since Point Grey came out with the first USB 3.0 camera in 2011. Most key camera, software and peripheral vendors have announced the release of, or at least plans to release, USB 3.0 products. Hundreds of our customers worldwide are integrating cameras into imaging and vision systems, often replacing old analogue or Camera Link-based solutions. Over the next few years more and more users will migrate to USB 3.0 as a result of its higher 440MB/s bandwidth and its ease of use.
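
As a rough check of whether a given sensor fits within that 440MB/s figure, the sketch below estimates the sustained data rate of an uncompressed stream. The example resolution, frame rate and 8-bit pixel depth are illustrative assumptions, and real-world throughput also depends on protocol overhead and pixel packing.

```python
# Rough estimate of the camera data rate a USB 3.0 link must sustain.
# Assumes an uncompressed stream and ignores protocol overhead and packing;
# the 440 MB/s figure is the usable bandwidth quoted in the text.

USB3_BANDWIDTH_MB_S = 440.0

def required_bandwidth_mb_s(width, height, fps, bits_per_pixel=8):
    """Return the sustained data rate in MB/s for an uncompressed stream."""
    bytes_per_frame = width * height * bits_per_pixel / 8.0
    return bytes_per_frame * fps / 1e6

# Example: a 2048 x 1088 (~2.2 MP) sensor streaming 8-bit mono at 160 fps.
rate = required_bandwidth_mb_s(2048, 1088, 160)
print(f"{rate:.0f} MB/s -> fits USB 3.0: {rate <= USB3_BANDWIDTH_MB_S}")
```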

Educating customers is the key to this migration, however, and those vendors with the necessary experience and background in USB technology will become dominant. We can also expect continued development and widespread adoption of the USB3 Vision standard, aimed at giving users the flexibility and reliability that they require.

Just as important as the camera interface, however, is the image sensor technology. In the long term there will continue to be a place for both CCD and CMOS. Both technologies are improving, with CMOS image quality getting better and CCD frame rates getting faster. Most new USB 3.0 camera offerings use CMOS sensors, in general because they consume less power and fit easily within the 4.5W power budget; because they are relatively easy to design into a camera, requiring less electronics and engineering expertise; and because the speeds of many CMOS sensors actually require the interface’s high bandwidth. However, we can expect CCD-based USB3 Vision cameras to become just as common as customer demand for optimal imaging performance grows. Many of the new sensors from Sony are quad-tap imagers (the CCD is split into four quadrants), which allows them to run at much faster frame rates, and they use proprietary EXview HAD CCD II technology to reduce read noise and improve quantum efficiency and sensitivity.

The overall trend is about providing users with choice – of form factor, interface, and image sensor – and 2014 promises to be another exciting year for developments in digital imaging technology.

Mark Butler, group manager for product management and marketing, Teledyne Dalsa

Many factors have influenced the machine vision market. Firstly, there has been a technology shift from CCD to CMOS, in both area scan and line scan applications. While the current state of the industry indicates that the majority of revenue is from CCD-based cameras, looking at the work done in R&D labs you’ll find the polar opposite – almost all R&D work is being performed on CMOS-based cameras.

A second technology shift is the move to tri-linear colour and near-infrared imaging. In line scan imaging, tri-linear colour products are growing very rapidly. The total system cost, which includes cameras, frame grabber, lenses and computational power, has steadily dropped over the years. As a result, more and more applications are able to benefit from the use of colour images. Colour neatly breaks the visible spectrum into three separate areas, typically red, green and blue, which enables the detection of a greater range of defects than monochrome imaging. End equipment can therefore now affordably offer colour images for operators to view, which are preferred over monochrome images. As a later step, near-infrared (NIR) imaging should also trend upwards.

There has also been a market shift beyond traditional machine vision applications. Non-traditional machine vision applications such as Intelligent Traffic Systems (ITS) and a variety of applications around entertainment now require the skill sets that are at the core of traditional machine vision. Development of very fast cameras, 3D stereoscopic imaging, and different types of image processing has been key to the success of machine vision companies.

The medical market has grown. Populations around the world are ageing, which will mean greater demand for better and faster diagnoses. This will drive the need for in-lab automation within the medical industry. Applications such as digital pathology and ophthalmology are growing at a rapid rate and will require machine vision to increase the amount of automation in medicine.

Ron Folkeringa, business manager, Intercon 1

As the entry cost of machine vision systems and components decreases, vision will be used in more applications. There will also be a significant increase in the number of new users who are looking to leverage the technology to provide improvements in their applications.

This rapid increase will be exciting, as well as challenging, for the suppliers. It will be exciting to see all the new methods by which the technology will be utilised in areas that we had not even considered before, but at the same time challenging in the sense that users will be using the technology in ways that were never intended. This will provide an opportunity for the vision industry to educate this new user base in the fundamentals of the technology along with general best practices.

The other exciting aspect is that, because these new users will not have been entrenched in the way vision has been utilised in the past, they will present to us some exciting and challenging opportunities. These applications will test the limits of current capabilities and in turn foster the development of new materials and technologies that can address these new applications.

Interconnect cables are not immune to these new challenges. Cables will continue to be bent and contorted into the most challenging of spaces and ranges of motion, and all the while they will be expected to maintain excellent signal integrity through millions of flex cycles. It is these unique challenges and opportunities that allow Intercon to provide its customers with the innovative solutions it is known for. These are exciting times for all.

Marco Snikkers, director of sales and marketing, Pixelteq

Multispectral imaging is gaining traction, moving from the research lab to data-rich field applications. The benefits are being realised in application-specific cameras from fixed machine vision to autonomous unmanned aerial systems. Many users have already demonstrated three- to nine-channel multispectral cameras that open powerful new applications in agriculture, biomedical, inspection, remote sensing, robotics, authentication, and more. In the next year a growing number of OEMs and end users will commercialise multispectral sensors in a variety of application-specific cameras.

Technology advances are moving spectral imaging outside the lab – beyond filter wheel or grating-based cameras into true field industrial-grade cameras. Micro-patterned filter array technology is creating multispectral devices with the same frame rates, size, weight and power as traditional monochrome cameras. No longer limited to just RGB, pixel-scale dichroic filters can control both the position and width of spectral bands optimised to a given application. And wafer-level processing now makes production of multispectral sensors both scalable and cost-effective.

Silicon-based multispectral sensors will be used more frequently to combine discrete visible colours with near-infrared (NIR) or ultraviolet (UV) bands, enhancing spectral performance and contrast beyond human vision in the same snapshot at video rates. Implementations will range from four-channel RGB and NIR line scan cameras to nine-channel area sensors with custom mosaics incorporating spectral bands from the visible, NIR and UV.
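
As an illustration of how such a micro-patterned filter array might be read out, the sketch below splits a raw mosaic frame into per-band, sub-sampled images. The 3x3 pattern, nine-band count and absence of interpolation are simplifying assumptions, not any particular vendor’s design.

```python
import numpy as np

# Minimal sketch of reading out a hypothetical 3x3 micro-patterned filter
# mosaic: every pixel carries one of nine spectral bands, and each band is
# recovered as a sub-sampled image (no interpolation). The layout and band
# order are illustrative assumptions.

def split_mosaic(raw, pattern=3):
    """Return a dict of band index -> sub-sampled image from a mosaic frame."""
    bands = {}
    for r in range(pattern):
        for c in range(pattern):
            bands[r * pattern + c] = raw[r::pattern, c::pattern]
    return bands

raw_frame = np.random.randint(0, 4096, size=(1200, 1200), dtype=np.uint16)
bands = split_mosaic(raw_frame)
print(len(bands), bands[0].shape)   # 9 bands, each 400 x 400
```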

This multispectral technology will also be applied to InGaAs and InSb sensors, changing what has traditionally been panchromatic detection into ‘colour infrared’ vision using multiple pseudo-coloured bands in the short- and mid-wave infrared.

And new applications continue to emerge as researchers use spectroscopy and hyperspectral cameras to discover better ways to inspect, diagnose, detect, screen, sort, measure, and image.

Looking ahead, expect to see more multispectral sensors in more places, enabling application-specific cameras that deliver data-rich images.

Frank Grube, CEO, Allied Vision Technologies

Machine vision cameras and image-processing systems keep conquering new fields of applications beyond the factory floor. Camera manufacturers must adapt with expanded technology portfolios and custom development for specific markets.

In the last three decades, machine vision has grown in the manufacturing sector for automated quality inspection. In the meantime, other application fields have also discovered the benefits of automated image processing and digital cameras: Intelligent Transportation Systems (ITS), medical and scientific imaging, security and surveillance or even the entertainment industry.

So far, these new application markets have used industrial cameras that were predominantly designed for industrial inspection. In many cases, they replaced consumer cameras and brought the benefits of high durability, precise triggering, and remote operation. However, as these markets get more familiar with digital image processing, they demand more specific features that standard industrial cameras don’t provide.

To meet these requirements, camera manufacturers will increasingly have to move away from the one-size-fits-all business model to design specific camera models for specific applications. This has already started, for example, for traffic and surveillance applications. With its extended operating temperature range and lens control features, a camera like the AVT Prosilica GT takes into account the specific needs of the outdoor imaging market that did not exist on the factory floor.

Another way for camera manufacturers to address non-industrial applications is to expand the range of imaging technologies they offer. This is particularly true with the spectral sensitivity of cameras. Infrared cameras are used in scientific and security applications to reveal what neither the human eye nor conventional cameras can see.

As their application markets diversify, camera manufacturers must adapt their product and technology range. With one of the widest camera portfolios covering both the visible and infrared spectrum, Allied Vision Technologies has already started this transition.

Scott Summerville, CEO, Microscan Systems

Products, components or packages containing damaged or poor-quality 1D or 2D codes can result in very costly errors. At Microscan, we see more and more interest in machine vision verification using smart cameras as a tool to ensure code readability in automated in-line operations. This trend is being fuelled by a growing number of companies that require compliance with code standards and impose fines on suppliers whose codes do not meet these standards. A code may be readable within one system, yet unreadable at other points in the supply chain using different types and brands of readers. This is where verification adds so much value: it provides an objective, agreed-upon measurement of code quality to ensure readability throughout the supply chain.

Another reason why in-line smart camera machine vision verification is gaining momentum is that manufacturers in the food and beverage, consumer goods and pharmaceutical industries can error-proof each individual code on their packaging, which allows them to track and trace the product through the supply chain to the consumer. This capability facilitates faster, targeted and smarter consumer recalls in the event that one is ever necessary. In-line verification with machine vision also amounts to 100 per cent inspection of codes, and is therefore much more thorough than random sampling. Depending on the grading standard and tolerance selected, in-line verification can also detect poor print quality before codes become problematic and generate costly rework or downstream fines. In this way, in-line verification can be an important part of a company’s predictive maintenance programme.

In-line verification allows companies to verify the quality of both 1D and 2D codes against established industry standards, including ISO 15415, ISO 15416 and AIM DPM. Machine vision software analyses the quality of the code and assigns a value for each parameter required by the code quality standard, and the code receives an overall grade. An alert can be set if codes begin to fall below the established grade, well before they actually become unreadable. If verification to established industry standards is not necessary, manufacturers can easily modify the verification parameters to meet their own internal requirements.
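
A minimal sketch of that alerting logic might look like the following, assuming ISO-style per-parameter grades on a 4.0 to 0.0 scale. The parameter names, the simplification of taking the lowest parameter grade as the overall grade, and the 1.5 threshold are illustrative, not the exact procedures defined in ISO 15415/15416.

```python
# Each verified code gets per-parameter grades (ISO-style 4.0-0.0 scale);
# the overall grade is taken here simply as the lowest parameter grade, and
# an alert fires when it drops below a configurable internal threshold.

ALERT_THRESHOLD = 1.5  # hypothetical internal limit, stricter than "unreadable"

def overall_grade(parameter_grades):
    """Overall grade is limited by the worst-scoring parameter."""
    return min(parameter_grades.values())

def check_code(code_id, parameter_grades):
    grade = overall_grade(parameter_grades)
    if grade < ALERT_THRESHOLD:
        print(f"ALERT: code {code_id} graded {grade:.1f}, check printer or marker")
    return grade

check_code("lot-0042", {"contrast": 3.0, "modulation": 2.5, "decodability": 1.0})
```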

ISO and AIM standards require verification systems that are equipped with particular lighting geometries. This can be addressed with high-performance lighting products that are specifically engineered for integration with smart cameras, including built-in light controllers that can be directly managed from the camera.

The sooner issues can be detected in the supply chain, the faster manufacturers can take action to address the problem. Or, even better, take action before the issue even becomes a problem. An in-line machine vision solution – with a compact smart camera, machine vision software and industrial lighting – provides a low-cost solution for companies in need of high-speed code verification.

Gregory Hollows, director of machine vision solutions and certified vision professional, Edmund Optics

Imaging technology for both consumer and industrial applications is in the midst of a revolution. There is every indication that sensor and lighting technology will continue to rapidly improve. To realise the benefits of this, optical components must also improve, placing the optics industry in a time of transition. Older sensor technologies did not typically have the ability to fully exploit the imaging capabilities of most lenses. Thus, many applications could be addressed with a fairly small number of less complex optical components. These lenses were mostly designed for security or photography, and not machine vision or higher-end imaging.

The rapid increase in the number of pixels on a sensor, coupled with the reduction in overall pixel size, has exposed the limitations of many of those long-standing optical solutions – limitations arising from a combination of design quality, manufacturing tolerances and cost constraints. The drive for improved optical component quality is pushing lenses towards the physical limits of manipulating light. Many of these limitations can be addressed by more complex designs with tighter quality requirements. The outcome is a much larger range of application-specific imaging lenses designed to meet customer needs. This is good for the market, but rapid product migration beyond the traditionally used security and photography products can catch the customer base unprepared.

The key is helping customers navigate the newly expanded range of products while creating an understanding of the associated price to quality ratio that comes with more rigorous design and manufacturing requirements. The increased performance of detectors and lighting subsystems presents a continuing challenge. But if suppliers educate their customers in the benefits of higher-performance optical components – and the associated performance improvements in optical systems – the customer base will be encouraged to fully leverage the capabilities and benefits that the latest imaging systems can offer.

Tue Mørck, director, global business development, JAI

For an established industrial tool like vision, quality, cost and standardisation are, as would be expected, key parameters – but what about other trends? For the segments JAI operates in, the trends are many and range from standardisation of interfaces and spatial resolution, including in 3D, to sensitivity – including spectral sensitivity.

Spectral imaging, i.e. getting more out of the spectral information seen by the imager, has been a focus area for JAI for years and is gaining interest in the industrial segments for both single- and multi-imager cameras. Well-known examples include: traffic imaging, where it is increasingly important to see vehicles and surroundings in colour and where spectral technologies are used to read licence plates; electronics, where components are colour-marked and electrical heat generation is imaged in the infrared; food inspection, which requires high-precision colour resolution and uses near-infrared vision to sort food and classify defects; and print inspection, which requires higher colour consistency.

The majority of spectral vision applications are still centred on visible light, which reflects the direct replacement of human inspection, but new and more demanding colour applications require consistency, higher spatial resolution and visibility outside the visible spectrum. Near-infrared light penetrates some hydrous materials like food, making it possible to investigate the near-surface structure for defects. Near-ultraviolet light is absorbed by many materials, making it possible to investigate surface features such as colour changes and surface topology, and the shorter wavelengths resolve smaller details. The number of applications grows day by day.

This goes hand in hand with the technological development of light sources, imagers, software and optics. LED technology has made it possible to achieve affordable narrowband light with high spectral resolution; newer CCD and CMOS imagers show significantly higher NIR sensitivity and lower noise; increased processing power makes it possible to process colour spectral information faster; and corrected optics are coming down in price. What you see is not always what you inspect!

Bob Grietens, CEO, Xenics

The constant push to reduce production costs and increase production yield requires investment in more sophisticated, in-line product and process inspection tools – and infrared imaging and spectroscopy can contribute significantly. More precisely, infrared imaging can see defects that are invisible to conventional cameras, or detect temperature anomalies. New applications are finding their way into innovative inspection systems.

One fast-growing application is in the recycling industry, applying non-contact, spectroscopic sorting techniques for distinguishing between different materials, plastics in particular.

Also, temperature monitoring is very important in many industries; it helps to reduce process windows, cut heating costs and meet the process temperature requirements set by new legislation. The food industry, with its very stringent regulations, profits from infrared sensors to measure both high process and low storage temperatures.

Most systems place a set of conflicting constraints on the camera: a small form factor and low power consumption on the one hand, and image processing pushed into the camera on the other. We have a set of small-outline cameras covering the full spectrum from the visible to the long-wave infrared. These cameras are smaller than a 45mm cube and dissipate less than 2W. They also contain all the embedded software needed to adjust the operational settings of the camera in order to deliver a crisp, high dynamic range image. In this way, the software development burden is taken away from the system developer.

New optical techniques are emerging, also in the infrared wavelength range, such as optical coherence tomography. This non-contact technique will allow low-cost inspection of semi-transparent, multi-layered objects in manufacturing processes. For these tasks, advanced high-speed InGaAs cameras are suitable.

More differentiated vision systems are required to identify terrorist threats automatically. The best results are obtained with multi-band cameras and image fusion. The additional wavelength bands reveal more information and are used to complement the visual image, which still gives the best situational awareness for surveillance.

The last example also clearly shows that a convergence between the machine vision and security markets is imminent.

Lou Hermans, COO, Cmosis

What to expect in 2014 on the machine vision image sensor scene? In order to predict the features of image sensors likely to be introduced in 2014, we have to analyse the market drivers first.

Firstly, price: the machine vision camera market is becoming more and more competitive. CMOS image sensors with integrated control and high-speed digital interfaces have lowered the entry barrier compared to CCD-based machine vision cameras. New camera suppliers have entered the market and some camera customers have started developing their own machine vision cameras in-house instead of buying them. This evolution increases competition and price pressure.

Secondly, size: there is a clear trend towards smaller cameras. Smaller cameras with the same optical resolution mean smaller pixels and smaller sensors.

And, finally, performance: cameras, and with them image sensors, have become much more powerful – more pixels and higher sensitivity combined with higher frame rates and lower power. Global shutter pixels are a must, if not for all then certainly for most machine vision applications, and this will also be the case in 2014.

How do these evolutions translate into machine vision image sensor requirements?

One requirement is for smaller pixels. To keep sensors small and cheap, but to maintain or increase the sensor resolution, pixels have to become smaller. This is in addition to maintaining global shutter operation and not compromising on the electro-optical performance.

A second requirement is for higher frame rates. The implementation of fast on-chip ADC in combination with fast serial digital data interfaces will lead to even faster full frame rates.

Also, there is a need for higher resolutions. Smaller pixels allow higher resolution within the standard optical lens formats. The growing market for high-resolution display inspection mainly drives the demand for high-resolution imagers.

Finally, there is a requirement for application specific features. Adding features specifically targeting machine vision applications will increase the performance and ease-of-use of the imaging device. Such features include on-chip high dynamic range modes, defect pixel corrections and some basic image statistics.
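
To make one of those features concrete, the sketch below shows defect pixel correction done in software: pixels flagged in a defect map are replaced by the median of their 3x3 neighbourhood. This is an illustrative approximation of what a sensor would implement on-chip, not a description of any specific device.

```python
import numpy as np

# Software sketch of defect pixel correction: each (row, col) entry in the
# defect map is replaced by the median of its 3x3 neighbourhood, clamped at
# the image borders. A real sensor applies an equivalent correction on-chip.

def correct_defects(frame, defect_map):
    corrected = frame.copy()
    h, w = frame.shape
    for r, c in defect_map:
        r0, r1 = max(r - 1, 0), min(r + 2, h)
        c0, c1 = max(c - 1, 0), min(c + 2, w)
        corrected[r, c] = np.median(frame[r0:r1, c0:c1])
    return corrected

frame = np.random.randint(0, 4096, size=(480, 640), dtype=np.uint16)
frame[100, 200] = 4095                      # simulate a stuck-high pixel
fixed = correct_defects(frame, [(100, 200)])
print(frame[100, 200], fixed[100, 200])
```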

Marc Damhaut, CEO, Euresys

Students from Stanford University have recorded these fantastic videos of birds in flight (http://news.stanford.edu/news/2013/july/bird-flight-secrets-070213.html) using a 3,300-frame-per-second camera. The high-speed footage enabled the students to analyse the birds’ wing and body movements in minute detail.

In machine vision, we have certainly come a long way since the 25/30 frames per second of PAL/NTSC. I remember using a Charge Injection Device (CID) camera 20 years ago for a high-speed tracking application at 120 frames per second (in a reduced-size region of interest). Today, CMOS sensors all offer that capability, and more; 175 fps cameras are now available at full 12 megapixel resolution. Or, 470 fps at four megapixels!
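
To put those frame rates in perspective, here is a quick calculation of the raw data rates involved, assuming uncompressed 8-bit monochrome pixels (10- or 12-bit output would raise the figures accordingly).

```python
# Back-of-the-envelope data rates for the camera configurations cited above,
# assuming uncompressed 8-bit monochrome pixels.

def data_rate_gb_s(megapixels, fps, bits_per_pixel=8):
    """Sustained data rate in GB/s for an uncompressed stream."""
    return megapixels * 1e6 * (bits_per_pixel / 8) * fps / 1e9

for mp, fps in [(12, 175), (4, 470)]:
    print(f"{mp} MP at {fps} fps: {data_rate_gb_s(mp, fps):.1f} GB/s, "
          f"{1000 / fps:.1f} ms per frame")
```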

That poses new challenges, though. Unlike the Stanford experiment, high-end machine vision applications require asynchronous camera control, perfect synchronisation of the camera exposure time with the lighting or camera movement or other motion control device, and transfer of the sequence of images for real-time on-the-fly processing and analysis.

This is the kind of application where a PC and a frame grabber shine. The latest PC architectures offer very high memory bandwidth and multicore processors provide plenty of processing power. The frame grabber’s camera control hardware, image buffer and Direct Memory Access (DMA) engine allow the card to acquire and transfer images or sequences of images at very high speed, without any delay (at 500 images per second or so, each image lasts just two milliseconds!).

This happens under the control of, but independently from, the host PC, which is kept informed through events managed by the Windows operating system. And even though Windows is not a real-time operating system, the frame grabber ensures that image acquisition and the machine run smoothly at that speed.

This is a crucial feature in high-end applications such as 3D PCB inspection.
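
The acquisition pattern described above, in which an acquisition engine fills a buffer independently of the host while the host is notified by events and processes frames asynchronously, can be sketched generically as below. This is a simulation of the pattern using Python’s standard library with illustrative numbers; it is not any vendor’s frame grabber API.

```python
import queue
import threading
import time

# Generic simulation of event-driven acquisition: a "grabber" thread stands
# in for the frame grabber's DMA engine, delivering frames into a buffer
# queue, while the host thread is woken per frame and processes it
# asynchronously. Frame rate and buffer depth are illustrative values.

frames = queue.Queue(maxsize=64)          # stand-in for the on-board image buffer

def grabber(fps=500, n_frames=100):
    for i in range(n_frames):
        time.sleep(1.0 / fps)             # roughly 2ms per frame at 500fps
        frames.put(("frame", i))          # transfer complete -> notify host

def host():
    while True:
        _, index = frames.get()           # blocks until a "new frame" event
        # ... on-the-fly processing and analysis would happen here ...
        frames.task_done()

threading.Thread(target=host, daemon=True).start()
grabber()
frames.join()                             # wait until every frame is processed
print("acquired and processed 100 frames")
```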

Thomas Walter, manager of the industrial solutions department at Messe Stuttgart

Over the last 25 years, both machine vision and the Vision trade fair have grown and established themselves. During that time machine vision has matured considerably. Now ‘seeing robots’ interact flexibly with the environment, 3D conquers new fields of application, the components are becoming smaller, smarter and more reasonably priced, and international interface standards and intuitive operating concepts together open up massive growth potential.

Along with the level of maturity, there is a certain consolidation in the industry. In many areas the pace of innovation has slowed down. At the same time the international interest in machine vision applications is growing; cost-saving potential, sustainability in production, and 100 per cent quality inspections are just some of the advantages machine vision technology can offer.

Because of the different market dynamics, we have adapted the frequency of the Vision trade fair, which now takes place every two years. For visitors this means more world premieres at each event. In turn, exhibitors can prepare themselves for Vision in a more targeted manner and showcase even more innovations. It is the right step, in line with the industry, and the anticipation of Vision 2014 can already be felt among the exhibitors.

The potential of machine vision continues to increase rapidly and there does not seem to be any end in sight. In international growth markets such as Asia there is a need for increased automation. The security, medical, life sciences, entertainment, sport and intelligent traffic systems industries are opening up more non-industrial fields of application. The increased demand in these broadly diversified industries and new international markets offer new business potential for many companies. Vision in Stuttgart functions as an innovation platform, marketplace and central meeting point for the world of machine vision. We are expecting around 400 exhibitors, including all the key players in the industry, at Vision 2014, which takes place from 4 to 6 November at the trade fair centre in Stuttgart.
