Andrew Williams explores the latest imaging technologies for traffic monitoring, including how AI could be used to make sense of all that data
A growing number of transport systems around the world are getting smarter as artificial intelligence begins to be deployed. AI algorithms help transport system operators make sense of the data they collect, much of which consists of images.
One organisation that is actively involved in this area is the Laboratory for Integration of Systems and Technology (CEA-LIST) – one of three specialist institutes of the French Alternative Energies and Atomic Energy Commission’s Technological Research Division (CEA Tech). The institute has been consistently involved in R&D on artificial intelligence and neural networks applied to vision for about 15 years. Last year, the LIST Computer Vision Lab launched DeepManta – short for ‘deep-learning, many tasks’ – which has been trained to perform real-time analysis of traffic video streams. It can answer questions relating to whether or not a car is present in a given image, the car’s location, distance, speed, volume and orientation in space, as well as its make and model.
As Dr Stéphane David, head of the industrial partnerships, ambient intelligence, and interactive systems division at CEA-LIST, explained, the DeepManta algorithm belongs to a new type of artificial intelligence devoted to multi-task deep learning that ‘sets a new frontier for efficiency, and therefore portability of the technology in embedded applications’.
In practical terms, the system works by capturing a video stream with a basic monocular camera. When a car is identified in the field of view, the system surrounds it with 2D and 3D boxes to enable consistent spatial location in the video in real time. The algorithm then automatically generates a line in a report or a visual annotation on a display – sometimes both – including a range of relevant information relating to the car. According to David, there are plenty of uses for the system.
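The detect-and-report loop described above can be sketched as follows. This is a minimal illustration of the idea, not DeepManta's actual API: the `Detection` fields and the report format are assumptions, and a stub detection stands in for the real multi-task network.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detected vehicle (field names are illustrative, not DeepManta's)."""
    frame: int
    x: int
    y: int
    w: int
    h: int                 # 2D bounding box in pixels
    distance_m: float      # estimated distance from the camera
    make: str
    model: str

def report_line(d: Detection) -> str:
    """Format one line of the per-vehicle report the article describes."""
    return (f"frame={d.frame} box=({d.x},{d.y},{d.w},{d.h}) "
            f"dist={d.distance_m:.1f}m vehicle={d.make} {d.model}")

# Stub output standing in for the real network running on a video stream:
detections = [Detection(42, 120, 80, 200, 150, 18.5, "Renault", "Clio")]
for d in detections:
    print(report_line(d))
# → frame=42 box=(120,80,200,150) dist=18.5m vehicle=Renault Clio
```

In a deployed system the same loop would also drive the on-screen annotation, drawing the 2D and 3D boxes over the live frame.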
Potential applications include vehicle and people counting, with advanced qualitative features for finer statistics, as well as trajectory and urban density mapping. Other potential uses range from smart parking – identifying free parking spaces and managing payment using number plate recognition – to detecting vehicles or pedestrians to prevent collisions.
‘More broadly, this class of algorithm can also be used for signal detection to assist tram and train drivers and for monitoring in smart cities,’ said David.
He remarked that the majority of the applications have already been the subject of successful research projects, and are currently in the process of being transferred to a variety of industrial organisations, including public infrastructure operators.
‘DeepManta can be plugged into any video stream, including those from video surveillance, creating an infrastructure for multiple functionalities,’ David noted, adding that its use could become a key part of smart cities.
However, despite the clear potential benefits, David said that a number of implementation challenges still remain. To begin with, although the system has been demonstrated successfully from a technological perspective, he stressed that there is still a significant way to go in terms of securing regulatory approval. He also highlighted the fact that an appropriate supporting business model still needs to be devised, primarily because it has never been done before on a large industrial scale.
‘Smart cities and traffic monitoring are among the many blooming application domains in the city, fuelled by the progress of AI applied to vision, including cleaning and maintenance infrastructure monitoring, automated urban logistics, and crowd characterisation,’ he stated.
Liquid lens controller
Moving from analysing transport data to capturing it in the first place, industrial LED specialist Gardasoft has developed a new approach to focusing a camera over different distances, which has potentially big benefits for traffic monitoring. The system, which will be displayed during the Intertraffic Amsterdam exhibition in March, uses a shape-changing liquid lens from Swiss company Optotune, in combination with a traditional fixed focus lens. Driven by Gardasoft’s TR-CL180 lens controller, the liquid lens can change focus in 10ms.
Jools Hudson, marketing manager at Gardasoft, explained that the shape of the lens can be changed in just a few milliseconds by applying a current to alter the focal length, removing the need for manual or motorised refocusing for imaging objects at different distances from the camera.
‘The shape of the liquid lens is adjusted through precise control of the current applied to the outer diaphragm of the lens, and optical power is directly proportional to the current applied,’ she said.
It is also possible to use the controller to program a timed sequence of optical power settings using an analogue signal. For example, a laser displacement sensor can be used to monitor the distances of different objects in real time and send an analogue signal to the lens controller, to drive the lens to the correct optical power setting. This allows macro changes in lens settings to be completed in less than 8ms with accurate and repeatable focus.
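The control principle described above – optical power proportional to drive current, with a distance sensor closing the loop – can be sketched numerically. This is a thin-lens approximation with an invented calibration constant, not the TR-CL180's actual interface: refocusing from infinity to an object at distance d requires roughly 1/d diopters of added optical power.

```python
# Assumed calibration: optical power (diopters) is proportional to drive
# current (mA), as the article states. The constant is illustrative only.
K_DPT_PER_MA = 0.05  # diopters per milliamp (hypothetical value)

def focus_power_for_distance(distance_m: float) -> float:
    """Extra optical power (diopters) needed to refocus from infinity to an
    object at distance_m, under a thin-lens approximation: roughly 1/d."""
    return 1.0 / distance_m

def drive_current_ma(target_power_dpt: float) -> float:
    """Drive current for a target optical power, using the assumed
    linear (proportional) current-to-power relationship."""
    return target_power_dpt / K_DPT_PER_MA

# A displacement sensor reports a vehicle 20 m away; compute the lens drive:
power = focus_power_for_distance(20.0)   # 0.05 diopters
current = drive_current_ma(power)        # 1.0 mA with the assumed constant
```

In practice the sensor feeds the controller an analogue signal and the controller performs this mapping internally; the sketch just shows why a fast, repeatable current-to-power relationship makes sub-10ms refocusing possible.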
According to Hudson, there are many intelligent transport system (ITS) applications of the controller, including speed and red light enforcement, which she explained require several images to be captured at different distances from the camera. Such systems typically use fixed focus lenses, which are stopped down to give sufficient depth of field for the image to be in approximate focus at both distances. As an alternative, Hudson explained that the liquid lens allows the system to be rapidly adjusted to achieve what she described as ‘perfect focus at multiple distances’. When used with a 200mm macro lens, the focus can typically be adjusted from 100mm to infinity.
‘In traffic applications, the liquid lens can be used to change the camera focus, so that vehicles are in focus at different distances from the camera. This can be an autofocus operation, but more commonly the distance to the vehicle is determined in some way, for example from a vehicle sensor or an initial out-of-focus camera image. Then the lens focus is changed to the known position of the vehicle, possibly then tracking the focus as the vehicle moves,’ Hudson said.
‘The system is still in the early stage of development for traffic applications, but we are currently in discussion with a number of traffic system experts about the optimum method of deployment – and the capability of the technology is well-established,’ she added.
In Hudson’s view, a key advantage of the Optotune system is that the liquid lens configuration allows the aperture to be opened up to let much more light through, while still obtaining precise focus at any working distance. This avoids the reduction in light reaching the sensor associated with the traditional approach of stopping down the aperture on a fixed focus lens to provide sufficient depth of field.
Another advantage of the fast fine-tuning of focal length enabled by the system is that it can be used to accommodate limitations in the physical properties of lenses, permitting broader application of the same systems. In highway monitoring, Hudson claimed, this means a road can be imaged further into the distance; she believes there is also greater flexibility in positioning the camera for optimum imaging in tolling and car park applications.
‘When taking multiple images of a vehicle, it can be in focus for all images by changing the focus to track its movement. When imaging a junction or multi-lane highway, there is also more flexibility in the range of vehicle locations that can be in focus,’ she said.
Gardasoft's TR-CL180 lens controller can control a liquid lens in traffic monitoring systems
‘In multi-wavelength applications, imaging through a windscreen for occupancy information using infrared wavelengths will require different focus conditions to image the number plate using white light, because of different refractive properties at different wavelengths. The liquid lens configuration can accommodate this,’ she added.
‘Any application where focus quality or light throughput is compromised could benefit from the addition of liquid lens technology,’ she concluded. ‘Traffic is an important market for us and we manufacture LEDs and lighting controllers for a variety of ITS applications. We are also in discussions with our partners to develop the use of liquid lenses in traffic, rail and many broader machine vision applications.’
Traffic video detection
Another company active in the transport monitoring space is Flir Systems, which manufactures a range of ITS hardware and software to monitor motorists and pedestrians in cities, detect incidents on highways and in tunnels, collect traffic data and ensure safety on public railways. As Michael Deruytter, director of innovation for ITS at Flir Systems, explained, the company’s traffic video detection systems can be based on visual or thermal cameras, or on a combination of both.
An installed video or thermal imaging camera sends an input signal to a detection unit, housed either on the camera or integrated into a standard 19-inch rack. Once the camera or video image processing modules are configured, detection zones are superimposed on the video image. When a vehicle, bicycle or pedestrian enters a detection zone, the system is activated. Dedicated algorithms generate various types of traffic information, including presence and incident-related data, for statistical processing, and data for pre- and post-incident analysis.
‘Real-time analysis of video or thermal camera images allows for more efficient traffic management in tunnels, on highways and in urban areas. Traffic lights can be adapted in real-time, according to current traffic flows. When incidents occur, early detection enables faster intervention by rescue teams, preventing secondary accidents,’ said Deruytter.
The company’s latest effort is the ITS-Series Dual AID camera that combines thermal and visual imaging technology with advanced video analytics to enable automatic incident detection, data collection and early fire detection.
‘We have also recently expanded our portfolio with dedicated detection and monitoring systems for the public transport market. These innovations include thermal sensors and cameras for trackside monitoring, vehicle detection at railway crossings, fire detection in railway tunnels and fire detection on train coaches,’ said Deruytter.
Intertraffic Amsterdam innovation awards
Imaging systems feature strongly in the final shortlist of nominees for this year’s Intertraffic Amsterdam innovation award, to be presented during the show from 20 to 23 March. The list includes the Sprinx Traffic AID system, created by Italian company Sprinx Technologies. The system is based on 3D tracking and runs on Hanwha Techwin WisenetX cameras; the company claims it has ‘significantly enhanced the ability to detect incidents and keep traffic on the move’.
Also shortlisted is the Citix 3D camera from French company Eco-Counter, a monitoring system capable of automatically counting and differentiating pedestrians, cyclists and vehicles.
Other candidates include the Signs to Lines TRO mapping system from UK firm AppyParking, which uses a combination of vehicle-mounted lidar scanners, photography, AI and machine learning to map paint on the street related to traffic and parking management.
Finally, TrafficCam3D, from Viion Systems in Canada, is shortlisted: a lidar smart camera with on-board processing and telemetry for traffic safety and security applications.