Predictive cities


Tim Reynolds finds out how vision and AI algorithms are making cities safer

Image: jamesteohart/shutterstock.com

In cities, around seven out of 10 traffic fatalities are cyclists and pedestrians. Speed kills, but human error also remains a common cause of accidents: vehicles turning, reversing or pulling out, or failing to give right of way. Globally, more than half of the 1.3 million people who die in road accidents each year are road users who aren’t in a vehicle.

Speed and red-light camera systems have helped prevent accidents and protect vulnerable road users for many years, but now there are projects making use of artificial intelligence (AI) to improve safety on roads. For example, the Dubai police are trialling AI technology from Vitronic Middle East to prosecute traffic violations at pedestrian crossings. The Pedestrian Safety Enforcement System is able to distinguish between different road users in real time, to control traffic lights when a pedestrian is waiting to cross, or to document drivers running a red light.

But what more is possible? Can AI be used to predict traffic and road user behaviour? Computer vision specialist Amritpal Singh, CEO of Viscando, is developing technology to support smart cities, autonomous driving and industrial needs. Viscando’s 3D and AI-sensing solutions help clients better understand movements and interactions, enabling data-driven traffic safety.

The company is based in Gothenburg, the automotive capital of Sweden, so it has natural contact with the sector. Viscando also works closely with regional authorities on traffic management projects and local information communication technology (ICT) development.

Singh believes that prediction of near-future scenarios – not minutes ahead, but seconds – is within reach with the technology. Singh is working on projects to understand the intentions of people before they act; to predict if, say, a cyclist or pedestrian will cross the road or continue along it, up to five seconds before the decision is made. ‘We are not at 100 per cent accuracy, but 82 per cent,’ he said. ‘The data is there; we just need to be better at extracting it.’

There is plenty to do from the tech side in terms of pure data analysis. ‘It’s very much in line with what you need to have if you are applying machine learning,’ commented Singh. ‘There is room for other disciplines to contribute here, such as behavioural insights, but the data on what pedestrians and cyclists are doing is already there, and the information is available.’
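Framed in machine learning terms, this kind of intent prediction is a short-horizon classification over trajectory data. The sketch below is purely illustrative: the logistic weights, coordinate convention and threshold are invented for this example and are not Viscando’s actual (unpublished) model or features.

```python
import math

def crossing_score(track):
    """Illustrative logistic score for 'pedestrian will cross within 5 s'.

    track: list of (x, y, t) samples; x runs along the pavement,
    y towards the carriageway, both in metres.  The weights below are
    hand-set for this sketch, not trained values.
    """
    (x0, y0, t0), (x1, y1, t1) = track[0], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt   # velocity components (m/s)
    speed = math.hypot(vx, vy)
    z = 0.5 * speed + 2.0 * vy - 1.0          # vy > 0 means heading roadwards
    return 1.0 / (1.0 + math.exp(-z))         # squash to a 0..1 probability

towards_road = [(0.0, 0.0, 0.0), (0.5, 1.5, 1.0)]   # turning towards the road
along_path   = [(0.0, 0.0, 0.0), (1.5, 0.0, 1.0)]   # walking parallel to it
print(crossing_score(towards_road) > crossing_score(along_path))  # True
```

A production system would replace the hand-set score with a model trained on labelled tracks, and would use richer features – head pose, acceleration, distance to the crossing – which is where Singh’s point about extracting more from the data comes in.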

This leads to two entangled loops in terms of decision-making. ‘There is a low latency for decision-making in traffic. You have perhaps 50 to 100 milliseconds to issue a warning and ensure appropriate action is taken to avoid an accident. This may require more intelligence in the vehicles, and local processing of data and decision-making in the local infrastructure.

‘And there is a longer loop using the full data collected to extract insights,’ he added. These insights could, for example, help design intersections and crossings to reduce conflict between users.

The same data could be used to generate scenarios for autonomous vehicle testing. To prove the safety of autonomous vehicles will require billions of kilometres on roads, but that data may not be useful for understanding more complex interactions, such as urban traffic, lane merging and other potential conflict scenarios. ‘Interactions are more important than kilometres,’ said Singh.

Both quality and quantity of data are required. Singh believes the passive data captured by camera infrastructure could be used here, especially as it observes real-world situations and more complex interactions. ‘We can collect billions of kilometres passively,’ he said.

Talking to the city

Image processing onboard the camera, close to the sensor, opens up the potential to capture more data that is ultimately going to make cities or transport smarter. NTT Smart Solutions has a focus on Internet of Things-enabled edge analytics, and vision is one important input into its systems.

Current clients use NTT solutions to predict, for example, train occupancy 24 hours ahead, or the movement of people into and around large venues for crowd control, with a 20-minute forward horizon.

NTT is developing the potential of connected vehicle systems, working with Toyota to investigate how vehicles can communicate with infrastructure, such as data centres and 5G networks. Currently, these research systems can detect an obstacle and warn a moving car within around five seconds. Other key applications of such a connected vehicle infrastructure include generating accurate, real-time maps and rapid detection of congestion.

Bill Baver, vice president of NTT Smart Solutions, noted that optical sensors will need to be capable of doing some of the analytics within the device itself. ‘We don’t want to be pushing video back to the core data centres,’ he said. ‘The vision side should have more configurable capabilities. This would be helpful for multiple use cases.’
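Baver’s point about not pushing video back to core data centres can be made concrete by comparing a raw frame with the compact metadata an edge device might emit instead. Everything in this sketch is hypothetical – the stub detector and the field names are invented, standing in for whatever model and schema a real NTT deployment would use.

```python
import json

def detect(frame_bytes):
    """Stand-in for an on-camera detector; a real device would run a
    neural network here.  The detection values are invented examples."""
    return [{"cls": "pedestrian", "x": 0.41, "y": 0.73, "conf": 0.97}]

def edge_payload(frame_bytes):
    """Summarise a frame into compact metadata for the backhaul link,
    instead of shipping the video itself to a core data centre."""
    return json.dumps({"detections": detect(frame_bytes)}).encode()

frame = bytes(1920 * 1080 * 3)   # one uncompressed 1080p RGB frame (~6 MB)
meta = edge_payload(frame)
print(len(frame) // len(meta))   # metadata is orders of magnitude smaller
```

The bandwidth saving is the whole argument: a few dozen bytes of structured detections per frame crosses the network easily, while raw or even compressed video does not scale across thousands of cameras.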

He also argued that imaging technologies need to be adaptable to multiple data highways and provide application programming interfaces to integrate easily into solutions.

With vehicles now being developed incorporating a battery of sensors – lidar, radar and image sensors across the visible and infrared – it’s only a matter of time before vehicles start to communicate with the city streets they are driving through.