EU Tulipp platform to ease embedded vision development effort

Embedded system designers now have a reference platform for vision-based development work, thanks to a €4 million Horizon 2020 project called Tulipp, which has recently concluded.

The Tulipp project – Towards Ubiquitous Low-Power Image Processing Platforms – began in January 2016. The finished reference platform is a full development kit comprising an FPGA-based embedded multicore computing board, a parallel real-time operating system, and a development tool chain with accompanying guidelines. It is coupled with use cases covering medical x-ray imaging, driver assistance, and autonomous drones with obstacle avoidance.

Developed by Sundance Multiprocessor Technology, each instance of the Tulipp processing platform measures 40 × 50mm and is compliant with the PC/104 embedded processor board standard. The hardware platform uses the multicore Xilinx Zynq UltraScale+ MPSoC, which combines the Xilinx FinFET+ FPGA fabric with an Arm Cortex-A53 quad-core CPU, an Arm Mali-400 MP2 GPU, and a real-time processing unit containing a dual-core Arm Cortex-R5 32-bit real-time processor based on the Armv7-R architecture.

A separate expansion module (VITA 57.1 FMC) allows application-specific boards with different input and output interfaces to be created while keeping the interfaces with the processing module consistent.

Coupled with the Tulipp hardware platform is a parallel, low-latency embedded real-time operating system developed by Hipperos specifically to manage complex multi-threaded embedded applications.

The platform has also been extended with performance analysis and power measurement features developed by Norges Teknisk-Naturvitenskapelige Universitet (NTNU) and Technische Universität Dresden (TUD).

The Tulipp consortium’s experts have written a set of guidelines, consisting of practical advice, best practice approaches, and recommended implementation methods, to help vision-based system designers select the optimal implementation strategy for their own applications.

This will be published as a Tulipp book by Springer by the end of 2019, supported by endorsements from the ecosystem of developers currently testing the concept.

The medical x-ray case study the project partners undertook demonstrates image enhancement algorithms for x-ray images running at high frame rates. The driver assistance case study ran a pedestrian recognition algorithm in real time with a processing time of 66ms per frame, meaning every second frame could be analysed when imaging at 30Hz.
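The frame-budget arithmetic behind that claim can be sketched as follows. This is an illustrative helper, not part of the Tulipp tool chain; only the 66ms processing time and 30Hz capture rate come from the article.

```python
# Which captured frames can a 66 ms-per-frame pipeline keep up with at 30 Hz?
# At 30 Hz, frames arrive every ~33.3 ms, so a 66 ms pipeline that always
# grabs the newest frame once free ends up processing every second frame.

def analysable_frames(num_frames: int, capture_hz: float, processing_ms: float) -> list:
    """Return indices of captured frames the pipeline can process,
    assuming it takes the next available frame whenever it is idle."""
    interval_ms = 1000.0 / capture_hz
    free_at = 0.0  # time at which the processor is next idle
    processed = []
    for i in range(num_frames):
        capture_time = i * interval_ms
        if capture_time >= free_at:
            processed.append(i)
            free_at = capture_time + processing_ms
    return processed

print(analysable_frames(10, 30.0, 66.0))  # → [0, 2, 4, 6, 8]
```

With the article's figures, the pipeline analyses frames 0, 2, 4, … , i.e. every second image, matching the reported behaviour.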

The UAV case study demonstrates a real-time obstacle avoidance system for UAVs based on a stereo camera setup with cameras orientated in the direction of flight.
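The core geometry a forward-facing stereo pair exploits can be sketched in a few lines: depth is recovered from the disparity between the two camera views as depth = focal length × baseline / disparity. The camera parameters below are made-up illustrative values, not the actual Tulipp UAV setup.

```python
# Minimal stereo depth-from-disparity sketch (assumed parameters, not
# Tulipp's real calibration): nearer obstacles produce larger disparities.

def depth_from_disparity(disparity_px: float,
                         focal_px: float = 800.0,    # assumed focal length in pixels
                         baseline_m: float = 0.12):  # assumed camera separation
    """Return the distance in metres to a point with the given disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite-range object")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(96.0))  # → 1.0 (m): large disparity, close obstacle
print(depth_from_disparity(8.0))   # → 12.0 (m): small disparity, far away
```

An obstacle-avoidance system thresholds such depth estimates over the image to decide whether the flight path ahead is clear.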

The use cases and the platform were shown at the 2018 Vision trade fair in Stuttgart, Germany.

‘As image processing and vision applications grow in complexity and diversity, and become increasingly embedded by their very nature, vision-based system designers need to know that they can simply and easily solve the design constraint challenges of low power, low latency, high performance and reliable real-time image processing that face them,’ commented Philippe Millet of Thales and Tulipp’s project coordinator. ‘The EU’s Tulipp project has delivered just that. Moreover, the ecosystem of stakeholders that we have created along the way will ensure that it will continue to deliver in the future.’
