EU Tulipp platform to ease embedded vision development effort


Embedded system designers now have a reference platform for vision-based development work, thanks to a €4 million Horizon 2020 project called Tulipp, which has recently concluded.

The Tulipp project – towards ubiquitous low-power image processing platforms – began in January 2016. The finished reference platform includes a full development kit, comprising an FPGA-based embedded multicore computing board, a parallel real-time operating system, and a development tool chain with guidelines. This is coupled with use cases covering medical x-ray imaging, driver assistance, and autonomous drones with obstacle avoidance.

Developed by Sundance Multiprocessor Technology, each instance of the Tulipp processing platform measures 40 x 50mm and is compliant with the PC/104 embedded processor board standard. The hardware platform uses the multicore Xilinx Zynq UltraScale+ MPSoC, which combines Xilinx FinFET+ FPGA fabric with a quad-core Arm Cortex-A53 CPU, an Arm Mali-400 MP2 GPU, and a real-time processing unit containing a dual-core Arm Cortex-R5 32-bit real-time processor based on the Armv7-R architecture.

A separate expansion module (VITA 57.1 FMC) allows application-specific boards with different input and output interfaces to be created, while keeping the interfaces with the processing module consistent.

Coupled with the Tulipp hardware platform is a parallel, low-latency, embedded real-time operating system developed by Hipperos specifically to manage complex multi-threaded embedded applications.

The platform has also been extended with performance analysis and power measurement features developed by Norges Teknisk-Naturvitenskapelige Universitet (NTNU) and Technische Universität Dresden (TUD).

The Tulipp consortium’s experts have written a set of guidelines, consisting of practical advice, best practice approaches, and recommended implementation methods, to help vision-based system designers select the optimal implementation strategy for their own applications.

The guidelines will be published as a Tulipp book by Springer by the end of 2019, supported by endorsements from the ecosystem of developers currently testing the concept.

The medical x-ray case study the project partners undertook demonstrates image enhancement algorithms for x-ray images running at high frame rates. The driver-assistance case study ran a pedestrian recognition algorithm in real time with a processing time of 66ms per frame, meaning every second image could be analysed when imaging at 30Hz.
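The "every second image" figure follows directly from the numbers quoted. A back-of-the-envelope sketch (using only the 30Hz camera rate and 66ms processing time reported above; not project code) makes the arithmetic explicit:

```python
import math

# Figures reported for the driver-assistance use case
camera_hz = 30
processing_ms = 66

# A 30Hz camera delivers a frame roughly every 33.3ms
frame_interval_ms = 1000 / camera_hz

# Frames that arrive while one frame is still being processed:
# 66ms of processing spans two 33.3ms frame intervals,
# so only every second frame can be analysed.
frames_per_result = math.ceil(processing_ms / frame_interval_ms)

# Effective analysis rate: 30Hz input / 2 = 15 analysed frames per second
effective_rate_hz = camera_hz / frames_per_result

print(frames_per_result, effective_rate_hz)  # 2 15.0
```

Since 66ms is just under two frame intervals, the pipeline keeps up with half the camera's output, analysing 15 frames per second.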

The UAV case study demonstrates a real-time obstacle avoidance system for UAVs based on a stereo camera setup with cameras orientated in the direction of flight.

The use cases and the platform were shown at the 2018 Vision trade fair in Stuttgart, Germany.

‘As image processing and vision applications grow in complexity and diversity, and become increasingly embedded by their very nature, vision-based system designers need to know that they can simply and easily solve the design constraint challenges of low power, low latency, high performance and reliable real-time image processing that face them,’ commented Philippe Millet of Thales and Tulipp’s project coordinator. ‘The EU’s Tulipp project has delivered just that. Moreover, the ecosystem of stakeholders that we have created along the way will ensure that it will continue to deliver in the future.’
