
EU Tulipp platform to ease embedded vision development effort


Embedded system designers now have a reference platform for vision-based development work, thanks to a €4 million Horizon 2020 project called Tulipp, which has recently concluded.

The Tulipp project – towards ubiquitous low-power image processing platforms – began in January 2016. The finished reference platform includes a full development kit, comprising an FPGA-based embedded multicore computing board, a parallel real-time operating system, and a development tool chain with guidelines. This is coupled with use cases covering medical x-ray imaging, driver assistance, and autonomous drones with obstacle avoidance.

Developed by Sundance Multiprocessor Technology, each instance of the Tulipp processing platform measures 40 x 50mm and is compliant with the PC/104 embedded processor board standard. The hardware platform uses the multicore Xilinx Zynq UltraScale+ MPSoC, which combines FinFET+ FPGA fabric with an Arm Cortex-A53 quad-core CPU, an Arm Mali-400 MP2 GPU, and a real-time processing unit containing a dual-core Arm Cortex-R5 32-bit real-time processor based on the Armv7-R architecture.

A separate expansion module (a VITA 57.1 FMC) allows application-specific boards with different input and output interfaces to be created while keeping the interfaces with the processing module consistent.

Coupled with the Tulipp hardware platform is a parallel, low latency, embedded real-time operating system developed by Hipperos specifically to manage complex multi-threaded embedded applications.

The platform has also been extended with performance analysis and power measurement features developed by Norges Teknisk-Naturvitenskapelige Universitet (NTNU) and Technische Universität Dresden (TUD).

The Tulipp consortium’s experts have written a set of guidelines, consisting of practical advice, best practice approaches, and recommended implementation methods, to help vision-based system designers select the optimal implementation strategy for their own applications.

This will become a Tulipp book to be published by Springer by the end of 2019 and supported by endorsements from the ecosystem of developers that are currently testing the concept.

The medical x-ray case study the project partners undertook demonstrates image enhancement algorithms for x-ray images running at high frame rates. The driver assistance study ran a pedestrian recognition algorithm in real time with a processing time of 66ms per frame, meaning every second frame could be analysed when imaging at 30Hz.
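The frame-budget arithmetic behind that claim can be checked with a short sketch (illustrative only; the 66ms processing time and 30Hz capture rate are the figures reported by the project):

```python
import math

# Frame budget arithmetic for the pedestrian-recognition use case.
# At 30 Hz the camera delivers a new frame every 1000/30 ≈ 33.3 ms.
# A 66 ms processing time spans two frame periods, so only every
# second frame can be analysed.
camera_rate_hz = 30.0
processing_ms = 66.0

frame_period_ms = 1000.0 / camera_rate_hz               # ≈ 33.3 ms per frame
frames_per_result = math.ceil(processing_ms / frame_period_ms)

print(f"Frame period: {frame_period_ms:.1f} ms")
print(f"Frames consumed per processed image: {frames_per_result}")  # prints 2
```

In other words, with a 33.3ms frame period and a 66ms processing budget, two new frames arrive for every one that finishes processing, which is why the system analyses every second image.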

The UAV case study demonstrates a real-time obstacle avoidance system for UAVs based on a stereo camera setup with cameras orientated in the direction of flight.

The use cases and the platform were shown at the 2018 Vision trade fair in Stuttgart, Germany.

‘As image processing and vision applications grow in complexity and diversity, and become increasingly embedded by their very nature, vision-based system designers need to know that they can simply and easily solve the design constraint challenges of low power, low latency, high performance and reliable real-time image processing that face them,’ commented Philippe Millet of Thales and Tulipp’s project coordinator. ‘The EU’s Tulipp project has delivered just that. Moreover, the ecosystem of stakeholders that we have created along the way will ensure that it will continue to deliver in the future.’
