Embedded World
The trade fair will cover embedded systems, security for electronic systems, distributed intelligence, the Internet of Things, e-mobility, and energy efficiency.
ATD Electronique
ATD Electronique has several demonstrations at its booth 3A-539. One is the Macnica EasyMVC camera interface with SLVS-EC Rx IP, equipped with the new Sony high-speed sensor IMX530; the interface runs SLVS-EC v2.0 IP on an Intel Cyclone 10GX platform.
A second demo pairs the Sony sensor IMX299 with SLVS-EC v1.2 IP, also from Macnica, running on a Xilinx Kintex Ultrascale platform.
Further demonstrations include the miniature CMOS image sensor Naneye M, and the new global shutter image sensor CSG14K from Ams, which supports the 1-inch optical format and targets machine vision and AOI applications.
In addition, ATD Electronique has organised a series of presentations in a separate conference room at the trade show. Highlights are speeches from Macnica about SLVS-EC IP solutions; Sony, with inside information on the new Sony SWIR sensors; and Ams, with talks about small camera modules, applications and integration as well as new NIR enhanced sensors. A small contingent of seats is available upon email request through sales@atdelectronique.com.
Basler

Basler (2-550) will showcase its solution portfolio for embedded vision at the show in Nuremberg, in cooperation with partners such as NXP, Nvidia, Microsoft and Amazon Web Services.
As one of the highlights, Basler will release the first camera module and matching add-on camera kits for the launch of NXP's i.MX 8M Plus applications processor.
The i.MX 8M Plus features a dual camera image signal processor and, for the first time, an accelerator for neural networks. In combination with the new Basler camera modules, the resulting vision system is well suited to intelligent, vision-based machine learning applications.
As a further highlight, Basler will present its new AI vision solution kit with cloud connection. The AI kit gives customers direct access to the cloud services of Amazon Web Services.
Basler has also expanded its partnership with Nvidia, with the Dart Bcon for MIPI camera modules now running on the Jetson platform, including Jetson Nano and Jetson TX2 series.
Various live demonstrations in the areas of digital signage, security access control, and motion analysis (skeleton tracking) will show the possibilities offered by Basler's embedded vision solutions.
E-con Systems

E-con Systems (2-645) will showcase its cameras for embedded platforms like Nvidia Jetson TX1/TX2/Xavier/Nano, NXP’s IMX6/IMX7/IMX8, Rockchip’s RK3399, and Xilinx FPGAs with various camera interfaces (MIPI, GMSL and USB).
The firm will be introducing its new full HD lowlight MIPI camera for the Google Coral Development Board at the show. E-con Systems will also launch its AR0233-based GMSL2 camera with a rugged IP66 enclosure for water, dust and impact resistance, suited to outdoor applications and continuous operation. In addition, the company will demonstrate eight cameras streaming simultaneously on Nvidia Jetson Xavier.
Embedded experts will be on hand for a consultation.
Framos

Framos (booth 2-647) will be demonstrating a selection of image processing solutions at the exhibition and conference. The company will present its first 3D industrial camera, the D435e, along with an Intel RealSense-compatible system design kit for skeleton tracking.
The firm’s camera and sensor module portfolio allows vision engineers and developers to evaluate many different image sensors – including the latest ones – on open processor platforms. Users can then produce a proof-of-concept quickly before developing it further.
Imago Technologies

Imago Technologies (2-639) will be showing its VisionBox Daytona, ideal for applications that need Nvidia’s Tegra TX2 GPGPU together with camera interfaces, real-time IO and mobile access to data and images.
The user can connect the Daytona into a Wi-Fi network or can use the integrated 4G modem. Typical GigE cameras can be connected with a single Ethernet cable. The IO functionality provides a trigger-over-Ethernet as well.
Also on display will be the latest Arm, I-Core and multi-core DSP-based VisionBox products, now in series production, as well as a demonstration of the VisionCam event-based camera sensors.
Kithara Software
Kithara Software (booth 4-446) will showcase its most recent developments for real-time software solutions. Special emphasis will be on the functions of Kithara RealTime Suite regarding automation with EtherCAT, real-time image capture and processing, connection to automotive interfaces, as well as real-time data storage.
MVTec Software
MVTec Software (4-203) will focus on speed, robustness, and hardware compatibility for applications in industrial sectors at the show.
A number of live demonstrations will provide practical insight into embedded vision applications. For example, a multi-platform setup will show how four embedded boards perform different tasks with Halcon and Merlic, thereby showcasing the wide range of platforms on which MVTec's software runs.
Another demonstration will illustrate how Merlic software identifies and inspects medical test tubes, using the Pallas smart camera from MVTec's Chinese partner Daheng Imaging.
In a third demonstration, five standard examples of deep learning will be run simultaneously – semantic segmentation, object detection, classification, optical character recognition, and the latest MVTec feature, anomaly detection. An Nvidia Xavier will be used as the platform.
MVTec will also participate in the supporting programme at the trade fair. Christoph Wagner, product manager for embedded vision, will hold a presentation entitled: ‘Why choosing the right machine vision software for an embedded vision product is not a no-brainer’ at the Embedded World conference on 27 February at 10:30am.
Furthermore, MVTec will give a presentation on Halcon deep learning for embedded devices at the accompanying exhibitors forum on 25 February at 1:30pm.
Phytec
Phytec (1-438) will present its embedded imaging kit i.MX 8M, a platform for customised embedded imaging systems using MVTec’s Halcon software for embedded devices.
The kit contains pre-compiled SD card images for the Halcon demo HPeek. With HPeek, image processing solution developers can evaluate and benchmark the performance of Halcon on NXP i.MX8 processors.
Halcon brings professional image processing routines, including deep learning algorithms, into embedded vision products.
Phytec offers coordinated hardware such as camera and processor modules, as well as a wide range of services for customer-specific system development and manufacturing in its own production facilities in Mainz.
At the trade show, Phytec will also present its portfolio of development and manufacturing services for customised designs and series hardware, and will offer visitors the opportunity to hold an initial project discussion with experts on site.
Vision Components

Vision Components (booth 2-444) will present its range of camera modules with a MIPI CSI-2 interface. These components enable compact, repeatable OEM designs and easy connection of image sensors to more than 20 single-board computers, including Nvidia Jetson, DragonBoard, all Raspberry Pi boards, and all 96Boards.
Vision Components has also integrated non-native MIPI sensors into MIPI camera modules, using a specially developed adapter board. Examples are the IMX250 and IMX252 sensors from the Sony Pregius series, which is characterised by high light sensitivity and low dark noise.
The manufacturer’s Linux-based, freely programmable embedded vision systems will also be on display. These cameras and 3D line sensors are based on a Xilinx Zynq SoC, which has long-term availability.
New quad-core embedded cameras provide a performance boost thanks to the onboard Snapdragon 410 processor: 1.2GHz clock rate, 1GB RAM, and 16GB flash memory. In addition to various built-in interfaces like GigE and 12 GPIOs, this board camera is available with optional extension boards that enable easy, flexible addition of an SD card slot and more interfaces: serial interface, I²C, RS232, DSI, RJ45 Ethernet adapter, and power interface.
Jan-Erik Schmitt, vice president of sales, will also give a talk during the Embedded World conference on 27 February at 2pm entitled: 'MIPI cameras: new standard for embedded vision'.
Xilinx

Xilinx (3A-235) will showcase a collection of demos, highlighting its Adaptive Compute Acceleration Platform (ACAP), Alveo accelerator cards, Industrial IoT, the Vitis unified software platform, and automotive solutions.
One of the demonstrations on the Xilinx booth will be region of interest-based encoding using a video codec unit onboard the Zynq Ultrascale+ chip.
When streaming video at limited transmission bandwidth, it's necessary to use intelligent encoding, whereby a region of interest (ROI) can be encoded at higher visual quality than the rest of the frame. Using Vitis AI, the Xilinx deep learning processor unit is integrated in the pipeline and used to identify the ROI mask within the frame. The video codec unit then allocates more bits to the ROIs than to the rest of the frame at a given bitrate, improving encoding efficiency. The key markets for this demonstration are video surveillance, video conferencing, medical and broadcast.
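The principle can be sketched in a few lines of code. The snippet below is purely illustrative, assuming a hypothetical roi_qp_map helper and generic 16-pixel macroblocks; it is not the Xilinx VCU or Vitis AI API, but it shows how blocks inside detected ROIs can be given a lower QP (higher quality) than the background within a fixed bit budget.

    # Illustrative sketch of ROI-based rate allocation (not the Xilinx API):
    # blocks inside detected regions of interest get a lower QP, i.e. more bits.
    import numpy as np

    BLOCK = 16  # macroblock size in pixels

    def roi_qp_map(frame_h, frame_w, rois, base_qp=35, roi_qp=25):
        """Build a per-block QP map; rois is a list of (x, y, w, h) boxes,
        e.g. produced by a DPU-accelerated detection network."""
        blocks_y = (frame_h + BLOCK - 1) // BLOCK
        blocks_x = (frame_w + BLOCK - 1) // BLOCK
        qp = np.full((blocks_y, blocks_x), base_qp, dtype=np.int32)
        for x, y, w, h in rois:
            qp[y // BLOCK:(y + h + BLOCK - 1) // BLOCK,
               x // BLOCK:(x + w + BLOCK - 1) // BLOCK] = roi_qp
        return qp

    # Example: one face-sized ROI in a 1080p frame
    print(roi_qp_map(1080, 1920, [(800, 400, 320, 320)]))

In a real pipeline such a QP (or importance) map would be passed to the hardware encoder for each frame, so the detector and codec together keep the overall bitrate constant while concentrating quality where it matters.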
Also on display will be a machine learning inference solution for edge use cases on the Versal Adaptive Compute Acceleration Platform (ACAP). Using Xilinx tools, the system is built on a heterogeneous compute platform where adaptable engines are used to integrate live video interfaces along with pre- or post-processing elements.
Real-time 3D calculation of physical effects using the Bullet physics engine on Alveo accelerator cards will also be shown; industrial applications benefit from real-time digital models of their devices.
Other demonstrations include a cloud-trained neural network on a connected Xilinx IIoT edge device, along with an automated driving demonstration and path-finding platform, and an ADAS development kit from Xylon, based on the Xilinx Zynq Ultrascale+ MPSoC device.
In addition to these demos, Xilinx will be discussing several topics at Embedded World on 26 February, including the Versal AI core at 2pm; low-bit CNN implementation and optimisation on FPGAs at 3pm; and emerging SoC performance and power challenges at 4pm.