Intel L515 depth camera


FRAMOS, a global partner for vision technologies, now includes Intel’s first LiDAR device – the new L515 depth camera – in its product range. The L515 is the world’s smallest and most power-efficient solid-state LiDAR depth camera, offering XGA resolution and a working range of 0.25m to 9m. It generates 30 frames per second at a depth resolution of 1024 x 768, has a field of view of 70° x 55° (±2°), and produces 23 million depth points per second. Under controlled indoor lighting, the L515 achieves a Z error of less than 20mm at maximum range. A short pixel exposure time of less than 100ns minimises motion blur artefacts even with fast-moving objects. Its millimetre accuracy is retained throughout the depth camera’s lifespan, without the need for recalibration.
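The quoted point throughput follows directly from the depth resolution and frame rate; a quick check using only the figures above:

```python
# Depth points per second = depth resolution x frame rate
width, height, fps = 1024, 768, 30

points_per_second = width * height * fps
print(points_per_second)  # 23,592,960 -- roughly the quoted 23 million
```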

Consuming less than 3.5 watts of power, the compact LiDAR depth camera is easy to mount on handheld devices. It weighs around 100g and is smaller than a tennis ball, with a diameter of 61mm and a height of 26mm. This simplifies integration into mobile devices such as portable scanners for volumetric measurement. Logistics in particular is a market that can benefit from the L515’s high resolution and precise volumetric measurement. Other applications can be found in industry and robotics, as well as 3D (body) scanning, healthcare, and retail. The camera will also be of interest to end users in the maker space and to 3D enthusiasts.

The depth camera has an onboard vision processor, accelerometer, gyroscope, and a Full-HD RGB sensor with a resolution of 1920 x 1080 pixels. The L515 is an integral part of the Intel RealSense depth camera family and offers cross-platform compatibility: it works seamlessly with the existing Intel RealSense SDK 2.0 and other Intel RealSense devices.
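Because the camera uses the open-source Intel RealSense SDK 2.0, a depth stream at the L515’s native 1024 x 768 resolution can be opened in a few lines via the SDK’s Python wrapper, `pyrealsense2`. The sketch below is a minimal example, not a full application; it assumes an L515 is connected, and the stream parameters are taken from the specifications above.

```python
# Minimal sketch: read one depth value from an Intel RealSense L515
# via the RealSense SDK 2.0 Python wrapper (pip install pyrealsense2).
# Stream parameters are the L515's native depth mode quoted above.
DEPTH_WIDTH, DEPTH_HEIGHT, DEPTH_FPS = 1024, 768, 30

def read_centre_distance():
    """Start the depth stream and return the distance in metres
    at the centre pixel of the depth image."""
    import pyrealsense2 as rs  # imported lazily: requires a connected camera

    pipeline = rs.pipeline()
    config = rs.config()
    # z16: 16-bit depth values, converted to metres by get_distance()
    config.enable_stream(rs.stream.depth, DEPTH_WIDTH, DEPTH_HEIGHT,
                         rs.format.z16, DEPTH_FPS)
    pipeline.start(config)
    try:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        return depth.get_distance(DEPTH_WIDTH // 2, DEPTH_HEIGHT // 2)
    finally:
        pipeline.stop()
```

The same pipeline/config pattern works across the RealSense family, which is what the SDK’s cross-platform, cross-device compatibility amounts to in practice.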
