Online Perception Module

In the Apollo 1.5 release, using 3D point cloud data, the perception module can detect, segment, and track dynamic obstacles. Our core perception segmentation component is called CNN Segmentation, which has been trained on a large-scale point cloud dataset. In this release, Apollo has open-sourced our 3D perception runtime code and model.

Refer to our technical documentation on GitHub for a detailed algorithm description.

Key 3D Perception Components

HDMap ROI Filter

The HDMap ROI (Region of Interest) Filter removes LiDAR points that fall outside the ROI, discarding objects that are not of interest, such as buildings and trees along the road. High-resolution map data is required for this ROI-based filtering.
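To make the idea concrete, here is a minimal sketch of ROI-based point filtering, assuming the ROI is provided as a 2D polygon extracted from the HD map. The types and function names are illustrative, not Apollo's actual API, and the production filter is considerably more optimized, but the geometric test is the same:

```cpp
#include <vector>

// Hypothetical minimal types; Apollo's actual classes differ.
struct Point2D { double x, y; };
struct LidarPoint { double x, y, z; };

// Standard ray-casting test: is (x, y) inside the ROI polygon?
bool InsidePolygon(const std::vector<Point2D>& poly, double x, double y) {
  bool inside = false;
  for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
    // Count crossings of a horizontal ray going right from (x, y).
    if ((poly[i].y > y) != (poly[j].y > y) &&
        x < (poly[j].x - poly[i].x) * (y - poly[i].y) /
                (poly[j].y - poly[i].y) + poly[i].x) {
      inside = !inside;
    }
  }
  return inside;
}

// Keep only the LiDAR points whose 2D projection falls inside the ROI.
std::vector<LidarPoint> FilterByRoi(const std::vector<LidarPoint>& cloud,
                                    const std::vector<Point2D>& roi) {
  std::vector<LidarPoint> kept;
  for (const auto& p : cloud) {
    if (InsidePolygon(roi, p.x, p.y)) kept.push_back(p);
  }
  return kept;
}
```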

Refer to our technical documentation on GitHub for a detailed algorithm description.

CNN Segmentation

After the HDMap ROI Filter has removed points outside the ROI, we obtain the filtered point cloud of objects inside the ROI. This is fed into our CNN Segmentation module, which detects and segments foreground obstacles such as cars, trucks, bicycles, and pedestrians.

The segmentation algorithm is based on a fully convolutional deep neural network that learns obstacle point cloud characteristics and predicts the relevant properties of the obstacles, such as the foreground object probability, the offset displacement w.r.t. the object center, and the object height. It then constructs an adjacency graph based on these attributes and analyzes the connected components for object segmentation.
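As a rough illustration of this post-processing step, the sketch below clusters the cells of a bird's-eye-view grid by following each cell's predicted center offset and merging cells that vote for the same location. The grid layout, field names, and threshold are illustrative assumptions rather than the model's real output format:

```cpp
#include <cmath>
#include <numeric>
#include <vector>

// Per-cell network outputs for a width x height grid (illustrative).
struct CellOutput {
  float objectness;  // foreground object probability
  float offset_x;    // predicted displacement toward object center, in cells
  float offset_y;
};

// Union-find used to form connected components over the grid.
struct UnionFind {
  std::vector<int> parent;
  explicit UnionFind(int n) : parent(n) {
    std::iota(parent.begin(), parent.end(), 0);
  }
  int Find(int x) { return parent[x] == x ? x : parent[x] = Find(parent[x]); }
  void Union(int a, int b) { parent[Find(a)] = Find(b); }
};

// Cluster foreground cells: each cell is merged with the cell that its
// predicted center offset points at, so cells voting for the same object
// center end up in one component. Returns a per-cell component id (-1 = bg).
std::vector<int> ClusterCells(const std::vector<CellOutput>& cells,
                              int width, int height,
                              float objectness_threshold = 0.5f) {
  UnionFind uf(width * height);
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      const CellOutput& c = cells[y * width + x];
      if (c.objectness < objectness_threshold) continue;
      int cx = x + static_cast<int>(std::lround(c.offset_x));
      int cy = y + static_cast<int>(std::lround(c.offset_y));
      if (cx < 0 || cx >= width || cy < 0 || cy >= height) continue;
      uf.Union(y * width + x, cy * width + cx);
    }
  }
  std::vector<int> component(width * height, -1);
  for (int i = 0; i < width * height; ++i) {
    if (cells[i].objectness >= objectness_threshold) component[i] = uf.Find(i);
  }
  return component;
}
```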

The CNN segmentation algorithm introduces advanced deep learning technology into point cloud-based obstacle detection and is capable of learning effectively from large amounts of data for object detection and segmentation. Compared to traditional algorithms, our method provides state-of-the-art performance on point cloud obstacle detection and segmentation, achieving high recall and low false positive rates, all in real time thanks to NVIDIA CUDA acceleration.

Refer to our technical documentation on GitHub for a detailed algorithm description.

MinBox Builder

At times, due to occlusions or distance from the LiDAR sensor, an object's full point cloud may not be visible: the points forming an obstacle can be sparse and cover only a portion of its surfaces. The MinBox Builder works in two steps to help solve this:

• First, the object builder component establishes a bounding box for the detected object.

• Second, the box builder recovers the full bounding box from the fitted polygon points of the detected point cloud, as sketched below.
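A common way to realize the second step is to fit the minimum-area rectangle around the object's fitted (convex) polygon, trying each polygon edge as a candidate box orientation. The following is a simplified sketch of that idea; Apollo's actual MinBox logic applies additional heuristics beyond raw area:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Pt { double x, y; };

// For each edge of the convex polygon fitted to the object's points,
// build the rectangle aligned with that edge and keep the smallest one.
// Returns the four corners of the minimum-area aligned box.
std::vector<Pt> MinAreaBox(const std::vector<Pt>& poly) {
  double best_area = std::numeric_limits<double>::max();
  std::vector<Pt> best_box(4);
  const size_t n = poly.size();
  for (size_t i = 0; i < n; ++i) {
    const Pt& a = poly[i];
    const Pt& b = poly[(i + 1) % n];
    double len = std::hypot(b.x - a.x, b.y - a.y);
    if (len == 0.0) continue;
    // Unit direction of this edge; the perpendicular is (-uy, ux).
    double ux = (b.x - a.x) / len, uy = (b.y - a.y) / len;
    double lo_u = 1e30, hi_u = -1e30, lo_v = 1e30, hi_v = -1e30;
    for (const Pt& p : poly) {
      double u = (p.x - a.x) * ux + (p.y - a.y) * uy;   // along edge
      double v = -(p.x - a.x) * uy + (p.y - a.y) * ux;  // perpendicular
      lo_u = std::min(lo_u, u); hi_u = std::max(hi_u, u);
      lo_v = std::min(lo_v, v); hi_v = std::max(hi_v, v);
    }
    double area = (hi_u - lo_u) * (hi_v - lo_v);
    if (area < best_area) {
      best_area = area;
      auto corner = [&](double u, double v) {
        return Pt{a.x + u * ux - v * uy, a.y + u * uy + v * ux};
      };
      best_box = {corner(lo_u, lo_v), corner(hi_u, lo_v),
                  corner(hi_u, hi_v), corner(lo_u, hi_v)};
    }
  }
  return best_box;
}
```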

Refer to our technical documentation on GitHub for a detailed algorithm description.

HM Object Tracker

The HM object tracker is designed to track obstacles detected by the CNN segmentation step. In general, it forms and updates track lists by associating current detections with existing tracks, deletes track lists when their objects are no longer observed, and creates new track lists when new objects are detected. For each tracked obstacle, the HM object tracker estimates its position, orientation, and velocity.
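The bookkeeping described above can be pictured as a per-frame association loop. The sketch below uses greedy nearest-neighbor matching with a distance gate purely for illustration; the actual HM tracker uses a more principled matcher and motion model, and all types and thresholds here are assumptions:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Detection { double x, y; };

struct Track {
  double x, y;     // last known position
  int missed = 0;  // consecutive frames without a matched detection
};

// One update cycle: match detections to tracks greedily by distance,
// update matched tracks, age unmatched ones, drop stale tracks, and
// spawn new tracks for unmatched detections.
void UpdateTracks(std::vector<Track>& tracks,
                  const std::vector<Detection>& detections,
                  double gate = 2.5, int max_missed = 5) {
  std::vector<bool> det_used(detections.size(), false);
  for (auto& t : tracks) {
    int best = -1;
    double best_d = gate;
    for (size_t j = 0; j < detections.size(); ++j) {
      if (det_used[j]) continue;
      double d = std::hypot(detections[j].x - t.x, detections[j].y - t.y);
      if (d < best_d) { best_d = d; best = static_cast<int>(j); }
    }
    if (best >= 0) {
      det_used[best] = true;
      t.x = detections[best].x;
      t.y = detections[best].y;
      t.missed = 0;
    } else {
      ++t.missed;
    }
  }
  // Remove tracks that have gone unmatched for too long.
  tracks.erase(
      std::remove_if(tracks.begin(), tracks.end(),
                     [&](const Track& t) { return t.missed > max_missed; }),
      tracks.end());
  // Start new tracks from detections no existing track claimed.
  for (size_t j = 0; j < detections.size(); ++j) {
    if (!det_used[j]) {
      tracks.push_back(Track{detections[j].x, detections[j].y, 0});
    }
  }
}
```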

Refer to our technical documentation on GitHub for a detailed algorithm description.

Video Presentation

Perception Demonstration in Offline Visualizer

These videos show examples of our perception detection results in our offline visualizer tool. The raw point cloud is visualized in grey, and the point cloud within our ROI is shown in green. Detected dynamic obstacles are depicted as blue bounding boxes with arrows that indicate their heading.

Perception Demonstration in Dreamview

In Dreamview, perception input is visualized as both static and dynamic obstacles. Obstacles are shown on the map as purple polygons, with white arrows indicating the heading of dynamic obstacles.

Cloud-Based Calibration Service

Apollo provides a cross-platform calibration service hosted in the cloud, freeing developers from needing to deploy calibration tools locally or on the vehicle. This gives developers greater flexibility in calibrating different sensor platforms and makes the platform more user-friendly.

The aim of sensor calibration for autonomous driving is to acquire the intrinsic parameters of each sensor and the extrinsic parameters between sensors. Sensor calibration is the first step for various sensor fusion algorithms.

Accurate extrinsic calibration between the LiDAR and GNSS/INS sensors is important for High-Definition Map production, LiDAR-based localization, and object detection in autonomous driving.
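Concretely, an extrinsic calibration is a rigid transform, a rotation R and a translation t, that maps points from one sensor frame into another. A minimal sketch of applying a LiDAR-to-INS extrinsic to a point (the frame names are illustrative, and the matrix values would come from the calibration service):

```cpp
#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Rigid extrinsic transform: p_ins = R * p_lidar + t.
// R and t together are the LiDAR-to-INS extrinsic parameters.
Vec3 LidarToIns(const Mat3& R, const Vec3& t, const Vec3& p) {
  Vec3 out{};
  for (int i = 0; i < 3; ++i) {
    out[i] = R[i][0] * p[0] + R[i][1] * p[1] + R[i][2] * p[2] + t[i];
  }
  return out;
}
```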

Apollo is committed to helping all levels of developers through targeted support. We look forward to you joining us!
