
Online Perception Module

In the Apollo 1.5 release, the perception module performs dynamic obstacle detection, segmentation, and tracking on 3D point cloud data. Our core segmentation component, called CNN Segmentation, was trained on a large-scale, manually labeled point cloud dataset. In this release, we open-sourced the major 3D perception runtime code and model, and we hope this benefits the self-driving car community. Please refer to our technical documentation on GitHub for a detailed algorithm description.

Key components in 3D perception

HDMap ROI Filter
The HDMap ROI filter removes LiDAR points that fall outside the region of interest (ROI). It aims to discard background objects (e.g., buildings and trees around the road) and leaves behind only the point cloud inside the ROI for subsequent processing. It requires high-resolution map data to be loaded for ROI-based filtering. Please refer to the technical document for algorithm details.
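The core geometric test behind ROI filtering can be sketched as a point-in-polygon check: each LiDAR point's ground projection is kept only if it lies inside a road polygon taken from the HD map. The polygon, point format, and function names below are illustrative, not Apollo's actual API.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as [(px, py), ...]?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at height y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def roi_filter(points, roi_polygon):
    """Keep only the (x, y, z) points whose ground projection lies in the ROI."""
    return [p for p in points if point_in_polygon(p[0], p[1], roi_polygon)]

# Example: a square road ROI; one point inside, one outside
roi = [(0, 0), (10, 0), (10, 10), (0, 10)]
pts = [(5, 5, 0.3), (20, 5, 0.3)]
print(roi_filter(pts, roi))  # -> [(5, 5, 0.3)]
```

In practice the ROI is built from HD map lane and junction polygons, and the test is accelerated with spatial indexing rather than checked edge by edge per point.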
CNN Segmentation
After the HDMap ROI filter, most background obstacles outside the ROI (e.g., buildings and trees) have already been removed. The filtered point cloud, containing only the points inside the ROI (i.e., the drivable road and junction areas), is fed into our CNN Segmentation module for detecting and segmenting foreground obstacles such as cars, trucks, bicycles, and pedestrians. The segmentation algorithm is based on fully convolutional deep neural networks and learns a feature representation of the point cloud to predict obstacle-related attributes (e.g., the foreground object probability, the offset displacement w.r.t. the object center, and the object height). It then constructs an adjacency graph from these attributes and analyzes the connected components to segment objects. The CNN segmentation algorithm brings advanced deep learning techniques to point-cloud-based obstacle detection and is capable of learning effective features from large amounts of data for object detection and segmentation. Compared to traditional algorithms, our method delivers state-of-the-art performance on point cloud obstacle detection and segmentation, achieving high recall, a low false positive rate, and real-time computation powered by NVIDIA CUDA acceleration. Please refer to the technical documents for algorithm details.
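The post-network step described above can be sketched as connected-component analysis: given per-cell foreground probabilities on a bird's-eye-view grid (the CNN's output), connect neighboring foreground cells and extract each connected component as an obstacle candidate. The threshold, grid layout, and 4-connectivity below are illustrative simplifications, not Apollo's exact formulation.

```python
from collections import deque

def segment_grid(prob, threshold=0.5):
    """Group 4-connected cells with prob >= threshold into components.

    prob: 2-D list of foreground probabilities.
    Returns a list of components, each a list of (row, col) cells.
    """
    rows, cols = len(prob), len(prob[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or prob[r][c] < threshold:
                continue
            # BFS flood fill over adjacent foreground cells
            comp, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                cr, cc = queue.popleft()
                comp.append((cr, cc))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = cr + dr, cc + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not seen[nr][nc] and prob[nr][nc] >= threshold):
                        seen[nr][nc] = True
                        queue.append((nr, nc))
            components.append(comp)
    return components

# Two separate blobs of foreground cells -> two obstacle candidates
grid = [
    [0.9, 0.8, 0.0, 0.0],
    [0.7, 0.0, 0.0, 0.9],
    [0.0, 0.0, 0.0, 0.8],
]
print(len(segment_grid(grid)))  # -> 2
```

The full pipeline also uses the predicted center offsets to pull cells of the same object together before clustering, which lets it separate obstacles that touch on the grid.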
MinBox Builder
The object builder component establishes a bounding box for each detected object. Due to occlusion or distance from the LiDAR sensor, the point cloud forming an obstacle can be sparse and cover only a portion of its surfaces. The box builder therefore recovers the full bounding box from the polygon fitted to the detected point cloud. Please refer to the technical document for algorithm details.
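The idea can be sketched as a minimum-area oriented bounding box: for each edge of the fitted convex polygon, align that edge with the x-axis and measure the axis-aligned bounding box of all vertices; the smallest such box is the result. Apollo's actual MinBox builder is more involved (it also accounts for heading and visibility), so treat this as an illustration of the principle only.

```python
import math

def min_area_box(polygon):
    """polygon: convex polygon vertices [(x, y), ...]. Returns (area, angle)."""
    best = None
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        angle = math.atan2(y2 - y1, x2 - x1)
        cos_a, sin_a = math.cos(-angle), math.sin(-angle)
        # rotate all vertices so the current edge lies along the x-axis
        xs = [x * cos_a - y * sin_a for x, y in polygon]
        ys = [x * sin_a + y * cos_a for x, y in polygon]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if best is None or area < best[0]:
            best = (area, angle)
    return best

# A 2x1 rectangle rotated by 45 degrees: the min box recovers area 2
r = math.radians(45)
rect = [(x * math.cos(r) - y * math.sin(r), x * math.sin(r) + y * math.cos(r))
        for x, y in [(0, 0), (2, 0), (2, 1), (0, 1)]]
area, _ = min_area_box(rect)
print(round(area, 6))  # -> 2.0
```

This works because, for a convex polygon, the minimum-area enclosing rectangle always has one side collinear with a polygon edge, so checking each edge direction suffices.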
HM Object Tracker
The HM object tracker tracks the obstacles detected by the segmentation step. In general, it forms and updates track lists by associating current detections with existing tracks, deletes old tracks when they no longer persist, and spawns new tracks for newly identified detections. The motion state of each updated track is estimated after association. Please refer to the technical document for algorithm details.
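The track-list bookkeeping described above can be sketched as follows: associate each existing track with the nearest detection within a gating distance, spawn new tracks for unmatched detections, and drop tracks that go unmatched too long. The real HM tracker uses Hungarian (bipartite) assignment and a motion filter; the greedy matching and constants below are simplifications for illustration.

```python
GATE = 2.0      # max association distance (meters), illustrative
MAX_MISSES = 3  # drop a track after this many consecutive misses

def update_tracks(tracks, detections):
    """tracks: list of dicts {'id', 'pos', 'misses'}; detections: [(x, y), ...]."""
    unmatched = list(range(len(detections)))
    for track in tracks:
        # pick the closest still-unmatched detection within the gate
        best, best_d = None, GATE
        for j in unmatched:
            dx = detections[j][0] - track['pos'][0]
            dy = detections[j][1] - track['pos'][1]
            d = (dx * dx + dy * dy) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            track['pos'] = detections[best]
            track['misses'] = 0
            unmatched.remove(best)
        else:
            track['misses'] += 1
    # drop stale tracks, then spawn new ones for leftover detections
    tracks = [t for t in tracks if t['misses'] < MAX_MISSES]
    next_id = max((t['id'] for t in tracks), default=-1) + 1
    for j in unmatched:
        tracks.append({'id': next_id, 'pos': detections[j], 'misses': 0})
        next_id += 1
    return tracks

tracks = update_tracks([], [(0.0, 0.0)])                    # spawns track 0
tracks = update_tracks(tracks, [(0.5, 0.0), (10.0, 10.0)])  # updates 0, spawns 1
print([t['id'] for t in tracks])  # -> [0, 1]
```

After association, a motion filter (e.g., a Kalman filter) would update each track's estimated velocity, which is what downstream prediction and planning consume.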

Video Presentation

Perception Demonstration in Offline Visualizer
Here we visualize the perception detection results using our offline visualizer tool. The raw point cloud is shown in grey and the point cloud within the ROI in green. Detected dynamic obstacles are shown as blue bounding boxes with arrows indicating their headings.
Perception Demonstration in Dreamviewer
Here we demonstrate the perception results in our dreamviewer, a tool that subscribes to perception topics and visualizes both the map and the dynamic obstacles. The dynamic obstacles are shown as purple polygons with white arrows indicating their headings.

Cloud based Calibration Service

Apollo provides a cross-platform calibration service in the cloud, so developers do not need to deploy any calibration tools locally or on the car. This service greatly improves the flexibility of calibrating different sensor platforms and lowers the barrier to entry.

The aim of sensor calibration for autonomous driving is to acquire the intrinsic parameters of each sensor and the extrinsic parameters between sensors from sensor measurements. Sensor calibration is the first step for any multi-sensor fusion algorithm.

In autonomous driving, combining a multiple-beam LiDAR with GNSS/INS is a common sensor configuration for High-Definition Map production, LiDAR-based localization, and object detection. Hence, it is of great importance to accurately calibrate the extrinsic parameters between the two sensors.
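To make the role of these extrinsics concrete: they are typically represented as a 4x4 homogeneous transform that maps LiDAR points into the INS frame. The sketch below applies such a transform; the specific rotation and translation values are made up for illustration and are not produced by the actual calibration service.

```python
import math

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T (row-major nested lists) to a 3-D point p."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3]
                 for i in range(3))

# Hypothetical extrinsics: LiDAR yawed 90 degrees and mounted 1 m forward of the INS
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
T_lidar_to_ins = [[c, -s, 0, 1.0],
                  [s,  c, 0, 0.0],
                  [0,  0, 1, 0.0],
                  [0,  0, 0, 1.0]]

x, y, z = transform_point(T_lidar_to_ins, (1.0, 0.0, 0.0))
print(round(x, 6), round(y, 6), round(z, 6))  # -> 1.0 1.0 0.0
```

Once the GNSS/INS pose is known, the same chain of transforms places every LiDAR point in the world frame, which is what HD map production and LiDAR-based localization rely on.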

We are committed to giving developers of all levels the most targeted support on the Apollo platform. The Apollo platform looks forward to your joining!