Online Perception Module

The perception module incorporates the capability of detecting and recognizing obstacles and traffic lights. Given LiDAR point clouds and RADAR data as input, the obstacle submodule detects, segments, classifies, and tracks obstacles in the ROI defined by the high-definition (HD) map. The submodule also predicts obstacle motion and position information (e.g., heading and velocity). The traffic light submodule detects traffic lights in camera images and recognizes their status. Based on these two submodules, Apollo 2.0 is able to achieve autonomous driving in simple urban scenes. Refer to our technical documentation on GitHub for a detailed algorithm description.
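
As a simplified illustration of the HD-map-defined ROI mentioned above, the following Python sketch keeps only the LiDAR points whose x/y coordinates fall inside a convex ROI polygon. The function and the polygon values are hypothetical examples for explanation, not part of the Apollo codebase.

    import numpy as np

    def points_in_roi(points_xy, roi_polygon):
        """Keep only the points inside a convex ROI polygon.

        points_xy   : (N, 2) array of LiDAR point x/y coordinates.
        roi_polygon : (M, 2) array of polygon vertices in counter-clockwise order.
        Returns a boolean mask of length N.
        """
        mask = np.ones(len(points_xy), dtype=bool)
        for i in range(len(roi_polygon)):
            a = roi_polygon[i]
            b = roi_polygon[(i + 1) % len(roi_polygon)]
            edge = b - a
            to_point = points_xy - a
            # The sign of the cross product tells on which side of the edge each point lies.
            cross = edge[0] * to_point[:, 1] - edge[1] * to_point[:, 0]
            mask &= cross >= 0.0
        return mask

    # Hypothetical usage: a square ROI around the vehicle and a few LiDAR returns.
    roi = np.array([[-20.0, -20.0], [20.0, -20.0], [20.0, 20.0], [-20.0, 20.0]])
    points = np.array([[1.0, 2.0], [35.0, 0.0], [-5.0, -18.0]])
    print(points[points_in_roi(points, roi)])   # the point outside the square is dropped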

Key Perception Components

Obstacle Perception

Obstacle perception includes LiDAR-based and RADAR-based obstacle perception, as well as fusion of the results from both. The LiDAR-based obstacle perception, built on a fully convolutional deep neural network, predicts obstacle attributes such as the foreground probability, the offset displacement w.r.t. the object center, and the object class probability, and then segments objects based on these attributes. The RADAR-based obstacle perception processes the raw RADAR data: in general, it extends the track IDs, removes noise, builds obstacle results, and filters them by the ROI. The obstacle result fusion merges the LiDAR and RADAR obstacle results; in general, it manages and associates obstacle results from the different sensors and integrates obstacle velocity with a Kalman filter.
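
To make the velocity-fusion step concrete, here is a minimal Python sketch of a scalar Kalman-filter measurement update that blends a velocity estimate from the LiDAR-based tracker with a RADAR velocity measurement according to their variances. The variable names and noise values are illustrative assumptions rather than Apollo's actual implementation.

    def kalman_velocity_update(v_est, p_est, v_meas, r_meas):
        """One scalar Kalman measurement update for obstacle velocity.

        v_est  : current fused velocity estimate (m/s)
        p_est  : variance of the current estimate
        v_meas : new velocity measurement from a sensor (m/s)
        r_meas : variance of that measurement
        """
        k = p_est / (p_est + r_meas)          # Kalman gain
        v_new = v_est + k * (v_meas - v_est)  # blend estimate and measurement
        p_new = (1.0 - k) * p_est             # uncertainty shrinks after the update
        return v_new, p_new

    # Illustrative numbers: the LiDAR tracker estimates 9.5 m/s, RADAR measures 10.2 m/s.
    v, p = 9.5, 0.4                                   # prior from the LiDAR-based tracker
    v, p = kalman_velocity_update(v, p, 10.2, 0.2)    # fuse the RADAR measurement
    print(v, p)   # the fused velocity lies between the two, closer to the lower-variance RADAR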

Traffic Light Perception

The traffic light module obtains the coordinates of traffic lights in front of the car by querying the HD map. The traffic lights are projected from world coordinates to image coordinates using the intrinsic and extrinsic parameters of the sensors, and camera selection is performed based on the projection results. A larger ROI in the image is then defined around the projected traffic light area. Traffic lights are detected within the ROI as bounding boxes and recognized as different color states. After the single-frame states are obtained, a sequential reviser is used to correct them. The detection and recognition are based on CNN models, and both achieve high recall and precision. The traffic light module works both during the day and at night.
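
The projection step can be sketched as follows: the example below projects a traffic light's world coordinates into pixel coordinates through an extrinsic world-to-camera transform and an intrinsic matrix, then expands the result into a larger search ROI. The matrices and numbers are made-up placeholders, not calibrated Apollo parameters.

    import numpy as np

    def project_to_image(p_world, T_world_to_cam, K):
        """Project a 3D world point (e.g., a traffic light from the HD map) into pixels.

        p_world        : (3,) point in world coordinates
        T_world_to_cam : (4, 4) extrinsic transform from world frame to camera frame
        K              : (3, 3) camera intrinsic matrix
        Returns (u, v) pixel coordinates, or None if the point is behind the camera.
        """
        p_cam = T_world_to_cam @ np.append(p_world, 1.0)   # world -> camera frame
        if p_cam[2] <= 0.0:                                 # behind the image plane
            return None
        uvw = K @ p_cam[:3]                                 # perspective projection
        return uvw[0] / uvw[2], uvw[1] / uvw[2]

    def expand_roi(u, v, half_size, img_w, img_h):
        """Build a square ROI around the projected point, clipped to the image."""
        x0, y0 = max(0, int(u - half_size)), max(0, int(v - half_size))
        x1, y1 = min(img_w, int(u + half_size)), min(img_h, int(v + half_size))
        return x0, y0, x1, y1

    # Illustrative numbers only: a camera at the origin looking down +Z.
    K = np.array([[2000.0, 0.0, 960.0],
                  [0.0, 2000.0, 540.0],
                  [0.0, 0.0, 1.0]])
    T = np.eye(4)                                  # identity extrinsics for the example
    uv = project_to_image(np.array([2.0, -1.0, 60.0]), T, K)
    print(uv, expand_roi(*uv, half_size=100, img_w=1920, img_h=1080))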

Video Presentation

Perception Demonstration in Offline Visualizer
The above videos are examples of our perception detection results rendered with our offline visualizer tool. The raw point cloud is visualized in grey. Detected cars, pedestrians, cyclists, and unknown obstacles are depicted by bounding boxes in green, pink, blue, and purple respectively, with red arrows indicating their headings.
Perception Demonstration in Dreamview
In Dreamview, perception input is visualized as both static and dynamic obstacles. Obstacles are shown on the map as purple polygons with white arrows indicating the heading of dynamic obstacles.

Cloud-Based Calibration Service

Apollo provides a cross-platform calibration service hosted on the cloud, freeing developers from needing to deploy calibration tools locally or on the vehicle. This gives developers greater flexibility in calibrating different sensor platforms and makes the platform more user-friendly.

The aim of sensor calibration for autonomous driving is to acquire the intrinsic parameters of each sensor and the extrinsic parameters between sensors. Sensor calibration is the first step for various sensor fusion algorithms.

Accurate extrinsic calibration between the LiDAR and the GNSS/INS sensors is important for high-definition map production, LiDAR-based localization, and object detection in autonomous driving. In addition, Baidu’s autonomous driving system uses multi-sensor fusion to improve perception performance, so it is necessary to calibrate the extrinsics of the cameras and the radar as well.
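
To illustrate what an extrinsic calibration provides, the sketch below builds a rotation-plus-translation transform and uses it to express a LiDAR-frame point in the GNSS/INS frame. The rotation angle and lever-arm values are made-up placeholders, not calibration results.

    import numpy as np

    def make_extrinsic(yaw_rad, translation):
        """Build a 4x4 homogeneous transform from a yaw rotation and a translation."""
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        T = np.eye(4)
        T[:3, :3] = np.array([[c, -s, 0.0],
                              [s,  c, 0.0],
                              [0.0, 0.0, 1.0]])
        T[:3, 3] = translation
        return T

    # Made-up extrinsics: the LiDAR sits 1.2 m ahead of and 1.5 m above the IMU,
    # rotated 2 degrees around the vertical axis.
    T_lidar_to_imu = make_extrinsic(np.deg2rad(2.0), [1.2, 0.0, 1.5])

    # A point detected 10 m ahead of the LiDAR, expressed in the LiDAR frame.
    p_lidar = np.array([10.0, 0.0, 0.0, 1.0])
    p_imu = T_lidar_to_imu @ p_lidar
    print(p_imu[:3])   # the same point expressed in the GNSS/INS frame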

Apollo is committed to helping all levels of developers through targeted support. We look forward to you joining us!