Simulation Scenarios Data
The simulation scenario data includes both human-edited and real-world collected scenarios, covering a variety of road types, obstacle types, and road environments. A cloud simulation platform is also open to developers, supporting concurrent online verification of algorithm modules across multiple scenarios to accelerate algorithm iteration.
Human-Edited Scenario Set
This scenario set is built through virtual editing and covers a variety of common scenes such as traffic lights and straight lanes. These virtual scenarios help developers quickly verify an algorithm's basic capabilities and speed up iteration.
Scenarios: 101
Annotation Data
Annotation data is generated by human annotators to meet deep learning and training needs. At present, many kinds of annotation data have been opened up, together with corresponding computing capabilities in the cloud, so that developers can train algorithms in the cloud and improve iteration efficiency.
Laser Point Cloud Obstacle Detection And Classification
It provides 3D point cloud annotation data labeling four types of obstacles: pedestrians, motor vehicles, non-motor vehicles, and miscellaneous objects. The data can be used for the research, development, and evaluation of obstacle detection and classification algorithms.
Download data: 121MB
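As a minimal sketch of working with the four annotated obstacle classes, the snippet below tallies labels in a point cloud annotation file. The one-line-per-point "x y z class" layout and the class names are illustrative assumptions, not the actual release format.

```python
# Hypothetical sketch: counting obstacle labels in a point cloud annotation
# file. The "x y z class" line layout and class names are assumptions.
from collections import Counter

CLASSES = {"pedestrian", "motor_vehicle", "non_motor_vehicle", "misc"}

def count_labels(lines):
    """Tally the four annotated obstacle classes, ignoring unknown labels."""
    counts = Counter()
    for line in lines:
        parts = line.split()
        if parts and parts[-1] in CLASSES:
            counts[parts[-1]] += 1
    return counts

sample = ["1.0 2.0 0.5 pedestrian", "4.1 0.3 0.2 motor_vehicle", "0 0 0 pedestrian"]
print(count_labels(sample))  # Counter({'pedestrian': 2, 'motor_vehicle': 1})
```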
Traffic Light Detection
It provides image data of common vertical traffic lights. The data was collected during daytime under sunny, cloudy, and foggy conditions, at a resolution of 1080p.
Download data: 88.1MB
Road Hackers
This dataset contains two main types of data: street-view images and vehicle motion status. The street-view images show the view in front of the vehicle, while the vehicle motion data includes the vehicle's current speed and trajectory curvature.
Download data: 6.6GB
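Trajectory curvature, one of the motion-status quantities above, can be estimated from three consecutive trajectory points via the circumscribed circle. This is a toy illustration of the concept; the sampling of points is an assumption, not the dataset's actual schema.

```python
# Sketch: curvature (1/R) of the circle through three consecutive 2D
# trajectory points; 0.0 is returned for collinear (straight-line) points.
import math

def curvature(p1, p2, p3):
    """Curvature of the circumscribed circle through three 2D points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Twice the signed triangle area; zero means the points are collinear.
    area2 = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    if area2 == 0:
        return 0.0
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    # kappa = 2 * |area2| / (a * b * c) for the circumscribed circle.
    return 2.0 * abs(area2) / (a * b * c)

print(curvature((0, 0), (1, 0), (2, 0)))  # 0.0 (straight line)
```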
Image-Based Obstacle Detection And Classification
The data collection covers urban road and highway scenes. Four major types of obstacles are manually annotated: motor vehicles, non-motor vehicles, pedestrians, and static obstacles. The data can be used for the research and evaluation of visual obstacle detection and recognition algorithms.
Download data: 108MB
Obstacle Trajectory Prediction
The sample data is derived from abstract features fused from multiple sensors. Each data group provides 62 dimensions of vehicle and road information, which can be used in the research, development, and evaluation of obstacle behavior prediction algorithms.
Download data: 27KB
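A minimal sketch of consuming the 62-dimensional records described above: validate each record's dimensionality before feeding it to a prediction model. The flat list-of-strings layout is an illustrative assumption.

```python
# Sketch: checking that each record carries the 62-dimensional
# vehicle-and-road feature vector. The record layout is an assumption.
FEATURE_DIM = 62

def validate_record(values):
    """Return the feature vector as floats, or raise if the dimension is wrong."""
    if len(values) != FEATURE_DIM:
        raise ValueError(f"expected {FEATURE_DIM} features, got {len(values)}")
    return [float(v) for v in values]

row = ["0.0"] * FEATURE_DIM
print(len(validate_record(row)))  # 62
```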
Scene Analysis
The data includes thousands of frames of high-resolution RGB video with corresponding per-pixel semantic annotations. It also provides dense point clouds with semantic segmentation measurements, stereoscopic videos, and stereo panoramic images.
Download data: 109.7GB
Demonstration Data
At present, many kinds of demo data have been opened up, covering sensor collection, self-positioning, end-to-end driving, and other modules. The demo data helps developers debug each module's code and ensures that Apollo's latest open-source modules run successfully in the developer's local environment; each module's capabilities can be experienced through the demo data.
Vehicle System Demo Data
It provides sensor data collected in real-world scenes, including the output of on-board modules such as LiDAR point cloud data and vehicle remote control data. The data can be used to debug the main modules of Apollo vehicles.
Download data: 12.1GB
Calibration Demo Data
It provides calibration service demo data generated by the vehicle calibration data collection tools. The data includes about 3 minutes of raw HDL-64E S3 LiDAR data, relative motion information from the integrated inertial navigation system, and a corresponding MD5 checksum file.
Download data: 560MB
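The shipped MD5 checksum file can be used to verify the downloaded archive before calibration. A minimal sketch, with placeholder file names, of computing a file's MD5 in streaming fashion:

```python
# Sketch: verifying a downloaded calibration archive against its MD5
# checksum file. The file names in the usage note are placeholders.
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so large LiDAR archives fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (placeholder names):
# expected = open("calibration_demo.md5").read().split()[0]
# assert md5_of("calibration_demo.tar.gz") == expected
```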
End-To-End Data
It provides raw sensor input data and the corresponding output decision and control commands. The current raw sensor input to Apollo consists mainly of images, and the output control commands include the steering wheel angle, acceleration, and braking.
Download data: 156GB
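A minimal container for the three output commands named above can make the record structure concrete. The field units and the normalized clamping range are illustrative assumptions, not the dataset's actual encoding.

```python
# Sketch: a minimal record for the end-to-end outputs (steering wheel
# angle, acceleration, braking). Units and ranges are assumptions.
from dataclasses import dataclass

@dataclass
class ControlCommand:
    steering_angle: float  # steering wheel angle, e.g. in degrees
    acceleration: float    # throttle command
    braking: float         # brake command

    def clamp(self, lo=-1.0, hi=1.0):
        """Clip acceleration/braking to an assumed normalized range."""
        return ControlCommand(
            self.steering_angle,
            min(max(self.acceleration, lo), hi),
            min(max(self.braking, lo), hi),
        )

cmd = ControlCommand(steering_angle=12.5, acceleration=1.7, braking=0.0)
print(cmd.clamp())  # acceleration clipped to 1.0
```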
Self-Positioning Module Demo Data
It provides relative trajectories, continuous video frames, and a deep neural network model. Developers can use this dataset to learn the functions of the self-positioning module.
Download data: 237MB
Multi-Sensor Fusion Localization Data
The dataset contains sensor data from a normal urban road scenario, with a duration of 3 minutes and a total length of 3 km, along with other basic data required by the multi-sensor fusion localization module. The data can be used to debug the module.
Download data: 4.0GB
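To illustrate the core idea behind multi-sensor fusion localization, here is the concept reduced to fusing two noisy 1D position estimates by inverse-variance weighting. This is a toy sketch of the general technique, not the Apollo fusion algorithm.

```python
# Toy sketch of sensor fusion: combine two noisy 1D position estimates
# by inverse-variance weighting; the fused variance is smaller than
# either input variance.
def fuse(x1, var1, x2, var2):
    """Fuse two estimates (value, variance) into one lower-variance estimate."""
    w1 = var2 / (var1 + var2)          # weight the lower-variance estimate more
    fused = w1 * x1 + (1 - w1) * x2
    fused_var = var1 * var2 / (var1 + var2)
    return fused, fused_var

x, v = fuse(10.0, 4.0, 12.0, 4.0)
print(x, v)  # 11.0 2.0
```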