Geeks rejoice: Baidu Apollo has released a test dataset for training autonomous-driving algorithms. Great news!
Autonomous driving has attracted tremendous attention in the last few years. Among the many enabling technologies for autonomous driving, environmental perception is the most relevant to the vision community. We are therefore hosting a challenge to assess the current state of computer vision algorithms in solving environmental perception problems for autonomous driving. For this challenge, we have prepared a number of large-scale, finely annotated datasets. Based on these datasets, we have defined a set of realistic problems, and we encourage new algorithms and pipelines to be invented for autonomous driving, rather than merely applied to it.

A total of USD 10K in cash prizes will be awarded to top performers. Specifically, the first-, second-, and third-place winners in each task will receive cash prizes of USD 1,200, 800, and 500, respectively. Each winner must submit a paper describing their approach after the competition closes.
We have collected and annotated two large-scale datasets. The first is provided by Berkeley DeepDrive (BDD). The BDD set includes 100K short video clips (each 40 seconds long), with one key frame annotated in each clip.
The second dataset, ApolloScape, is provided by Baidu. ApolloScape contains survey-grade dense 3D point clouds and registered multi-view RGB images captured at video rate, and every pixel and every 3D point is semantically labelled. In addition, a precise pose is provided for each image.
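To make the annotation format concrete, here is a minimal sketch of what one such sample might look like in code. This is not the official ApolloScape API; the `Frame` class, its field names, and the pose convention are all assumptions chosen for illustration, reflecting only the facts above: a registered RGB image, a per-pixel semantic label map, and a precise per-image pose.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Frame:
    """Hypothetical annotated sample: RGB image, per-pixel labels, per-image pose."""

    image: np.ndarray   # (H, W, 3) uint8 RGB image
    labels: np.ndarray  # (H, W) integer semantic class ID for every pixel
    pose: np.ndarray    # (4, 4) homogeneous camera pose (assumed convention)

    def __post_init__(self):
        # The label map must align with the image grid: one class ID per pixel.
        h, w, _ = self.image.shape
        assert self.labels.shape == (h, w), "label map must match image size"
        assert self.pose.shape == (4, 4), "pose is a 4x4 homogeneous transform"


# Toy example with placeholder data standing in for a real sample.
h, w = 4, 6
frame = Frame(
    image=np.zeros((h, w, 3), dtype=np.uint8),
    labels=np.zeros((h, w), dtype=np.int32),
    pose=np.eye(4),
)
print(frame.labels.shape)
```

The point of the sketch is the shape contract: dense per-pixel labels share the image grid, while the pose is a single transform attached to the whole frame.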