The development of autonomous vehicles also calls for new test tracks and new kinds of driving tests. Today's article shows why, and what they can look like.
At Zenuity—a joint venture between Volvo and Autoliv, a Swedish auto-safety company—this test is just one of many ways we make sure not just that autonomous vehicles work but that they can drive more safely than humans ever could. If self-driving cars are ever going to hit the road, they’ll need to know the rules and how to follow them safely, regardless of how much they might depend on the human behind the wheel.
Even now your car doesn’t need you as much as it once did. Advanced computer vision, radar technology, and computational platforms already intervene to avoid accidents, turning cars into guardian angels for their drivers. Vehicles will continue taking over more driving tasks until they’re capable of driving themselves. This will be the biggest transportation revolution since cars replaced horse-drawn carriages.
But it’s one thing to build a self-driving vehicle that works, and quite another to prove that it’s safe. Traffic can be as unpredictable as the weather, and being able to respond to both means navigating countless scenarios. To fully test all those scenarios by simply driving around would take not years but centuries. Therefore, we have to find other ways to assure safety—things like computer simulations and mathematical modeling. We’re combining real traffic tests with extensive augmented-reality simulations and test cases on one of the world’s most advanced test tracks to truly understand how to make self-driving cars safe.
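The "centuries" claim can be made concrete with a back-of-the-envelope calculation. This sketch uses illustrative numbers and the statistical "rule of three" (it is not Zenuity's methodology): to show with 95 percent confidence that a failure rate is below some benchmark p, a fleet must log roughly 3/p consecutive failure-free miles.

```python
import math

# Rule of three: after n failure-free miles, the 95% upper confidence
# bound on the per-mile failure rate is about 3/n. To demonstrate a rate
# below the human benchmark p, we need n >= ln(20)/p miles with no failures.
# The human fatality rate of ~1.1 per 100 million miles is the oft-cited
# U.S. figure; all other numbers here are illustrative.
human_rate = 1.1e-8   # fatalities per mile
confidence = 0.95
miles_needed = math.log(1 / (1 - confidence)) / human_rate
print(f"{miles_needed / 1e6:.0f} million failure-free miles")

# A single test vehicle driving 12,000 miles a year would need:
years = miles_needed / 12_000
print(f"{years:,.0f} years for one car")
```

Even a fleet of hundreds of test vehicles brings this only down to decades, which is why simulation and mathematical modeling have to carry most of the load.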
It’s easy for a self-driving vehicle to cruise down a straightaway in the middle of a sunny day. But what about what we call corner cases—scenarios in which several unlikely factors occur together? A road littered with fallen branches during a thunderstorm poses different challenges to a vehicle than an elk crossing the road while the sun is setting.
Manufacturers will likely be held liable for vehicles that react incorrectly, and so they want to know how the vehicle will respond. For us, the biggest question is “How do we know the vehicle is safe?”
But before that, we must first ask what it means for a self-driving vehicle to be safe. Safe doesn’t mean perfect; perfect information about the environment will never be available. Instead, it must mean the self-driving vehicle can handle the problems it’s designed to handle, like obeying speed limits, yielding to a car merging into its lane, or observing right-of-way at a stop sign. And it must also recognize when it is at risk of exceeding its design specifications. For example, the vehicle shouldn’t attempt to drive after being placed in the middle of the forest.
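The idea of staying within design specifications can be sketched as an explicit scope check. This is a toy illustration only: the condition names and thresholds below are invented for the example and are not an actual operational design domain definition.

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    road_type: str        # e.g. "highway", "urban", "unpaved"
    visibility_m: float   # estimated sensor visibility in meters
    map_available: bool   # is the area covered by a detailed map?

def within_design_domain(c: Conditions) -> bool:
    """Return True only if every operating condition is in scope."""
    return (
        c.road_type in {"highway", "urban"}  # no off-road driving
        and c.visibility_m >= 50.0           # sensors need adequate range
        and c.map_available                  # a forest has no detailed map
    )

# The forest example from the text: unmapped, unpaved terrain is out of
# scope, so the vehicle must refuse to drive itself.
forest = Conditions(road_type="unpaved", visibility_m=200.0, map_available=False)
print(within_design_domain(forest))  # False
```

The point is that "safe" includes knowing when *not* to drive: the check fails closed, refusing autonomy whenever any single condition falls outside the specification.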
In 9 out of 10 accidents resulting in fatalities or major injuries, mistakes by the driver are a contributing factor, according to multiple U.S. and U.K. sources. Because of this, the quick answer to what is "safe enough" is usually "better than a human driver." But the devil is in the details. It's too easy a challenge to surpass the drunken driver, or even the statistically average driver, because accident statistics are dominated by a small minority of very bad drivers. The median driver, you might argue, is considerably better than the average suggests.
We propose that self-driving cars be held neither to a standard so strict that it delays the introduction of a life-saving technology nor to one so lenient that it treats the initial customers as guinea pigs. Instead, the first self-driving vehicles should be demonstrably safer than a vehicle driven by the median human driver. We believe that if every component can be demonstrated to work better than its human counterpart, and if the complex algorithms that govern each component can be shown to work together to drive the vehicle, it's reasonable to conclude that the car is a better driver than the human.
This means designing the vehicle's systems to handle any situation within their scope and to disregard the rest. While it is possible that a parachutist could land directly in front of the vehicle, that scenario is so unlikely that safety tests need not account for it.