Business model: Safety will be the next billion-dollar business in autonomous vehicles!

What will be the next big thing in autonomous vehicles? Safety and compliance!

Here are four startup business opportunities few founders are pursuing that could not only save lives by accelerating autonomous cars, but also turn into massive, billion-dollar businesses.

Robotaxi Regulation around Safety: Studies show that autonomous vehicles need to drive many billions of miles before we can statistically prove that they are safer than humans. Unfortunately, thousands of lives will be lost at the hands of human drivers while we wait for those billions of miles to be driven autonomously. An effective tool at our disposal is computer simulation, which can subject a vehicle's sensing, prediction, planning, and execution to many traffic scenarios at a rate many times faster than real time. In fact, there are many mapping and simulation startups, as well as incumbent Ansys through its acquisition of OPTIS, aiming to offer software for modeling autonomous cars. Perhaps they should consider building businesses around certifying that autonomous cars are safe.

Why would a simulator company consider a standalone business model? Unfortunately, autonomous vehicle software has not matured to the point where modules, such as mapping and simulation, can be plugged in and out. However, these tools can subject an autonomous vehicle's AI to many driving environments and certify that the AI is "safe enough" under a broad variety of circumstances. They can enable autonomous car companies to self-certify their vehicles, or become the basis for a third party that uses them to certify compliance.

Robotaxi certification and compliance: Once we determine that a robotaxi can be built that is safer than the human-piloted variety, the next step is a standardized means of testing whether individual robotaxis are indeed safe and compliant. Given the enormous hardware and software complexity, it is probably impossible to validate every sensor and line of code.
However, going back to the airline analogy, it would be possible to conclude, with a high confidence level, that a vehicle is safe and compliant by performing several routine checks. For example, a simple on-board simulator can verify software integrity. Furthermore, placing simple markers outside the vehicle can check for sensor calibration, in addition to the standard visual inspection of the vehicle itself, i.e., tires, wheels, windshield(s), doors, etc. There may be a tradeoff between the speed of a compliance check and the safety confidence it delivers, but again, so long as the confidence level associated with a given procedure puts the robotaxi at a level where it is statistically safer than a human, it ought to be deemed compliant.

Robotaxi Insurance: In the unfortunate event of an accident, government inspectors will attempt to determine whether the vehicle was compliant at the time of the accident. If it is determined that all systems were operating according to regulations, then either 1) the robotaxi was not at fault, 2) there was a hardware malfunction, or 3) the robotaxi encountered a corner case it was not able to handle, a situation that a human driver may not have been able to handle either. The robotaxi operator will have to generate a report for the public record outlining the details of the accident, its root causes, and the steps being taken to prevent the incident from occurring again.

A similar example is data breaches at financial institutions: if customers' personal information is compromised, an investigation takes place to determine whether the company complied with the measures required to protect its customers' information. In the case of non-compliance or gross negligence, the company can be sued for damages. However, if the investigation deems that the company was indeed compliant, it must still provide certain disclosures and pay fines for the investigations.
The financial institution can purchase a cyber-insurance policy to protect itself against the expenses of these disclosures and penalties in the event of a hack, so long as it complies with regulation. The same can apply to autonomous vehicles. Methods will be established to test whether autonomous cars are likely to be safer than human drivers. In the unfortunate event of an accident where the robotaxi was at fault, an investigation by the authorities will determine whether the vehicle was indeed compliant. If it is concluded that the vehicle was compliant but there was a sensor malfunction, the incident would be handled the same way as a human-piloted vehicle suffering a brake or steering malfunction, which falls into the category of product liability. If it is determined that the vehicle was compliant, but the software failed because it encountered a corner case it could not handle correctly, then traditional auto insurance can cover the damage, and a new insurance product, "A/V insurance," can cover the costs associated with the required disclosures and investigations. Will traditional auto insurance companies underwrite the latter, or will cyber-insurers? Better yet, is there a startup, or several, that can help authorities determine whether a robotaxi is compliant and offer insurance to cover the penalties that will likely be levied on those at fault in accidents?

In this gold rush of autonomous vehicle technology, most founders are focused on building sensors, perception, prediction, planning, maps, and simulators. Unfortunately, the autonomous driving software stack is tightly integrated, limiting exit opportunities for founders and investors to acquisitions.
I see an opportunity to build massive enterprises that test and certify compliance, with these companies playing roles in our autonomous future as important as those of the companies building the sensors and software stacks in these four-wheeled robots!
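As a footnote on the "billions of miles" threshold mentioned at the top: the order of magnitude can be sketched with a standard Poisson zero-event bound. This is an illustrative calculation, not from the original article; the human benchmark rate (roughly 1.1 fatalities per 100 million vehicle miles) is an assumed round figure.

```python
import math

def zero_failure_miles(target_rate_per_mile, confidence=0.95):
    """Miles that must be driven with ZERO fatalities to conclude, at the
    given confidence, that the true fatality rate is below the target.
    Poisson zero-event bound: exp(-rate * miles) <= 1 - confidence."""
    return -math.log(1.0 - confidence) / target_rate_per_mile

# Assumed human benchmark: ~1.1 fatalities per 100 million vehicle miles.
human_rate = 1.1 / 100_000_000

needed = zero_failure_miles(human_rate)
print(f"~{needed / 1e6:.0f} million fatality-free miles")  # ~272 million
```

Even this best case, merely matching the human rate with a perfect fatality-free record, requires hundreds of millions of miles; demonstrating a modest improvement with statistical power, or absorbing any observed fatality, pushes the requirement into the billions, which is precisely why accelerated simulation is attractive.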
