Tesla gives an interesting glimpse into the inner workings of its so-called Autopilot, including such fascinating topics as image recognition and neural networks.
At Train AI last month, Karpathy gave a quick talk about the history of computer vision software and the transition to what he now calls “software 2.0,” in which engineers step back from hand-designing software and instead use machine learning to create the programs.
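To make the contrast concrete, here is a toy sketch of the idea in Python. Everything below is a hypothetical illustration of the "software 1.0 vs. 2.0" framing, not Tesla code: in the first function a human writes the decision rule, while in the second only the shape of the rule is hand-written and the parameter itself is learned from labeled examples.

```python
# Software 1.0: the engineer writes the decision rule by hand.
def is_bright_v1(pixel):
    return pixel > 128  # threshold chosen by a human


# Software 2.0: only the *form* of the rule is hand-written;
# the threshold itself is learned from labeled data.
def learn_threshold(examples, step=1.0, epochs=200):
    t = 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if x > t else 0
            t -= step * (label - pred)  # nudge threshold toward fewer errors
    return t


# Labeled data stands in for the human-curated dataset the talk emphasizes.
data = [(x, 1 if x > 100 else 0) for x in range(0, 256, 16)]
t = learn_threshold(data)
# t settles at a value that separates the two classes (between 96 and 112 here)
```

The engineering effort shifts from tuning the threshold to curating the `data` list, which is exactly the shift in focus Karpathy describes later in the talk.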
He explained how it applies to Tesla and specifically the company’s autonomous driving effort.
The engineer refers to Tesla’s vehicles as “robots,” which, he says, makes Tesla’s fleet of over 250,000 vehicles the largest deployment of robots in the world.
Now he needs to train those robots to drive themselves.
Karpathy says that since joining Tesla 11 months ago, he has pushed for more “software 2.0” in Tesla’s Autopilot stack, which he illustrated with these images:
He is likely referring to the rewrite of Tesla’s neural net that was pushed in the Autopilot software update back in March, which featured a significant improvement in Autopilot capabilities.
Now that the neural net is slowly taking over the code of Tesla’s Autopilot, Karpathy says the team is focusing on labeling and on building dataset infrastructure.
Karpathy explained that since he joined Tesla, what keeps him up at night has shifted from the actual modeling and algorithms to handling the datasets:
As an example, he describes how labeling different types of lane lines can get quite complex because lane markings vary so much across regions.
He gave another example of Tesla’s dataset with traffic lights, which he says can “get crazy really fast”:
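A small sketch helps show why this gets complicated. The schema below is entirely hypothetical (Tesla’s actual taxonomy is not public), but even this simplified version hints at how many cases a labeler must distinguish once regional variations, marker types, and colors are taken into account:

```python
from dataclasses import dataclass
from enum import Enum


class LineType(Enum):
    SOLID = "solid"
    DASHED = "dashed"
    DOUBLE_SOLID = "double_solid"
    BOTTS_DOTS = "botts_dots"  # raised pavement markers, common in California
    NONE = "none"              # unmarked road edge


@dataclass
class LaneLineLabel:
    line_type: LineType
    color: str    # e.g. "white", "yellow"
    region: str   # labeling conventions differ by region
    points: list  # (x, y) image coordinates tracing the line


# One labeled lane line in one image; a real dataset holds millions of these.
label = LaneLineLabel(LineType.DASHED, "white", "US", [(120, 480), (310, 250)])
```

Each new edge case (a faded double line, a construction-zone override, a regional marking style) forces either a new enum value or a new field, which is one way a labeling effort can “get crazy really fast.”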
Karpathy explained that building a dataset takes “time and attention” and is “painful,” which is why the team is building new tools at Tesla to help them create “software 2.0” code.