MIT and Toyota have released a visual dataset to advance research in autonomous driving.
How can we train self-driving vehicles to have a deeper awareness of the world around them? Can computers learn from past experiences to recognize future patterns that can help them safely navigate new and unpredictable situations?
These are some of the questions researchers from the Massachusetts Institute of Technology (MIT) AgeLab at the MIT Center for Transportation & Logistics and the Toyota Collaborative Safety Research Center (CSRC) are trying to answer by sharing an innovative new open dataset called DriveSeg.
Through the release of DriveSeg, MIT and Toyota are working to advance research in autonomous driving systems that, much like human perception, perceive the driving environment as a continuous flow of visual information.