Improving Autonomous Driving Technology
The University of California, Berkeley has released a dataset of over 100,000 video sequences from more than 50,000 car rides to help improve autonomous driving technology. Each video sequence is 40 seconds long, and the dataset includes annotations for over 100,000 vehicles, people, and traffic signs. The videos depict a range of weather conditions and driving scenarios, including city streets, residential neighborhoods, and highways. The data also includes annotations for different types of lane markings, which can help autonomous vehicles learn which areas of the road are drivable.
Michael McLaughlin is a research assistant at the Center for Data Innovation. He previously worked at Oracle and held internships at USA TODAY and in local government. Prior to joining the Center for Data Innovation, Michael graduated from Wake Forest University, where he majored in communication with minors in politics and international affairs and journalism. He is currently pursuing a master's degree in communication at Stanford University, specializing in data journalism.