Improving Autonomous Driving Technology
by Michael McLaughlin, June 11, 2018

The University of California, Berkeley has released a dataset of over 100,000 video sequences from more than 50,000 car rides to help improve autonomous driving technology. Each video sequence is 40 seconds long, and the dataset includes annotations for over 100,000 vehicles, people, and traffic signs. The videos depict a range of weather conditions and driving scenarios, including city streets, residential neighborhoods, and highways. The dataset also includes annotations for different types of lane markings, which can help autonomous vehicles learn which areas of the road are drivable.
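For readers who want to explore annotations like these, the sketch below shows one way to tally object categories and weather conditions from a JSON label file. The file name and field names used here are illustrative assumptions rather than the official release format, so they would need to be adjusted to match the dataset's actual label files.

```python
import json
from collections import Counter

# Illustrative only: the path and schema below are assumptions, not the
# official release layout. Many driving datasets ship labels as JSON
# records with scene attributes and a list of annotated objects.
LABELS_PATH = "bdd_labels_sample.json"  # hypothetical file name

with open(LABELS_PATH) as f:
    frames = json.load(f)  # assumed: a list of per-frame label records

category_counts = Counter()
weather_counts = Counter()

for frame in frames:
    # Assumed fields: "attributes" holds scene metadata such as weather,
    # and "labels" holds the annotated objects (vehicles, people, signs,
    # lane markings, drivable areas).
    weather = frame.get("attributes", {}).get("weather", "unknown")
    weather_counts[weather] += 1
    for obj in frame.get("labels", []):
        category_counts[obj.get("category", "unknown")] += 1

print("Objects per category:", category_counts.most_common(10))
print("Frames per weather condition:", weather_counts)
```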
