Training Autonomous Vehicles to Drive in Diverse Locations

Autonomous vehicle software company nuTonomy has released nuScenes, a dataset of over 1.4 million images to support research into computer vision and autonomous vehicles. The images depict 1,000 scenes, each 20 seconds long, recorded in either Boston or Singapore. The dataset also includes over one million bounding-box annotations for 25 different types of objects, as well as data from radar and lidar, a surveying method that uses laser light to measure distances. Including such sensor data can help researchers combine purely vision-based methods for autonomous driving with sensor-based approaches.
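As a quick back-of-the-envelope check of the dataset's scale, the figures reported above (1,000 scenes of 20 seconds each) imply roughly five and a half hours of driving footage. A minimal Python sketch, using only the numbers stated in the article:

```python
# Rough scale of nuScenes, using only the figures reported above.
NUM_SCENES = 1000      # scenes recorded in Boston and Singapore
SCENE_SECONDS = 20     # each scene is 20 seconds long

total_seconds = NUM_SCENES * SCENE_SECONDS
total_hours = total_seconds / 3600

print(f"{total_seconds} s of footage, about {total_hours:.1f} hours")
```

This is illustrative arithmetic only; the actual footage and annotation counts are documented by the nuScenes release itself.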
Michael McLaughlin is a research assistant at the Center for Data Innovation. He researches and writes about a variety of issues related to information technology and Internet policy, including digital platforms, e-government, and artificial intelligence. Michael graduated from Wake Forest University, where he majored in communication with minors in politics and international affairs, and journalism. He received his master's in communication at Stanford University, specializing in data journalism.