Teaching AI Systems to Analyze First-Person Videos

Researchers from the University of Bristol and the University of Catania in Italy have released a dataset of 100 hours of video recordings of individuals performing activities in the kitchen. The recordings include more than 20,000 unique narrations describing the tasks individuals perform, such as cutting peppers. The researchers collected the recordings using head-mounted cameras in 45 different kitchens. The dataset can help test the ability of AI systems to detect and anticipate tasks.
Michael McLaughlin is a research analyst at the Center for Data Innovation. He researches and writes about a variety of issues related to information technology and Internet policy, including digital platforms, e-government, and artificial intelligence. Michael graduated from Wake Forest University, where he majored in Communication with minors in Politics and International Affairs, and Journalism. He received his Master's in Communication from Stanford University, specializing in Data Journalism.