Building Better Robots

Researchers from the University of Alicante in Spain have released RobotriX, a dataset of 512 sequences of actions that robots performed in 16 virtual rooms, to help AI systems tackle robotic vision problems such as object detection. To generate the data, human operators used virtual reality to control the robot agents as they performed actions, such as grasping a bowl, in the simulated environments. For each of the nearly 8 million frames in the dataset, the researchers provide high-resolution images, depth maps, and 2D and 3D bounding boxes around household items such as sinks, bags, and shelves.
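The per-frame annotations described above (an RGB image, a depth map, and 2D and 3D bounding boxes per object) can be modeled as a simple record. This is a minimal sketch only: the field names, file paths, and box conventions are assumptions for illustration, not RobotriX's actual file format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Box2D:
    # Axis-aligned 2D box in pixel coordinates: (x_min, y_min, x_max, y_max).
    label: str
    bounds: Tuple[float, float, float, float]

@dataclass
class Box3D:
    # 3D box given as a center point and extents (width, height, depth) in meters.
    label: str
    center: Tuple[float, float, float]
    extent: Tuple[float, float, float]

@dataclass
class Frame:
    # One annotated frame: RGB image, per-pixel depth map, and object boxes.
    rgb_path: str
    depth_path: str
    boxes_2d: List[Box2D] = field(default_factory=list)
    boxes_3d: List[Box3D] = field(default_factory=list)

# Hypothetical example: a single frame with one annotated object ("bowl").
frame = Frame(
    rgb_path="seq_001/rgb/000000.png",
    depth_path="seq_001/depth/000000.png",
    boxes_2d=[Box2D("bowl", (120.0, 80.0, 210.0, 150.0))],
    boxes_3d=[Box3D("bowl", (0.4, 0.1, 1.2), (0.15, 0.08, 0.15))],
)
```

Pairing each RGB frame with its depth map and both box types is what lets a single dataset serve both 2D detectors and 3D perception models.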
Michael McLaughlin is a research analyst at the Center for Data Innovation. He researches and writes about a variety of issues related to information technology and Internet policy, including digital platforms, e-government, and artificial intelligence. Michael graduated from Wake Forest University, where he majored in Communication with minors in Politics and International Affairs and Journalism. He received his Master's in Communication from Stanford University, specializing in Data Journalism.