Publishing 3D Data to Benefit Indoor Robotics

Researchers from Stanford University, Princeton University, and the Technical University of Munich have published ScanNet, a new dataset of 3D scans of 1,513 indoor environments containing millions of annotations of household objects. The dataset can serve as valuable training data for robotics developers working to improve indoor robots' object recognition and navigation. To create the scans, the researchers used 3D cameras to capture imagery from 2.5 million camera viewpoints, and then used Amazon's Mechanical Turk crowdsourcing platform to annotate the scans' contents.
Joshua New is a senior policy analyst at the Center for Data Innovation. He has a background in government affairs, policy, and communication. Prior to joining the Center for Data Innovation, Joshua graduated from American University with degrees in C.L.E.G. (Communication, Legal Institutions, Economics, and Government) and Public Communication. His research focuses on methods of promoting innovative and emerging technologies as a means of improving the economy and quality of life. Follow Joshua on Twitter @Josh_A_New.