Teaching Machines to Understand What’s Going On In Videos

Researchers at the MIT-IBM Watson AI Lab have published the Moments in Time Dataset, a collection of one million labeled video clips, to help train AI systems to identify and understand actions in videos. Each clip is three seconds long and depicts people, animals, objects, and other natural phenomena in a dynamic scene.
Joshua New is a senior policy analyst at the Center for Data Innovation. He has a background in government affairs, policy, and communication. Prior to joining the Center for Data Innovation, Joshua graduated from American University with degrees in C.L.E.G. (Communication, Legal Institutions, Economics, and Government) and Public Communication. His research focuses on methods of promoting innovative and emerging technologies as a means of improving the economy and quality of life. Follow Joshua on Twitter @Josh_A_New.