Developing AI Systems with Visual Reasoning
Facebook and the Georgia Institute of Technology have released TextVQA, a dataset of images, questions, and answers, to foster the development of systems that can read text in images and answer questions about them. The dataset includes more than 28,000 images, 45,000 questions about the images (such as “what is the title of the white book?”), and 450,000 answers. Developing AI systems that can read and reason about text in images could be helpful to visually impaired individuals.
Michael McLaughlin is a research analyst at the Center for Data Innovation. He researches and writes about a variety of issues related to information technology and Internet policy, including digital platforms, e-government, and artificial intelligence. Michael graduated from Wake Forest University, where he majored in Communication with minors in Politics and International Affairs and Journalism. He received his Master’s in Communication from Stanford University, specializing in Data Journalism.