Teaching Computers to Answer Visual Questions
Researchers at Georgia Tech and Virginia Tech have launched the 2018 Visual Question Answering (VQA) Challenge, providing training data for participants competing to develop the best AI system for answering questions about the contents of images. The VQA dataset consists of more than 256,000 images, each paired with at least three questions about its contents, such as “who is wearing glasses” and “where is the child sitting,” along with true answers for each question and three plausible but incorrect answers per question. Participants have until May 20, 2018 to develop their submissions.
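To make the dataset's structure concrete, here is a minimal sketch of how one VQA-style entry (image, question, true answer, and plausible distractors) might be represented and scored. The field names and the `is_correct` helper are illustrative assumptions, not the actual VQA dataset schema or evaluation code.

```python
# Illustrative sketch of a VQA-style dataset entry; field names are
# assumptions for this example, not the official VQA schema.
from dataclasses import dataclass, field


@dataclass
class VQAEntry:
    image_id: str                      # identifier of the image in question
    question: str                      # e.g. "where is the child sitting"
    answer: str                        # the true answer
    distractors: list = field(default_factory=list)  # plausible but incorrect answers


def is_correct(entry: VQAEntry, prediction: str) -> bool:
    """Compare a model's predicted answer to the ground truth (case-insensitive)."""
    return prediction.strip().lower() == entry.answer.strip().lower()


entry = VQAEntry(
    image_id="img_0001",
    question="where is the child sitting",
    answer="on the stairs",
    distractors=["on the couch", "in a chair", "on the floor"],
)

print(is_correct(entry, "On the stairs"))  # True
print(is_correct(entry, "on the couch"))   # False
```

A challenge submission would replace the hard-coded `prediction` strings with the output of a model that takes both the image and the question as input.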
Joshua New was a senior policy analyst at the Center for Data Innovation. He has a background in government affairs, policy, and communication. Prior to joining the Center for Data Innovation, Joshua graduated from American University with degrees in C.L.E.G. (Communication, Legal Institutions, Economics, and Government) and Public Communication. His research focuses on methods of promoting innovative and emerging technologies as a means of improving the economy and quality of life.