Testing How Well AI Deals with Adversarial Examples

Researchers at UC Berkeley have published the Natural Adversarial Examples dataset, consisting of 7,500 images of natural phenomena designed to fool image classification algorithms. Adversarial examples significantly reduce a classifier's accuracy because subtle visual elements convince the algorithm it is seeing, for example, a manhole cover rather than a dragonfly. Testing a classifier's resilience to adversarial examples can help researchers overcome common flaws in classifier design, such as over-reliance on color or background cues.
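To illustrate what such a resilience test looks like in practice, here is a minimal sketch that measures a pretrained classifier's top-1 accuracy on a folder of adversarial images, using PyTorch and torchvision. The dataset path ("imagenet-a") and the assumption that folder labels already match the model's class indices are hypothetical; the dataset's own release documents its actual layout and the 200-class ImageNet subset it covers.

# Minimal sketch: evaluating a pretrained classifier on a folder of
# adversarial images. Paths and label mapping are assumptions, not the
# dataset's documented layout.
import torch
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Standard ImageNet preprocessing for torchvision models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical path; ImageFolder expects one subdirectory per class.
dataset = ImageFolder("imagenet-a", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, num_workers=2)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        # Assumes folder labels are already aligned with the model's
        # 1000-class index space; a real evaluation must map folder
        # names (synset IDs) to indices and restrict scoring to the
        # dataset's 200-class subset.
        correct += (preds == labels).sum().item()
        total += labels.size(0)

print(f"Top-1 accuracy on adversarial images: {correct / total:.1%}")

A sharp drop in this number relative to the model's accuracy on standard test images is what signals the kind of over-reliance on color or background cues described above.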
Joshua New is a senior policy analyst at the Center for Data Innovation. He has a background in government affairs, policy, and communication. Prior to joining the Center for Data Innovation, Joshua graduated from American University with degrees in C.L.E.G. (Communication, Legal Institutions, Economics, and Government) and Public Communication. His research focuses on methods of promoting innovative and emerging technologies as a means of improving the economy and quality of life. Follow Joshua on Twitter @Josh_A_New.