Creating Images from Text
by Michael McLaughlin, July 20, 2018

Researchers from Element AI, a company that creates business applications using artificial intelligence, have published a dataset to foster the development of generative models that can produce images from text input. The dataset includes nearly 300,000 high-definition images of humans wearing fashion items, such as pants, purses, and sunglasses. In addition, professional designers wrote paragraph-length descriptions to accompany each image, which researchers can use to train algorithms to recognize what pieces of clothing, such as slim-fit blue jeans and khaki pants, look like. By learning how different descriptions correspond to different properties of clothing, models can then generate representations of the clothing from text alone. Improved text-to-image synthesis models could help create computer-generated content.
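To make the training idea concrete, here is a minimal sketch, in PyTorch, of how paired images and descriptions can condition an image generator on text. This is not Element AI's model: the vocabulary size, network shapes, reconstruction loss, and random stand-in tensors are all illustrative assumptions (published text-to-image systems typically use adversarial or similar objectives and the dataset's real image-caption pairs).

```python
# Illustrative sketch only: a tiny text-conditional image generator.
# All sizes, losses, and data below are assumptions, not Element AI's setup.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Embeds a tokenized clothing description into a fixed-size vector."""
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):               # tokens: (batch, seq_len) of word ids
        _, h = self.rnn(self.embed(tokens))  # h: (1, batch, hidden_dim)
        return h.squeeze(0)                  # (batch, hidden_dim)

class Generator(nn.Module):
    """Maps a text embedding plus random noise to a small RGB image."""
    def __init__(self, hidden_dim=256, noise_dim=64, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(hidden_dim + noise_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, 3 * img_size * img_size),
            nn.Tanh(),                       # pixel values in [-1, 1]
        )

    def forward(self, text_vec, noise):
        x = self.net(torch.cat([text_vec, noise], dim=1))
        return x.view(-1, 3, self.img_size, self.img_size)

# One illustrative training step: push the generated image toward the real
# image paired with its caption. Random tensors stand in for the dataset's
# actual images and tokenized descriptions.
encoder, generator = TextEncoder(), Generator()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(generator.parameters()), lr=2e-4
)

tokens = torch.randint(0, 10_000, (8, 20))      # stand-in tokenized captions
real_images = torch.rand(8, 3, 64, 64) * 2 - 1  # stand-in paired images

fake = generator(encoder(tokens), torch.randn(8, 64))
loss = nn.functional.mse_loss(fake, real_images)
opt.zero_grad()
loss.backward()
opt.step()
```

The text encoder's embedding is what lets the generator associate phrases such as "slim-fit blue jeans" with visual attributes; in research systems of this period, the plain reconstruction loss above would usually be replaced by a discriminator network, yielding a conditional GAN.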
