This week’s list of data news highlights covers February 2-8, 2019, and includes articles about a new search engine for DNA and an AI model that can detect if a human is angry in just over a second.
A group of European and American researchers has created the Bitsliced Genomic Signature Index (BIGSI), a DNA search engine that makes it significantly easier to access microbial data. Even though researchers are continually collecting and sequencing bacterial genomes, it can take days to search through the data to see where and when a particular strain of bacteria has been present. Researchers can search BIGSI, which indexes 500,000 sequences, for a specific genome sequence in seconds.
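BIGSI’s core idea is to index the short subsequences (k-mers) of each dataset in Bloom filters, so a query can be answered by checking the same bit positions across every dataset at once instead of scanning raw sequence data. The following is a deliberately tiny sketch of that idea; the strain names, sequences, and parameters are invented for illustration, and the real index uses much longer k-mers and far larger filters:

```python
import hashlib

K = 5             # k-mer length (shortened here; real genomic indexes use longer k-mers)
NUM_BITS = 1024   # Bloom filter width per dataset
NUM_HASHES = 3    # hash functions per k-mer

def kmers(seq, k=K):
    """Return the set of all overlapping k-mers of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def positions(kmer):
    """Map a k-mer to NUM_HASHES bit positions via salted hashing."""
    return [int(hashlib.sha256(f"{i}:{kmer}".encode()).hexdigest(), 16) % NUM_BITS
            for i in range(NUM_HASHES)]

def bloom(seq):
    """Build one dataset's Bloom filter: set a bit for every k-mer hash."""
    bits = [0] * NUM_BITS
    for km in kmers(seq):
        for p in positions(km):
            bits[p] = 1
    return bits

def search(index, query):
    """Return names of datasets whose filters contain every query k-mer.

    Checking the same bit position across all filters is the 'bit slice':
    lookup cost grows with query length, not with the number of genomes
    indexed. Bloom filters allow rare false positives, never false negatives.
    """
    names = list(index)
    hits = set(names)
    for km in kmers(query):
        for p in positions(km):
            hits &= {n for n in names if index[n][p]}
    return hits

# Toy "database" of three microbial sequences (invented)
index = {name: bloom(seq) for name, seq in {
    "strain_A": "ACGTACGGTTCAGGAACT",
    "strain_B": "TTGGCCAATCGATCGTAC",
    "strain_C": "ACGTACGGTTGACCATGA",
}.items()}

print(sorted(search(index, "ACGTACGGTT")))  # ['strain_A', 'strain_C']
```

The payoff of this layout is that adding more genomes widens each bit slice but does not add extra lookups per query, which is what lets searches over hundreds of thousands of datasets finish in seconds.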
Google used machine learning to develop Live Transcribe, a new app for Android phones that helps people who are deaf or hard of hearing. The app works in 70 different languages and uses speech-to-text technology to transcribe conversations in real time. Google designed the app in partnership with Gallaudet University, a university for deaf and hard-of-hearing students.
Researchers are increasingly finding ways to improve the ability of AI systems to understand humans’ intent in spoken and written conversations, such as accurately identifying tone and idioms. For example, researchers from Carnegie Mellon University and Tsinghua University in China developed an AI system that can identify if a human is happy with over 87 percent accuracy by analyzing the person’s facial and vocal patterns. In addition, researchers from the University of Pittsburgh created an AI system that judges whether written phrases are literal or figurative with roughly 75 percent accuracy after training the AI system on Wikipedia entries.
Researchers from the Allen Institute for Artificial Intelligence have developed an AI program called AllenAI that can play an online Pictionary-style game called Iconary. To train the system, the researchers had AllenAI watch humans play 100,000 games of Pictionary and taught it to associate related words, such as grouping bread and fruit under the broader category of food. AllenAI uses computer vision, language understanding, and common-sense reasoning to play Iconary with humans, who can either draw or guess phrases.
McCormick, the world’s largest spice company, used AI to develop new flavor combinations, including Farmers Market Chicken and New Orleans Sausage. The company worked with IBM to develop the AI system, which the companies trained on data about ingredients, seasoning formulas, sales, trend forecasts, and consumer tests of products. The AI system suggests new flavor combinations and, according to McCormick, reduces the time it takes to create new products by up to two-thirds.
Researchers from a collection of U.S. universities have developed an AI program that can identify microscopic marine organisms called forams as quickly and accurately as most trained humans. The AI system photographs an organism’s shell or fossil while shining light on it from 16 different directions, then uses these images to identify the species of foram. Studying forams can help researchers better understand the past properties of oceans, including their temperature, salinity, acidity, and nutrient concentrations, because different types of forams thrive in different environments.
The Metropolitan Museum of Art in New York City worked with Microsoft and the Massachusetts Institute of Technology to use AI to create better experiences for its visitors. For example, one AI application creates new pieces of artwork based on images of the museum’s artifacts. Another system uses voice recognition to listen to a person tell a bedtime story while displaying images from the museum’s collection that correspond with the spoken words.
A group of European researchers used deep learning to discover that humans may have an unknown ancestor that lived in Eurasia. The researchers simulated thousands of evolutionary histories to train their model on the demographic details, such as population sizes and rates of intermixing, that would produce different genetic patterns of modern-day humans. The researchers then used AI to identify the evolutionary model that best fit actual human genomic data, finding that people of Asian descent likely had an unknown ancestor that contributed to their DNA 300,000 years ago.
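The general recipe here, simulate data under competing demographic histories, train a model to recognize which history produces which genetic patterns, then score real genomes against the candidates, can be illustrated with a deliberately simple sketch. Everything below is hypothetical: the model names, the single summary statistic, and the nearest-centroid classifier standing in for the researchers’ deep learning model.

```python
import random

random.seed(0)

# Two toy demographic models, each mapping to a distribution over one
# genetic summary statistic (e.g., a measure of shared variants).
# Real studies simulate whole genomes; these values are illustrative only.
MODELS = {
    "two_ancestors":   lambda: random.gauss(0.30, 0.05),
    "three_ancestors": lambda: random.gauss(0.55, 0.05),
}

def simulate(model, n=1000):
    """Draw n simulated summary statistics under one demographic model."""
    return [MODELS[model]() for _ in range(n)]

# "Training": summarize each model's simulations by their mean statistic
# (a stand-in for fitting a neural network to the simulations).
centroids = {m: sum(simulate(m)) / 1000 for m in MODELS}

def best_fit(observed):
    """Pick the demographic model whose simulations best match the data."""
    return min(centroids, key=lambda m: abs(centroids[m] - observed))

# A hypothetical "observed" statistic computed from real genomes:
print(best_fit(0.52))  # closer to the three-ancestor simulations
```

The appeal of this simulate-then-classify approach is that it works even when the likelihood of the real data under each history is too complex to write down directly; the simulations stand in for it.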
Researchers are using AI to develop systems that can detect if DNA sequences have been slightly altered to create dangerous viruses or toxins. For example, researchers at the University of Virginia are building a program that compares 40 million records of sequences from 90,000 microbial species to learn the DNA sequences of known toxins, and the program can already match a sequence to a particular organism. Such programs could detect when bad actors attempt to pay DNA-synthesis firms to produce dangerous pathogens.
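One simple way such a matcher can attribute a query to a source organism is by scoring how many short subsequences (k-mers) the query shares with each reference. A toy sketch of that scoring step, with invented reference names and sequences (the real program compares tens of millions of records):

```python
def kmers(seq, k=4):
    """Return the set of all overlapping k-mers of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Hypothetical reference sequences for known organisms
REFERENCES = {
    "toxin_gene_X": "ATGGCGTACCTGAACCGT",
    "harmless_Y":   "ATGTTTAAACCCGGGTTT",
}

def attribute(query):
    """Score the query against each reference by shared k-mer fraction
    and return the best match along with all scores."""
    q = kmers(query)
    scores = {name: len(q & kmers(ref)) / len(q) for name, ref in REFERENCES.items()}
    return max(scores, key=scores.get), scores

name, scores = attribute("GCGTACCTGAAC")
print(name)  # toxin_gene_X
```

A screening system would flag synthesis orders whose best-matching reference is a known toxin, even when the order’s sequence has been lightly edited, since most k-mers still overlap.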
AI startup Affectiva has created an AI model that can detect whether a person is angry in 1.2 seconds by analyzing their speech. Affectiva used transfer learning to develop the model, initializing its weights with those of SoundNet, a convolutional neural network that researchers had trained on two million videos, before fine-tuning it on a richly annotated dataset of 12 hours of audiovisual emotion data. Affectiva found that the model also performed well at analyzing Mandarin Chinese speech.
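Transfer learning of this kind starts from weights learned on a large, loosely related dataset, keeps those layers fixed, and trains only a small new classifier on the scarce labeled data. A toy sketch of that pattern follows; the “pretrained” weights, the feature layer, and the four-example dataset are all invented, and the real model is a convolutional network over audio rather than this tiny logistic-regression head:

```python
import math

# Stand-in for a pretrained audio network like SoundNet: a frozen
# feature extractor whose weights came from training on a large corpus.
PRETRAINED_W = [0.9, -0.4, 0.7]  # hypothetical learned weights

def features(audio):
    """Frozen 'pretrained' layer: transforms raw input into features."""
    return [math.tanh(w * x) for w, x in zip(PRETRAINED_W, audio)]

# Tiny labeled emotion dataset (hypothetical): 1 = angry, 0 = not angry.
data = [([2.0, -1.0, 1.5], 1), ([0.1, 0.2, -0.1], 0),
        ([1.8, -0.8, 1.2], 1), ([-0.2, 0.1, 0.0], 0)]

# Fine-tuning: train only a small new 'head' on top of frozen features,
# using plain logistic-regression gradient steps.
head = [0.0, 0.0, 0.0]
bias = 0.0
for _ in range(500):
    for x, y in data:
        f = features(x)
        p = 1 / (1 + math.exp(-(sum(h * fi for h, fi in zip(head, f)) + bias)))
        for i in range(3):                   # update the head only;
            head[i] += 0.1 * (y - p) * f[i]  # PRETRAINED_W stays fixed
        bias += 0.1 * (y - p)

def is_angry(audio):
    f = features(audio)
    return sum(h * fi for h, fi in zip(head, f)) + bias > 0

print(is_angry([1.9, -0.9, 1.4]))  # resembles the 'angry' examples
```

Because the frozen layers already encode general acoustic structure, the new head can reach useful accuracy from only 12 hours of labeled emotion data instead of the millions of examples the original network required.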