This week’s list of data news highlights covers November 16-November 22, 2019, and includes articles about using cellphone data to predict the spread of epidemics and building a robot that can sort trash.
Intermountain Healthcare, a hospital system in Utah and Idaho, has used data and AI to improve surgery outcomes while cutting costs. The hospital system developed an AI system that uses natural language processing to analyze electronic health records and determine how different factors, such as the surgeon and the equipment used, affected surgery outcomes and costs. This process led to new protocols that standardize surgeries—helping reduce readmission rates for total hip and knee replacements by 43 percent since 2017—and revealed that some expensive versions of medical devices did not produce better outcomes than their cheaper alternatives.
A researcher from Vrije Universiteit Amsterdam has developed an AI bot that scans Weibo, a Chinese social media platform, for posts that display suicidal thoughts and alerts a group of 600 psychologists, consultants, and volunteers who can reach out to the individual. The bot scans for posts containing words such as “death,” “release from life,” and “end of the world,” and classifies the posts into ten levels of severity. Since July 2018, the AI bot has helped prevent more than 1,000 suicides.
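The triage step can be pictured as a keyword scan that maps posts to severity levels. The sketch below is purely illustrative: it collapses the bot’s ten levels to three, uses hypothetical English stand-ins for the Chinese phrases, and replaces the bot’s trained classifier with simple substring matching.

```python
# Illustrative keyword-based triage (hypothetical phrases and levels;
# the actual bot classifies Weibo posts into ten severity levels).
SEVERITY_KEYWORDS = {
    3: ["end my life", "suicide plan"],      # highest-risk phrases
    2: ["release from life", "death"],
    1: ["end of the world", "hopeless"],
}

def triage(post: str) -> int:
    """Return a severity level for a post (0 = no flag raised)."""
    text = post.lower()
    # Check the most severe level first so the strongest signal wins.
    for level in sorted(SEVERITY_KEYWORDS, reverse=True):
        if any(phrase in text for phrase in SEVERITY_KEYWORDS[level]):
            return level
    return 0
```

Posts scoring above a threshold would then be routed to the volunteer network for human follow-up.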
Researchers from Yamagata University in Japan and IBM have used AI to find a previously unknown symbol among the Nazca Lines, a collection of large etchings created in Peru between 200 and 600 BCE. The researchers trained a neural network to recognize known etchings and then used the network to analyze lidar, drone, and satellite imagery. This method found the new etching, a small humanoid figure, in two months, whereas previous methods usually took years to find new symbols.
RealPage, a U.S. firm that creates software for the real estate industry, has developed an algorithm that predicts how likely a tenant is to pay rent. The firm trained the algorithm on 30 million leases, teaching it to find patterns in which tenants moved out without owing the property money. The algorithm found that individuals with student debt were better renters than individuals with credit card debt and that a short credit history does not indicate that an individual will be a bad renter.
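The kind of pattern the algorithm surfaces can be illustrated with a toy aggregation over synthetic lease records. RealPage’s actual features and model are proprietary; the field names and data below are invented for illustration.

```python
from collections import defaultdict

# Synthetic lease records: did the tenant leave owing the property money?
leases = [
    {"debt_type": "student", "left_owing": False},
    {"debt_type": "student", "left_owing": False},
    {"debt_type": "student", "left_owing": True},
    {"debt_type": "credit_card", "left_owing": True},
    {"debt_type": "credit_card", "left_owing": True},
    {"debt_type": "credit_card", "left_owing": False},
]

def owing_rate_by_group(records):
    """Share of tenants in each debt group who left owing money."""
    totals, owed = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["debt_type"]] += 1
        owed[r["debt_type"]] += r["left_owing"]
    return {g: owed[g] / totals[g] for g in totals}

rates = owing_rate_by_group(leases)
```

A trained model generalizes this idea across many features at once, but the underlying signal is the same: group-level differences in payment outcomes.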
Researchers from the Swiss Federal Institute of Technology have developed a system that uses AI to predict lightning strikes ten to thirty minutes before they occur within a 19-mile radius. The researchers trained the system’s machine learning algorithm on air pressure, air temperature, humidity, and wind speed data collected from 12 Swiss weather stations between 2006 and 2017. The algorithm can predict lightning strikes with 80 percent accuracy and could help develop a real-time warning system.
A researcher from the Czech Academy of Sciences in Prague has developed a machine learning algorithm that addresses a long-standing literary controversy: how much of Henry VIII William Shakespeare actually wrote. The researcher trained the algorithm on four plays Shakespeare wrote and on plays written by John Fletcher and Philip Massinger, whom historians have suggested may have helped write Henry VIII. The algorithm learned the style of each writer and determined that Fletcher likely wrote half of the play while Massinger was probably not involved.
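Authorship attribution of this kind typically rests on comparing how often each candidate author uses common function words. The toy version below captures the idea; the actual study used a trained model over a far richer feature set, and the text snippets here are placeholders, not the plays.

```python
from collections import Counter

# Function words whose relative frequencies form each author's "fingerprint"
# (a short, hypothetical list; real stylometry uses hundreds of features).
FUNCTION_WORDS = ["the", "and", "of", "to", "in", "ye", "hath"]

def profile(text: str):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def attribute(passage: str, candidates: dict) -> str:
    """Return the candidate author whose profile is closest to the passage's."""
    p = profile(passage)
    def dist(name):
        q = profile(candidates[name])
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(candidates, key=dist)

# Placeholder training text, not real Fletcher or Shakespeare.
candidates = {
    "Fletcher": "ye ye hath hath to",
    "Shakespeare": "the the the and of",
}
```

Applied scene by scene, this nearest-profile comparison is how an attribution model can split a collaborative play between its likely authors.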
Argonne National Laboratory, a U.S. Department of Energy lab, is using a giant AI chip to find better cancer drugs. Cerebras, a U.S. startup, created the chip, which is larger than an iPad and has 1.2 trillion transistors, allowing it to process large amounts of data without connecting multiple smaller processors, a step that can slow the training of AI models. The laboratory is using the chip to develop a model that can predict how a tumor will respond to different drugs. The chip has helped reduce model training times from weeks to hours.
Researchers from MIT and the Swiss Federal Institute of Technology have shown that cellphone data can help predict the spread of an epidemic. The researchers combined the anonymized cellphone data of 2.3 million people in Singapore with typical mosquito abundance and bite rates for dengue fever to simulate an epidemic. The simulated epidemic closely matched the actual spread of dengue in 2013 and 2014.
X, a subsidiary of Alphabet, has developed a robot that can sort trash. The firm trained the robot’s machine learning models using simulations and reinforcement learning, and the robot analyzes data from its sensors to safely navigate everyday environments, such as an office. The robot puts less than five percent of trash in the wrong location.
Google has developed a new service called Explainable AI, which consists of tools and frameworks that can help researchers interpret why a model made its decisions. The service quantifies each factor’s contribution to a decision by assigning it a score. For example, a model that reviews loan applications would likely show account balance and credit score as the most influential factors.
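One simple way to produce such per-factor scores is baseline substitution: replace one feature at a time with a neutral baseline value and record how much the model’s output drops. This is a simplified stand-in for the attribution methods a service like this would offer, and the loan model below is entirely hypothetical.

```python
# Per-feature attribution by baseline substitution (a simplified sketch;
# production explainability tools use methods such as Shapley sampling).
def score_features(model, instance, baseline):
    """Attribute the model's output to each feature by replacing one
    feature at a time with its baseline value and measuring the drop."""
    full = model(instance)
    attributions = {}
    for name in instance:
        perturbed = dict(instance, **{name: baseline[name]})
        attributions[name] = full - model(perturbed)
    return attributions

# Hypothetical loan-scoring model: all inputs normalized to [0, 1].
loan_model = lambda x: 0.5 * x["credit_score"] + 0.3 * x["balance"] + 0.1 * x["age"]

attr = score_features(
    loan_model,
    {"credit_score": 0.9, "balance": 0.8, "age": 0.5},
    {"credit_score": 0.0, "balance": 0.0, "age": 0.0},
)
```

Here `attr` ranks credit score as the largest contributor to the decision, matching the loan-application example above.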