This week’s list of data news highlights covers June 1-7, 2019, and includes articles about an AI system detecting bladder cancer and a system that uses facial analysis to detect rare diseases.
The White House Office of Management and Budget has released its Federal Data Strategy, which includes 40 actions agencies should take to improve their use of data. These actions include identifying datasets to prioritize for publication as open data, adopting or creating data standards to facilitate data sharing, and designing new data collections to maximize data reuse by future stakeholders.
Researchers from the University of Queensland and Curtin University in Australia have developed an app that uses AI to detect respiratory illnesses by analyzing the sound of coughs. The researchers trained the app's neural network on data about the coughs and other symptoms of 850 patients. The app is between 81 and 97 percent accurate, depending on the illness.
Researchers from Imperial College London have developed low-cost sensors that can detect if meat or fish is fresh and communicate that information to smartphones. The sensors, which cost two cents to make, detect the presence of gases such as ammonia, which can indicate spoilage. Researchers incorporated the sensors into near field communication tags that they placed on food packaging, and the tags stopped communicating with the phone once a certain level of gas was present, indicating the food had spoiled. The sensors could replace “use by” dates on food packaging, which can be inaccurate depending on how the food is stored.
A group led by researchers from the University of Florida has developed an AI system that outperforms physicians in diagnosing bladder cancer. The researchers trained the system on microscope slides of tissue from patients with bladder cancer and corresponding diagnostic reports. The system diagnosed slides with 95 percent accuracy, compared to 84 percent for pathologists.
Researchers from Queen Mary University of London have created a machine learning model that can predict whether an actor's busiest year, the year in which they act in the most roles, has already passed. The researchers developed their model using the profiles of more than 20,000 actors in the Internet Movie Database who had careers of at least 20 years. The researchers found that women were more likely than men to have large gaps between periods of work.
Researchers from Northern Illinois University and the College of New Jersey have developed an AI system that can detect if a baby’s cry stems from hunger, fatigue, illness, or pain. The researchers used a speech recognition algorithm to detect the features of cries, finding that cries that result from similar reasons share common features. The researchers then designed a new “cry language recognition algorithm” that distinguishes between the sources of cries.
G6 Hospitality LLC, the parent company of Motel 6, has implemented an AI system that matches callers to booking agents, increasing its call center revenue by 4 percent. The system uses a caller’s phone number to access data that includes between 100 and 1,000 characteristics about the caller, such as how often they travel or upgrade hotel rooms, and if they are calling by cell phone or landline. The system then matches the caller to an agent who has had the most sales with callers with similar profiles.
Researchers from the University of Bonn in Germany and Charité, a hospital clinic in Berlin, have developed a facial recognition system that can help diagnose rare diseases. The researchers trained a neural network on 30,000 photos of individuals with rare diseases, finding that the network could automatically detect the physical characteristics of certain diseases. The additional ability to analyze photos improved the accuracy of the system, which previously analyzed only genetic and patient data, by more than 20 percent.
Researchers from Stanford University, Princeton University, the Max Planck Institute for Informatics, and Adobe have developed an AI system that allows users to alter what a person says in a video by editing the transcript. The system analyzes an original video to align the transcript with units of sound and creates a 3D model of the speaker’s facial movements when speaking. When a user edits a video’s transcript, the system searches for lip movements that best align with the new words and edits the video to have the corresponding lip movements and sounds. The tool could help video producers by reducing the need to reshoot footage when a person misspeaks.
Researchers from Amazon have developed a new method that improves the ability of AI systems to understand vague commands by rewriting a command into what it likely means. When given a command such as “play their latest album,” the researchers’ method has a machine learning algorithm replace the word “their” with the most likely referent by analyzing previous rounds of dialogue. The researchers’ technique improved an AI system’s F1 score, the harmonic mean of precision and recall, by more than 20 percent.
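For readers unfamiliar with the metric, a minimal sketch of how an F1 score is computed; the precision and recall values below are illustrative assumptions, not Amazon’s reported figures:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative numbers only (not from the Amazon study):
baseline = f1_score(precision=0.60, recall=0.50)       # ~0.545
with_rewriting = f1_score(precision=0.72, recall=0.66) # ~0.689

# A relative gain of more than 20 percent, as described above
improvement_pct = (with_rewriting - baseline) / baseline * 100
```

Because F1 penalizes whichever of precision or recall is lower, a system must reduce both false positives and false negatives to raise it substantially.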
Image: Master Sgt. Michael Kaplan