This week’s list of data news highlights covers December 16, 2018 – January 4, 2019, and includes articles about AI reducing the amount of chemicals needed to analyze reactions and AI teaching robots how to walk in two hours.
Researchers from Amnesty International and Element AI, an AI software company, have created a machine learning system that found that over 7 percent of tweets sent to female politicians and journalists are problematic or abusive, ranging from hurtful or hostile remarks to outright abuse. The system also revealed that black women politicians and journalists receive an abusive tweet every 30 seconds and are 84 percent more likely than white women in the same professions to be mentioned in abusive tweets. The researchers also released Troll Patrol, a public tool that uses their model to analyze whether a tweet is abusive.
Researchers from the University of Washington attached removable sensors to bees to monitor their location and the surrounding temperature, humidity, and light intensity. The sensors, which form a miniature “backpack,” charge wirelessly and share information via radio waves. Besides revealing more about the lives of bees, which are in the midst of a global die-off, the backpacks could help farmers learn more about the conditions affecting their crops.
Researchers from Stanford University developed an AI system called DeepSolar that identifies solar panel installations in satellite images with 93 percent accuracy, and the researchers used the system to find that there are 1.47 million such installations in the United States, nearly 500,000 more than the latest estimate. The researchers trained DeepSolar on 370,000 satellite images, including images both with and without solar panels.
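The counting step of a system like DeepSolar can be sketched as scanning imagery tile by tile with a binary classifier and tallying the positives. The sketch below is illustrative only: the `classify_tile` function is a hypothetical stand-in that thresholds mean pixel brightness, whereas DeepSolar itself uses a deep convolutional network trained on the labeled images.

```python
# Minimal sketch (not DeepSolar itself): count image tiles that a binary
# classifier flags as containing a solar panel installation.

def classify_tile(tile, threshold=0.5):
    """Return True if the tile is predicted to contain a solar panel.
    Hypothetical stand-in classifier: thresholds mean pixel brightness."""
    mean = sum(sum(row) for row in tile) / (len(tile) * len(tile[0]))
    return mean > threshold

def count_installations(tiles):
    """Tally tiles flagged as positive across a scanned region."""
    return sum(1 for tile in tiles if classify_tile(tile))

# Toy 2x2 grayscale "tiles" standing in for satellite image patches.
tiles = [
    [[0.9, 0.8], [0.7, 0.9]],  # bright tile: flagged
    [[0.1, 0.2], [0.1, 0.0]],  # dark tile: not flagged
    [[0.6, 0.7], [0.8, 0.5]],  # bright tile: flagged
]
print(count_installations(tiles))  # → 2
```

In a real pipeline the same scan runs over millions of tiles covering the whole country, which is how a nationwide installation count falls out of a per-tile classifier.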
Researchers from Osaka University in Japan have developed an AI system that analyzes microscopic images of cells to accurately identify cancer types and whether cancer cells are resistant to radiation. The researchers trained the system on 10,000 images of human cervical cancer cells and cancer cells from mice. When tested against a dataset of 2,000 images, the system achieved 96 percent accuracy.
Researchers at New York University have created an AI system that uses computer vision and an artificial neural network to quickly analyze chemical reactions. When combined with miniaturized chemical reactors, the system reduces the amount of chemicals necessary to analyze a reaction from up to 100 liters to a few small drops, limiting waste and improving safety.
Google and British choreographer Wayne McGregor are using AI to create new dance choreography. Google and McGregor trained an AI system on hundreds of hours of video from McGregor’s archives and of the ten dancers in his company. After learning each dancer’s style, the system suggests 30 possible choreographic sequences based on a dancer’s current pose in a video. By accounting for the style of the dancer currently in the video as well as the styles of the nine other dancers, the system can blend the styles of all ten.
Researchers from the University of California, Berkeley and Google have developed an AI system that teaches robots how to walk. Other researchers have used reinforcement learning extensively to teach robots locomotion skills, but the process can be time consuming and damage robots, while training in simulation usually degrades real-world performance. The researchers instead used a technique called maximum entropy reinforcement learning, which encourages AI agents to explore a wider range of actions, to teach locomotion skills without simulated training. The researchers used the system to train a real-world four-legged robot to walk in the equivalent of about two hours.
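The entropy bonus at the heart of maximum entropy reinforcement learning can be sketched in a few lines: instead of maximizing expected reward alone, the agent maximizes expected reward plus the entropy of its action distribution, weighted by a temperature α, which rewards keeping its options open. The policies, rewards, and α values below are illustrative toy numbers, not anything from the Berkeley and Google work.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete action distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def soft_objective(probs, rewards, alpha):
    """Expected reward plus an entropy bonus: the per-step quantity
    maximum entropy RL maximizes instead of expected reward alone."""
    expected_reward = sum(p * r for p, r in zip(probs, rewards))
    return expected_reward + alpha * entropy(probs)

# Toy example: a peaked (exploitative) policy versus a more spread-out
# (exploratory) policy over three actions with hypothetical rewards.
rewards = [1.0, 0.9, 0.0]
peaked = [0.98, 0.01, 0.01]
spread = [0.55, 0.40, 0.05]

# With a small entropy weight the peaked policy scores higher; with a
# larger weight the spread-out policy wins, pushing the agent to explore.
print(soft_objective(peaked, rewards, alpha=0.1))
print(soft_objective(spread, rewards, alpha=1.0))
```

Raising α trades reward for exploration, which is what lets the robot try a wide range of movements quickly enough to learn to walk directly in the real world.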
Researchers from the University of California (UC), San Francisco, UC Berkeley, and UC Davis have created an AI system that uses machine learning to predict the development of Alzheimer’s in patients up to six years before doctors can. The researchers trained the system, which can detect the presence of Alzheimer’s with over 90 percent accuracy, on over 2,000 positron emission tomography (PET) scans. Radiologists typically look for decreasing levels of glucose in the brain to detect Alzheimer’s, but this is challenging because the disease lowers glucose levels slowly, meaning it is only detectable after the disease has progressed for some time.
California startup Humu has developed an AI system that analyzes employee surveys to identify the behavioral changes that would have the biggest impact on making employees happier. The system then uses emails and text messages to remind employees to perform actions that improve employee satisfaction, such as a manager asking for staff input about a company decision. Humu bases the system on economist Richard Thaler’s research showing that small nudges can encourage people to choose the best course of action rather than the easiest.
Researchers are increasingly using neural networks to translate data from electrodes surgically placed on human brains into computer-generated speech. For example, researchers from Columbia University used AI to analyze data from five patients’ auditory cortices as they heard recordings of stories and of counting from zero to nine. The system then “spoke” the numbers with 75 percent accuracy. Additionally, researchers from the University of Bremen in Germany and Maastricht University in the Netherlands trained their neural network on recordings of six brain tumor patients speaking single-syllable words. The system then reproduced words from previously unseen brain data with 40 percent accuracy. Lastly, researchers from the University of California, San Francisco, were able to reproduce sentences, with 80 percent accuracy, from the brain data of three epilepsy patients who read aloud.
Image: Rick Naystatt