This week’s list of data news highlights covers December 9-15, 2017, and includes articles about how NASA found a new planet with the help of AI and a pill that can monitor opioid usage.
Silicon Valley startup AEye has developed a new kind of hybrid sensor for autonomous vehicles that combines a camera, LIDAR, and chips running embedded AI, allowing the system to automatically prioritize where it focuses its attention. Other computer vision systems for autonomous vehicles can capture their surroundings in high detail, but they devote equal attention to every part of a scene, even when it is unnecessary, which is computationally demanding and can make these systems expensive. AEye’s system works more like the human brain, which prioritizes details at the center of a scene while monitoring the periphery for signs of danger. The hybrid sensor’s onboard AI automatically adjusts the system’s focus based on the context of its surroundings.
NASA has announced that it has discovered an eighth planet orbiting a Sun-like star called Kepler-90 2,545 light-years from Earth thanks to machine learning software developed by Google. NASA’s Kepler Space Telescope captured light readings from Kepler-90 over a period of four years. By using machine learning to identify slight dips in these light readings, which indicate a planet passing in front of the star, NASA was able to determine that Kepler-90 has eight orbiting planets, rather than seven as previously thought. The machine learning system also identified new planets orbiting other stars, but the Kepler-90 discovery is particularly significant because it makes Kepler-90 the only known star other than the Sun with eight planets.
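The idea behind the transit method is simple: a planet crossing in front of its star causes a small, repeating dip in measured brightness. The toy sketch below flags such dips in a synthetic light curve by comparing each reading to the series median; the function name, threshold, and data are illustrative assumptions, not NASA’s or Google’s actual pipeline, which uses a neural network trained on labeled Kepler signals.

```python
# Toy sketch of the transit method: flag readings that dip below a
# fraction of the median brightness. Illustrative only.

def find_transit_dips(flux, depth=0.005):
    """Return indices where brightness falls below (1 - depth) of the median."""
    baseline = sorted(flux)[len(flux) // 2]  # median brightness
    return [i for i, f in enumerate(flux) if f < baseline * (1 - depth)]

# Synthetic light curve: flat at 1.0 with two shallow transit dips.
flux = [1.0] * 20
flux[5] = flux[6] = 0.99    # first transit
flux[15] = flux[16] = 0.99  # second transit

print(find_transit_dips(flux))  # → [5, 6, 15, 16]
```

A real pipeline must also fold the light curve on a candidate orbital period so that repeated shallow dips stack up above the noise, which is where machine learning helps separate true transits from instrument artifacts.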
Researchers at IBM Research Australia and the University of Melbourne have developed an AI system that can analyze the electrical activity of a person’s brain and predict the onset of seizures with 69 percent accuracy. By analyzing data about an epilepsy patient’s brain from an electroencephalogram (EEG), researchers can identify patterns of electrical activity that are associated with the onset of a seizure. However, these patterns are different in every person and change over time, making predictions difficult. The researchers used data from implanted EEGs that continuously gathered data about brain electrical activity for an average of 320 days per patient and then used a deep learning system to identify personalized patterns of brain activity that signal an oncoming seizure. This approach could eventually be used to warn patients ahead of time via smartphone that they may soon have a seizure so they can better prepare.
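The personalization step can be illustrated with a much simpler statistic than deep learning: learn a per-patient baseline from that patient’s own historical recordings, then flag new windows that deviate sharply from it. Everything below is an illustrative assumption; the study used deep neural networks on implanted-EEG data, not this windowed-energy heuristic.

```python
# Toy sketch of a personalized warning: compare the "energy" of recent
# EEG windows against a baseline fit to this patient's own history.

def window_energy(signal, start, width):
    return sum(v * v for v in signal[start:start + width]) / width

def fit_baseline(history, width=4):
    """Mean windowed energy of a patient's historical recordings."""
    energies = [window_energy(history, i, width)
                for i in range(0, len(history) - width + 1, width)]
    return sum(energies) / len(energies)

def warn(signal, baseline, width=4, factor=3.0):
    """Return start indices of windows whose energy exceeds factor * baseline."""
    return [i for i in range(0, len(signal) - width + 1, width)
            if window_energy(signal, i, width) > factor * baseline]

history = [0.1, -0.2, 0.15, -0.1] * 6        # calm recordings
recent = [0.1, -0.1, 0.2, -0.15,             # calm window
          1.5, -1.4, 1.6, -1.5]              # spiking window
baseline = fit_baseline(history)
print(warn(recent, baseline))  # → [4], the spiking window
```

Because the baseline is fit per patient, the same detector adapts to different individuals, which mirrors why the researchers trained personalized models rather than one shared one.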
Researchers at the University of Leuven in Belgium and Facebook AI Research have developed a technique for training artificial neural networks so that they can differentiate between important and unimportant knowledge and “forget” unimportant knowledge to make room for learning new things. The technique involved measuring the outputs from a neural network and observing how sensitive they are to changes in neural connections within the network. With this information, the researchers were able to instruct the neural network to retain certain important connections and discard others to preserve certain knowledge, similar to how the human brain automatically forgets irrelevant information.
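The core measurement can be sketched with a finite-difference probe: nudge each connection slightly, see how much the network’s output moves, and discard the connections the output is least sensitive to. This is a minimal sketch under that assumption; the published work estimates importance from gradients within a trained network, not this brute-force perturbation on a one-layer toy model.

```python
import numpy as np

# Toy sketch of sensitivity-based "forgetting": keep the connections the
# output is most sensitive to and zero out the rest. Illustrative only.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # one weight layer: 4 inputs -> 3 outputs
x = rng.normal(size=4)        # a probe input

def output(W):
    return np.tanh(x @ W)     # tiny one-layer "network"

# Finite-difference sensitivity of the output to each connection.
eps = 1e-4
base = output(W)
importance = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy()
        Wp[i, j] += eps
        importance[i, j] = np.abs(output(Wp) - base).sum() / eps

# "Forget" the least important half of the connections.
cutoff = np.median(importance)
pruned = np.where(importance >= cutoff, W, 0.0)
print(f"kept {int((pruned != 0).sum())} of {W.size} connections")
```

The retained connections preserve the behavior the network has already learned, while the zeroed-out ones free capacity for new tasks, which is the continual-learning trade-off the researchers were targeting.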
Doctors at Brigham and Women’s Hospital in Boston and health technology company EtectRx have developed an ingestible capsule with a wireless sensor that can fit over regular pills and transmit a signal when it has been ingested to study how patients take prescription opioids. The pill sends a radio signal to a Bluetooth-enabled adhesive patch when it comes in contact with stomach acid, which can then notify the wearer’s smartphone. By studying how patients take prescription opioids, doctors hope to be able to provide better guidance about medication adherence, such as instructing patients not to take the drugs before they go to bed, which poses health risks, and identifying if a patient is becoming addicted.
DeepMind has developed a technological test called Gridworld for AI systems that can check for nine safety features, including whether an AI system is capable of modifying itself or if it can cheat at playing a game. Gridworld uses a series of simple video games that require an AI system to manipulate blocks of pixels and assess the AI system based on how well it performs and adheres to the games’ rules. DeepMind has made Gridworld freely available for anyone to download.
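The structure of such a test can be illustrated with a single-row gridworld: the agent is scored on reaching a goal, while a hidden counter records whether it crossed a forbidden cell on the way, so an agent can "win" the game while failing the safety check. The layout, moves, and scoring below are illustrative assumptions, not DeepMind’s actual environments.

```python
# Minimal gridworld-style safety check: reward and rule adherence are
# tracked separately. Illustrative only.

ROW = "A.X.G"   # A = agent start, X = forbidden cell, G = goal

def run_episode(actions):
    """Apply 'L'/'R' moves along the row; return (reached_goal, violations)."""
    pos, violations = ROW.index("A"), 0
    for a in actions:
        pos += 1 if a == "R" else -1
        pos = max(0, min(len(ROW) - 1, pos))
        if ROW[pos] == "X":
            violations += 1   # safety rule broken; reward is unaffected
        if ROW[pos] == "G":
            return True, violations
    return False, violations

print(run_episode("RRRR"))  # → (True, 1): reaches the goal but crosses X
```

Keeping the violation counter separate from the reward is the point: an AI system that maximizes only the visible score will happily cross the forbidden cell, and the test makes that behavior measurable.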
Computer scientists at the University of North Carolina at Chapel Hill and Adobe Research have developed a machine learning system capable of generating realistic audio for short videos. The team trained their system on a dataset of over two million YouTube clips with distinct audio events, such as dogs barking or a helicopter flying, and with the source of the sound clearly visible, to teach it to identify how certain sounds correspond to certain visual actions. In tests, humans could not tell if the sound in a video was genuine or generated by the machine learning system over 70 percent of the time.
A team of engineers and poultry scientists at the University of Georgia and the Georgia Institute of Technology have developed an AI system that can analyze the sounds chickens make and identify whether a chicken is content or experiencing different kinds of distress. The team recorded noises from chickens in stressful situations, such as exposure to too much ammonia in the air or mild viral infections, and fed this data to a machine learning system. For example, the system can identify when chickens are stressed due to heat with near-perfect accuracy, which could eventually allow farmers to monitor their flocks and automatically adjust the environment in response to the sounds chickens make.
A startup called AdVerif.ai has developed AI software capable of analyzing web pages and detecting likely-fake news stories, nudity, malware, and other kinds of problematic content. Advertisers are not always aware of what sites they advertise on, which allows some people to profit from creating fake news stories that drive high traffic to their site and expose more people to ads. AdVerif.ai can spot telltale signs that a story is fake, such as too much capitalization in a headline, compare stories to a database of thousands of fake and real news stories, and even differentiate between satire and fake news. Advertisers can use the service to avoid inadvertently supporting such content.
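One of the telltale signs mentioned above, excessive capitalization in a headline, can be sketched as a simple ratio check. The function names and the 0.4 threshold are made-up illustrations; AdVerif.ai’s actual model combines many such signals with a database comparison and is proprietary.

```python
# Toy version of a single fake-news signal: the share of ALL-CAPS words
# in a headline. Illustrative heuristic, not AdVerif.ai's model.

def caps_ratio(headline):
    """Fraction of purely alphabetic words that are fully uppercase."""
    words = [w for w in headline.split() if w.isalpha()]
    if not words:
        return 0.0
    return sum(w.isupper() for w in words) / len(words)

def looks_suspicious(headline, threshold=0.4):
    return caps_ratio(headline) > threshold

print(looks_suspicious("SHOCKING TRUTH they DONT want YOU to know"))   # → True
print(looks_suspicious("NASA finds eighth planet orbiting Kepler-90")) # → False
```

On its own such a heuristic would misfire on acronym-heavy headlines, which is why a production system weighs it alongside content-based comparisons against known fake and real stories.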
Researchers at the University of Texas at Austin’s Center for Transportation Research and the Texas Advanced Computing Center have developed a deep learning system that can analyze video from traffic cameras, recognize objects such as people, cars, and traffic lights, and build models of how they interact with one another. This system could help urban planners and transportation authorities better address transportation challenges, such as why some one-way streets see more cars traveling in the wrong direction, or how different traffic light patterns affect traffic volume.