10 Bits: the Data News Hotlist
This week’s list of data news highlights covers November 11–18, 2016, and includes articles about a machine learning algorithm that can predict if U.S. State Department documents should be classified and a system of autonomous aircraft that fight wildfires.
Google has developed a machine learning program called Rapid and Accurate Image Super Resolution (RAISR) that can analyze low-resolution image files and reproduce them at a higher resolution. This process, known as upscaling, is not new; however, it can be very resource intensive and time consuming. RAISR is capable of upscaling images 10 to 100 times faster than existing upscaling technology without the need for large amounts of computing power, making it viable for consumer applications such as improving image quality from smartphone cameras.
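Conventional upscaling interpolates each new pixel from its low-resolution neighbors. A minimal pure-Python sketch of bilinear interpolation — the kind of conventional baseline that learned methods like RAISR aim to outperform; the image values below are invented for illustration:

```python
def upscale_bilinear(img, factor):
    """Upscale a 2-D grayscale image (list of lists) by an integer factor
    using bilinear interpolation: each output pixel is a weighted average
    of the four nearest source pixels."""
    h, w = len(img), len(img[0])
    out_h, out_w = h * factor, w * factor
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        # Map each output pixel back to fractional source coordinates.
        sy = min(y / factor, h - 1)
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        for x in range(out_w):
            sx = min(x / factor, w - 1)
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

# Tiny invented 2x2 image, doubled to 4x4.
small = [[0, 100], [100, 0]]
big = upscale_bilinear(small, 2)
```

RAISR replaces this fixed averaging rule with filters learned from pairs of low- and high-resolution images, which is what lets it recover sharper detail at comparable cost.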
Researchers at Brazilian think tank Fundação Getulio Vargas and Columbia University have developed a machine learning algorithm that can predict with 90 percent accuracy whether documents from the U.S. State Department should have been classified. The researchers had their algorithm analyze one million declassified State Department cables from the 1970s, which originally carried various degrees of classification, so it could learn the factors that increase a message’s likelihood of being classified, including the contents of the message as well as metadata such as the sender and date.
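The general approach — learning which words make a document more likely to carry a given label — can be sketched with a minimal pure-Python multinomial naive Bayes classifier. The researchers’ actual model and features (including the metadata they used) are not described here, so the cables and labels below are invented for illustration:

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs. Returns per-label word counts
    and per-label document totals (used for the class priors)."""
    counts = {}
    totals = Counter()
    for text, label in docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def predict(counts, totals, text):
    """Pick the label with the highest log-probability under a
    multinomial naive Bayes model with Laplace smoothing."""
    vocab = {w for c in counts.values() for w in c}
    n = sum(totals.values())
    best, best_lp = None, float("-inf")
    for label, c in counts.items():
        lp = math.log(totals[label] / n)          # class prior
        denom = sum(c.values()) + len(vocab)      # Laplace smoothing
        for w in text.lower().split():
            lp += math.log((c[w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented training cables for illustration only.
docs = [
    ("eyes only ambassador negotiations sensitive", "classified"),
    ("press release trade visit schedule", "unclassified"),
]
counts, totals = train(docs)
```

A real system trained on a million cables would also fold in metadata features such as the sender and date, as the study describes.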
Scientists at the University of Colorado Boulder and Northwestern University have developed a prototype adhesive sensor that can monitor a person’s heartbeat by detecting electrical activity and the sounds the heart makes, similar to a stethoscope. Because the sensor can detect subtle frequencies, it can identify cardiac anomalies, such as those caused by a blood clot, or, if adhered to a person’s throat, it can interpret voice commands. The sensor is smaller than a penny and flexible, and though the prototype requires a wired connection, the scientists are developing a version that can transmit data wirelessly via Bluetooth.
Lockheed Martin has developed a series of autonomous aircraft that can coordinate with each other to help combat wildfires without putting humans at risk. One of the aircraft uses cameras and infrared sensors to identify burning areas and shares this data with an autonomous cargo helicopter, which picks up and delivers water to the fire. A complementary system uses an autonomous drone to detect people trapped close to the fire and shares this information with an unmanned helicopter, which then identifies a safe landing spot near them to help them evacuate.
Researchers at Google’s DeepMind have developed a method for improving the performance of a machine learning system by teaching it to repeatedly evaluate its past decisions to better understand their relationship with positive or negative events, similar to the way animals dream. The researchers had their system, named Unsupervised Reinforcement and Auxiliary Learning agent (Unreal), attempt to complete a three-dimensional maze game, associating high point totals with positive feedback and low point totals with negative feedback. The researchers had Unreal play the game repeatedly and instructed it to devote particular focus to situations that resulted in the greatest amounts of positive feedback. Unreal was able to substantially increase its performance, and it can now play the game nearly as well as human experts, and ten times faster than the leading AI system designed to play it.
The United Kingdom’s weather agency, the Met Office, and mapping agency, Ordnance Survey (OS), as well as the University of Surrey, have partnered to develop modeling tools that combine mapping data with weather data to create realistic models of how these factors could impact 5G wireless network deployments. 5G networks are more easily disrupted by interference from the built environment and weather than other lower-frequency networks, making the locations of network infrastructure very important for successful deployment. OS will use the tools to recommend ideal locations for 5G infrastructure, as well as model how new building projects could impact the function of the 5G network.
Researchers at the University at Buffalo have created a smartphone app that uses eye-tracking technology to determine whether a child exhibits signs of autism spectrum disorder (ASD), which can cause children to avoid eye contact. The app shows users photographs of people in social scenes while the camera tracks and records where users look. The researchers found that children with ASD exhibited more scattered eye movement patterns, and the app could determine whether a child had ASD with 94 percent accuracy. The researchers will test the app in a larger study and expect it to help diagnose ASD earlier, which can prompt earlier and more effective treatment.
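One simple way to quantify how "scattered" a gaze pattern is would be to measure how far fixation points spread from their centroid. The Buffalo researchers’ actual features and thresholds are not described here, so this is only a hedged sketch with invented coordinates:

```python
import math

def gaze_scatter(fixations):
    """Root-mean-square distance of gaze fixation points (x, y) from
    their centroid -- a basic dispersion measure of the kind a
    screening app could compute from eye-tracking data."""
    n = len(fixations)
    cx = sum(x for x, _ in fixations) / n
    cy = sum(y for _, y in fixations) / n
    return math.sqrt(
        sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in fixations) / n
    )

# Invented example data: a tight gaze cluster vs. widely scattered gaze.
focused = [(100, 100), (102, 101), (99, 98), (101, 100)]
scattered = [(20, 200), (300, 40), (150, 310), (400, 120)]
```

A classifier would then combine measures like this across many photographs to produce a screening score.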
Walgreens has partnered with patient networking website PatientsLikeMe to incorporate the site’s user review data into the pharmacy’s website to help its customers make more informed decisions. PatientsLikeMe allows users to connect with people who have similar health conditions and share information with each other, such as their experiences with different medications. Walgreens will use this data to provide its customers with an array of useful information about their medications, including why patients choose particular drugs, the most common alternatives to specific medications, and the perceived effectiveness of these treatments.
Researchers at Stanford University have developed a small, ultra-low-power WiFi radio called HitchHike that uses 10,000 times less power than traditional WiFi radios. This technology is well-suited for when it is infeasible to regularly access and replace batteries in connected devices, a common problem in the Internet of Things. HitchHike has a range of 50 meters and can transmit data at rates as high as 300 kilobits per second using a technique called backscatter, in which a device transmits data by reflecting and modulating existing radio signals rather than generating its own, dramatically reducing power consumption.
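The core of backscatter can be sketched as on-off keying: the tag encodes bits by either reflecting or absorbing an ambient carrier, and the receiver recovers them from the reflected energy. This is a toy illustration of the general technique, not HitchHike’s actual protocol, whose encoding differs; the carrier samples and message below are invented:

```python
def backscatter_transmit(carrier, bits, chips_per_bit=4):
    """The tag reflects the ambient carrier for a 1 bit and absorbs it
    for a 0 bit, instead of generating any signal of its own."""
    return [s if bits[i // chips_per_bit] else 0.0
            for i, s in enumerate(carrier[: len(bits) * chips_per_bit])]

def backscatter_receive(samples, chips_per_bit=4):
    """Recover bits by measuring reflected energy in each bit period."""
    bits = []
    for i in range(0, len(samples), chips_per_bit):
        energy = sum(abs(s) for s in samples[i:i + chips_per_bit])
        bits.append(1 if energy > 0.5 else 0)
    return bits

carrier = [1.0, -1.0] * 20   # stand-in for an ambient radio signal
message = [1, 0, 1, 1, 0]
received = backscatter_receive(backscatter_transmit(carrier, message))
```

Because the tag never powers a transmitter of its own, its energy cost is limited to switching its reflection on and off, which is why backscatter radios can be orders of magnitude more efficient.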
The nonprofit Bipartisan Policy Center has partnered with researchers at the California Institute of Technology and the Massachusetts Institute of Technology to gather data about the time people spend waiting to vote. Long lines have often plagued elections, and while policymakers are aware of the problem, they often lack useful data about where the lines occur and why. The project gathers data about line wait times, the number of voting booths per polling center, and other useful data for jurisdictions representing 20 percent of registered voters. The data can help election officials make more informed funding decisions and improve the efficiency of polling places.