This week’s list of data news highlights covers July 20-26, 2019, and includes articles about autocomplete software for code and smart home systems with intelligent hearing.
Researchers at Vrije Universiteit Amsterdam have developed a machine learning method for transferring skills learned by an AI system controlling a robot with advanced sensor technology to more basic robots. In a lab setting, robots can rely on many advanced sensors to gather feedback as they perform tasks; however, it is not feasible to deploy robots in the field equipped with such hardware. The researchers trained a robot with eight proximity sensors and a camera to navigate a simulated environment and then used a technique called transfer learning to enable a robot with just a camera to learn to navigate the same environment significantly faster than a control robot.
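The transfer-learning idea can be illustrated with a minimal, hypothetical sketch (not the researchers' actual code): pretrain a model on data from a robot with both proximity sensors and a camera, then reuse the learned weights for a robot that only has the camera channels, instead of training that robot from scratch. All names and the synthetic task below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

N_CAMERA, N_PROXIMITY = 4, 8
n_rich = N_CAMERA + N_PROXIMITY

# Synthetic navigation signal that, in truth, depends only on the camera inputs.
X_rich = rng.normal(size=(200, n_rich))
w_true = np.ones(N_CAMERA)
y = X_rich[:, :N_CAMERA] @ w_true

def train(X, y, w, lr=0.05, epochs=300):
    """Linear model fitted by plain gradient descent on squared error."""
    for _ in range(epochs):
        err = X @ w - y
        w -= lr * X.T @ err / len(y)
    return w, float(np.mean((X @ w - y) ** 2))

# 1) Pretrain with every sensor available (the lab robot).
w_rich, _ = train(X_rich, y, rng.normal(0, 0.5, n_rich))

# 2) Transfer: the camera-only robot keeps the weights for the camera
#    inputs and fine-tunes briefly on camera-only data.
X_cam = X_rich[:, :N_CAMERA]
w_t, loss_transfer = train(X_cam, y, w_rich[:N_CAMERA].copy(), epochs=50)

# 3) Control: the same 50-epoch budget, but from random weights.
w_s, loss_scratch = train(X_cam, y, rng.normal(0, 0.5, N_CAMERA), epochs=50)

print(f"transfer {loss_transfer:.6f} vs from scratch {loss_scratch:.6f}")
```

Under the same short training budget, the transferred model starts near a good solution, which is the effect the researchers exploit to speed up learning on the basic robot.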
Mohammed AlQuraishi, a researcher at Harvard University, claims to have developed an AI system that can predict the 3D structures of proteins from their amino acid sequences up to one million times faster than leading methods. DeepMind announced in 2018 that its AI system AlphaFold could predict protein structures quickly and accurately by combining two AI techniques: one to predict a protein structure’s features, and another to predict a structure that could incorporate these features. AlQuraishi claims his approach can perform both operations simultaneously, allowing it to predict a protein structure in milliseconds, rather than several hours, though he admits there is likely a reduction in accuracy compared to AlphaFold.
Researchers at the MIT-IBM Watson AI Lab have created an AI tool that can generate a Renaissance-style portrait of a person based on a selfie. The researchers trained the tool, which is available at aiportraits.com, on 45,000 classical portraits to teach it the characteristics of oil, watercolor, and ink portraits. Unlike similar tools that “paint over” a face in a new style, the researchers’ tool generates entirely original portraits that incorporate a user’s facial features.
Facebook has developed a tool called Map With AI, in partnership with crowdsourced mapping website OpenStreetMap, that can map unmapped roads from satellite imagery, which human experts then review. Having mapping experts use this tool is significantly faster than having them create maps manually. Facebook has successfully used the tool to map 300,000 miles of roads in Thailand in 18 months—less than half the time it would have taken 100 human experts. Facebook has also made Map With AI freely available.
A University of Waterloo student, Jacob Jackson, has created autocomplete software for code called Deep TabNine. Jackson developed the system with the help of GPT-2, a deep learning system developed by OpenAI that can generate natural-sounding sentences about a particular topic. Unlike other code autocomplete tools that parse through a user’s code to predict the next line, Deep TabNine uses statistical analysis to find patterns in previous code and predict what is likely to come next. Jackson trained Deep TabNine on 2 million GitHub repositories, enabling it to support 22 different coding languages.
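A toy sketch can show what "statistical analysis to find patterns in previous code" means in its simplest form. This is a hypothetical bigram model, far simpler than Deep TabNine's deep learning approach: count which token tends to follow which in a corpus, then suggest the most frequent successors.

```python
from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count, for each token, how often each other token follows it."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest(counts, prev, k=3):
    """Return up to k completions ranked by observed frequency."""
    return [tok for tok, _ in counts[prev].most_common(k)]

# A tiny "previous code" corpus, pre-tokenized for simplicity.
corpus = """
for i in range ( len ( items ) ) :
for j in range ( len ( rows ) ) :
for k in range ( 10 ) :
""".split()

model = train_bigrams(corpus)
print(suggest(model, "in"))   # "range" follows "in" in every example
print(suggest(model, "("))    # "len" is the most common token after "("
```

A real system replaces the bigram counts with a neural language model, but the interface—rank likely next tokens given context—is the same.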
Toronto-based nonprofit Ample Labs, which uses technology to help the homeless, has partnered with AI company Ada to create an AI-powered chatbot named Chalmers to provide homeless people with information about useful services. In a six-month test in Toronto, Chalmers was able to help more than 700 homeless people, directing them to 4,000 free meals and 800 shelter openings. Though Chalmers requires a computer or cell phone to access, 94 percent of homeless people have a phone.
Self-driving car company Waymo has partnered with fellow Alphabet subsidiary DeepMind to use the AI techniques DeepMind used to train its StarCraft-playing AI to improve how it trains self-driving cars. This approach, called population-based training (PBT), accelerates the process of picking the right machine learning algorithms and parameters for a particular task. Waymo will use PBT to help recalibrate its self-driving algorithms as cars collect more data.
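PBT's core loop can be sketched in a few lines. This is a hypothetical toy (the objective, learning rates, and schedule are invented for illustration): a population of workers trains in parallel, and periodically the worst performers copy ("exploit") the best performers' parameters and hyperparameters, then slightly perturb ("explore") them.

```python
import random

random.seed(0)

# Toy objective: find x minimizing (x - 3)^2 by gradient descent;
# the hyperparameter being searched is the learning rate.
def step(x, lr):
    grad = 2 * (x - 3)
    return x - lr * grad

population = [{"x": 0.0, "lr": lr} for lr in (0.001, 0.01, 0.1, 0.5)]

for _ in range(5):                       # PBT rounds
    for w in population:
        for _ in range(10):              # each worker trains a while
            w["x"] = step(w["x"], w["lr"])
        w["loss"] = (w["x"] - 3) ** 2
    population.sort(key=lambda w: w["loss"])
    half = len(population) // 2
    # exploit: the bottom half copies the top half's weights and lr;
    # explore: the copied lr is randomly perturbed.
    for bad, good in zip(population[half:], population[:half]):
        bad["x"] = good["x"]
        bad["lr"] = good["lr"] * random.choice((0.8, 1.2))

best = min(population, key=lambda w: w["loss"])
print(f"best lr {best['lr']:.3f}, loss {best['loss']:.2e}")
```

The payoff is that hyperparameters are tuned during a single training run rather than by launching many separate runs—this is what makes PBT faster than a conventional hyperparameter search.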
Researchers at the University of Washington have developed a machine learning system that can analyze sounds picked up by a smartphone or smart speaker’s microphone to detect warning signs of cardiac arrest. The researchers trained their system on 82 hours of 911 calls containing agonal breathing—a gasping sound associated with cardiac arrest—as well as normal breathing sounds like snoring, to teach it to identify agonal breathing while reducing false positives. The system, which can call for help in the event of a cardiac arrest, was 97 percent accurate at detecting agonal breathing when within 20 feet of a person experiencing symptoms.
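One common way to reduce false positives in this kind of always-on audio detector is to require agreement across several consecutive classification windows before alarming. The sketch below is hypothetical—it is not the UW team's actual pipeline, and the scores, threshold, and window counts are invented—but it shows the mechanism.

```python
from collections import deque

def alarm_stream(scores, threshold=0.9, need=2, history=3):
    """Raise an alarm only when `need` of the last `history` audio
    windows score above `threshold`—a single spike is ignored."""
    recent = deque(maxlen=history)
    alarms = []
    for score in scores:               # one classifier score per window
        recent.append(score >= threshold)
        alarms.append(sum(recent) >= need)
    return alarms

# An isolated spike (e.g., a snore misclassified once) does not trigger;
# a sustained run of high scores does.
print(alarm_stream([0.2, 0.95, 0.1, 0.3, 0.92, 0.97, 0.1]))
```

The trade-off is a short detection delay in exchange for far fewer spurious emergency calls—important when the system's response is dialing for help.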
Researchers at Harvard University and MIT have developed an AI tool called the Giant Language Model Test Room (GLTR) that can detect whether a selection of text was generated by AI. GLTR analyzes statistical patterns in text and highlights words as green, yellow, red, or purple based on how likely it is for a certain word to follow another. This can be helpful to spot AI-generated text because AI systems are more likely to make predictable word choices than humans. In a test, humans working unaided could only spot 50 percent of AI-generated texts in a sample, while humans working with GLTR could spot 72 percent.
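A toy version of the GLTR idea: for each word, find the rank a language model assigns it given the previous word, then map ranks into GLTR's green/yellow/red/purple buckets (top 10, top 100, top 1,000, rarer). The real tool queries a large language model; this hypothetical sketch substitutes bigram counts over a tiny corpus, so only the bucketing logic is faithful.

```python
from collections import Counter, defaultdict

def train_bigrams(tokens):
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def rank(counts, prev, word):
    """Rank of `word` among the model's predictions after `prev` (0 = most likely)."""
    ranked = [tok for tok, _ in counts[prev].most_common()]
    return ranked.index(word) if word in ranked else len(ranked)

def bucket(r):
    # GLTR's buckets: top-10 green, top-100 yellow, top-1000 red, else purple.
    if r < 10:
        return "green"
    if r < 100:
        return "yellow"
    if r < 1000:
        return "red"
    return "purple"

corpus = "the cat sat on the mat and the cat ran".split()
model = train_bigrams(corpus)

text = "the cat sat".split()
colors = [bucket(rank(model, p, w)) for p, w in zip(text, text[1:])]
print(colors)
```

Highly predictable text lights up mostly green; human writing tends to produce more yellow, red, and purple words, which is the signal GLTR surfaces to its users.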
A British company called Audio Analytic is developing AI systems for audio sensors that can help smart home devices more intelligently understand their environment and serve as a home security system. Many smart home devices analyze audio but do not perform well at interpreting sounds other than human voices. Audio Analytic’s systems can interpret real-world sounds such as smoke alarms and windows breaking and trigger an appropriate response, such as notifying the police.