This week’s list of data news highlights covers January 20-26, 2018, and includes articles about Singapore’s plan to train more data scientists and a machine learning system that can help refugees find work.
Amazon has opened its first cashier-less grocery store, called Amazon Go, to the public in Seattle. The store uses a smartphone app and a system of machine learning algorithms and regular and infrared cameras to track customers throughout the store and register items they take from shelves. Customers scan a QR code with their Amazon Go app to open a gate into the store, and the system will automatically charge the credit card linked with their app for their items when they leave.
Singapore’s Economic Development Board has launched an Industry Transformation Roadmap to create 5,500 new jobs in the professional services sector annually through 2020, with a particular emphasis on high-growth fields such as data science and artificial intelligence. The roadmap includes plans to establish data-sharing agreements with companies such as Google and ride-hailing company Grab, to improve the use of data in marketing, and to provide worker training in data-driven fields such as programmatic advertising and information modeling.
Researchers at Tsinghua University in Beijing have developed a microchip called Thinker designed to dynamically adjust its computing and memory specifications to support different kinds of AI applications. AI applications such as image recognition or natural language processing use different kinds of neural networks with varying numbers of layers, and thus have different computational requirements. Thinker can support a wide variety of these applications, which typically require specialized and expensive hardware, and needs only a relatively small amount of power, making it feasible to use in many different devices.
Nike developed the design for its new Epic React Flyknit running shoe with the help of machine learning to create the structure of its sole. Running shoes have typically used spiked soles for traction, but rubber and foam could be just as effective while weighing less, if designed properly. Nike used machine learning design software to generate structural patterns based on the physical properties of rubber and foam that met specifications for softness, traction, and stability. The software also slightly modified the sole design to produce the intended results while accounting for the design requirements of differently sized shoes.
Researchers at Johns Hopkins University have developed an analytics tool called the Symptom-Disease Pair Analysis of Diagnostic Error (SPADE) that can identify diagnostic errors and their impact on patients. SPADE analyzed clinical and claims data from hundreds of thousands of hospital visits to identify which diseases patients’ reported symptoms could be related to. SPADE then tracked patients over time to identify when patients returned to the hospital and received a different diagnosis for the same condition, such as when a patient with a fever is diagnosed with a viral infection but is later admitted to the hospital with bacterial sepsis. SPADE is most effective at identifying incorrect diagnoses for acute and subacute diseases that result in hospitalization, disability, or death within one year of the initial misdiagnosis.
Researchers at Stanford University have developed a machine learning system that can boost refugees’ chances of finding a job in their new home. The researchers trained their system on historical data about refugees, including their age, language skills, and where they settled, to identify where particular refugees are likely to fare best. For example, the data indicates that Afghan refugees are more likely to succeed in Denver, which has a large Afghan community, than in Los Angeles, which does not. In a test on historical data about 900 refugees who entered the United States at the end of 2016, the system guided hypothetical settlement processes and was able to boost refugees’ probability of employment from 25 percent to 50 percent.
Robotics startup Doxel has developed autonomous robots that can navigate a construction site and use LIDAR to monitor progress and detect whether work is out of place or falling behind schedule. A robot can scan 30,000 square meters a week, and deep learning software analyzes these scans to identify construction components and quantify progress. Managing construction projects is notoriously difficult: 98 percent of large construction projects go 80 percent over budget and take 20 months longer than planned, due in part to productivity in the construction industry stagnating for the past 80 years. In a trial, Doxel’s robotic system was able to increase the labor productivity of an office building construction site by 38 percent.
Chinese technology company Tencent has developed a Go-playing AI program named Fine Art that was able to beat the best human player in the world, Ke Jie, despite giving Jie a significant handicap. DeepMind’s Go-playing AI system AlphaGo famously beat Jie in 2017 and was substantially more advanced than any other Go-playing software at the time, but Jie was not given a handicap for that match. Fine Art’s victory is indicative of the rapid advancement of AI research in this field and could mean Fine Art would be competitive against DeepMind’s AlphaGo Zero, which is even better at Go than the original AlphaGo.
Researchers at the University of Zurich have developed a system for training an autonomous drone to navigate a city by using data collected from cars and bicycles. Autonomous drones typically navigate through a combination of mapping their nearby environments, identifying their location, and plotting their routes, which is effective but requires expensive sensors and is computationally demanding. The researchers were able to teach a neural network to analyze training data originally intended for autonomous cars, as well as data from bicycle-mounted cameras, and navigate a drone that only uses a single camera to fly through city streets.
Researchers at the University of Wisconsin and Microsoft are developing a search engine capable of searching for images encoded into DNA. Other researchers have already successfully demonstrated DNA-based data storage, but nobody has yet developed a method to retrieve and process data stored in DNA. The researchers will encode 10,000 images from social media as DNA molecules and label each encoded DNA segment with a special sequence of DNA molecules coated in magnetic nanoparticles, similar to how people label the contents of images online. When someone searches for a particular feature of an image, a specialized magnet could retrieve the relevant strands of DNA and an algorithm could translate the sequences back into an image.
Image: Matti Matilla.