This week’s list of data news highlights covers December 8-15, 2018, and includes articles about IBM offering anyone access to a quantum computer and AI helping coordinate disaster relief, including to refugee camps in Bangladesh.
Walmart is testing a kitchen robot assistant named Flippy to determine if it could use the robot in its in-store delis. Flippy, which uses AI from California startup Miso Robotics, places baskets of food, such as chicken tenders, into cooking oil to fry them. The robot shakes the basket to ensure the food cooks evenly and then moves the basket to a drip rack where a human tests the food’s temperature. Flippy, which can fry eight baskets of food simultaneously, allows human workers to spend more time taking customer orders and prepping other foods.
Researchers from the University of Toronto and the Vector Institute, a not-for-profit AI research corporation, have developed a new neural network design that can model continuous processes better than traditional neural networks. The researchers wanted to predict the future health of patients, but traditional neural networks use discrete steps to model a process while patient medical records contain data at inconsistent intervals. The researchers solved this problem by replacing the layers of the neural network with differential equations, which better capture continuous change. This method lets researchers specify how accurate they want their model to be, but unlike with traditional neural networks, they will not know in advance how long training will take.
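To illustrate the idea (this is not the researchers’ code), the sketch below treats a hidden state as a continuous function of time and integrates its dynamics with a simple fixed-step solver; the dynamics function and its weights are hypothetical stand-ins for a learned layer. Using more integration steps gives a more accurate answer at the cost of more compute, which mirrors the accuracy/cost trade-off the researchers describe.

```python
import numpy as np

# Hypothetical one-layer dynamics function for the hidden state h:
# instead of stacking discrete layers h_{t+1} = h_t + f(h_t), the
# continuous view integrates dh/dt = f(h) with an ODE solver.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1

def f(h):
    """Dynamics of the hidden state; tanh keeps it bounded."""
    return np.tanh(W @ h)

def integrate(h0, t1, steps):
    """Fixed-step Euler integration from t=0 to t=t1.

    More steps means higher accuracy but more work, and an adaptive
    solver chooses the step count itself, so training time is not
    known in advance.
    """
    h, dt = h0.copy(), t1 / steps
    for _ in range(steps):
        h = h + dt * f(h)
    return h

h0 = np.ones(4)
coarse = integrate(h0, t1=1.0, steps=10)
fine = integrate(h0, t1=1.0, steps=1000)
# Both approximate the same continuous trajectory; the fine solution
# is closer to the true one.
print(np.max(np.abs(coarse - fine)))
```

The appeal for patient records is that the solver can be evaluated at any time point, so observations do not need to arrive at regular intervals.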
IBM is offering online access to a quantum computer to increase literacy in quantum computing. Through drag-and-drop controls as well as a sliding scale, users can execute commands on a 5-qubit or 14-qubit machine. Over 120 academic papers have used IBM’s application, and the firm offers user guides and interactive demos to teach people how to use the service.
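For a sense of what running commands on a qubit machine means, here is a minimal state-vector sketch (not IBM’s interface) of a standard two-qubit circuit: a Hadamard gate followed by a CNOT, which produces an entangled Bell state.

```python
import numpy as np

# Single-qubit Hadamard gate and the two-qubit CNOT gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                  # start in |00>
state = np.kron(H, I) @ state   # Hadamard on the first qubit
state = CNOT @ state            # entangle the two qubits

probs = np.abs(state) ** 2
print(probs)  # [0.5, 0, 0, 0.5]: measuring gives 00 or 11 with equal odds
```

On IBM’s service, the same circuit is assembled visually with drag-and-drop gates and then executed on real hardware rather than simulated.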
Researchers from the Massachusetts Institute of Technology have developed a machine learning method that can detect transparent features, such as a small crack in a wine glass or contact lens, even in the dark. The researchers combined a physics-based algorithm with a neural network, which they trained on more than 10,000 grainy and low-light images of integrated circuit patterns, to reconstruct the patterns in the nearly pitch-black images. This research could make it easier to illuminate “transparent” features, such as biological cells, in images with very little light.
The researchers and engineers who created the MLPerf benchmark suite, which objectively measures machine learning models, hardware, and cloud platforms, have released their first set of testing results. Nvidia’s technology achieved the best scores in six machine learning categories: image classification, object detection, speech recognition, translation, recommendation, and sentiment analysis. The benchmark measured how long it took to train a model to a certain level of quality on a predetermined group of datasets.
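The “time to a quality target” measurement can be sketched as a simple harness (the function names and the toy model below are hypothetical illustrations, not the MLPerf code): train in epochs, evaluate after each one, and report the elapsed time once the target is reached.

```python
import time

def time_to_quality(train_one_epoch, evaluate, target, max_epochs=100):
    """Train until the model reaches `target` quality.

    Returns (elapsed_seconds, epochs_used), or None if the target is
    never reached within max_epochs.
    """
    start = time.perf_counter()
    for epoch in range(1, max_epochs + 1):
        train_one_epoch()
        if evaluate() >= target:
            return time.perf_counter() - start, epoch
    return None

# Toy stand-in for a model whose accuracy improves by 0.2 each epoch.
state = {"accuracy": 0.0}
elapsed, epochs = time_to_quality(
    train_one_epoch=lambda: state.update(accuracy=state["accuracy"] + 0.2),
    evaluate=lambda: state["accuracy"],
    target=0.75,
)
print(epochs)  # 4: accuracy first reaches 0.8 >= 0.75 on the fourth epoch
```

Fixing the quality target rather than the training time is what lets the benchmark compare very different hardware and software stacks fairly.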
A San Francisco startup called Prisma Labs has developed an app that uses machine learning to automatically retouch selfies. The app recognizes features of the face, such as a person’s teeth or skin, and automatically enhances the photo through techniques such as whitening teeth or smoothing skin. The app also corrects distorted images by reconstructing faces in 3D and fixing any disproportions.
SAP has developed an AI reporting tool called 4W-Wizard for the United Nations Office for the Coordination of Humanitarian Affairs (UN OCHA) that makes delivering disaster relief faster. UN OCHA used to spend a day or more each week manually cleaning the data it receives from other relief teams or NGOs, but 4W-Wizard uses machine learning to automatically categorize information, fix typos, and combine the data into one file. The tool, which has helped provide support for several relief missions, including support for refugee camps in Bangladesh, helps UN OCHA understand the type of help populations need.
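The kind of cleanup described can be sketched with fuzzy matching (this is an invented illustration, not SAP’s tool; the category list and team records are hypothetical): normalize misspelled free-text labels to a known vocabulary, then combine reports from several teams into one dataset.

```python
import difflib

# Hypothetical controlled vocabulary for the kind of aid being reported.
CATEGORIES = ["water", "shelter", "food", "health"]

def normalize(label):
    """Map a possibly misspelled label to the closest known category."""
    match = difflib.get_close_matches(label.lower(), CATEGORIES, n=1, cutoff=0.6)
    return match[0] if match else label

# Invented example records from two relief teams, with typos.
team_a = [{"district": "Ukhiya", "category": "watter"}]
team_b = [{"district": "Teknaf", "category": "sheltr"}]

combined = [
    {**row, "category": normalize(row["category"])}
    for report in (team_a, team_b)
    for row in report
]
print([r["category"] for r in combined])  # ['water', 'shelter']
```

Automating this step is what turns a day of manual spreadsheet cleaning into a job that finishes in minutes.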
UK startup Emteq has developed sensor technology for VR headsets and glasses that helps monitor the physical health of wearers. For example, individuals with facial paralysis often have to perform facial exercises in front of a mirror so they can improve control of their muscles. Many individuals do not feel comfortable looking at themselves while performing the exercises, but Emteq solves this problem by incorporating its sensor technology into VR headsets. The technology tracks an individual’s facial movements by measuring electrical changes that correspond with certain movements, such as a smile, and then projects these movements onto a VR avatar.
Organizers of a Taylor Swift concert at California’s Rose Bowl in May used facial recognition to monitor for stalkers who may have attended the event. The facial recognition system was part of a kiosk that displayed highlights of Swift’s rehearsals, and it scanned the faces of attendees who looked at the kiosk. The system then compared the scanned faces to hundreds of images of Swift’s known stalkers.
Researchers from Microsoft, Stanford University, and the City University of Hong Kong have developed an AI system that makes caricatures of people’s faces. The researchers trained the system on thousands of human-drawn caricatures and real photos of human faces, which they labeled according to facial landmarks, such as the shape of the jaw. The researchers then used generative adversarial networks to reproduce the geometry of a face in a photo in the style of a caricature.