This week’s list of data news highlights covers May 29 – June 2, 2017, and includes articles about an Internet of Things system in Boston making an intersection safer, and a charity using analytics to identify human trafficking hotspots.
Researchers at Ben-Gurion University in Israel have developed a brain-computer interface called MindDesktop that, unlike other brain-computer interfaces, can be used for general purposes. MindDesktop uses an off-the-shelf brainwave-sensor headset and software that maps different brainwave patterns to specific actions on a computer based on where on the screen users focus their attention. MindDesktop can control most functions of a computer running Windows and can type at a rate of one character every 20 seconds, which is substantially faster than similar systems.
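One plausible way a system can map a handful of distinguishable brain signals onto an entire screen is hierarchical region selection; the sketch below illustrates that general idea, not MindDesktop's actual scheme, which is not detailed here. The quadrant encoding and function name are assumptions.

```python
def select_point(width, height, choices):
    """Narrow a screen region by quadrant for each decoded pick and
    return the final region's center.

    `choices` is the sequence of quadrant picks (0=top-left, 1=top-right,
    2=bottom-left, 3=bottom-right) that a headset might decode from the
    user's focus.
    """
    x0, y0, x1, y1 = 0, 0, width, height
    for c in choices:
        mx, my = (x0 + x1) // 2, (y0 + y1) // 2
        # Even picks keep the left half, odd picks the right half.
        x0, x1 = (x0, mx) if c % 2 == 0 else (mx, x1)
        # Picks 0-1 keep the top half, picks 2-3 the bottom half.
        y0, y1 = (y0, my) if c < 2 else (my, y1)
    return (x0 + x1) // 2, (y0 + y1) // 2

# Three picks narrow a 1024x768 screen to a 128x96 region.
target = select_point(1024, 768, [3, 0, 1])
```

Because each pick halves the region in both dimensions, about ten picks suffice to isolate a single pixel on a 1024x768 screen, which is why a few reliably distinguishable signals can drive a whole desktop.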
The city of Boston has partnered with Verizon to test a system of sensors that monitors the city’s busiest intersection and to gauge how this data could help make the intersection safer. In March, Verizon deployed 50 cameras and sensors at the intersection to measure pedestrian, car, and bike traffic, vehicle speeds, and road conditions, and began pooling this data with bus-location and traffic-light status data provided by the city. With this data, the Boston Transportation Department will be able to assess the impact of different interventions, such as adjusting traffic-signal timing or installing bike lanes.
Researchers at the University of Cambridge have developed a machine-learning model called the Sheep Pain Facial Expression Scale (SPFES) capable of identifying pain in the facial expressions of sheep. The researchers trained the system on a dataset of 500 sheep photographs to teach it how to detect signs that a sheep is in pain, such as tightened cheeks and narrowed eyes. SPFES can estimate a sheep’s pain level with 80 percent accuracy. The system could help farmers better monitor animal welfare, and could eventually be trained to identify pain in other animals.
JetBlue Airways will test a facial recognition system at Boston Logan Airport for customers flying from Boston to Aruba, verifying flyers’ identities in place of a boarding pass. Customers can volunteer to use the facial recognition system at the gate, which JetBlue expects to be faster than the traditional approach of presenting and verifying a boarding pass. The test will begin on June 12, 2017, and run for two to three months.
Indian charity My Choices Foundation is using an analytics tool developed by Australian analytics firm Quantium to identify villages that have the highest likelihood of being targeted for human trafficking. Human traffickers are more likely to target people in challenging economic circumstances with false promises of well-paying jobs. The tool analyzes education and health data from India’s census as well as data about an area’s drought risk, poverty, and education and job opportunities to predict the risk levels of different villages.
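Quantium has not published its model, but the basic shape of this kind of village-level risk ranking can be sketched as a weighted score over normalized indicators. The feature names, weights, and data below are illustrative assumptions, not the tool's actual inputs.

```python
def risk_score(village, weights):
    """Combine normalized indicators (0-1, higher = worse) into one score."""
    return sum(weights[k] * village[k] for k in weights)

# Hypothetical weights over indicators like those named in the article.
weights = {
    "drought_risk": 0.3,
    "poverty_rate": 0.3,
    "low_education": 0.2,
    "job_scarcity": 0.2,
}

# Toy data for two hypothetical villages.
villages = [
    {"name": "A", "drought_risk": 0.8, "poverty_rate": 0.7,
     "low_education": 0.6, "job_scarcity": 0.9},
    {"name": "B", "drought_risk": 0.2, "poverty_rate": 0.3,
     "low_education": 0.4, "job_scarcity": 0.3},
]

# Rank villages from highest to lowest estimated risk so outreach
# resources can be directed to the most vulnerable areas first.
ranked = sorted(villages, key=lambda v: -risk_score(v, weights))
```

A production system would likely learn such weights from labeled cases rather than set them by hand, but the output is the same kind of ranked list the charity uses to prioritize its prevention work.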
Copenhagen startup UIzard Technologies has developed a neural network called Pix2Code that can analyze a mock-up of a graphical interface and translate it into code. Normally the process of developing a website involves a designer creating a template of how the website should look and a front-end developer attempting to generate code that accurately reproduces this design. Pix2Code bypasses this step by generating this code automatically, and is 77 percent accurate for iOS, Android, and web interfaces.
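A common design for mock-up-to-code systems is to have the neural network emit a flat sequence of UI tokens that a deterministic compiler then renders into platform code. The sketch below shows only that final compilation step for a toy token set; the token names and HTML templates are assumptions, not Pix2Code's actual output format.

```python
# Hypothetical templates mapping UI token kinds to HTML fragments.
TEMPLATES = {
    "header": "<header>{}</header>",
    "button": "<button>{}</button>",
    "text": "<p>{}</p>",
}

def compile_tokens(tokens):
    """Render (kind, label) token pairs into an HTML fragment."""
    return "\n".join(TEMPLATES[kind].format(label) for kind, label in tokens)

# A model would emit tokens like these after analyzing a mock-up image.
html = compile_tokens([("header", "Home"), ("button", "Sign up")])
```

Splitting the task this way keeps the hard perception problem (image to tokens) in the model while the token-to-code step stays simple and exact, which is one reason the same token vocabulary can target iOS, Android, and the web.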
Researchers at Imperial College London and the Royal Free Hospital in London are developing a database of 3D models of faces to improve facial reconstructive surgery and make the results look more natural. The team initially created a database of scans of 12,000 volunteers’ faces with neutral expressions in 2012, and is now creating 3D models of 6,000 volunteers’ faces with a range of expressions that could be used to develop realistic templates for surgery. Reconstructive surgery can help restore function to a patient’s face, but the wide variance in facial structures can make it difficult to create natural-looking features for patients.
Several startups are developing machine learning systems capable of interpreting sign language and reproducing it as text. KinTrans, based in Dallas, Texas, is piloting a system in banks and government offices in the United States and the United Arab Emirates that uses a 3D camera to interpret American and Arabic sign language with 98 percent accuracy. SignAll has partnered with Gallaudet University in Washington, DC, to develop a database of sign language sentences and a system that uses a series of cameras to capture a signer’s entire upper body, rather than just the hands, to improve its translations.
Researchers from the University of Adelaide in Australia have developed a machine learning system that can analyze radiological scans of patients’ chests and determine whether they will die within five years with 69 percent accuracy, comparable to the accuracy of human doctors. The system analyzes biomarkers in tissue that indicate overall tissue health and the presence of serious diseases, which the researchers demonstrated on historical scans of 48 patients aged 60 years and older. The researchers are now applying the system to a 12,000-patient dataset to further investigate its usefulness as a diagnostic aid.
Canada’s Department of National Defence has developed a system that allows underwater drones to transmit scans of the ocean floor with minimal distortion in near-real time using sound waves, helping detect submerged mines. Water distorts radio waves too severely for them to be useful for transmitting data underwater, and while sound waves are more effective, they can also become distorted over large distances. The system substantially compresses sonar imagery by breaking each image into 10,000 separate tiles, which it references against a database of sonar images to find visually similar matches, encoding each tile as a number that represents its match. The software then transmits these numbers to a receiver, which reassembles a close representation of the original image.
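The tile-and-match scheme described above resembles vector quantization: sender and receiver share a codebook of reference tiles, so only small integer indices need to cross the slow acoustic link. The sketch below illustrates that general idea with toy data; the DND system's actual tile size, matching method, and database are not public.

```python
def distance(a, b):
    """Sum of absolute pixel differences between two equal-size tiles."""
    return sum(abs(x - y) for x, y in zip(a, b))

def encode(tiles, codebook):
    """Replace each tile with the index of its closest codebook entry."""
    return [min(range(len(codebook)),
                key=lambda i: distance(tile, codebook[i]))
            for tile in tiles]

def decode(indices, codebook):
    """Reassemble an approximation of the image from the indices."""
    return [codebook[i] for i in indices]

# Toy codebook of flattened 2x2 tiles shared by sender and receiver.
codebook = [[0, 0, 0, 0], [255, 255, 255, 255], [0, 255, 0, 255]]

# A toy "image" of four noisy tiles to be transmitted.
tiles = [[10, 0, 5, 0], [250, 255, 240, 255], [0, 250, 5, 255], [3, 1, 0, 2]]

indices = encode(tiles, codebook)   # only these small integers are sent
approx = decode(indices, codebook)  # receiver-side reconstruction
```

Each tile shrinks from a block of pixels to a single index, which is why the approach can move recognizable imagery over an acoustic channel in near-real time at the cost of some fidelity.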