10 Bits: the Data News Hotlist
This week’s list of data news highlights covers August 20-26, 2016, and includes articles about Singapore’s new self-driving taxis and a new method for tracking airplane luggage with the Internet of Things.
A group of technology and satellite companies have partnered to launch an initiative called SpaceNet to accelerate the development of analytics tools that can quickly analyze satellite imagery and extract useful information. Participating companies include Amazon Web Services, Nvidia, satellite imagery company DigitalGlobe, and CosmiQ Works, a company funded by the U.S. Central Intelligence Agency’s venture capital arm to work with commercial space startups to develop tools for the intelligence community. SpaceNet will make large datasets of high-resolution satellite imagery, analytics tools, and other resources publicly available to foster the development of machine learning systems that can make satellite data more useful.
Researchers at the Massachusetts Institute of Technology have developed technology called MegaMIMO 2.0 that allows access points (APs) for wireless routers to communicate with each other and take advantage of “collisions” that occur when multiple signals compete for the same bands of wireless spectrum. Normally, APs attempt to avoid collisions, which warp signals and can reduce coverage areas and slow Internet speeds. Instead, MegaMIMO 2.0 constantly analyzes data about a wireless network’s performance and modifies APs’ signals to account for collisions, so that the warping distorts each signal into its desired state, expanding coverage and improving speeds.
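The core idea, that coordinated transmitters can pre-distort their signals so the channel’s warping makes them add up constructively at the receiver, can be sketched with a toy phase-alignment (conjugate beamforming) example. This is not MIT’s actual algorithm, and the channel coefficients below are invented for illustration:

```python
# Toy illustration (NOT MegaMIMO's real algorithm): two APs pre-distort
# their transmissions so the channel's phase shifts cancel out and the
# signals arrive aligned at the receiver.
import cmath

# Hypothetical channel coefficients: how each AP's signal is attenuated
# and phase-shifted ("warped") on its way to one receiver.
channels = [0.8 * cmath.exp(1j * 0.7), 0.6 * cmath.exp(-1j * 1.9)]

symbol = 1 + 0j  # the data symbol both APs want to deliver

# Naive transmission: both APs send the raw symbol; the phases collide
# and the components partially cancel.
naive = sum(h * symbol for h in channels)

# Pre-distorted transmission: each AP multiplies by its own channel's
# conjugate phase, so every component arrives with phase zero and the
# amplitudes simply add (0.8 + 0.6 = 1.4).
aligned = sum(h * (h.conjugate() / abs(h)) * symbol for h in channels)
```

In this sketch `abs(aligned)` is 1.4, the sum of the two channel gains, while `abs(naive)` is far smaller because the uncoordinated phases interfere destructively.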
Researchers at Stanford University, the University of Washington, and Baidu have determined that voice recognition algorithms, such as those used by smartphone transcription apps, can now understand speech and produce it as text faster than humans. The researchers had Baidu’s speech recognition program Deep Speech 2 compete against humans to transcribe text on a smartphone after hearing the audio. The software was significantly faster than humans and made 20 percent fewer errors when transcribing English and 63.4 percent fewer errors when transcribing Mandarin Chinese.
Facebook has made the code behind its computer vision algorithms publicly available on GitHub to advance image recognition research and spur the development of applications that use computer vision, such as augmented reality. Facebook has shared three sets of code that take advantage of a machine learning technique called neural networks to interpret images: DeepMask, which detects objects in images; SharpMask, which delineates them; and MultiPathNet, which then attempts to identify the objects. Combined, the code can allow software to understand an image at the pixel level.
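The three-stage flow described above can be sketched with a toy pipeline. To be clear, this is not Facebook’s DeepMask/SharpMask/MultiPathNet code; a simple brightness threshold stands in for the neural networks, and the stage behaviors are invented to illustrate how detect, delineate, and classify steps compose into a pixel-level understanding of an image:

```python
# Toy detect -> delineate -> classify pipeline (an illustration only,
# not Facebook's released code). The image is a 2D list of brightness
# values; a threshold plays the role of the neural network.

def detect(image, threshold=5):
    """Stage 1 (DeepMask-like role): propose a binary object mask."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def delineate(mask):
    """Stage 2 (SharpMask-like role): refine the mask by dropping
    pixels that have no horizontal neighbor in the mask."""
    h, w = len(mask), len(mask[0])
    refined = [row[:] for row in mask]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                left = mask[y][x - 1] if x > 0 else 0
                right = mask[y][x + 1] if x < w - 1 else 0
                if not (left or right):
                    refined[y][x] = 0  # isolated pixel: likely noise
    return refined

def classify(mask):
    """Stage 3 (MultiPathNet-like role): label the masked object,
    here crudely by its pixel area."""
    area = sum(sum(row) for row in mask)
    return "large object" if area > 4 else "small object"

image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 0, 7, 0, 0],  # stray bright pixel, removed by delineate()
    [0, 0, 0, 0, 0],
]
mask = delineate(detect(image))
label = classify(mask)
```

The design point the sketch captures is that each stage consumes the previous stage’s output, so the final label is grounded in a per-pixel mask rather than a whole-image guess.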
Autonomous vehicle startup nuTonomy has launched a self-driving taxi service in Singapore, beating Uber, which plans to launch a similar program in Pittsburgh in several weeks, to become the world’s first self-driving taxi service. Passengers can hail a self-driving taxi from their smartphones to travel around a 2.5-square-mile stretch of the city, and the cars have a human driver ready to take over. nuTonomy plans to launch a full fleet of the self-driving taxis by 2018 and expects the prevalence of self-driving taxis will reduce consumer demand for cars and thus significantly reduce congestion on Singapore’s roads.
The U.S. Federal Bureau of Investigation (FBI) has implemented a new genetic analysis technique for the National DNA Index System, which stores DNA samples of suspects from local, state, and federal law enforcement agencies around the country, to improve its ability to identify suspects. To match DNA from a crime scene to a suspect, the FBI previously attempted to match specific genetic patterns at 13 different locations on a sample with someone’s DNA. The new technique still relies on pattern matching but uses more distinctive genetic markers at fewer locations overall to improve its reliability. Since the FBI began using the technique in May, it has generated 7,000 new potential matches in the system that the FBI can investigate further.
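The matching step described above, comparing genetic markers at a fixed set of locations (loci) and declaring a hit when enough agree, can be sketched in a few lines. This is not the FBI’s actual algorithm; the locus names are borrowed from the publicly documented CODIS core set, but the allele values and match threshold below are invented for the example:

```python
# Illustrative locus-by-locus profile comparison (NOT the FBI's real
# system). Each profile maps a locus name to a pair of allele values;
# a locus "matches" only when both alleles agree.

def match_profiles(evidence, suspect, min_matches):
    """Count loci where the allele pairs agree exactly, and report
    whether enough loci match to flag a potential hit."""
    shared = [
        locus for locus, alleles in evidence.items()
        if suspect.get(locus) == alleles
    ]
    return len(shared), len(shared) >= min_matches

# Hypothetical profiles: allele numbers here are made up.
evidence = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 23)}
suspect  = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (20, 23)}

count, hit = match_profiles(evidence, suspect, min_matches=3)
```

Here only two of the three loci agree (the FGA alleles differ), so no hit is declared; requiring agreement at more distinctive markers is what drives down false matches.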
The City of Slidell in Louisiana is working with Louisiana Tech University to use robots equipped with specialized ultra-wide band (UWB) radar to identify the lingering damage to the city’s underground infrastructure, such as pipelines and tunnels, caused by Hurricane Katrina in 2005. The robots travel through sewer pipes and use UWB radar to measure corrosion and fractured infrastructure, as well as identify voids in the soil surrounding the infrastructure, which can all be caused by underground flooding. City workers can use software to generate 3D renderings of the underground environment based on the UWB radar data to target damaged areas.
Researchers at Google have developed a method for compressing images better than JPEG, the widely used compression standard, using an artificial neural network. JPEG makes image files smaller, but causes images to lose a significant amount of fine detail. The researchers trained their neural network by having it analyze 6 million compressed images, identify the worst examples of where details were lost, and then predict how an image would look after compressing it to estimate what details would be lost. Using this approach, the neural network can selectively compress different sections of an image so as to not lose important details but still significantly reduce its file size.
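The idea of compressing different sections of an image at different strengths can be sketched with a toy example. This is not Google’s network: simple quantization stands in for the learned codec, and the block size, variance threshold, and step sizes are all invented for illustration:

```python
# Toy selective compression (an illustration only, not Google's method):
# blocks with high pixel variance carry fine detail, so they are
# quantized gently; flat blocks are quantized coarsely to save space.

def variance(block):
    mean = sum(block) / len(block)
    return sum((p - mean) ** 2 for p in block) / len(block)

def compress(block, step):
    """Quantize pixel values to multiples of `step` (coarser = smaller
    compressed size, but more detail lost)."""
    return [round(p / step) * step for p in block]

def selective_compress(blocks, detail_threshold=100):
    out = []
    for block in blocks:
        # Detailed block: gentle step (4); flat block: coarse step (32).
        step = 4 if variance(block) > detail_threshold else 32
        out.append(compress(block, step))
    return out

blocks = [
    [100, 100, 100, 100],  # flat region: safe to compress hard
    [0, 255, 0, 255],      # high-contrast detail: compress gently
]
result = selective_compress(blocks)
```

The flat block comes back noticeably shifted (quantized to multiples of 32) while the detailed block is nearly untouched, mirroring how the neural network spends its "budget" where detail loss would be most visible.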
The U.S. Office of the National Coordinator for Health Information Technology (ONC) has published the draft 2017 Interoperability Standards Advisory, which details the standards and specifications that can improve the way electronic health record (EHR) systems share data. Health information technology developers and service providers are not required to adopt these standards; however, ONC publishes the advisory to encourage companies to improve the effective and secure flow of data throughout the healthcare system.
Delta Air Lines has adopted a baggage-tracking system that relies on radio-frequency identification (RFID) chips embedded in paper tags to reduce lost luggage. Normally, luggage tags have barcodes detailing a customer’s flight information, and Delta staff use optical scanners to sort and handle baggage. However, these scanners only read a tag correctly 90 percent of the time, meaning airline staff have to manually sort one out of every ten bags, which is error prone and time-consuming, and can cause bags to miss flights. The new system can read RFID tags 99.85 percent of the time, which will substantially reduce the risk the airline could lose a bag.
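The read rates quoted above imply a large drop in manual handling, which a quick back-of-the-envelope calculation makes concrete:

```python
# Back-of-the-envelope comparison of the two read rates quoted above.
barcode_miss = 1 - 0.90    # barcode scan fails ~1 time in 10
rfid_miss    = 1 - 0.9985  # RFID scan fails ~1.5 times in 1,000

print(f"barcode misses per 1,000 bags: {barcode_miss * 1000:.0f}")
print(f"RFID misses per 1,000 bags:    {rfid_miss * 1000:.1f}")
print(f"reduction factor: {barcode_miss / rfid_miss:.0f}x")
```

Per thousand bags, manual sorting drops from about 100 bags to about 1.5, roughly a 67-fold reduction in the bags that need hands-on handling per scan.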