Statisticians Push Back Against the “End of Theory” Problem

Published on August 23rd, 2013 by Travis Korte

The widely used “lasso method” of machine learning has just gotten a little smarter.

For years, some commentators have worried that increasing volumes of data, coupled with better and better automated prediction methods, would lead to an “end of theory.” What they mean is that the sorts of insights traditional statisticians like to infer from their models of the world (observations that can be generalized and applied to other problems) are often absent from machine learning algorithms that automatically select hundreds or thousands of parameters. The machine learning methods often work extraordinarily well for prediction, but they only give answers; they do not teach lessons.

Dr. Ryan Tibshirani, an assistant professor of statistics at Carnegie Mellon University, is trying to fix that. Tibshirani and his colleagues (including his father, the famed statistical methodologist Dr. Robert Tibshirani) have developed a new method that aims to satisfy both the prediction and inference sides of statistics, offering traditional statisticians insights while preserving the adaptability and predictive power of modern machine learning methods.

The machine learning technique they tackled is known as the lasso method, a widely used automated technique for keeping models from becoming too elaborate. The greatest enemy of predictive analytics (particularly in the “big data” arena) is overfitting, which occurs when a model adheres too closely to a given dataset and becomes less accurate when it is applied to new data; the lasso helps keep models simpler and more generalizable. The problem with the lasso was that standard significance tests, which help statisticians determine whether a variable is really important or can be thrown out of the model, did not work on it, so it could not produce some of the inferential contributions statisticians often demand.
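
To make that point concrete, here is a minimal sketch, not drawn from the article, of the behavior the lasso is prized for: with a suitable penalty it sets most irrelevant coefficients exactly to zero, while ordinary least squares keeps them all. The simulated data, the scikit-learn calls, and the penalty strength `alpha=0.1` are all illustrative assumptions.

```python
# Illustrative sketch (not from the article): the lasso's L1 penalty shrinks many
# coefficients exactly to zero, keeping the model simple and less prone to overfitting.
# The data below are simulated; only 5 of the 50 candidate predictors truly matter.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]
y = X @ beta + rng.standard_normal(n)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)   # alpha (assumed here) controls the penalty strength

print("nonzero OLS coefficients:  ", int(np.sum(np.abs(ols.coef_) > 1e-8)))   # typically all 50
print("nonzero lasso coefficients:", int(np.sum(np.abs(lasso.coef_) > 1e-8))) # far fewer
```

In a typical run, ordinary least squares assigns some weight to every predictor, while the lasso discards most of the irrelevant ones; the catch, as noted above, is that classical significance tests cannot say which of the surviving variables genuinely matter.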

Tibshirani and his colleagues developed a special significance test just for the lasso method (the technical details of which can be found here), and pointed the way to future research into adding inferential capabilities to other predictive modeling techniques. Although this is only a first step, the promise of more insightful algorithmic methods is exciting. In complex environments such as biological and urban systems, the profusion of variables that might be contributing to a particular effect is enormous, and the value of “big data” prediction paired with generalizable inference may be equally great.
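
To give a flavor of what such a test looks like, the sketch below simulates the simplest setting usually used to illustrate it: orthonormal predictors and a response containing no real signal. In that case the knots of the lasso path are just the sorted absolute inner products between the predictors and the response, and the statistic for the first variable to enter the model, T1 = λ1(λ1 - λ2)/σ², should be approximately exponentially distributed with mean one under the null. Everything here, from the simulation sizes to the assumption that the noise level σ is known, is an illustrative assumption, and this is not the authors' own code.

```python
# Hedged illustration of the idea behind the new lasso significance test, in the
# simplest setting: orthonormal predictors and a pure-noise response (global null).
# Here the lasso path's knots are the sorted |X^T y| values, and the statistic for
# the first variable to enter, T1 = lam1*(lam1 - lam2)/sigma^2, should look roughly
# Exp(1). This is a simulation sketch with made-up settings, not the authors' code.
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma, reps = 500, 200, 1.0, 2000
T1 = np.empty(reps)

for r in range(reps):
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))  # orthonormal columns
    y = sigma * rng.standard_normal(n)                # no real effects at all
    lam = np.sort(np.abs(X.T @ y))[::-1]              # lasso path knots, largest first
    T1[r] = lam[0] * (lam[0] - lam[1]) / sigma**2

print("mean of T1:", round(T1.mean(), 3))                   # close to 1 if Exp(1) holds
print("95th percentile:", round(np.quantile(T1, 0.95), 3))  # compare to -log(0.05), about 3.0
```

A statistic far out in the tail of that exponential distribution is evidence that the variable entering the model reflects real structure; a small or middling value suggests the lasso has started fitting noise, which is exactly the kind of judgment classical significance tests provide for ordinary regression.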

Photo: Creative Commons / William Clifford



About the Author

Travis Korte is a research analyst at the Center for Data Innovation specializing in data science applications and open data. He has a background in journalism, computer science and statistics. Prior to joining the Center for Data Innovation, he launched the Science vertical of The Huffington Post and served as its Associate Editor, covering a wide range of science and technology topics. He has worked on data science projects with HuffPost and other organizations. Before this, he graduated with highest honors from the University of California, Berkeley, having studied critical theory and completed coursework in computer science and economics. His research interests are in computational social science and using data to engage with complex social systems. You can follow him on Twitter @traviskorte.



  • RajivB

    Nice info on “teaching lessons vs. giving answers.” I think both have their own place in practical problems that are constantly evolving due to the capacity to store and retrieve huge amounts of data. However, I don’t believe theory is going anywhere, as there is more to statistics than just spitting out numbers from ML algorithms without knowledge of how the numbers came to be. This indeed has applications from a data mining standpoint, but I don’t think it is overpowering “theory.” For example: for some clinical studies, you are not going to get 10,000 patients, so you need to design experiments for the limited number of cases, and you will need statistical theory.
    I have yet to see a good article on why good sampling techniques won’t work on increasing volumes of data, or why a huge volume of data will give better results versus sampling. Would love to read one.

  • Travis Korte

    RajivB, I don’t think so either. But the “end of theory” is a narrative that’s been advanced fairly widely for the past five years or so, and these are some of the first technical results that seem to be working to counteract it.

