Google has published a new tool called the What-If Tool that allows users to visualize possible bias in machine learning models without writing code. With the What-If Tool, users can manually edit their data and a model’s parameters to visualize how those changes would affect the model’s predictions. This allows users to identify when a model’s data or parameters introduce bias into the model, which can have undesirable effects such as unfairly discriminating against certain populations.
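The What-If Tool itself is a visual, no-code interface, but the kind of probing it enables can be illustrated in a few lines: edit one feature of a datapoint and compare the model’s prediction before and after. The toy model, its weights, and the feature names below are all invented for illustration; this is a sketch of the underlying idea, not the tool’s API.

```python
import math

# Hypothetical logistic model: illustrative only, not the What-If Tool.
# The weights and feature names are invented for this example.
WEIGHTS = {"income": 0.8, "age": 0.1, "group": -1.5}
BIAS = -0.5

def predict(features):
    """Return the toy model's approval probability for one datapoint."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"income": 1.0, "age": 0.3, "group": 1.0}
baseline = predict(applicant)

# Counterfactual edit: change only the sensitive "group" feature,
# holding everything else fixed -- the manual edit the tool's UI supports.
edited = dict(applicant, group=0.0)
counterfactual = predict(edited)

print(f"baseline prediction:       {baseline:.3f}")
print(f"counterfactual prediction: {counterfactual:.3f}")
# A large gap between two otherwise identical datapoints suggests the
# model's output depends heavily on the sensitive attribute.
```

In the What-If Tool this comparison happens interactively in the browser, across many datapoints at once, rather than one hand-coded edit at a time.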
Visualizing Bias in Machine Learning Models
Michael McLaughlin is a research assistant at the Center for Data Innovation. He previously worked at Oracle and held internships at USA TODAY and in local government. Prior to joining the Center for Data Innovation, Michael graduated from Wake Forest University, where he majored in Communication with Minors in Politics and International Affairs and Journalism. He is currently pursuing his Master’s in Communication at Stanford University, specializing in Data Journalism.