Visualizing Bias in Machine Learning Models

Google has published a new tool called the What-If Tool that allows users to visualize possible bias in machine learning models without writing code. With the What-If Tool, users can manually edit their data and a model’s parameters to visualize how those changes affect the model’s predictions. This helps users identify when a model’s data or parameters introduce bias into the model, which can have undesirable effects such as unfairly discriminating against certain populations.
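While the tool itself is point-and-click, it can also be embedded in a Jupyter notebook through Google’s open-source `witwidget` package. The sketch below is a minimal, hypothetical example of that workflow: the toy dataset, feature names, and logistic-regression model are all assumptions for illustration, not part of Google’s announcement. The key idea is that the tool calls a prediction function each time the user edits a datapoint, so the effect of the edit on the prediction is visualized immediately.

```python
# Minimal sketch: embedding the What-If Tool in a Jupyter notebook.
# Assumes `witwidget` is installed (pip install witwidget); the data,
# feature names, and model below are hypothetical.
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Hypothetical toy data: two numeric features and a binary label.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 70, size=200).astype(float),
    "income": rng.normal(50_000, 15_000, size=200),
})
labels = (df["income"] > 50_000).astype(int)

model = LogisticRegression().fit(df, labels)
feature_cols = list(df.columns)

def df_to_examples(frame):
    # Convert each row into a tf.Example proto, the input format
    # the What-If Tool expects.
    examples = []
    for _, row in frame.iterrows():
        ex = tf.train.Example()
        for col in feature_cols:
            ex.features.feature[col].float_list.value.append(float(row[col]))
        examples.append(ex)
    return examples

def predict_fn(examples):
    # Adapter the tool calls whenever a datapoint is edited: rebuild the
    # feature matrix from the (possibly modified) examples and return
    # per-class probabilities.
    feats = [[ex.features.feature[c].float_list.value[0] for c in feature_cols]
             for ex in examples]
    return model.predict_proba(feats).tolist()

config = WitConfigBuilder(df_to_examples(df)).set_custom_predict_fn(predict_fn)
WitWidget(config, height=720)  # renders the interactive tool inline
```

With the widget rendered, a user can, for instance, change a datapoint’s “age” value and watch the predicted probability update, or compare prediction distributions across slices of the data to look for disparities.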
Michael McLaughlin is a research assistant at the Center for Data Innovation. He researches and writes about a variety of issues related to information technology and Internet policy, including digital platforms, e-government, and artificial intelligence. Michael graduated from Wake Forest University, where he majored in Communication with minors in Politics and International Affairs, and Journalism. He received his Master’s in Communication from Stanford University, specializing in Data Journalism.