Often when people talk about mathematics, you hear the word algorithm—a step-by-step process for accomplishing a particular task. For example, the steps you use to divide or multiply with pencil and paper form two very familiar algorithms. Algorithms can be very useful, but lately, with so much data being created and shared, and with algorithms increasingly used in critical areas such as hiring, credit, and health care, they have come under intense scrutiny over their fairness. People experience the effects of an algorithm’s conclusion, but the data and steps that form the basis for that conclusion are frequently hidden from them (as if inside a black box). Mathematicians and many others are demanding more openness and accountability so that algorithms can be examined to determine whether, and to what extent, they exhibit bias, including racial, gender, age, and ethnic bias.
Researchers are proposing several ways to ensure fairness. One “big picture” approach would require algorithm designers to prove that their algorithms are unbiased before implementation, rather than expecting users to uncover the bias themselves. Another would use techniques borrowed from sociology that can reveal bias by answering questions such as “If this person were of a different race, would they have been hired?” Whatever methods or regulations are adopted, the goal of researchers in this field is algorithms that are transparent and just, so that the basis given for important decisions will no longer be “Because AI1 said so.”
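The counterfactual question above can be made concrete with a small sketch. The code below is purely illustrative: the model, attribute names, and threshold are all hypothetical, and a real fairness audit would use causal modeling rather than a simple attribute flip. The idea is just to change one sensitive attribute, hold everything else fixed, and check whether the decision changes.

```python
# Toy counterfactual fairness check (all names and numbers are illustrative).

def hire_score(applicant):
    """A deliberately biased toy model, used only to show the test working."""
    score = 2 * applicant["years_experience"] + 5 * applicant["has_degree"]
    if applicant["group"] == "B":   # the hidden bias we want to detect
        score -= 3
    return score

def with_attribute(applicant, attribute, value):
    """Return a copy of the applicant with one attribute changed."""
    copy = dict(applicant)
    copy[attribute] = value
    return copy

def is_counterfactually_fair(model, applicant, attribute, values, threshold):
    """Check that the hire/no-hire decision is identical for every value
    of the sensitive attribute, holding all other attributes fixed."""
    decisions = {
        v: model(with_attribute(applicant, attribute, v)) >= threshold
        for v in values
    }
    return len(set(decisions.values())) == 1, decisions

applicant = {"years_experience": 4, "has_degree": 1, "group": "B"}
fair, decisions = is_counterfactually_fair(
    hire_score, applicant, "group", ["A", "B"], threshold=12
)
print(fair, decisions)  # the toy model fails: group A is hired, group B is not
```

Here the same applicant is hired as a member of group A but rejected as a member of group B, so the test flags the model as unfair, which is exactly the kind of hidden disparity auditors look for inside the black box.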
1. Artificial Intelligence
O'Neil Risk Consulting & Algorithmic Auditing
Cathy O'Neil talks about the unfairness of most predictive algorithms.
For More Information: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O’Neil, 2016.