Time to re-evaluate AI algorithms right from the design stage, experts urge

With AI bias and errant outcomes surging, a call for more human involvement. 'Even the people deploying these algorithms sometimes would be surprised that these things could happen'
Written by Joe McKendrick, Contributing Writer

By now, the inherent bias that all too often springs from AI algorithms is well-documented. What can be done to minimize this bias? Putting more human input into the development process would be a good start.

Michael Kearns and Aaron Roth, professors at the University of Pennsylvania, suggest that the solution is to "embed precise definitions of fairness, accuracy, transparency, and ethics at the algorithm's design stage." In an interview published in Knowledge@Wharton, Kearns and Roth, co-authors of The Ethical Algorithm: The Science of Socially Aware Algorithm Design, are careful to note that software engineers and developers are not intentionally creating the problem. "What surprised me most was that even the people deploying these algorithms sometimes would be surprised that these things could happen," says Kearns.

Photo: Joe McKendrick

Algorithms by themselves don't have a moral character, Roth states. "The problem is the algorithms that we are putting into these pipelines are not old-fashioned, hand-coded algorithms," Roth says. "Instead, these are the output of machine learning processes. And nowhere in a machine learning training procedure is a human being sitting down and coding everything the algorithm should do in every circumstance. They just specify some objective function. Usually it's some narrow objective function like maximizing accuracy or profit."

As a result, the algorithm delivers on that narrow objective function, which may have negative or unforeseen side effects. To counter this, it's necessary to identify the groups that bias may harm and then "literally write that into my objective function," Kearns says.
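Kearns's prescription can be made concrete. The sketch below is a minimal, hypothetical illustration (not code from Kearns and Roth) of what writing fairness into the objective function can look like: an ordinary logistic-regression loss in Python/NumPy with an added penalty on the gap between the average scores the model assigns to two groups. The function name, the penalty weight lam, and the demographic-parity-style penalty are all illustrative assumptions.

```python
import numpy as np

def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Hypothetical sketch: logistic regression trained on the usual
    narrow objective (log loss) plus a fairness penalty, the squared
    gap between the average predicted score for group 1 and group 0.

    X: (n, d) feature matrix; y: (n,) 0/1 labels; group: (n,) 0/1 group flags.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities

        # Gradient of the ordinary log-loss objective
        grad_w = X.T @ (p - y) / len(y)
        grad_b = np.mean(p - y)

        # Fairness penalty: lam * (mean score in group 1 - mean score in group 0)^2
        gap = p[group == 1].mean() - p[group == 0].mean()
        s = p * (1 - p)                           # derivative of the sigmoid
        d_gap_w = ((X[group == 1] * s[group == 1, None]).mean(axis=0)
                   - (X[group == 0] * s[group == 0, None]).mean(axis=0))
        d_gap_b = s[group == 1].mean() - s[group == 0].mean()
        grad_w += 2 * lam * gap * d_gap_w
        grad_b += 2 * lam * gap * d_gap_b

        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

With lam set to zero this reduces to the narrow objective Roth describes; raising lam trades a little accuracy for a smaller gap between the groups' average scores, which is the kind of tradeoff the authors argue should be made explicit at design time.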

Algorithms tend to discriminate not because the software developers are biased, the co-authors point out. "The source of bad behavior is not some malintent of a software engineer and that makes it harder to regulate," says Roth. "You have to figure out the source of the problematic behavior and how to fix it. One of them is bias that's already latent in the data." 

For example, he illustrates, Amazon discovered that its resume-screening algorithm was giving lower scores to resumes containing the word "women" or the names of women's colleges. "Nobody did this intentionally, but this was somehow predictive of decisions that human hiring managers had made at Amazon before," he says. "This is not surprising because machine learning algorithms are only trying to find patterns in the data you give them. There is no reason to think they're going to remove biases that are already present in the data."
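Roth's point, that a learning algorithm will faithfully reproduce whatever patterns sit in its training data, including inherited bias, is easy to demonstrate on toy data. The Python sketch below uses entirely synthetic, hypothetical "resumes" (it is not Amazon's system or data): historical hiring labels are generated with a built-in penalty against a proxy feature, and a standard classifier trained on those labels learns that penalty right back.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: a genuine skill score and a proxy flag
# (e.g. 1 if the resume mentions a particular keyword).
skill = rng.normal(size=n)
proxy = rng.integers(0, 2, size=n)

# Synthetic "historical" hiring decisions: driven by skill, but with a
# built-in penalty against resumes that carry the proxy flag.
logit = 1.5 * skill - 1.0 * proxy
hired = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# A standard classifier trained on those labels learns the bias back:
# the proxy feature picks up a clearly negative coefficient.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print(model.coef_)
```

Nothing in the training procedure distinguishes the legitimate skill signal from the inherited penalty; the model simply learns both, which is why such bias has to be identified and corrected deliberately.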

The best approach to eradicating such bias is general awareness, along with designating trained people to examine and audit AI output. "Have scientists and engineers who know this area look at your use of machine learning, your algorithm development pipeline, and be vigilant about making sure you're not engaging in bad privacy practices, in discriminatory training," says Kearns. "It is time that computer scientists and statisticians and others who are part of building these algorithms and models have a seat at C-level discussions about these things."

Don't wait for the call from your legal department, Roth adds. "If you want to make sure that your learning algorithms aren't vulnerable to legal challenge because of privacy issues or fairness issues, you have to design them from the beginning with these concerns in mind. These are not just policy questions, but technical questions. It's important to get people who understand the science and the technology involved from an early stage."
