Yes, AI Discrimination is Possible

Adage held our annual employee engagement retreat this past fall, a two-day program that features a keynote speaker from a client or industry partner. This year’s speaker was Reggie Henry, Chief Information and Engagement Officer at ASAE and a Black man.

During the Q&A portion of Henry’s presentation, the following exchange occurred.

Molly from Adage: You mentioned before associations trying to deal with the ethics of AI and machine learning, which is something I didn’t know about until recently. Where are some areas you’re seeing that are really crossing ethical boundaries?

Reggie Henry: Recently, in reviewing some HR software, I tested modules that filtered out resumes. I put a group of resumes through the process to see what would come out on the other side. Then I had a group of HR professionals go through the resumes and pull out what they would have pulled out as the top three candidates and it was really clear when we got done that the modules that pulled out those resumes pulled out – almost 60 percent of the time – non-diverse populations of people.

Molly from Adage: Interesting. Not happy interesting…

Henry: It scared the hell out of me. And it didn’t matter what job it was. It also pulled out, more often than not, people over the age of 35 or 40. So you start to think, “now why is that happening?”

It wasn’t a one-off. Henry’s team ran the same test on different awards application software packages and saw similar results.

How Can AI Discriminate?

So how could technology have such an inherent racial bias? The technology is built by humans, and humans have these baked-in biases. Additionally, the technology industry is overwhelmingly male and white, so our technology is getting one very specific perspective, and we are starting to see the effects of that.


Christina Hachikian, Executive Director of the Rustandy Center for Social Sector Innovation at the University of Chicago Booth School of Business, explains, “many companies have moved to algorithmic hiring and algorithms are designed by people who themselves have bias, so the algorithms get that same bias. We have to be careful about those processes.”

Molly Lee, the Adage employee who asked Henry the question, first became aware of this human-created bias after reading this article, which describes an algorithm used to generate high-resolution images that consistently white-washes those images.

When a low-resolution image of President Barack Obama was fed in, the system upscaled the image (basically enhanced it from low resolution to high resolution), and the final image depicted a white man, not a biracial one.

Image results from the PULSE algorithm

Image source: The Verge

“This problem is extremely common in machine learning, and it’s one of the reasons facial recognition algorithms perform worse on non-white and female faces,” explains The Verge. “Data used to train AI is often skewed toward a single demographic, white men, and when a program sees data not in that demographic it performs poorly. Not coincidentally, it’s white men who dominate AI research.”

MIT Technology Review agrees that AI bias is happening, and says it’s going to be very hard to fix. Among the many reasons for this, one is that “the way in which computer scientists are taught to frame problems often isn’t compatible with the best way to think about social problems.” The author of that article, Karen Hao, concedes, “it’s also not clear what the absence of bias should look like. This isn’t true just in computer science—this question has a long history of debate in philosophy, social science, and law. What’s different about computer science is that the concept of fairness has to be defined in mathematical terms, like balancing the false positive and false negative rates of a prediction system.”
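For readers wondering what “defining fairness in mathematical terms” can look like in practice, here is a minimal, illustrative Python sketch. The group labels and screening results are invented for the example; comparing false positive and false negative rates across demographic groups is one common way researchers check a prediction system for the kind of disparity Hao describes.

```python
# Illustrative sketch only: compares error rates of a screening model
# across two hypothetical demographic groups. Large gaps between the
# groups' false positive or false negative rates are one mathematical
# signal of bias.

def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Hypothetical screening results: 1 = "advance candidate", 0 = "reject".
# y_true is what human reviewers decided; y_pred is what the model decided.
groups = {
    "group_a": ([1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 0, 1]),
    "group_b": ([1, 1, 0, 1, 0, 1], [0, 1, 0, 0, 0, 1]),
}

for name, (y_true, y_pred) in groups.items():
    fpr, fnr = error_rates(y_true, y_pred)
    print(f"{name}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

In this toy data, the model agrees with human reviewers for one group but rejects qualified candidates in the other, which shows up as an elevated false negative rate for that group.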

As systemic prejudice has finally been given the national stage, it is clear that dismantling these systems will be complicated and require trial and error. But all signs point to equal representation as an early key to the work. Henry agrees, saying, “We really do have to be careful that we have all mixes of people working in AI who are developing these various algorithms and things to make sure that bias isn’t built into the thinking of machines.”

How to Fight Discrimination in AI

How can your company sidestep AI discrimination? Return to manual versions of certain functions, says Hachikian. “See what the system is doing and intervene at that specific point — be a little more surgical about it,” she says. In the case of Henry’s discovery, HR should stop using programmatic resume culling and switch to blind hiring, a very effective method for building a genuinely diverse workplace.
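As a concrete illustration of one step in blind hiring, here is a small Python sketch that strips identifying fields from candidate records before they go to human reviewers. The field names are hypothetical, not taken from any particular HR system; the idea is simply to remove information that could reveal race, gender, or age.

```python
# Minimal blind-hiring sketch: redact fields that could reveal a
# candidate's identity before resumes reach human reviewers.
# Field names here are hypothetical; adapt them to your own applicant data.

IDENTIFYING_FIELDS = {
    "name", "email", "phone", "address", "photo_url",
    "date_of_birth", "graduation_year",
}

def redact_candidate(record: dict) -> dict:
    """Return a copy of the candidate record with identifying fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

candidate = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "graduation_year": 1998,
    "skills": ["Python", "SQL", "project management"],
    "experience_summary": "12 years leading data teams",
}

print(redact_candidate(candidate))
# {'skills': ['Python', 'SQL', 'project management'],
#  'experience_summary': '12 years leading data teams'}
```

Reviewers then score candidates on skills and experience alone, which is the point of the manual, “more surgical” intervention Hachikian describes.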

There’s a lot we can do as individuals, companies, and voters to be anti-racist, and it comes down to this simple truth: humans create the technology, so humans will have to create the solution.

