Your Algorithm Hates You

By: David Lemayian | 05/31/2019
Algorithms have bias baked into them, but users can do things to reclaim their digital space, says ICFJ Knight Fellow Lemayian (far left).

David Lemayian has been an ICFJ Knight Fellow since 2016 and works as the Chief Technologist for Code for Africa, an ICFJ partner. 

Some of the decisions algorithms make about our lives are fairly benign, such as those irresistible “Suggestions for you” on Netflix. But it gets far murkier when artificial intelligence (AI) and machine learning are used by businesses and governments for decision-making that affects our lives without us ever knowing about it. And worse, without us being able to appeal against those decisions.

These pieces of code are considered almost infallible by those using them. Banks and other lending institutions are determining your credit scores, companies and recruiters are considering whether to hire you, and your insurer is determining your premiums based on decisions made by AI.

And as researchers studying algorithmic accountability have noted, “when considering the role of algorithms in decision-making we need to think not only of cases where an algorithm is the complete and final arbiter of a decision process, but also the many cases where algorithms play a key role in shaping a decision process even when the final decision is made by humans.”

But many of these pieces of software have bias baked into them, what Joy Buolamwini, founder of the Algorithmic Justice League, calls “the coded gaze.” It’s a bias that perpetuates injustice and prompts a sense of fatalism: the view that we are powerless to do anything other than what we actually do in this AI-powered world.

“It is apparent that the ever-increasing use of algorithms to support decision-making, while providing opportunities for efficiency in practice, carries a great deal of risk relating to unfair or discriminatory outcomes,” as one review of the field put it. Gender, race, tribe and even location can result in whole communities being denied benefits, keeping them at a disadvantage.

Putting it plainly, algorithms can be racist. As American Rep. Alexandria Ocasio-Cortez has said, algorithms “always have these racial inequities that get translated, because algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated. And automated assumptions — if you don’t fix the bias, then you’re just automating the bias.”

It’s like a vicious AI knee-on-neck situation with the weight of the knee getting heavier with every run of its algorithm. And algorithms run a lot faster and affect a lot more people a lot more often than the Jim Crow laws ever did.

And this algorithmic bias, as Joy Buolamwini found out when going through a demo of a Hong Kong startup’s “social robot” that couldn’t detect her face, can “travel as quickly as it takes to download some files off of the internet.”

The startup used the same generic facial recognition software that she had previously used for her undergrad assignment at Georgia Tech, where she discovered that it didn’t work on her face, and she had to get her (white) roommate to stand in for her. At the time, on the other side of the world, she figured “someone else will fix it.” After going to Hong Kong, she knew that someone was going to have to be her. (For an excellent summary of how algorithmic bias comes about, see Karen Hao’s article.)

What can we do about algorithmic bias? If you’re a software developer or data scientist, IBM Research has an open source toolkit, AI Fairness 360, that helps you check for bias in your datasets and models.
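As a minimal sketch of what such a check looks like, here is how you might measure bias in a dataset with AI Fairness 360 (installed via pip install aif360). The loan data, column names and group definitions below are invented for illustration; they are not from this article.

```python
# A minimal sketch of a bias check using IBM's open source
# AI Fairness 360 toolkit (pip install aif360). The loan data,
# column names and group definitions are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical lending decisions: approved = 1 means a loan was
# granted; group = 1 marks the historically privileged group.
df = pd.DataFrame({
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
    "group":    [1, 1, 1, 1, 0, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["group"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"group": 1}],
    unprivileged_groups=[{"group": 0}],
)

# Disparate impact is the ratio of favourable outcomes for the
# unprivileged group to those for the privileged group; values
# well below 1.0 are a red flag. Here: (1/4) / (3/4) ≈ 0.33.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

The toolkit also ships mitigation algorithms, such as Reweighing, that can rebalance training data once a disparity like this is detected.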

But it’s not just technologists who can do something about algorithmic bias. You can start reclaiming digital space by exploring your choice of technology services. For example, you can use a search engine like DuckDuckGo, which, unlike the voracious data vampire that is Google, doesn’t store your personal information to use for targeted ads.

You can also petition and lobby your government to adopt a governance framework for algorithmic accountability and transparency, one where “algorithmic literacy” is introduced into curricula, and standardised notifications (communicating the type and degree of algorithmic processing involved in decisions) are made a requirement.

Ultimately, we need to ask more of ourselves and tech companies. It’s not enough to just employ critical thinking – we also need to employ civic thinking in how we build and use these technologies.

This article first appeared in The Daily Maverick. 
