Your Algorithm Hates You

Algorithms have bias baked into them, but users can do things to reclaim their digital space, says ICFJ Knight Fellow Lemayian (far left).

David Lemayian has been an ICFJ Knight Fellow since 2016 and works as the Chief Technologist for Code for Africa, an ICFJ partner. 

Some of the decisions algorithms make about our lives are fairly benign, such as those irresistible “Suggestions for you” on Netflix. But it gets far murkier when artificial intelligence (AI) and machine learning are used by businesses and governments for decision-making that affects our lives without us ever knowing about it. And worse, without us being able to appeal against those decisions.

These pieces of code are considered almost infallible by those using them. Banks and other lending institutions are determining your credit scores, companies and recruiters are considering whether to hire you, and your insurer is determining your premiums based on decisions made by AI.

As researchers have noted, “when considering the role of algorithms in decision-making we need to think not only of cases where an algorithm is the complete and final arbiter of a decision process, but also the many cases where algorithms play a key role in shaping a decision process even when the final decision is made by humans.”

But many of these pieces of software have bias baked into them, producing what Joy Buolamwini, founder of the Algorithmic Justice League, calls “the coded gaze.” It’s a bias that perpetuates injustice, and also prompts a sense of fatalism: the view that we are powerless to do anything other than what we actually do in this AI-powered world.

As one report warns, “it is apparent that the ever-increasing use of algorithms to support decision-making, while providing opportunities for efficiency in practice, carries a great deal of risk relating to unfair or discriminatory outcomes.” Gender, race, tribe, and even location can result in a whole community being denied benefits, keeping its members at a disadvantage.

Putting it plainly, algorithms can be racist. As American Rep. Alexandria Ocasio-Cortez has said, algorithms “always have these racial inequities that get translated, because algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated. And automated assumptions — if you don’t fix the bias, then you’re just automating the bias.”

It’s like a vicious AI knee-on-neck situation with the weight of the knee getting heavier with every run of its algorithm. And algorithms run a lot faster and affect a lot more people a lot more often than the Jim Crow laws ever did.

And this algorithmic bias, as Joy Buolamwini found out when going through a demo of a Hong Kong startup’s “social robot” that couldn’t detect her face, can “travel as quickly as it takes to download some files off of the internet.”

The startup used the same generic facial recognition software that she had previously used for her undergrad assignment at Georgia Tech, where she discovered that it didn’t work on her face, and she had to get her (white) roommate to stand in for her. At the time, on the other side of the world, she figured “someone else will fix it.” After going to Hong Kong, she knew that someone was going to have to be her. (For an excellent summary of how algorithm bias comes about, see Karen Hao’s article.)

What can we do about algorithmic bias? If you’re a software developer or data scientist, IBM Research has an open source toolkit, AI Fairness 360, that helps you check your datasets and models for bias.
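To make this concrete, here is a minimal sketch of one of the simplest checks such toolkits automate: the “disparate impact” ratio, which compares how often a favorable outcome goes to an unprivileged group versus a privileged one. The loan-approval data below is purely illustrative, and this hand-rolled version stands in for the richer metrics a toolkit like AI Fairness 360 provides.

```python
# Illustrative sketch: the "disparate impact" ratio, one basic fairness
# metric that bias-auditing toolkits compute. A value below ~0.8 is a
# commonly cited warning threshold. The outcome data here is hypothetical.

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a list of 0/1 labels."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of favorable-outcome rates: unprivileged over privileged."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical loan decisions (1 = approved, 0 = denied)
group_a = [1, 0, 0, 1, 0, 0, 0, 1]  # unprivileged group: 3 of 8 approved
group_b = [1, 1, 0, 1, 1, 1, 0, 1]  # privileged group: 6 of 8 approved

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Warning: unprivileged group receives favorable outcomes far less often")
```

A check like this won’t tell you *why* a model is biased, only that its outcomes differ sharply by group, which is exactly the signal that should prompt a human review.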

But it’s not just technologists who can do something about algorithmic bias. You can start reclaiming digital space by exploring your choices in technology services. For example, you can use a search engine like DuckDuckGo, which, unlike the voracious data vampire that is Google, doesn’t store your personal information to then use for targeted ads.

You can also petition and lobby your government to adopt a governance framework for algorithmic accountability and transparency, one that introduces “algorithmic literacy” into curricula and requires standardised notifications communicating the type and degree of algorithmic processing behind decisions.

Ultimately, we need to ask more of ourselves and tech companies. It’s not enough to just employ critical thinking – we also need to employ civic thinking in how we build and use these technologies.

This article first appeared in The Daily Maverick. 

