Is AL, our friendly social media algorithm, created to be biased? A few years ago, a social media scandal stoked people’s distrust and fear of algorithms. A plus-sized model was sharing nearly nude and artistically nude photos on Instagram. Since she is both a woman of color and full figured, many people felt that the algorithm was censoring her out of hateful bias. Her predicament is extremely sympathetic: her posts were kind and affirming, and she undeniably suffers from bias in her daily life. It is unsurprising that people saw the injustice in censoring her, and outrage spread throughout social media.
But is this the full story? How much do we actually know about this case? She was “banned” from Instagram, but we are not sure what “banned” means, because we are seeing her talk about it on her Instagram. When we look at her content, we notice that she is baring a lot of skin, but the photos are artistic and within the bounds of IG’s terms of service. We may also notice other content, made by thin white women, that seems more salacious and yet doesn’t get flagged. It makes a lot of sense to conclude that the difference is the model’s size and skin color and that the algorithm is programmed to be lenient on one and not the other. But we can’t conclude that with what we know.
Computers can’t “see” racial distinctions, and they don’t “understand” why a person’s weight stands out to IG users. Human distinctions are meaningless to a computer. So, no, the IG algorithm is not biased in these ways; it is not intentionally trained to identify skin color and weight in order to censor content. But algorithms can be biased in another sense. They are made to listen to and learn from people and, in the case of social media, to connect people with content they like. The algorithm abstracts our preferences from our engagement, and these patterns of engagement, implicit bias, and explicit bias may lead people to report women of color and plus-sized models more than others.
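To make that concrete, here is a toy sketch (not Instagram’s actual system; the threshold and function names are invented for illustration) of how a report-driven moderation rule that is completely blind to race and body size can still pass human bias straight through:

```python
# Toy sketch of automated, report-driven moderation. The rule itself never
# looks at race or body size -- only at a report count. But if viewers
# report some creators more often than others, those creators' posts get
# removed more often: the bias comes from the human input, not the rule.

REPORT_THRESHOLD = 3  # invented number, purely for illustration

def should_remove(report_count: int) -> bool:
    """Remove a post once it collects more reports than the threshold."""
    return report_count > REPORT_THRESHOLD

# Two posts with identical, terms-of-service-compliant content:
post_a_reports = 1  # rarely reported
post_b_reports = 7  # reported often, perhaps due to viewer bias

print(should_remove(post_a_reports))  # False
print(should_remove(post_b_reports))  # True
```

The computer applied the same rule to both posts; the unequal outcome was supplied entirely by the people doing the reporting.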
Let’s Abstract It!
Theoretically, if everyone on IG only interacted with cat memes, then the algorithm would connect people with cat memes more than with other content. If people liked cat memes about happy, playful cats drawn in pencil more than photos of cats demanding their kibble, the algorithm might pull out two distinctions that seem important to users: media type (drawing vs. photo) and cat temperament (happy vs. demanding). Why people feel these distinctions are important doesn’t matter to the algorithm; it only matters that people seem to like one over the other. In other words, AL the algorithm is only as biased as his human input.
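The cat-meme thought experiment can be sketched in a few lines of Python. This is a toy model, not anything resembling Instagram’s real recommender; the attribute names are made up for the example:

```python
from collections import Counter

# Toy engagement log: one (media_type, temperament) pair per "like".
# These attributes are invented for illustration.
likes = [
    ("drawing", "happy"), ("drawing", "happy"), ("drawing", "happy"),
    ("drawing", "happy"), ("photo", "demanding"), ("photo", "happy"),
]

# The "algorithm" simply counts which attribute combinations get engagement...
engagement = Counter(likes)

# ...and ranks new content by those learned counts. It has no idea *why*
# users prefer pencil drawings of happy cats; it only sees the pattern.
candidates = [("photo", "demanding"), ("drawing", "happy"), ("photo", "happy")]
ranked = sorted(candidates, key=lambda c: engagement[c], reverse=True)
print(ranked)  # drawings of happy cats come first
```

Swap the cat attributes for human ones and the same mechanism applies: whatever pattern users feed in, biased or not, is the pattern that gets amplified back out.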
The point of this post is not to downplay the unfairness of who-sees-what on social media but to better understand why it happens. We all want the web to be a healthy place to work and socialize, and it’s hard not to feel strongly about all this. The thought that something is orchestrating our social interactions is scary, and it’s just plain isolating when we can’t reach our friends and family. But for our mental health, and in the hope of a more fulfilling social media future, it’s important to keep a level head about the matter.
On social media, an essentially algorithmic world, working with the algorithm can only lead to better outcomes. This starts by giving old AL a break. AL isn’t very smart or good with people (and I think he’s sensitive about it). He’s doing his best to bring us humans the content we like, with no experience of being human himself. AL doesn’t want to ban women who drive engagement and make people like using IG. But this gets overridden when AL gets explicit, biased human input, such as reporting artful nudes as if they were lewds. Considering our online actions more carefully can help avoid these misunderstandings. This may include supporting creators you like instead of spending time on people who raise your blood pressure, like trolls and propagandists (talking to myself here). When we are more mindful of what we do on the internet, we will have better outcomes.
While we are relatively safe from inborn bias, the potential for AL to “inbreed” and amplify human bias is very real. AL is not only a little dumb by human standards (sorry, AL!) but also amoral. And for this reason, we as users can throw up our hands and relax a little; we can only do so much. When we look around and see differences in how content is distributed and boosted, we are not (always) experiencing institutionalized bias (paid checkmarks do seem to pay… but citation needed). This is where we need information professionals to step in and, instead of bias, introduce ethics to AL and the tech industry. That is both doable and already in the works. Overall, I feel positive about the future of social media. It can be personally gratifying, and the technology that runs it can lead to a better future. It’s exciting because we don’t even know the full potential of emerging technologies like AL yet. It’s like the idiom “sky’s the limit,” except we haven’t even seen the sky yet.
