Every day, our world becomes increasingly integrated with AI. It influences nearly every part of our lives, from work and education to music, art, and entertainment. The excitement and chatter about what AI is and all that it can do are so loud that they often drown out the sound of the enormous human effort keeping it running—like the steady hum of a machine you only notice once it stops.
But if you do listen, that hum can reveal a lot about the machine.
We at Fold Labs have been listening with a keen ear. Join us on our journey to uncover the human stories behind the seemingly superhuman AI.
Reading Ambedkar, learning about AI ethics, and having a brain that is always thinking about colonialism, all at the same time, is not, let me tell you, a good recipe for writing a clear and articulate piece on any one of these things.
The more I learn about the workings of AI, the more I find myself mentally gasping, pupils dilating, as I try to take it all in. Everything seems to be related to everything else. Even just identifying the problem in the system feels mammoth, but also rewarding, like I’ve turned a key corner, like I’m about to discover something novel and just have to dive deep enough to figure out what I have to do. Every time, I jump into the water expectant and confident of finding the biggest fish. Instead I come out with muddy shoes. Again.
What is Data Work?
Data work is the blood that flows through the wired veins of AI, making it capable of everything it does. It is what fuels AI systems and keeps them running smoothly. Thousands of people across the globe spend hours reviewing, annotating, and moderating the data that AI systems need to function properly. This can involve labelling images, detecting content that is irrelevant, obscene, illegal, harmful, or insulting, and training AI to recognize patterns. Whether it is large language models (LLMs) like ChatGPT and Gemini or the AI algorithms that make every social media platform ‘safe’, all of them rely heavily on this unseen human labour.
The Hidden Cost of Data Work: The Psychological Toll
Data work is not glamorous. It often involves moderating deeply disturbing content, including graphic images of violence, abuse, and other harmful material. Data workers spend hours flagging content to ensure it doesn't reach users. This continuous exposure to harmful content has severe consequences. Reports show that many data workers struggle with mental health issues, including PTSD, anxiety, and depression.
“On the job, for around $2.20 per hour, Motaung says he witnessed disturbing content including violent beheadings and the sexual abuse of children. He now regularly experiences flashbacks and nightmares, and says the requirement to watch videos of innocent people being kidnapped and murdered has left him with severe anxiety in public spaces”
Daniel Motaung, a former outsourced Facebook content moderator [1]
“...I love animals…I got this video where a woman placed a kitten in a large mortar and beat it to a pinkish pulp, to a point you would not recognize it was a living kitten...up to now the image is still in my mind. Thinking of it still makes me sick, and it will not leave my mind no matter how hard I try.”
Sokoro Thabiti, 32-year-old Kenyan ex-content moderator [2]
“What has really messed me up the most is seeing hundreds of mutilated bodies from road carnage. I have seen people’s body parts sliced into pieces, some still alive experiencing the pain, some with parts of the brain splat on the road, some have had intestines hanging out as they hold on to them…This has made me become paranoid… (and) in constant fear whenever I’m on the road,”
Annika Bandile, 28-year-old South African content moderator [2]
Despite knowing the costs of data work, neither the Big Tech companies nor their outsourcing partners have put in place any real safeguards to support data workers in coping with the tormenting nature of their work. After worker demands, they have tried to tick the mental health box by hiring in-house counsellors. However, according to data workers’ accounts, the number of counselling sessions is inadequate, the available counsellors are unqualified, conversations with them are unsafe, and their advice is pretty much useless.
“I always go to the counsellors and come out more triggered and more crazy. … (They) made me feel like I was someone who was mentally unstable…. They won’t even give you a space to cry, they are very judgemental, and they think you are faking it or trying to manipulate them so you can get a leave permit when you cry.
What’s worse is that the counsellors at Sama are unlicensed. They are not certified psychologists or real professionals in the field. They would tell you, you will get used to it and won’t be as sad through time.”
Jandyose Mukasa, 31-year-old content moderator from Uganda [2]
The Vicious Cycle of Exploitation
Big Tech companies, alongside their outsourcing partners, have systematically dehumanized data workers by treating them as disposable cogs in the AI machinery. They position this arrangement as an economic opportunity for people living at the margins, marketing it as a noble act of ‘employing the unemployable’ and of creating work opportunities in ‘developing nations’.
The darker side of this ‘economic opportunity’ is that recruiting from these regions gives the companies enormous bargaining power. They deliberately target people living at the margins who are desperate for work: people from slums, migrant camps, or economically unstable regions. These workers, often seen as ‘unemployable’ by other industries, accept gruelling conditions because they lack financial stability or a support system to fall back on. Many are the primary earners for their families and cannot afford to quit, even when faced with severe exploitation. The precariousness of their situation makes them less likely to challenge the system, creating a cycle of exploitation that keeps them silent and disposable.
If and when workers do speak up or unionise, they are fired or banned, or the companies simply shut shop and move to another, similar country.
“…in a lawsuit filed to the Kenyan High Court…the petitioners (ex-CMs) argue that Meta and Sama laid them off for trying to form a workers’ union, both as a retaliation and a way of disposing of the moderators because they were now enlightened about the job and their rights.
…BPOs only seek new recruits who are unenlightened about the job. This is evident as ex-moderators claim that, after the lawsuit, BPOs such as Majorel who signed as the new subcontractor for Meta have now moved to Ghana from Nairobi to recruit new data workers there.” [2]
I sit here thinking about the buzz that AI garnered; about the crazily human things AI was doing, unexplained and unpredictable even for the people who built it. But this buzz was never just black or white. With every great thing AI could do, and every thing AI could do greatly, sometimes even better than humans, there were also big fears and concerns about imagined dystopian futures. Futures where AI, with all the information we feed it, learning from all the ways in which we use it, (not so) slowly achieves general intelligence, ultimately seeping into our lives so deeply that we put our own brains to rest and rely on it instead.
These imagined futures don’t seem far, but they definitely don’t feel like they’re here already either. Except, of course, for the Scarlett Johansson-sounding AI, laughing, flirting and baby-talking to a dog. That was creepy as hell. And lives rent-free in my head. But no, there was one fear that overpowered even that slightly-better-packaged brain death: my deeper, inherent fear of being made redundant and left behind. I must admit that I have used more AI than necessary in different aspects of my life. I have asked it for casual medical advice, used it to help me paraphrase my emails, and even made custom GPTs for work. You might argue that simply in doing this I am accelerating the path to being made redundant by my own creation, but that is a conversation for another day.
In all this keeping up with AI, getting excited about its possibilities, fearing the very possible dystopian futures and feeling a lack of control in how we are collectively heading there, I never bothered to look behind the screen that held my AI, to see what is happening RIGHT NOW.
How did AI learn to be so human? How did it get here? What does the behind-the-scenes of AI look like?
P.S. We are grateful to James (Mojez) Oyange for providing an authentic peek into the world of data work.
Thank you, James, for taking the time to speak with us, encouraging us to write about data work, and reviewing this essay. This article wouldn’t have been possible without you.
Citations
[1] ‘Facebook Faces New Lawsuit Alleging Human Trafficking and Union-Busting in Kenya’, TIME.
https://time.com/6175026/facebook-sama-kenya-lawsuit/
[2] Gebrekidan, F. B. (2024). ‘Content moderation: The harrowing, traumatizing job that left many African data workers with mental health issues and drug dependency’.
Edited by Milagros Miceli, Adio Dinika, Krystal Kauffman, Camilla Salim Wagner, and Laurenz Sachenbacher, Creative Commons BY 4.0.
Retrieved from https://data-workers.org/fasica/
This post is part of the series ‘The Hum of the Machine’. This project is an enquiry into AI Ethics and Labour, led by Fold Labs.
More in this series –
Part 1 - Tuning into the Hum
Part 3 - Mechanisms of Invisibility (coming soon)