Every day, our world becomes increasingly integrated with AI. It influences nearly every part of our lives, from work and education to music, art, and entertainment. The excitement and chatter about what AI is and all that it can do are so loud that they often drown out the sound of the enormous human effort keeping it running—like the steady hum of a machine you only notice once it stops.
But if you do listen, that hum can reveal a lot about the machine.
We at Fold Labs have been listening with a keen ear. Join us on our journey to uncover the human stories behind the seemingly superhuman AI.
For the past year and a half, our feeds have been dominated by chatter about AI-based innovations. However, thanks to AI ethics advocates like Timnit Gebru, we also have a (much needed) smattering of posts about the urgent human questions behind AI. One such post led us to Data Workers Organizing - The African Content Moderators Union, a documentary that sheds light on the struggles faced by data workers at Sama in Nairobi, Kenya.
Watching this film prompted the Fold team to reflect on several important issues. This post shares some of the reflections and questions it triggered for us.
1. Making AI labour visible
It struck us how rarely the mainstream AI conversation acknowledges the painstaking labour that goes into making gen-AI work. Explanations of how a gen-AI model is built typically note that such models require extensive pre-training and analysis of large datasets. However, they rarely mention the humans (often from historically oppressed countries) doing this pre-training work, or the mental costs those workers bear in doing so.
All of us who use AI to transform how we work ought to have more awareness of the humans (beyond the Sam Altmans of the world) behind it.
How can this labour become more visible?
2. Data workers' experience design
As the data workers articulated the emotional, social, and mental costs they bear in making gen-AI possible, we thought about the stark difference between the experiences of the humans building AI and the humans using it. The stories felt sad. They also felt unsurprising: a continuing thread in the long story of how our production systems, and the philosophy behind them, treat labour.
While altering production systems feels beyond our scope, we wondered: how can the data worker's experience be tackled as a design problem? Can their work process be designed to account for their fatigue, the impact of the content they are analysing, and similar strains?
3. National & (international?) policies as safeguards
The AI juggernaut is unleashed; of course there's no stopping it. The documentary notes that data work is bringing the 'unemployable' into the fold of employment. Objectively, that is a good thing for a nation and an economy. But how do we safeguard these employees against exploitation?
Adopting a systems lens, we know it is futile to expect change to be triggered by AI organisations or the companies they outsource data work to. They play by the rules of the system, and the current rules (of production systems in general) permit exploitation in the name of profitability. Thus the rules need to change, and that can only happen through labour laws and policies enacted by host countries.
What 'ways of working' could balance profitability with labour-centricity, turning AI production into a system that serves both?
We highly recommend you set aside half an hour to watch this very necessary documentary, especially if you use AI in any way.
This post is part of the series 'The Hum of the Machine', an enquiry into AI ethics and labour led by Fold Labs.