The price of our free AI tools: workers without rights or visibility

21 Nov 2025 - Anyone who thinks generative AI works all by itself is mistaken. Behind the chatbots and image generators are over 150 million people who keep AI systems running, often in poor conditions. Students of the Smart Media Production Associate Degree recently heard from experts on the subject at FLOOR AUAS, along with testimonies from data workers.
Students watch excerpts from documentaries and from interviews Martijn Arets conducted with ghost workers

Author: Lisette Wegener

In the Smart Media Production Associate Degree, students learn to create personalised, digital content for media companies. For study assignments, they often use generative AI.

They are given weekly lectures on the ethical aspects of how we use technology (the 'Smart Gen Club'). A special edition of these lectures was the event Ghostwork: the invisible labour behind AI (video) at FLOOR, the cultural platform of the Amsterdam University of Applied Sciences (AUAS).

Made invisible

"Who among you knew that AI requires so much human labour?" lecturer Tessa Duzee asked the room full of students. Hardly anyone raised their hand. For many people, AI remains a 'black box', a kind of magic. Tech companies like to uphold that image. 

Students discover that behind the curtain there is little magic. In countries such as Kenya and Ghana, data workers put in long days during which, for example, they have to review violent images, are not allowed to turn off their webcams, and are not paid for any of the hours they worked if they make a single mistake.

Martijn Arets: "Tech companies are upholding the image of 'autonomous AI' to keep investments up." Photo: Maartje Moerbeek

Deliberately kept out of the picture

Researcher Martijn Arets (AUAS Civic Interaction Design research group) explains how big tech deliberately keeps this group of workers, estimated at 150 to 430 million worldwide, out of the picture. "Because they don't want to be held accountable and prefer to operate in a regulatory vacuum."

There are roughly three types of data work: training AI by 'annotating' (labelling large amounts of content, sometimes shocking material), checking AI output, and 'impersonation'. With the latter, the user thinks they are dealing with AI, when in fact it is humans answering or watching.

In the Belly of AI - Watch the trailer (by Antonio Casilli)

Humans operated Amazon's 'AI Supermarket'

Arets gives an example of the latter: in 2021, Amazon opened 'completely AI-run' supermarkets in the US. In reality, 1,000 people in India were guiding the process, so that Amazon could pass off human work as AI here.

Ephantus Kanyugi of the Data Labelers Association, Kenya. Photo: Martijn Arets

No unions

Arets himself also spoke to data workers in Kenya about their working conditions, including Ephantus Kanyugi (pictured). The emotional pressure is high – content moderators often struggle with PTSD – and the work is often mind-numbing as well. "At the same time, people try incredibly hard to do this work well, no matter how poorly they are paid."

Frida Mwangi, founder of the Kenya Union of Gig Workers. Photo: Martijn Arets

Our AI runs on exploitation

People do not want to lose their jobs, but they do want better conditions, stresses Fiona Dragstra of the WageIndicator Foundation. The foundation's research shows that workers in the Global South regularly earn less than a tenth of what they need for a decent living.

Denying responsibility

"OpenAI and Google AI deny all responsibility," Fiona notes. They do this very ingeniously: they work through Business Process Outsourcing companies (BPOs) or online work platforms, so they do not have to bear responsibility themselves. Governments give these BPOs free rein, as it brings in Open AI and Meta.

But once workers start organising themselves, as they did in Kenya in the Data Labelers Association, tech companies immediately move their operations to countries like the Philippines or Indonesia.

hvacontentservice_sixteenByNineLarge (3).jpg
Tessa Duzee with Fiona Dragstra of the WageIndicator Foundation, an organisation that collects, analyses and disseminates global labour market information to support a more informed debate.

Tech companies are firmly protected 

"Why are there no legal proceedings in this area worldwide?" one student wants to know.

"Most major tech companies are US-based and firmly protected by the US government," Dragstra explains. "Some EU politicians, the European Greens in particular, are trying to get Big Tech to comply with EU regulations, but it is made very difficult for them."

But awareness does help, Dragstra stresses: it creates pressure, and pressure ultimately creates change.

Counter-movement

Do we want to continue to be part of an AI system where data workers feed models with an abundance of information — much of which is of little value — under deplorable conditions?  

Nanda Piersma, Academic Director of the Centre of Expertise Applied AI, stresses the importance of critical awareness in this matter. "Should we actually farm out everything of value to AI?"


She explains that responsible alternatives are being developed, and calls on students to help initiate change as content creators. For example, by making valuable content, such as the stories of data workers, visible against the 'junk' generated by big tech.

In their degree programme, students use AI tools for video and images on a daily basis, mostly the major free ones. "How can we bring about change when almost all tech companies are based in the US?" asks student Moos.

There are very few good alternatives, or they are too expensive, Piersma admits. She therefore advocates EU-level investment to actually bring the responsible tools now being developed to users.

Students and researchers conclude that people do not automatically choose a responsible alternative. It must also offer something extra, a unique selling point, like sustainable fashion compared to fast fashion. Piersma: "We need to find a way in Europe to make responsible IT valuable."

Student Max concludes by noting that in his programme, he would like to dive even deeper into how to really provide those alternatives. So that a change may be brought about – not only here, but also there.

Comments by students


Stijn Smit:

"We have talked a lot in class about the legal and ethical sides of AI, but not really about data workers. That people do not get paid when they made a mistake is really unheard of in my opinion; that really shocked me. It creates an image of lawlessness. You see the pattern of the Industrial Revolution repeating itself; with the big tech companies being responsible and people paying the price. Changing this – yes, I do feel that responsibility as a creator."


Elvin Spooren:

"In a way everyone knows that everything around AI seems a bit too good to be true, but I didn't actually know all about the dark side of AI. But is there any alternative? In terms of work, it is difficult, but as an individual my takeaway is: Be aware of what you are supporting."


Max Helmantel:

"This issue is serious and multifaceted. But it is difficult to tackle from here. It is naïve to think that ethical data practices or tools with a moral compass will gain impact, if US tech companies have no legal or financial incentives to develop these. Despite this, there are opportunities in Europe. If we set our own course with stricter regulations and moral leadership, we can make an impact on our own turf, by our human standards."

 

Responsible AI at the AUAS 

In the Responsible AI Lab, researchers consider what makes an AI system responsible, and what users, developers and organisations can do to bring this about. Ghost work has only recently gained recognition, adding a new dimension to what responsible AI means.

Nanda Piersma: "It may seem like a waste of time and effort, but it's good to put a spotlight on these issues, and for organisations to take them into account when buying and deploying AI systems."
