AI is often presented as an abstract force shaping our future, a promise of innovation, efficiency and endless possibility. But behind the hype lies a darker truth: without meaningful regulation and justice-centred design, AI is not neutral. It reflects and amplifies the systems of power and exploitation that have long harmed marginalised communities.
Across sectors and societies, algorithms reproduce the biases embedded in the datasets, design decisions and frameworks used to build them. These systems encode centuries of exclusion and discrimination, wrongly classifying, ranking and evaluating Black, Indigenous, and other racialised groups, women, LGBTQIA+ people, and economically marginalised communities (Amnesty International, 2025).
In predominantly Black neighbourhoods, for example, some AI infrastructure projects have worsened environmental injustice, with air pollution and energy-intensive data hubs disproportionately burdening communities that already face racialised health harms. This isn’t a glitch; it is systemic racism in action, where Black lives are treated as externalities in the name of technological progress (Milwaukee Community Journal, 2025).
It is also important to remember that AI is not autonomous; it relies heavily on human labour. This “hidden workforce” consists of underpaid, exploited workers in nations still grappling with the legacies of colonisation, such as Kenya, the Philippines and Venezuela, who moderate, train and test AI models. These workers have long complained about the “precarity... low wages and psychological trauma that come from moderating disturbing content and being fired for unionizing.” The human toll of these labour practices is kept far away by design: companies choose these locations to hide the harm from their consumers, upholding the polished veneer of Silicon Valley (Kim, 2025).
At the same time, digital technologies, increasingly powered by AI, have become tools of violence and silencing. For women and gender-marginalised people, online spaces are not neutral platforms of expression but battlegrounds where harassment, cyberstalking, gendered disinformation, and AI-generated abuse (like non-consensual deepfakes) are becoming pervasive tactics to intimidate, discredit, and marginalise (International IDEA, 2025).
The result is a feedback loop: AI systems built without diverse voices, accountability, or ethical guardrails reinforce the very inequities they claim to solve. For example, in decision contexts like healthcare, hiring, housing, and criminal justice, biased AI models can reproduce discrimination rather than dismantle it. Marginalised people are more likely to be misidentified by facial recognition, denied fair credit or housing opportunities, or wrongly assessed by automated healthcare tools, all because the data and power structures behind these systems were never designed for equity (Akselrod, 2021).
These patterns are not accidental; they are systemic. Technologies that scale bias, that make violence cheaper and faster, that extract value from marginalised labour and data without accountability, are not exceptions; they are the logical outcome of innovation that prioritises profit and power over human rights.
We must reject the idea that AI will naturally lead to progress. Without transformative policy, enforceable accountability frameworks, and inclusive design practices that prioritise justice, AI will simply automate old harms at new speed. Only an approach rooted in equity, transparency, and human dignity can ensure that technology serves people, not exploitation.