A new era of colonialism? The dark side of AI

Artificial Intelligence is on everyone's lips these days. Working in big tech, it is all you hear about. The discourse tends to be fairly predictable: some people are very positive and optimistic about the potential that AI can unlock across a variety of sectors, while others are more cautious, even scared, about the harm that will come with it.

Those who are more concerned point to a plethora of issues arising from these new technologies: privacy concerns, cyber attacks, a lack of regulation and policy, and a lack of general technical understanding of what AI even is and how it is affecting our lives.

However, in this article I wanted to tackle a topic I recently learnt about that I believe touches on all of these concerns, but frames them in a much darker, historical paradigm that seems to be playing out yet again on a global scale: AI colonialism, eloquently defined by Karen Hao in the MIT Technology Review as how "the impact of AI is repeating the patterns of colonial history". These patterns include, but are not limited to: data extraction and exploitation, labour exploitation, algorithmic discrimination and bias, and a lack of independence for the 'colonised'.

The MIT Technology Review released an enlightening series on the subject, looking at how AI technologies are negatively impacting populations in the Global South: from how private companies deploying mass surveillance techniques are fuelling a digital apartheid in South Africa, to how data-labelling workers in Venezuela are being exploited so that we can enjoy the AI tools being produced. The pattern is reminiscent of colonial pasts: the exploitation and control of poorer communities and nations to drive profits and power for the wealthier few.

Exploring the colonial qualities

Data extraction and exploitation: this refers to obtaining the vast amounts of data needed for AI systems from the Global South without adequate compensation or consent. Data is a highly valuable asset, and collecting it from poorer nations without their explicit consent, or without paying a fair price for it, bears a striking resemblance to the natural resource extraction of colonial eras.

Exploitation of labour: models are trained on datasets that help them learn how to arrive at the desired outcome and fine-tune their performance. For many of these models, the data needs to be labelled and annotated. These tedious tasks are often done by people in the Global South, who usually receive poor wages and working conditions, and who frequently have to label explicit, graphic and violent images, which is very damaging to their mental health.

Imposition of Western ideologies: much of AI development is driven by a concentration of select companies at the top, and these companies are predominantly Western-based. Imposing these technological solutions on developing nations without giving them any influence or control over how the systems operate leaves them at the mercy of the creators of the tech, which is unethical and disempowering for those communities. A Senegalese AI expert on the UN advisory board for machine learning spoke on this very subject: "The biggest threat for me is colonization. We may end up with large multinationals in AI that will impose their solutions throughout the continent, leaving no room for creating local solutions."

A brighter future for AI?

Although the reality of the situation is fairly bleak, we are still in the relatively early days of AI's penetration into society, so there is still time to try to steer the ship in a different direction, so that the benefits can be more fairly distributed.

There are already some avenues being explored, and organisations working to reverse some of the current impacts of AI colonialism:

  • Lifting up local tech talent around the globe, so they can be part of the solution – organisations such as Andela and Gaza Sky Geeks help to nurture and connect local talent in areas with less opportunity to employers.
  • Working with nations in the Global South to ensure there are more equitable solutions on the table for their contribution to the AI development pipeline.
  • The Algorithmic Justice League is helping to fight against the harmful effects that AI is having on society. The organisation was involved in the creation of the highly successful documentary ‘Coded Bias’ and a whole host of other impactful projects looking to make outcomes from AI systems more fair and equitable to all.

The first step to tackling a problem is being aware of it, and then making others aware of it too. So if you are in the tech space or another influential position, always try to consider whether the decisions you are making are helping or hindering the goal of reducing the effects of AI colonialism.

Thanks for reading!

Sources:

https://www.wired.com/story/abeba-birhane-ai-datasets/

https://aibusiness.com/responsible-ai/ai-and-the-risk-of-technological-colonialism

https://news.un.org/en/story/2024/01/1144342

https://www.technologyreview.com/supertopic/ai-colonialism-supertopic/
