Generative AI November Learning Lab: AI and Authoritarianism

In this session, we explored how AI is reshaping governance and enabling authoritarian practices. Political leaders are increasingly leveraging digital tools to surveil and control their citizens, while private sector companies are seizing more and more decision-making power over consequential political issues. Meanwhile, emerging technologies are being integrated into high-stakes, even life-and-death, government functions such as public service delivery, policing, and warfare.

What opportunities does private philanthropy have to influence this reality?

Our speakers shed light on how new and largely invisible technologies can be, and are being, used by both the private sector and nation states: how is AI used by global leaders to suppress dissent, control populations, and shape societal and political outcomes? And what strategies and frameworks can help prevent these kinds of practices during the current wave of AI hype and uptake?

How technology facilitates abuses of power

In this session, our speakers examined authoritarianism in tech from two key angles:

  1. Marietje explained the ways in which the tech industry continually consolidates power through building new technologies and uses that power to erode democracy.

  2. Astha shared how governments can also leverage technology to facilitate this erosion, looking specifically at India’s biometric ID system and the wider dissemination of digital public infrastructure (DPI).

Looking first at the US: tech companies are seizing the post-election moment to bolster their already outsized power and to reform the way government is run (e.g. to make it 'more efficient') in ways that are not always fully visible.

Big tech firms like Microsoft have market capitalisations comparable to the GDP of entire nation-states. They use this capital to buy influence, not just through traditional lobbying but in subtler and more deceptive ways: collaborating directly with regulators to create frameworks the regulators can then deem 'good enough', or directly funding research, the work of think tanks, and entire civil society organisations in order to shape wider perceptions of tech governance and of what reasonable mitigation tactics might look like.

Marietje noted that tech companies are even beginning to reposition themselves as 'AI companies', sinking a large portion of their resources into data centre expansion (we covered this in more detail in our Learning Lab on the environment). Data centres are significant because they are needed to train and run AI models, but they are also huge physical infrastructure projects with adverse effects on the environment and surrounding communities. Tech companies present long, glossy, air-tight proposals to local government officials who are already overstretched and may not be able to make informed decisions, especially when the companies obscure their identities behind code names, as Meta did with 'Operation Tulip' in the Netherlands a few years ago.

Unchecked control over large-scale infrastructure projects like this is typical of authoritarianism. Astha described how Aadhaar, India's biometric ID system, has slowly become a must-have for all residents, rather than an optional ID for those who need to access specific benefits or services.

Aadhaar is a key piece of digital public infrastructure (DPI) in India. It is required to get a birth certificate, to enrol in university, and even to access vaccines; it is effectively impossible to live without it. First rolled out in 2010, it is a relatively new piece of digital infrastructure built on top of legacy systems, and the two do not work well together: people often cannot verify their Aadhaar and are left without access to essential benefits, resulting in what Astha referred to as 'Aadhaar deaths'. Aadhaar has also been used as a tool for defining Indian citizenship, which is vastly complex. For instance, over 900,000 people near the border with Bangladesh had their Aadhaars frozen for five years while the government updated the national citizen registry. Exclusion from the Aadhaar system is therefore wide-ranging, and massively consequential.

It's also important to understand that India's DPI is not a monolith: it is made up of several pieces, some governed by the state and some by the private sector, and the way these pieces commingle is kept intentionally vague in order to avoid scrutiny. Furthermore, the phrase 'digital public infrastructure' is itself misleading. There are ongoing discussions over how to define 'public' (is it public because it serves the public, or because it is run by the government?) and 'infrastructure', which raises questions about what kind of infrastructure, at what scale, and under whose ownership.

In this session we were also joined by Martin Tisne, who shared his insights on building out the field of public interest AI, with a view to preventing a stranglehold by just a few private-sector market incumbents. He identified four key components needed for public interest AI to thrive as a global commons:

  • Transparency and openness across the AI value chain

  • High-quality datasets that are ethically acquired in a participatory way

  • Smaller models that are purpose-built and fine-tuned for specific societal problems

  • Accountability that cuts across all of these components

Marietje and Astha closed with further recommendations to take away:

Work on dispelling the narrative that large US tech firms are good for democracy. When left unregulated, Big Tech does not stand up to authoritarianism; indeed, we may be about to witness the opposite in the US. Marietje cited the lifting of the spyware executive order, among other factors, as a signal of the government abusing its power and using technology to enhance domestic control.

Prepare to support and fund civil society organisations and key research. The incoming Trump administration has already announced intentions to defund these areas, so it is vital that research at the intersection of tech and democracy is adequately supported. Astha noted that research is often viewed as slow and ineffective, and therefore not a viable avenue for funding, which becomes a self-fulfilling prophecy: underfunded research cannot effectively challenge power. One result in India is an almost complete lack of evidence on the harms of DPI.

Work to meaningfully understand experiences from the Global South. The defunding and shutting down of non-profits is already happening in India; with little money or few resources available to research how tech is being used to control citizens, this knowledge is not entering the public discourse. Places like India and Kenya unfortunately rely on progressive funding from the Global North which, in the case of the US, is now likely to be redirected to domestic priorities.