
Generative AI August Learning Lab: AI Narratives

The flurry of hype surrounding generative AI over the last 18 months has been extremely provocative; much of the mainstream perception of generative AI is grounded in promises of what’s to come, rather than in the capabilities of the technology as it exists right now.

This continuous dissemination of narratives about how AI might change the world, give us superpowers, or bring about an extinction event (take your pick!) is primarily being driven from within the tech industry itself. This leaves civil society and the social sector with a significant challenge: bridging the gap between their deep expertise about what is actually going on and the public’s understanding of the issues.

This insight session explored the ways in which these narrative frames are constructed and who they benefit. Our speakers shared actionable insights on how we can shift the narrative landscape to favour a more rights-centred, pragmatic uptake of emerging technologies:

  • Hanna Barakat is a research analyst who walked through her recent content analysis of the ways in which The New York Times reports on AI: her findings show that the NYT often bolsters industry narratives and de-centres expert voices from outside the industry.

  • Daniel Stone is a researcher at Cambridge University's Centre for the Future of Intelligence, and demonstrated the power of metaphorical language in AI and how it shapes both our understanding of complex issues and the policy we build around them.

  • Jonathan Tanner is the founder of Rootcause, and shared findings from a study that examined thousands of YouTube videos and print media articles discussing AI. His research shows that the most prevalent narrative frames in mainstream media focus on corporate news and often exclude civil society voices.


Watch the recording
Password: AI


The stories we’re told about AI

Metaphors, stories, and the construction of narrative frames help us understand complex and novel topics of the day, as well as imagine potential futures. Narratives can be leveraged to influence people and challenge power, and one observation that came out of this session was that civil society groups are failing to do this effectively. There are two main reasons for this:

  1. In AI, civil society and academic voices are often eclipsed by industry voices in both traditional media publications and new media (such as YouTube), where discussions focus more on technical aspects and industry gossip than on social and political implications.

  2. Civil society groups tend not to work with the grain of common narrative frames, while the tech industry has successfully used these frames to its advantage. This makes sense: civil society will want to challenge the status quo rather than go along with it.

Part of what makes narratives so powerful is that they are largely invisible: they’re hard to tease apart from our preconceptions, and narrative constructions via metaphor can often hide in plain sight as literary flourishes. Daniel shared the slide below showing a snippet of a speech made in the EU parliament, which describes AI progress as a “steady current”, thereby framing the sentence that follows as a reasonable or logical next step, without ever identifying where this “steady current of AI progress” comes from or who is driving it.

‘AI as progress’ was one of the four narrative frames that Jonathan identified in his research; the others were AI risk, the complexity of AI, and AI as something that needs to be regulated. Challenging these narratives and providing alternatives, instead of ignoring them completely, is something civil society could do much more of, and Jonathan’s insights offered some starting points. For instance, challenging the idea that AI is synonymous with progress may not work: this narrative is deeply grounded in societal expectations around growth and innovation, and in the assumption that ‘progress’ has a rigid definition and must be prioritised. By contrast, ‘AI is too complex’ is far easier to challenge, because it is entirely possible to explain how AI works in a simple, accessible way.

Framing AI as a mystical technological asset that only an anointed few can understand is a common trope bolstered by mainstream media. Hanna’s content analysis of The New York Times demonstrates a subtle but powerful distinction between two kinds of experts: execs from within the tech industry are frequently cited and straightforwardly referred to as experts, while academics and other non-industry voices are characterised as ‘outside’ experts whose viewpoints are skeptical of new technologies. The effect is to centre technologists, those building AI systems (often for profit), as the ultimate authority on how these technologies can and will benefit society.

This kind of reporting creates a feedback loop: tech execs are positioned as the ‘best’ experts, and so are invited onto more panels and give more interviews, reinforcing the narrative that they know better than those thinking deeply about the sociotechnical implications of AI systems.

What journalists choose to focus on also has a knock-on effect on what stories are told elsewhere; Alaphia, who is a media director at Luminate, noted her surprise at how a room full of screenwriters at a major film festival had little knowledge of fairly urgent problems such as the harms of online speech. This knowledge gap means that these stories aren’t being told in film and TV, and if they are, they aren’t being told well. Part of Alaphia’s work is to conduct industry outreach and provide expertise for screenwriters, so that storylines in popular media might focus more on sociotechnical issues without being clichéd, and without glamourising the work that tech firms do.

At the end of the session, Jonathan aptly pointed out how it can often be hard to differentiate between power that exists because of the nature of how power operates, and power that is born of intentional consolidation, as with big tech firms. Making that distinction is key to leveraging narratives in a way that disarms tech industry players: for example, who stands to benefit from saying that generative AI is something that needs to be regulated?

A call for regulation is both an expression of concern and a way of legitimising the field as something that produces real products with real use-cases; it side-steps any conversation that might be had about whether these systems are even worth building.