News
Erika Staël von Holstein and Olivia Gambelin

Artificial Intelligence: finding new narratives for the technology that will change our world


Artificial Intelligence (AI) is rapidly transforming nearly every aspect of human life—from education and healthcare to scientific research, the arts, and military operations. This sweeping change not only redefines how we work and live but also poses significant challenges to the foundations of our societies and democratic institutions. The potential implications of AI for democracy were the primary focus of discussion during the second episode of Friends of Europe’s Policy Voices series on AI & Democracy. Hosted by Catarina Vila Nova, the episode featured insights from Erika Staël von Holstein, Co-Founder and Chief Executive of Re-Imagine Europa, and Olivia Gambelin, an AI ethicist and founder of Ethical Intelligence.

Erika Staël von Holstein emphasized the confusion surrounding the narratives about AI. “At Re-Imagine Europa, we focus on the impact of narratives in shaping the world we live in. When we examine the narratives around AI, we see that they are still very confused. The public at large doesn’t have a clear sense of what AI actually is”. Referring to Kate Crawford’s famous observation, she noted that “Artificial Intelligence is neither artificial nor intelligent. The wording itself is quite problematic.” She advocated for a more practical and less apocalyptic narrative about AI: “We need a new narrative about Artificial Intelligence that is more reflective of what it actually is, and what it can do. A narrative where it is seen more as a tool for natural intelligence, for human intelligence, as opposed to this kind of science fiction narrative that we often have”. She also warned about the risks of repeating past mistakes: “AI is an extraordinary technology, and the hopes are huge. The question is whether we can think about these issues in the right way, to ensure that we don’t repeat the mistakes we made with digital technologies, where we moved too fast and overlooked some of the collateral consequences, leading to some of the biggest challenges in democracy today: rising polarization and increasing mistrust in all kinds of elites all over the globe”.

Shifting the narratives about AI towards a more informed discourse is crucial to ensure that this technology contributes positively to democratic processes, rather than undermining them. According to Olivia Gambelin, the main ethical concern regarding AI, policy and democracy is that big tech companies like Meta, Amazon or TikTok “are becoming quite powerful due to the amount of data that they control, and the amount of resources that people are starting to depend on”. According to the expert, “we need, as a society, some kind of feedback mechanism into the companies”. As an example, she recalled how Apple decided during the Covid crisis that any contact-tracing application had to use decentralized systems, while the UK government decided that the NHS Health app was going to be a centralized system. “Apple did not allow the UK government’s app to be hosted on iPhones. That’s a significant number of people who were blocked from using their own government’s app”. In her view, these kinds of situations could be avoided by establishing some form of democratic feedback in such instances.

Collective intelligence

Staël von Holstein also underlined the immense power of the big tech companies: “It’s not only about the technology itself – understanding how it works, how the data is used, how the business model is going to work. It’s not a search engine, it’s a creative engine. Companies have control from developing the sponsored content to the platform where it is being streamed”. “Governments have very little control to say what is really happening behind the scenes. This is something that should be concerning to democracies”. AI is growing at such a high speed that “it is complicated to say how policymakers will have time to really understand what’s happening”. On the positive side, she believes that democracy itself can help to address this challenge: “We often frame the narrative about democracy as democracy being the morally right thing to do, which is problematic because it distracts us from the fact that democracy is in fact the smartest thing to do, because the more people who come and the more diverse experience we bring through collective intelligence, the better solutions we can build. And this is also the case for how we deal with the challenges of AI”. Gambelin agreed on the idea of a shared responsibility among governments, companies and society at large: “You should hold yourself accountable for nurturing a healthy relationship with technology”.

Staël von Holstein also emphasized the importance of accountability, urging that “we set the right places for accountability in a responsive way”. “It’s up to us to decide how we reframe this technology and how we make sure that it delivers what we want it to do, while minimizing the risks of the broader consequences”. “As with all technologies,” she added, “AI is an amazing opportunity, and we need to make sure that, as humans, we think about it correctly.”