Critical Borders: Radical (Re)visions of AI

18 October 2021 - 21 October 2021
Exploring a variety of critical approaches to AI that interrogate how AI at the border, and the bordering processes of AI, differentially affect and produce disabled, queer, gendered, and racialised subjects.

Critical Borders: Radical (Re)visions of AI is a conference taking place at the University of Cambridge and online from 18 to 21 October 2021.

The conference will commence on 18 October with Ruha Benjamin delivering the prestigious 2021 Obert C. Tanner Lecture on Artificial Intelligence and Human Values. In the following days, international keynote speakers, panellists, artists and attendees will critically interrogate issues of bordering in artificial intelligence (AI), examining both how AI operates at material borders (including national and bodily borders) and how AI produces or transgresses imagined, theoretical and ideological borders, including categories of race, gender, age and class.

This conference is convened by the Leverhulme Centre for the Future of Intelligence and the University of Cambridge Centre for Gender Studies. It is generously funded by the Obert C. Tanner Lectures on Artificial Intelligence and Human Values. The conference conveners are also generously supported by the Leverhulme Trust, Templeton World Charity Foundation, DeepMind and Christina Gaw.

Speakers

Professor Ruha Benjamin (Tanner Lecture)

Ruha Benjamin is Professor of African American Studies at Princeton University, Director of the Ida B. Wells Just Data Lab and author of Race After Technology: Abolitionist Tools for the New Jim Code.

Black Mirror: Race, AI and Inequity in the 21st Century
From everyday apps to complex algorithms, technology has the potential to hide, speed, and deepen discrimination, while appearing neutral and even benevolent when compared to racist practices of a previous era. In this talk, Ruha Benjamin examines biased bots, altruistic algorithms, and their many entanglements, and provides conceptual tools to decode tech promises with historical and sociological insight. She also considers how race itself is a tool designed to stratify and sanctify social injustice, and challenges us to question not only the technologies we are sold but also the ones we manufacture ourselves. A video recording of the lecture is available online.

. . .

Professor Wendy Chun (Keynote)

Wendy Hui Kyong Chun is the Canada 150 Research Chair in New Media at Simon Fraser University, and leads the Digital Democracies Institute. She is also the author of Discriminating Data (2021), Updating to Remain the Same: Habitual New Media (2016), and Programmed Visions: Software and Memory (2011).

. . .

Tanner Lecture Respondents

Dr Mónica G Moreno Figueroa
Senior Lecturer in Sociology, University of Cambridge

Professor Sennay Ghebreab
Sennay Ghebreab is an Associate Professor of Socially-Intelligent AI at the University of Amsterdam, programme director of the Master Information Studies, and founder and scientific director of the Civic-AI Lab.

Dr Shakir Mohamed
Shakir Mohamed is a research scientist and lead at DeepMind in London, an Associate Fellow at the Leverhulme Centre for the Future of Intelligence, and Honorary Professor at University College London. Shakir is also a founder and trustee of the Deep Learning Indaba, a grassroots organisation aiming to build pan-African capacity and leadership in AI.

Artists and the Art Exhibition

The art exhibition will be held on Wednesday, 20 October in the ground-floor Bawden Room, West Court, Jesus College. The showcase will include the following works:

im here to learn so :)))))) - Zach Blas

Zach Blas is an artist, filmmaker, and writer whose practice spans moving image, computation, theory, performance, and science fiction. Blas engages the materiality of digital technologies while also drawing out the philosophies and imaginaries lurking in artificial intelligence, biometric recognition, predictive policing, airport security, the internet, and biological warfare. im here to learn so :)))))) (2017), a four-channel video installation and collaboration with Jemima Wyman, resurrects Microsoft's AI chatbot Tay to consider the gendered politics of pattern recognition and machine learning.

Zizi - Queering the Dataset - Jake Elwes

Jake Elwes is a London-based media artist whose work grows out of his research into artificial intelligence. His practice looks for poetry and narrative in the successes and failures of these systems, while also questioning the code and ethics behind them. His current works in the Zizi Project explore AI bias by queering datasets with drag performers to simultaneously demystify and subvert AI systems.

Being 1.0 - Rashaad Newsome

Rashaad Newsome's work blends several practices, including collage, sculpture, film, photography, music, computer programming, software engineering, community organizing, and performance, to create an altogether new field. Using the diasporic traditions of improvisation, he pulls from the worlds of advertising, the internet, Art History, and Black and Queer culture to produce counter-hegemonic work that walks the tightrope between social practice, abstraction, and intersectionality.

. . .

Conference format

The event will comprise a mix of panels and invited talks, including keynotes, as well as film screenings and an exhibition of artwork. The conference will take place as a hybrid event, with attendees having the option to attend online (for free) or in person at Jesus College, University of Cambridge (for a small fee).

. . .

Conference Programme

Monday, 18 October (Evening only)

Venue: Robinson College, Cambridge - Auditorium
Time: 4:45pm-8pm (registration open from 4pm)

The Tanner Lecture on Artificial Intelligence & Human Values
by: Ruha Benjamin
The event includes a short welcome, the lecture, and then an intermission followed by two respondents and a final Q&A discussion with all speakers.

Tuesday, 19 October

Venue: Jesus College, Cambridge - Frankopan Hall (Conference) & Bawden Room (Artist Exhibition)
Time: 9:00am-5:30pm

  • Panel I: AI, Nationalism, and the Borders of the Nation State
  • Panel II: AI, Nationalism, and the Borders of the Nation State: EU Borders
  • Panel III: AI Fiction and Fact
  • Panel IV: Embracing the Plagiarised Future
  • Keynote Lecture: Professor Wendy Chun
     (Followed by discussion and Q&A)

Wednesday, 20 October

Venue: Jesus College, Cambridge - Frankopan Hall (Conference) & Bawden Room (Artist Exhibition)
Time: 9:30am-5:30pm

  • Panel V: Liminality
  • Panel VI: Bodily Borderlands
  • Panel VII: Bodily Borderlands: Medical
  • Artist Talks  (3:30pm)
  • Art viewing (4:30-5:30pm, Bawden Room, West Court ground floor, Jesus College)

Thursday, 21 October (half day)

Venue: Jesus College, Cambridge - Frankopan Hall (Conference) & Bawden Room (Artist Exhibition)
Time: 9:30am-12:30pm

  • Panel VIII: Patents & Geopolitics of AI
  • Panel IX: Art, Fashion & Style: AI & Aesthetics

Note on COVID-19

Currently, we are planning for an in-person event that will also be streamed online. However, circumstances can change at short notice and we will defer to government guidelines. Please refer to the University of Cambridge COVID-19 page for up-to-date information.

More information on the conference

The aim of this conference is to critically interrogate issues of bordering in artificial intelligence (AI). This conference examines both how AI operates at material borders, including national and bodily borders, and how AI produces or transgresses imagined, theoretical and ideological borders, such as categories of race, gender, age and class. Compelled by Gayatri Spivak’s insistence that we attend to borders (Spivak 2016), taking into account when border crossings are a violation and when they are pleasurable, we ask: what kinds of border-crossing are induced by AI, and what kinds are prohibited? Which borders does AI reinforce, and which borders does it render obsolete? We aim to explore the tension between the possibilities of transgressing boundaries, especially in the context of binary categorisation, and the risks of equating boundary subversion with emancipatory political practices.

By “radical” we mean all manner of anti-racist, feminist, inclusive, queer, justice-focused scholarship that accounts for the intersectional nature of power. We welcome all kinds of interventions with broad and differing stances towards what constitutes radical work and where its priorities lie. For example, in its critique of power, feminist scholarship, methods and activism have robustly interrogated conditions of marginality and the shifting dynamics of inclusion and exclusion. They have provided a set of methodologies to examine how borders emerge, who these borders include and who they exclude, and the radical politics that arise within these border zones. Feminist, anti-racist, and queer scholarship by scholars like Sara Ahmed and Gloria Anzaldúa has illuminated the formative role of emotions in generating bodily and national borders (Ahmed 2004) and provided groundbreaking re-imaginings of the border and what it means (Anzaldúa 2007).

We will explore a wide variety of critical approaches to AI from the margins that interrogate how (i) AI at the border and (ii) the bordering processes of AI differentially affect and produce disabled, queer, gendered, and racialised subjects.

Themes

The first theme is ‘The Prison as Border: AI, Carceral Technologies and Prison Abolition’. Driven by Angela Davis and Gina Dent’s argument that ‘the prison is itself a border...that un-does the illusions of the powerful nation-states on the one hand and the seeming disorganization and chaos of capital's travels on the other’, we ask how AI technologies contribute to the ‘specific political economy’ of the prison (Davis and Dent 2001: 1236-1237).

The second theme is ‘AI, Nationalism, and the Borders of the Nation State’. Presentations in this theme will examine how AI development feeds into a colonial logic of expanding the national “frontier” through narratives of progress; how the refraction of AI across borders may fundamentally challenge the power of the state; or how AI nationalism retrenches national processes of bordering. 

The third theme is ‘Bodily Borderlands’. How does AI trouble the boundaries between the body and technology, the fleshly and the machinic? More importantly, how does AI trouble the concept of “the body” in and of itself? Conceptual borders between bodies, hardware, and software have inhibited accurate theorisations of how discrimination is encoded into AI. 

The fourth theme is ‘Fiction and Fact’, interrogating the dynamic interplay between AI technologies and narratives, stories, and imaginaries about AI.

The fifth theme is ‘Liminality’, which examines how AI’s classificatory procedures create spaces of illegibility and in-betweenness.

Programme Committee

  • Dr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence;
  • Dr Kanta Dihal, Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence and Project Lead for the Global AI Narratives and Decolonizing AI research projects;
  • Professor Jude Browne, Director of the University of Cambridge Centre for Gender Studies and Principal Investigator for the Gender and Technology Project;  
  • Dr Eleanor Drage, Research Associate for the Gender and Technology Project; 
  • Dr Kerry Mackereth, Research Associate for the Gender and Technology Project; 
  • Tonii Leach, Research Assistant for the Global AI Narratives project.

Funding

This conference is generously funded by the Obert C. Tanner Lectures on Artificial Intelligence and Human Values; Christina Gaw; the Leverhulme Centre for the Future of Intelligence; and the University of Cambridge Centre for Gender Studies.

. . .

Image credit: Vicki Smith, Peaceful Mind, 2017, Oil on Canvas, 36 X 48 in. Bau-Xi Gallery, Toronto, Canada