On February 19 and 20, experts in AI and security from around the world took part in a Leverhulme Centre for the Future of Intelligence (LCFI) workshop in Oxford on risks associated with the misuse of artificial intelligence.
The LCFI workshop, co-chaired by Miles Brundage of the Future of Humanity Institute at Oxford and Shahar Avin of the Centre for the Study of Existential Risk, explored the likelihood and impact of dangerous uses of AI, such as automated propaganda, hacking, and weapons, and identified a series of possible solutions to be explored further.
Workshop participants came from regions around the world, including Asia, the Middle East, Europe, and North America. They represented universities, including the LCFI-affiliated Oxford, Cambridge, and UC Berkeley; non-profits such as the Electronic Frontier Foundation; and AI industry leaders including Google, OpenAI, Microsoft, and DeepMind. A research agenda and report from the workshop will be published in the coming months.