Artificial Intelligence and Robotization

Chapter in academic handbook by Martina Kunz, Seán Ó hÉigeartaigh

Artificial Intelligence and Robotization. Chapter in Robin Geiß and Nils Melzer (eds.), Oxford Handbook on the International Law of Global Security (Oxford University Press, 2020)

Abstract
This chapter provides an overview of the international law governing applications of artificial intelligence and robotics that affect global security, highlighting challenges arising from technological developments and how international regulators are responding to them. Much of the international law literature thus far has focused on the implications of increasingly autonomous weapons systems. Our contribution instead seeks to cover a broader range of global security risks resulting from large-scale diffuse or concentrated, gradual or sudden, direct or indirect, intentional or unintentional, AI- or robotics-caused harm. Applications of these technologies permeate almost every domain of human activity and thus unsurprisingly have an equally wide range of risk profiles, from a discriminatory algorithmic decision causing financial distress to an AI-sparked nuclear war collapsing global civilization. Hence, it is only natural that much of the international regulatory activity takes place in domain-specific fora. Many of these fora coordinate with each other, both within and beyond the UN system, spreading insights and best practices on how to deal with common concerns such as cybersecurity, monitoring, and reliability, so as to prevent accidents and misuse.
