This research exercise aims to bring an interdisciplinary approach to the question of regulating autonomous weapons systems.
One strong present incentive for the development of high-level AI is its military potential, particularly in autonomous weapons.
Concerns about this development have already been raised within the AI community, among lawyers and ethicists, and in informal expert discussions at the UN. In 2017, UN discussions on autonomous weapons will be taken up for the first time by a Group of Governmental Experts within the framework of the Convention on Conventional Weapons (CCW).
The LCFI research exercise aims to connect the communities concerned with the military applications of AI and to link them to relevant expertise in international law and the history of arms control.
This exercise is also a starting point for the study of many wider legal questions that AI presents. Legal systems have long considered who ought to have decision-making authority, but until now the decision-makers have been human beings, acting individually or as institutions. AI raises the question of whether, and to what extent, we should entrust decisions (including decisions that carry significant and irreversible consequences) to non-human actors. There are therefore links to several other projects, such as Trust and Transparency, Agency and Persons, and Politics and Policy.