
In-depth EU AI Toolkit

This AI Act Toolkit project is developing a step-by-step pro-justice compliance tool for developers of high-risk AI

Complying with the AI Act can be a daunting task for providers of high-risk AI, as it requires extensive documentation, including a risk management system, data governance measures and a declaration of conformity.
  
Our tool will assist anyone actively involved in this critical endeavour, streamlining the mandatory internal control and documentation process while also providing crucial guidance on the ethical and social considerations related to their products. The tool integrates and reorganizes the requirements of the AI Act into step-by-step instructions. By following the tool's workflow and completing the assigned tasks at every step, users can ensure that all mandatory documentation is complete and in line with regulatory guidelines. Completed documents can be downloaded from the tool and used for compliance or public disclosure. They can also be uploaded to a dedicated section of the tool for citizens and prospective clients to see, allowing providers to demonstrate the steps they have taken to go beyond the legal minimum required by the Act.
  
This tool goes beyond mere legal compliance. It prompts product managers to think critically about how AI relates to structural inequality and to engage meaningfully with the ethos of the Act. This means asking AI practitioners to consider both technical and sociotechnical responses to risk, which involves gaining an awareness of how systems can be harmful even when they are free of error or bias. The tool will prompt AI practitioners to consult with ethics experts from the product ideation stage onwards, and will also include learning devices such as short videos that give examples of risks.
 
The tool itself will be built and maintained by the AI service and education provider Ammagamma, who share our values. Ammagamma received funding for this project from the local government of the Emilia Romagna region in Italy.

Distinctive Features of the Tool 

Moving beyond the myriad impact assessments and checklists that often only flag concerns after harm has already been done, our tool encourages companies to be proactive about building good technology. It champions the combination of legal and ethical considerations, showing how they must be integrated into the fabric of AI projects from the ideation stage. Our approach involves predicting, identifying and addressing risks throughout every phase of the AI product development lifecycle.

Step-by-step Multimedia Support 

For each step we will provide a combination of short educational features, including videos, templates, links to existing resources, step-by-step guides and other types of scaffolding to support toolkit users in moving through the process. Short videos are one of the best ways of communicating information to a time-pressed audience, so we use them to introduce issues that product managers might not be familiar with. These include, for example, why a purely technical approach to de-biasing often does not work, and what measures should be taken to ensure a project is not based on pseudoscience. We have listed some of the planned videos in the overview document, but the list is not exhaustive.


Who is the Toolkit For?  

This toolkit is designed for AI practitioners who are striving not only to align with the requirements of the AI Act but also to go beyond these mandates by engaging more deeply with the ethical dimensions of AI development and deployment. It is predominantly designed for the product managers who oversee development, but it should also be usable by data scientists, engineers and business leaders.

 
Principles Underlying the Tool

Feminism, anti-racism, pro-justice.  

The steps in the overview document are based not only on the Act's requirements but also on feminist, anti-racist and disability scholarship. For example, the steps regarding complaints are influenced by Sara Ahmed's work on complaint mechanisms. While the EU is clear about its dedication to principles such as transparency and accountability, it is up to companies to decide exactly how to put these into practice. We use feminist, anti-racist and disability scholarship to concretise these principles into practical instructions.

Combining technical and structural responses to injustice  

One of the key aims of this project is to combine computer science and humanities approaches to improving AI. This means combining the best technical methods (from, for example, Hugging Face and DAIR) with a deeper understanding of how AI relates to structural inequality more broadly. We provide mechanisms to clearly explain technical choices and their implications, ensuring accountability and fostering trust with stakeholders. Beyond mere documentation, we guide projects in understanding, choosing and reasoning about key design choices from a socio-technical perspective.
 
