Freiburg researchers collaborated on European Law Institute model rules on artificial intelligence, to be presented in a webinar in April
The European and global legal community is currently engaged in an intensive debate on how to appropriately regulate the development and use of artificial intelligence (AI). The aim should be to make optimal use of its potential while avoiding undesirable risks to fundamental rights and the public good. Scholars at the European Law Institute (ELI) have now developed model rules providing a comprehensive and risk-adjusted impact assessment of algorithmic and, in particular, AI-based decision-making systems used by public administration.

Public authorities use AI for tasks such as fighting money laundering and tax evasion, or for improving public services in areas such as energy supply and healthcare. For this specific context of government AI systems, the ELI model rules complement the more general legislation on the marketing of AI technologies in the EU common market that is currently being debated in EU legislative bodies. "The ELI model rules are inspired by EU law and compatible with existing EU law, but were designed to be independent of EU law, allowing them to also be used as a model by countries outside of the EU," explains Jens-Peter Schneider from the University of Freiburg’s Institute of Media and Information Law.
Guide aims to boost confidence in AI technology
Schneider was one of three reporters appointed by the ELI to lead an international project team, whose members also included Freiburg early-career researcher Jonathan Dollinger. The ELI unites more than 1,600 fellows from legal scholarship and practice as well as around 110 institutional members from across Europe, and regards itself as a legal policy think tank. "The ELI model rules propose regulating AI used by public administration in a way that does not hinder innovation but at the same time provides solid safeguards to boost citizen confidence in the use of technology in this area," explains the Freiburg jurist.
Differentiation between systems instead of a one-size-fits-all approach
Technologies like AI can play an important part in modernizing public administration and improving the ways in which it functions. Because AI applications depend on data, it is necessary to ensure the transparency, correctness, and reliability of both the applications and the data they use. "Trustworthy AI in public administration requires a high level of reliability with regard to the technologies used, as well as the protection of citizens against discrimination and other violations of fundamental rights," says Schneider of the problems raised by algorithmic decision-making.
The main idea underlying the ELI model rules is impact assessment. Because the wide variety of situations in which algorithmic decision-making is employed precludes a one-size-fits-all approach, the model rules distinguish between high-risk systems that require an impact assessment, low-risk systems that do not, and systems that cannot be classified without examining the specific context in which they are used. Building on this tiered classification, the researchers developed 16 articles for the ELI that set out the impact assessment procedure and specify additional safeguards for high-risk systems. "In the case of high-risk systems, for example, the articles provide for a review of the government impact assessment by independent experts and for public participation, unlike the regulatory proposals currently under discussion at the EU level," says Schneider. In addition, the model rules respond to the particularly dynamic risks of machine learning by requiring impact assessments that are both tied to specific situations and periodically repeated.
Webinar on the ELI model rules on 13 April 2022