
This is an extract of a speech given by Line Coll, Director-General of the Norwegian Data Protection Authority, in the European Parliament on 7 March 2023. The occasion was a seminar on the EU AI Act, a new regulation proposed by the European Commission to govern artificial intelligence. In connection with the upcoming regulation, the Norwegian Data Protection Authority was invited to share reflections based on experiences from its regulatory sandbox for artificial intelligence. The speech was delivered in English, and we publish it in the language it was given in.

Introduction

We have been running a regulatory sandbox for AI since 2020, and based on this experience we can tell you one thing: artificial intelligence can indeed be explosive!

  • AI is a powerful tool, which can be technically explosive.
  • The technical side, however, goes hand in hand with the ethical and moral aspects of AI, and AI can be ethically explosive. We have seen AI that shortly after launch – we are talking hours – developed from being rather neutral to being extremely right-wing and discriminatory.
  • The recent case in the Netherlands also showed us that AI can be politically explosive. In the so-called child benefit scandal, the tax authorities used an algorithm to detect fraud, which resulted in a large group of minority families being wrongly accused of fraud. This of course had dramatic consequences for the families involved. In addition, the case proved politically explosive as well, resulting in the government stepping down.

The goal of our regulatory sandbox has been to promote the development of innovative and responsible AI solutions and to help Norwegian organizations in both the private and public sectors develop and apply ethical and privacy-friendly AI solutions.

The positive impact of many of the AI technologies we have examined in our sandbox can be as fast and as mesmerizing as fireworks – the beautiful side of explosions. We have seen cases ranging from predicting heart attacks to effectively fighting money laundering and the financing of terrorism.

However, just like fireworks, AI technologies should be handled with care, as they may sometimes have far-reaching and disruptive negative consequences too, in particular for our fundamental rights and freedoms.

What is a regulatory sandbox?

What does a sandbox look like in practice, some of you might wonder. For us, it is a method: allocating resources and providing dialogue-based guidance to selected companies over a period of four to six months – often at an early stage of the company's AI project.

A sandbox provides a controlled environment that facilitates the development and testing of innovative AI technologies under the direct supervision and guidance of a competent authority.

In other words, a sandbox is a playground where controlled “explosions” of AI can safely be tested before they happen in the real world.

It gives us, as a data protection authority, a unique chance to dig into AI-related privacy questions where there is little legal or practical precedent.

For example, we worked with the Norwegian welfare administration on a project in which they wanted to use AI to ensure better follow-up of individuals on sick leave. In this project, we learnt about the importance of transparency towards the case handler in the welfare administration, so that they can make an informed decision on whether or not to follow the algorithmic suggestion. We also learnt that the legal basis under the GDPR for using historical sick leave data to train the algorithm is unclear. For us, these are very concrete and useful takeaways.

A sandbox is also an opportunity for the authorities to gain first-hand insight into new and complex technologies, which is key to providing better case handling, investigations and guidance to both companies and citizens.

How to succeed with a regulatory sandbox?

Based on our experience, we have identified three main success factors for a sandbox to achieve its goals:

  • The AI project should be used for the greater good, and the admission process must ensure fair competition,
  • Transparency, and
  • Accountability.

The first success factor: Use for the greater good and ensuring fair competition

As for the first success factor: the AI project should be used for the greater good, and the admission process must ensure fair competition.

For sandboxes to foster both innovation and sound competition, it is key that applicants are assessed against open and transparent criteria, and that those criteria select the projects with the greatest benefits for society, not just for the participating company or organization itself.

To ensure this, in Norway, we ask ourselves and the applicants the following questions before admitting a project to our sandbox:

  • Does the project actually concern artificial intelligence?
  • Is the project likely to have a significant positive impact on individual citizens or society at large? Will the output or result of the sandbox project benefit or be relevant for other organizations as well?
  • Is the applicant likely to benefit from participating in the sandbox, beyond limiting its compliance risks?
  • Does the project fall under our supervisory remit?

These are our selection criteria; other criteria can of course be added to such a list. We would, however, consider it beneficial if, after the adoption of the AI Act, some basic generic criteria were regulated in common and applied across Europe. To achieve this, these common criteria should be set out in the AI Act itself, and not left to be decided at a later stage by the European Commission.

The next success factor: Transparency

To get the most out of a sandbox, not only for the participants themselves but also for the rest of society, it is essential to disseminate the key learnings acquired within the sandbox.

Therefore, the publication of a report after the conclusion of each sandbox project should be the rule rather than the exception.

Publication must of course be done without giving away trade secrets or other sensitive information. However, the explicit consent or agreement of the participating company should not be a precondition for publishing reports.

In Norway, we have published reports, in both Norwegian and English, after each of the 11 projects we have handled in our sandbox so far. This has been welcomed by other companies with similar projects in the pipeline – and by others, such as academia and the public sector, who are keen to learn more about AI.

For us, a key success of the sandbox has been to scale the effect of individual projects by sharing assessments and learnings from each project: by doing so, we are helping many by helping one.

The last success factor: Accountability

From our perspective, the accountability principle should remain a cornerstone of compliance, and companies should remain responsible for the conformity of the AI technologies they apply. This should be the case even if they have participated in a regulatory sandbox.

In other words, participating in a sandbox should not become or be seen as a tool to obtain a compliance rubber stamp and restrict possible fines at a later stage.

The draft AI Act contains provisions that would, if retained, provide a general liability waiver for organizations and companies that have participated in a sandbox. In our opinion, this solution would deviate from the accountability principle, a fundamental principle of compliance.

Therefore, we believe that no general liability waiver for those who have participated in a sandbox should be included in the AI Act.

The Role of Data Protection Authorities

To ensure that the “explosions” of AI are properly controlled in a sandbox, it is key to make sure that sandboxes are run by authorities that have the appropriate expertise and regulatory competence.

Given that many, if not all, AI technologies rely heavily on the use of personal data, data protection authorities would be the natural regulatory bodies under the AI Act for uses of AI that involve personal data in some form.

Data protection authorities would be particularly well placed to act as regulators of AI, thanks to their decades of experience in regulating emerging technologies and their extensive experience in balancing different rights and interests.

In any event, even if data protection authorities are not assigned the main supervisory competence under the final AI Act, they will need to be associated with the operation of an AI sandbox whenever it concerns the processing of personal data. This will have to happen every time an AI system involves the processing of personal data or otherwise falls under their supervisory remit, as identified in the Commission's proposal.

Our final remarks

We would like to give you three key takeaways from our sandbox experience.

  • First, we need to be open and transparent about the takeaways and learnings from sandbox projects. It is important to bear in mind that the sandbox process is quite resource-intensive; to justify spending our limited resources on sandbox projects, we need to help more than one organization at a time. Helping companies one by one would simply be too resource-intensive for us. The sparks and positive explosions – the beautiful fireworks – generated by an AI project in the sandbox need to reach as many as possible outside the sandbox itself, in order to foster innovation and spread good practices.
  • Second, we need to pay attention when selecting the projects we admit to a sandbox. The selection criteria must be fair and should, in our opinion, be the same throughout Europe. If we fail to provide fair and transparent selection criteria, sandboxes may risk limiting rather than strengthening competition.
  • And, third, data protection authorities are well suited to run regulatory AI sandboxes. We know how to carry out impact assessments, we have decades of experience as regulators of emerging technologies, and some of us are already successfully running AI sandboxes. Any alternative choice regarding the assignment of regulatory competences in this field should therefore be carefully considered.

Thank you for your attention.

From the live stream of the seminar on the AI Act in the European Parliament on Tuesday 7 March.