
AI Act makes Europe a forerunner, yet there are plenty of concerns too

16 February 2024

With the new AI Act, Brussels aims to curb the growing dangers of artificial intelligence. During a well-attended meeting at House of Connections, co-organised by the Jantina Tammes School and the Security, Technology and e-Privacy (STeP) Research Group, several experts raised questions about the European AI Act. 'Still, the regulations will almost certainly be passed: let's make the best of it.'

The event, 'Navigating the EU AI Act: Innovation, Regulation, and the Future of the Digital Society', featured three panel discussions with experts on artificial intelligence, European legislation, and cybersecurity. Oskar Gstrein, Data Autonomy Theme Coordinator at the Jantina Tammes School, kicked off the afternoon. 'Since last December, there has been a political agreement on the AI Act. But there is still a lot of room for interpretation. This is why we have a lot to discuss today.'

Pioneering

According to panellist Frederik Zuiderveen Borgesius, professor of ICT and Law at Radboud University in Nijmegen, there is plenty to criticise about the text. 'It is one of the most complicated laws I have ever seen,' he said. But he also stressed the positive intentions from Brussels. The Radboud researcher pointed out that the European Union is a pioneer in regulating modern topics such as AI, online privacy, and online platforms. The United States, for example, has far fewer laws. 'The AI Act will now almost certainly be passed: let's make the best of it,' concluded Zuiderveen Borgesius.

The new AI Act aims to curb the spread of disinformation, including deepfakes, and contains regulations on facial recognition. The AI Act uses a so-called 'risk pyramid': the higher the risks, the stronger the restrictions. This should, for example, prevent governments from constructing a social credit system based on artificial intelligence. At the same time, the use of facial recognition remains possible under (very strict) conditions.
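As a rough illustration of how such a risk pyramid works (a toy sketch in Python, not legal text; the tier names and examples are paraphrased from public summaries of the Act, not quoted from the event):

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers of the AI Act's 'risk pyramid' (paraphrased)."""
    UNACCEPTABLE = "prohibited outright, e.g. social credit scoring"
    HIGH = "allowed only under strict requirements, e.g. some facial recognition"
    LIMITED = "transparency obligations, e.g. labelling deepfakes"
    MINIMAL = "largely unregulated, e.g. spam filters"

# The higher the risk, the stronger the restrictions:
for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```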

Big Tech out of the picture

Panellist Michael Veale, Associate Professor in Digital Rights and Regulation at University College London, was also critical of the new European regulations. In the AI Act, artificial intelligence is seen as a 'product', but according to Veale, this is only partly the case. He stressed that large platform providers such as Microsoft, Google and Meta are much more than just producers. This leaves them out of the picture in the new European regulations. Instead, the new act mainly targets small and medium-sized companies that apply the Big Tech companies' AI technology in their own sector, Veale said.

This view was echoed by Zuiderveen Borgesius in a brief interview during the break. 'European product safety regulations are working just fine. Consumer products are generally safe.' However, the Radboud researcher argued, the decision to also classify AI as a product is questionable. 'This is something the EU has not handled so smartly. I do think it was an honest mistake. It's not because of the Big Tech lobby.'

Wait-and-see

During the panel discussions, several experts elaborated on the possible effects of the European AI Act. Both enterprises and governments are gearing up for the new regulations, explained Bart Beima (AI Hub Noord Nederland) and Tim van den Belt (Dutch Authority for Digital Infrastructure - RDI). However, the discussions showed that there is still a lot of uncertainty about the consequences of the AI Act. Ultimately, much will depend on how judges interpret the Act, according to those present.

Biases

In the second panel discussion, led by Malcolm Campbell-Verduyn of the Faculty of Arts, Elise Niemeijer of the Dutch Vehicle Authority (RDW) similarly shared how the institution is preparing for the AI Act, and how RDW is currently dealing with artificial intelligence. In addition, Catherine Jasserand (DATAFACE, University of Groningen) talked about her research on facial recognition and 'the necessity principle'. In other words: is there evidence that AI is really necessary, or are there alternatives that work just as well and do not pose dangers to people's fundamental rights?

Malvina Nissim, Professor of Computational Linguistics and Society at the University of Groningen, pointed out the flaws of large language models (LLMs), as the training data always contain so-called 'biases'. These models, which form the basis of popular products such as ChatGPT, are trained on all available information (texts) on the internet, but is this representative? For instance, Nissim pointed to research showing that less than 15 per cent of Wikipedia contributors are women. Ultimately, this influences the 'output' of large language models.
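As a crude illustration of why that matters (a toy Python sketch, not a real language model, and not something presented at the event): a system that simply learns to reproduce corpus statistics will reproduce the imbalance in its output as well.

```python
import random

random.seed(42)

# Toy corpus skewed like the Wikipedia figure cited above:
# fewer than 15% of contributions come from women.
corpus = ["contribution by a man"] * 85 + ["contribution by a woman"] * 15

# A 'model' that only mirrors corpus frequencies inherits the skew.
generated = random.choices(corpus, k=10_000)
share = generated.count("contribution by a woman") / len(generated)
print(f"Share of women's contributions in the output: {share:.1%}")  # ~15%
```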

According to the UG researcher, it is therefore crucial to look at the whole chain when regulating, covering the input, the system itself, and the output. 'The AI Act is needed, but are we looking at the right things?' wondered Nissim. She also pointed out that many of the developments in AI are taking place outside the EU, which means we are looking at 'someone else's' product. Therefore, besides regulating, we need to invest in research, networks, and the digital literacy of the population to keep matters in our own hands, the scientist stressed.

Liability

In the final panel, led by organiser Nynke Vellinga (researcher at the STeP research group), the liability and risks of AI were discussed. Who is responsible when things go wrong? Albert Verheij, Professor of Private Law at the University of Groningen, addressed, for example, the applications of AI in a hospital environment. What happens when ICT staff have integrated AI negligently, and what if a doctor makes a mistake because of AI-based advice? Here too, much will depend on a judge's assessment, Verheij stressed.

Panel member Jurriaan Parie then discussed the Algorithm Audit initiative. He introduced the concept of 'algoprudence', or jurisprudence for algorithms, which was new to most attendees. Based on several case studies, the non-profit organisation Algorithm Audit provides advice on ethical algorithms, thus building a body of jurisprudence for AI.

Finally, cybersecurity expert Erik Rutkens reiterated the complexity of the matter. According to him, software code contains many errors, most of them unintentional. The question then arises: who is responsible if something goes wrong? It remains unclear whether there is sufficient capacity and knowledge to assess this.

Interdisciplinary

In the end, all present agreed on the importance of interdisciplinarity. After all, a lawyer is not an expert on AI, while an AI expert is normally not up to date with the latest European legislation. This was also what organiser Oskar Gstrein concluded after the event. 'You can see today that there is a great need for discussions on artificial intelligence by people from a variety of fields. This underlines the importance of an interdisciplinary approach.'

The discussion on the AI Act provided interesting insights, said the UG researcher. 'Usually legislation lags behind, but not in this case. It is fascinating to see how people react to this. You notice how hard it is for a regulator to get it right.' According to Gstrein, there was a demand in society to do 'something' about artificial intelligence. 'With the AI Act, we now have a European response. I don't think much can go wrong for the EU. The regulations are now becoming part of the process.'
