
2023: ACTIVITY REPORT OF THE CYBER SECURITY COALITION

"The debate on AI should go even broader in 2024"

The ethical and societal implications of Artificial Intelligence (AI) will continue to grow in the coming years. Therefore, it will be crucial that we have a broad societal debate on how to best deal with it. Regulatory initiatives play a guiding role in this process. "This is precisely why I believe the European AI Act can become very impactful,” states Professor Nathalie Smuha (KU Leuven/New York University).

Nathalie Smuha

Professor at KU Leuven & New York University

Artificial Intelligence, and generative AI more specifically, has really broken through in the past year. This immediately sparked a public debate about the desirability, the ethical implications, and the potential dangers of the technology. It also brought the need for a regulatory framework for AI to the forefront. Within academia, this discussion has been going on for several years. “I have been working on this since 2017. Today, we can really say it has become hyped up,” begins Prof. Dr. Nathalie Smuha. She specialises in the legal, ethical and societal implications of AI, and currently holds an Emile Noël Fellowship at New York University School of Law. 

The world’s first comprehensive AI regulation 

As a researcher, Smuha is well placed to survey the current debate. “What I miss in the present discussion is the acknowledgement that AI is ultimately human work. It is often presented as a technological reality we can no longer escape, leaving us to hope it won't harm us too much as humans. That narrative is obviously not true, because we as humans ultimately determine what place AI has in our lives, and not the other way around.”

The most palpable translation of this area of tension is the push for an AI-specific legislative framework. Just before the end of 2023, the European Parliament and Council reached a political agreement on the framework that AI applications in Europe must comply with. “This compromise is the result of a legislative proposal submitted in 2021, thus before ChatGPT's boom. It also builds on a set of ethical guidelines drafted by an expert group, which I coordinated in 2019,” Smuha explains. The EU AI Act will be formally adopted by both the Parliament and Council in the coming months, becoming the world’s first comprehensive AI regulation. 

Race to regulate AI  

The EU AI Act must be seen in the context of the rapid development of AI worldwide. "In Europe, we view AI and the need to tackle its impact on human rights differently from the US, where the emphasis is generally more on industry and innovation. The discrepancy is culturally determined, and related to disparate views on the roles of regulation and government, which is driven by our historical frame of reference and the political climate.”  

“Therefore, this distinction is also felt in academia, creating a distinct view on AI regulation. In the US, they take a different starting point; as they often trust the market more than the government, they are hesitant to adopt new laws that could hamper innovation. At the same time, more and more countries – including the US – are contemplating new binding or non-binding AI regulations,” Smuha says. “In other words, besides the current race to implement AI, we should also talk about a race to regulate AI.” 

Hence, the fact that Europe is the first authority to come up with a comprehensive legislative framework is of great importance. After all, this clearly reinforces the EU's regulatory position and allows it to take a leading role in AI standard-setting worldwide. “Given that even AI applications developed outside Europe will eventually have to comply with this framework if they want to be marketed in the Union, I believe there could be a movement whereby other regions of the world adopt a homogeneous set of rules, similar to the ‘Brussels effect’ we have seen with the GDPR. If Europe had not been the first to come up with such an initiative, it would have been much harder to create this dynamic. This is precisely why I believe the EU AI Act can become very impactful. However, we still need to see what kind of impact this will be."

Extending the debate even wider 

If we want this dynamic to succeed, it is crucial that the debate be opened up even further in the coming months and years. “Europe has been accused by the rest of the world of being too preoccupied with regulation. And even the best regulation will not be enough to deal with the risks of AI. Therefore, we must also ensure more training and awareness-raising. That means looking at the long term and asking ourselves what kind of society we want to live in, as well as involving the voice of the general public more than we have so far. In other words: the debate on AI should be even broader in 2024.”

This debate will also involve cybersecurity. “The more we use AI applications, the more vulnerable we become, and therefore the more need there is for solid cybersecurity systems. This applies, for instance, to self-driving cars, but equally to robots or advanced chatbots. The cybersecurity sector must thus also take up an active role in this broad debate,” Nathalie Smuha concludes.