
2023: ACTIVITY REPORT OF THE CYBER SECURITY COALITION

AI cyber security challenges

“AI is a copilot who makes us switch gears faster”

The number of cyberattacks increased by a quarter in 2023, according to the annual report of technology company Microsoft. The main culprit appears to be AI, which allows hackers to launch attacks faster and on a much larger scale. A new approach is therefore required, one in which Microsoft uses AI to better protect itself and its customers. Meet the Secure Future Initiative.

David Dab

National Technology Officer at Microsoft

ChatGPT is just over a year old, and the arrival of the smart chatbot caused great enthusiasm about AI worldwide. Now that more and more people are using generative AI, companies are gradually realizing its added value. David Dab, Microsoft’s National Technology Officer: “AI is not new, but we have passed an important turning point, because for the first time many people can understand and see what the impact of AI can be. ChatGPT has really democratized AI, not just in terms of awareness, but also in sheer power and ease of use.”

Yet the technology raises concern among many people. “It is simply not always used for good purposes. For example, hackers can develop key components of a large-scale, sophisticated attack with the snap of a finger,” David continues. In response, Microsoft has set up the Secure Future Initiative. “It has three pillars, focused on AI-based cyber defences, advances in fundamental software engineering, and advocacy for stronger application of international norms to protect civilians from cyber threats. With regard to our products, we will use AI to detect threats faster and more effectively. After all, only by fighting with equal weapons can we make the lives of hackers a lot more difficult and their criminal business less attractive.” 

AI technology can also help address the shortage of cyber professionals. “We cannot ignore it: we will not get there with people alone. That is why AI in cybersecurity is an absolute must. At Microsoft we’ve developed our own AI tool: Microsoft Security Copilot. It is an assistant that can take over a number of routine tasks from the cyber professional. AI can make very accurate vulnerability analyses and predictions based on data. This should help cyber professionals identify suspicious activities more quickly.”

Stronger together 

The use of AI in new and more secure software is an important step forward. Microsoft is also taking measures to combat identity fraud. “This is crucial because in the past year the number of cases of identity theft has increased tenfold. Therefore, we will also develop a new protocol and more consistent method for account verification. Finally, if we look at the speed at which the attacks are coming our way today, we simply need to respond faster and implement our security updates more quickly to ensure that there are no holes in our defence wall. AI will contribute to this as well.” 

In this way, Microsoft hopes to better protect itself and its customers against the increasingly bold attack methods of hackers. “In that context, cooperation between the public and private sectors must also be intensified. I see an important role for the Cyber Security Coalition in this because it is precisely by sharing experiences with each other and joining forces that we can make progress faster. More than ever, cyber security is a shared responsibility,” concludes David Dab. 

“Every machine learning stage should be protected diligently”

Artificial Intelligence (AI) is an incredibly useful technology in many domains. But it can also be deployed by cybercriminals to disrupt machine learning models, potentially affecting the final applications. Despite the possibly serious consequences, this aspect of the relationship between AI and cybersecurity is vastly underexposed. “We cannot and should not leave software developers alone in this challenge,” Sabri Skhiri of Euranova states.

Sabri Skhiri

CTO and Research Director at Euranova

The explosion in the use of AI is undoubtedly the most discussed topic of the year within the IT world. This new reality generates a myriad of new cybersecurity challenges. “Traditionally, this tension materialises in two ways. On the one hand, there is the use of AI to create smarter attack models. There are numerous examples of this. On the other hand, you also see an increase in the use of AI models for remediation during or after an attack,” clarifies Sabri Skhiri, CTO and Research Director at Euranova, a Walloon company that develops AI models for a wide range of international customers.  

Bypassing face recognition technology 

“But there is a third, as yet largely underexposed, mode, in which AI gets in the way of cybersecurity: when AI is deployed to disrupt the machine learning process itself,” he continues. “This is very impactful, because of the consequences for the final behaviour of the software model. You can compare it to a traffic light being covered and thus not visible, so that the data that ends up in a traffic model about the crossroads no longer matches reality. As a result, the optimisation of traffic flows, which was the purpose of the model, will not be achieved.”
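To make that mechanism tangible, the following minimal sketch (a hypothetical Python illustration, not Euranova's model) trains a small classifier twice, once on clean data and once after a share of the training labels has been flipped, much like sensor readings that no longer match reality:

```python
# Minimal, hypothetical illustration of training-data poisoning:
# a fraction of the training labels is flipped, mimicking sensor
# data that no longer matches reality (e.g. a covered traffic light).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the observations feeding a model.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(poison_fraction: float) -> float:
    """Train on labels where `poison_fraction` of them have been flipped."""
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for fraction in (0.0, 0.1, 0.3):
    print(f"{fraction:.0%} poisoned labels -> "
          f"test accuracy {accuracy_with_poisoning(fraction):.2f}")
```

Even in this toy setup the poisoned model trains without any visible error; it simply becomes quietly less reliable, which is exactly why the training stage itself needs to be monitored.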

Other, potentially more harmful, examples of this use of AI can be found in face recognition. For example, researchers from Vietnam built a 3D face mask capable of bypassing Apple's Face ID technology. An even stronger illustration is the Italian startup Cap_able, which is marketing a clothing line that - when it encounters the AI algorithms in cameras performing face recognition - ensures that the wearer is no longer identified based on their face, but instead is recognised by the software as an animal. “These developers want to address what they see as the problematic nature of facial recognition. Such examples show the complexity of the issue, and the need for more attention to this area of tension,” Skhiri explains. 

Security by design 

This is precisely why Skhiri advocates more active awareness of the potential consequences of interacting with AI during the development phase of software models. “We cannot and should not leave software developers alone in the challenge. The potential consequences are too serious. More and more demonstrative use cases are also emerging. Each such case is implicit evidence that people underestimate this aspect of the relationship between AI and cybersecurity.” 

In other words, the construction process must always be accompanied by clear security tailored to this development. “We must strive for a real translation of the principle of security by design. In the current reality, however, it is not as obvious as one might think,” Skhiri continues. “The sharp growth of AI has clearly led to a race between companies, which are therefore freeing up very large amounts of resources and manpower and, as a result, are sometimes not sufficiently aware of the risks they are exposing themselves to.” 

Sharing own experiences  

Euranova itself came into extensive contact with this complex field of tension in recent months. For instance, for Eurocontrol, the organisation responsible for the safety of European airspace, the Walloon firm developed a new AI model to increase the efficiency and safety of air traffic. “The objective was to increase predictability, which would allow better anticipation of potential problems,” explains Eric Delacroix, co-founder and CEO of Euranova.
 
Such highly sensitive and therefore highly secure datasets are, in practice, very clearly isolated from the rest of the world. “This means that the case cannot really be used as an example to demonstrate the issue. Nevertheless, it did contribute to us as a company starting to think about the issue even more intensively,” Skhiri adds. “And that is precisely why we joined the Belgian Cyber Security Coalition this year. After all, we believe very clearly in the model of knowledge sharing.” 

“The advent of generative AI enables a new perspective”

The impact of generative artificial intelligence (AI) on cybersecurity is hotly debated. But the nature of the discussion needs to change if we are to be able to truly seize the opportunities AI promises. “It will play an important role in orchestrating the most efficient recovery path.”

Vinod Vasudevan

Global CTO MDR & Global Deputy CTO Cybersecurity Services for Eviden

The complex relationship between generative AI and cybersecurity has become one of the most discussed topics within the industry. Currently, the main focus is on using AI to improve cybersecurity detection. This makes sense, because the technology has proven extremely effective in this domain. “However, what gets too little attention within this debate is the use of AI for automating responses in the event of large-scale and complex attacks,” Vinod Vasudevan remarks. He is Global CTO MDR & Global Deputy CTO Cybersecurity Services for Eviden, the ATOS business leading in digital transformation, cloud, big data and cybersecurity. 

As a result, today’s discussions are missing out on important opportunities. “The advent of generative AI enables a new perspective: we can deploy AI assistants to manage attacks, working together with security analysts. But this means we need to begin looking at AI more as a knowledge base for experts, who can thus learn to respond even better to complex attacks. We should tell that story more often,” Vasudevan states. 

Paradigm shift on the horizon  

In other words: the use of AI will change the way the cybersecurity field operates in the coming years, requiring a further expansion of its role. For instance, the further roll-out of self-healing endpoints will cause a paradigm shift in managed detection and response (MDR). “Moreover, AI will play an important role in orchestrating the most efficient recovery path for a business after a major cybersecurity incident,” Vasudevan explains.

“This will include the use of AI for determining the Recovery Time Objective (RTO), Recovery Point Objective (RPO), application interdependencies, and business operations linkages, and using this understanding to generate recovery steps for resuming different business lines,” he continues. This paradigm shift is in fact necessary, as the use of multi-cloud, hybrid infrastructure will be pushed ever further in the coming years. Recovery paths capable of dealing with such a complex reality are by definition built on AI algorithms.
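What orchestrating such a recovery path could look like can be sketched very simply. The snippet below is a hypothetical Python illustration, not Eviden's tooling; the application names, dependencies and RTO values are invented. It derives a recovery order from application interdependencies and adds up a naive time estimate:

```python
# Hypothetical sketch: derive a recovery order from application
# interdependencies and RTOs. Real MDR orchestration would also weigh
# RPOs, business-line priorities and live telemetry.
from graphlib import TopologicalSorter

# app -> (RTO in minutes, set of applications it depends on)
applications = {
    "database": (30, set()),
    "auth":     (20, {"database"}),
    "payments": (60, {"database", "auth"}),
    "webshop":  (45, {"payments", "auth"}),
}

# Dependencies must be restored before the applications that rely on them.
sorter = TopologicalSorter({app: deps for app, (_, deps) in applications.items()})
recovery_order = list(sorter.static_order())

elapsed = 0
for app in recovery_order:
    rto, _ = applications[app]
    elapsed += rto  # naive sequential estimate; parallel restores would shorten this
    print(f"restore {app:<8} (RTO {rto:>2} min) -> cumulative {elapsed} min")
```

The point of the sketch is the ordering, not the numbers: once interdependencies are captured explicitly, an AI-driven orchestration layer has something concrete to optimise against.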

“Approaching this environment through AI will also allow MDR to increasingly focus on prevention in the near future,” Vasudevan says. That preventative approach, even on a day-to-day level, will have a major impact on MDR operations. “A practical example of prevention is the use of policy management in MDR. Because the cloud has thousands of configurations that keep changing, a small alteration can potentially lead to a costly breach. It could be as simple as an S3 bucket permission becoming public; this bucket could be storing critical customer information, which the organisation might not even realise. It is therefore important to continuously push configuration policies across all cloud workloads, detect critical configuration changes, and restore the configuration to a secure state.”
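The S3 example can be sketched in a few lines. The snippet below is a minimal Python illustration (assuming boto3 and AWS credentials are available; it is a simplified sketch, not Eviden's platform) that scans buckets for a missing or weakened public-access block and re-applies the secure configuration:

```python
# Hypothetical sketch of configuration policy enforcement: detect S3
# buckets whose public-access block has been removed or weakened, and
# re-apply the secure configuration. Assumes boto3 and AWS credentials.
import boto3
from botocore.exceptions import ClientError

SECURE_BLOCK = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
    except ClientError:
        config = {}  # no public-access block configured at all
    if any(not config.get(key, False) for key in SECURE_BLOCK):
        print(f"[ALERT] {name}: public access not fully blocked, remediating")
        s3.put_public_access_block(
            Bucket=name, PublicAccessBlockConfiguration=SECURE_BLOCK
        )
```

In an MDR context a check like this would run continuously and feed an alerting pipeline rather than remediating silently, but the detect-and-restore loop is the same.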

Countering fragmentation  

For this shift to come to fruition, however, the organisation of the cybersecurity sector also requires work. It remains too fragmented, according to Vinod Vasudevan: “Cloud and digital transformation have made the IT landscape dynamic, leading to exploitable opportunities for cybercrime syndicates. Consequently, numerous specialised areas of attack have emerged. Therefore, the market is characterised by a continuous need for innovation, which is met by a large ecosystem of innovative start-ups. This in turn leads to fragmentation.” 

So while this fragmentation is a logical consequence of the relative youth of the sector, it must decrease in the near future. “The implementation of innovative architectures such as Cybersecurity Mesh Architecture (CSMA) will contribute to increased maturity,” Vasudevan adds. “In this way, the sector will also be progressively able to meet security and recovery challenges - which will only increase exponentially with the upcoming AI revolution.” 

“The debate on AI should be even broader in 2024”

The ethical and societal implications of Artificial Intelligence (AI) will continue to grow in the coming years. Therefore, it will be crucial that we have a broad societal debate on how to best deal with it. Regulatory initiatives play a guiding role in this process. “This is precisely why I believe the European AI Act can become very impactful,” states Professor Nathalie Smuha (KU Leuven/New York University).

Nathalie Smuha

Professor at KU Leuven & New York University

Artificial Intelligence, and generative AI more specifically, has really broken through in the past year. This immediately sparked a public debate about the desirability, the ethical implications, and the potential dangers of the technology. It also brought the need for a regulatory framework for AI to the forefront. Within academia, this discussion has been going on for several years. “I have been working on this since 2017. Today, we can really say it has become hyped up,” begins Prof. Dr. Nathalie Smuha. She specialises in the legal, ethical and societal implications of AI, and currently holds an Emile Noël Fellowship at New York University School of Law. 

The world’s first comprehensive AI regulation 

As a researcher, Smuha is well placed to oversee the current debate. “What I miss in the present discussion is the acknowledgement that AI is ultimately human work. It is often presented as a technological reality we can no longer escape, and all we can do is hope it won't harm us too much as humans. That narrative is obviously not true, because we as humans ultimately determine what place AI has in our lives, and not the other way around.”

The most palpable translation of this area of tension is the push for an AI-specific legislative framework. Just before the end of 2023, the European Parliament and Council reached a political agreement on the framework that AI applications in Europe must comply with. “This compromise is the result of a legislative proposal submitted in 2021, thus before ChatGPT's boom. It also builds on a set of ethical guidelines drafted by an expert group, which I coordinated in 2019,” Smuha explains. The EU AI Act will be formally adopted by both the Parliament and Council in the coming months, becoming the world’s first comprehensive AI regulation. 

Race to regulate AI  

The EU AI Act must be seen in the context of the rapid development of AI worldwide. “In Europe, we view AI and the need to tackle its impact on human rights differently from the US, where the emphasis is generally more on industry and innovation. The discrepancy is culturally determined and related to disparate views on the roles of regulation and government, views that are driven by our historical frame of reference and the political climate.”

“Therefore, this distinction is also felt in academia, creating a distinct view on AI regulation. In the US, they take a different starting point; as they often trust the market more than the government, they are hesitant to adopt new laws that could hamper innovation. At the same time, more and more countries – including the US – are contemplating new binding or non-binding AI regulations,” Smuha says. “In other words, besides the current race to implement AI, we should also talk about a race to regulate AI.” 

Hence, the fact that Europe is the first authority to come up with a comprehensive legislative framework is of great importance. After all, this clearly reinforces the EU's regulatory position and allows it to take a leading role in terms of AI standard-setting worldwide. “Given that even AI applications developed outside Europe will eventually have to comply with this framework if they want to be marketed in the Union, I believe there could be a movement whereby other regions of the world will adopt a homogeneous set of rules, similarly to the ‘Brussels effect’ we have seen with the GDPR. If Europe had not been the first to come up with such an initiative, it would have been much harder to create this dynamic. This is precisely why I believe the EU AI Act can become very impactful. However, we still need to see what kind of impact this will be.”

Extending the debate even wider 

If we want this dynamic to succeed, it is crucial that the debate be opened up even further in the coming months and years. “Europe has been accused by the rest of the world of being too preoccupied with regulation. And even the best regulation will not be enough to deal with the risks of AI. Therefore, we must also ensure more training and awareness-raising. That means looking at the long-term and asking ourselves what kind of society we want to live in, as well as involving the voice of the general public more than we have so far. In other words: the debate on AI should be even broader in 2024.”  

This debate will also involve cybersecurity. “The more we use AI applications, the more vulnerable we become, and therefore the more need there is for solid cybersecurity systems. This applies, for instance, to self-driving cars, but equally to robots or advanced chatbots. The cybersecurity sector must thus also take up an active role in this broad debate,” Nathalie Smuha concludes.