

“Every machine learning stage should be protected diligently”

Artificial Intelligence (AI) is an incredibly useful technology in many domains. But it can also be deployed by cybercriminals to disrupt machine learning models, potentially affecting the final applications. Despite the possibly serious consequences, this aspect of the relationship between AI and cybersecurity is vastly underexposed. “We cannot and should not leave software developers alone in this challenge,” Sabri Skhiri of Euranova states.

Sabri Skhiri

CTO and Research Director at Euranova

The explosion in the use of AI is undoubtedly the most discussed topic of the year within the IT world. This new reality generates a myriad of cybersecurity challenges. “Traditionally, this tension materialises in two ways. On the one hand, there is the use of AI to create smarter attack models. There are numerous examples of this. On the other hand, you also see an increase in the use of AI models for remediation during or after an attack,” clarifies Sabri Skhiri, CTO and Research Director at Euranova, a Walloon company that develops AI models for a wide range of international customers.

Bypassing face recognition technology 

“But there is a third - as yet largely underexposed - mode, where AI gets in the way of cybersecurity. And that is when AI is deployed to disrupt the machine learning process itself,” he continues. “This is very impactful, because it affects the ultimate operation of the software model. You can compare it to a traffic light being covered and thus not visible, which means that the data that ends up in a traffic model about the crossroads does not match reality. As a result, the optimisation of traffic flows, which was the purpose of the model, will not be achieved.”

Other, potentially more harmful, examples of this use of AI can be found in face recognition. For example, researchers from Vietnam built a 3D face mask capable of bypassing Apple's Face ID technology. An even stronger illustration is the Italian startup Cap_able, which is marketing a clothing line designed to fool the AI algorithms in cameras performing face recognition: the wearer is no longer identified by their face, but is instead classified by the software as an animal. “These developers want to address what they see as the problematic nature of facial recognition. Such examples show the complexity of the issue, and the need for more attention to this area of tension,” Skhiri explains.

Security by design 

This is precisely why Skhiri advocates more active awareness of the potential consequences of interacting with AI during the development phase of software models. “We cannot and should not leave software developers alone in this challenge. The potential consequences are too serious. More and more demonstrative use cases are also emerging. Each such case is implicit evidence that people underestimate this aspect of the relationship between AI and cybersecurity.”

In other words, the construction process must always be accompanied by clear security tailored to this development. “We must strive for a real translation of the principle of security by design. In the current reality, however, it is not as obvious as one might think,” Skhiri continues. “The sharp growth of AI has clearly led to a race between companies, which are therefore freeing up very large amounts of resources and manpower and, as a result, are sometimes not sufficiently aware of the risks they are exposing themselves to.” 

Sharing its own experiences

Euranova itself came into extensive contact with this complex field of tension in recent months. For instance, for Eurocontrol, the body responsible for the safety of air traffic in European airspace, the Walloon firm developed a new AI model to increase the efficiency and safety of air traffic. “The objective was to increase predictability, which would allow better anticipation of potential problems,” explains Eric Delacroix, co-founder and CEO of Euranova.

Such highly sensitive and therefore highly secured datasets are, in practice, very clearly isolated from the rest of the world. “This means that the case cannot really be used as an example to demonstrate the issue. Nevertheless, it did lead us, as a company, to start thinking about the issue even more intensively,” Skhiri adds. “And that is precisely why we joined the Belgian Cyber Security Coalition this year. After all, we believe very clearly in the model of knowledge sharing.”