Cybersecurity and the race to leverage AI

Artificial intelligence (AI) is being touted as a key weapon in the fight against cybercriminals, but in reality it has some way to go before it becomes a fully fledged part of any enterprise’s arsenal.

AI has enormous potential in cybersecurity, but it is still largely unrealized. Nonetheless, a number of vendors already promote AI in their marketing messages when, in truth, they offer only a few AI capabilities and rely largely on machine learning (ML).

AI is, in essence, a system capable of intelligent behavior: one that can sense, reason and adapt, for example. ML, in contrast, learns from large data sets to find common patterns and similarities, and its algorithms improve as they are exposed to more data. The two are connected: AI uses ML to mimic human intelligence, observing its surroundings, for example, to make informed decisions.

Using big data

At present, the cybersecurity industry can’t rely solely on AI to address all security cases, as there isn’t yet enough data available to fully train algorithms. This will come in time. At Orange Cyberdefense, for example, we are currently building an AI solution using big data from cybersecurity applications. The more data we have, the better we can train our models. This is an area we are continually assessing, both in terms of use cases and detection models.

Orange Cyberdefense has invested heavily in research, development and start-ups over the past few years. We are actively working on improving AI’s detection and analysis capabilities by training AI algorithms, in pursuit of our goal of 100 percent accuracy in threat detection.

For example, AI using automatic analysis and learned algorithms can screen emails for malicious phishing. It can do this much faster and more accurately than humans, thereby providing fully automated qualification.
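As a minimal sketch of what such automated qualification can look like, the toy example below trains a text classifier on a handful of labeled emails with scikit-learn. The tiny inline dataset, the TF-IDF features and the logistic regression model are illustrative assumptions, not Orange Cyberdefense’s implementation.

```python
# Minimal sketch of an ML phishing classifier (illustrative only;
# a production system would train on large labeled corpora).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password at this link now",
    "Urgent: confirm your banking details to avoid suspension",
    "Meeting moved to 3pm, see the updated agenda attached",
    "Quarterly report draft is ready for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

incoming = ["Please verify your password immediately via this link"]
print(model.predict(incoming))  # e.g. [1] -> flag as suspected phishing
```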

At the stage the industry is at now, we are talking about augmented intelligence for humans, delivered by ML systems. These ML algorithms train on large data sets to seek out suspicious activity on networks, for example, on a per-use-case basis. Humans are still required to control, validate and review the output of these systems.

The power of ML

We use machine learning extensively at Orange Cyberdefense. ML algorithms, for example, can be trained on massive lists of malware to scan for malicious programs. Obviously this is an ongoing process, as the lists must be continually updated in line with emerging threats.
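To make this concrete, here is a hedged sketch of how a classifier might be trained on labeled samples so that it can generalize beyond a static list. The static features (byte entropy, import count, file size) and the toy values are hypothetical stand-ins for real static-analysis features.

```python
# Illustrative sketch: classifying programs from static features.
# The features and toy values below are hypothetical examples.
from sklearn.ensemble import RandomForestClassifier

# Each row: [byte entropy, number of imported APIs, file size in KB]
X_train = [
    [7.8, 3, 120],    # packed binary, few imports -> malware sample
    [7.5, 5, 340],    # malware sample
    [5.1, 210, 890],  # benign application
    [4.8, 160, 450],  # benign application
]
y_train = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

unknown = [[7.6, 4, 200]]  # a new, unseen program
print(clf.predict(unknown))  # e.g. [1] -> flag for analyst review
```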

Every day, enterprises are flooded with network data and cybersecurity events that need to be analyzed and, in some cases, remediated. It is impossible for humans to effectively analyze all these alerts. ML can be trained to pick up anomalies in traffic flow, providing humans with the intelligence they need for smart decision making.
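One common way to do this, sketched below under assumed flow features, is unsupervised anomaly detection: train on normal traffic and flag flows that deviate from it. The isolation forest and the feature set here are illustrative choices, not a description of any particular product.

```python
# Sketch of unsupervised anomaly detection on network flows.
# The flow features and toy values are illustrative assumptions.
from sklearn.ensemble import IsolationForest

# Each row: [bytes sent, packet count, distinct destination ports]
normal_flows = [
    [1200, 10, 1], [1500, 12, 2], [900, 8, 1], [1100, 9, 1],
    [1300, 11, 2], [1000, 10, 1], [1400, 12, 1], [950, 9, 2],
]

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_flows)

# A flow touching hundreds of destination ports looks like a scan
suspect = [[1250, 300, 450]]
print(detector.predict(suspect))  # -1 = anomaly -> surface to an analyst
```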

ML is also invaluable in supporting human expertise by taking over routine tasks, such as matching incidents to the appropriate level of urgency. This frees up analysts to spend their time on higher-level investigation work.
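A triage step like this can start out as nothing more than a small, auditable rule set that a learned model later refines. The sketch below is a hypothetical example; the alert fields and thresholds are invented for illustration.

```python
# Hedged sketch of automated incident triage; the alert fields
# and thresholds are hypothetical, for illustration only.
def triage(alert: dict) -> str:
    """Assign an initial priority so analysts see the worst first."""
    if alert["asset_criticality"] == "high" and alert["confidence"] > 0.8:
        return "P1"  # immediate human investigation
    if alert["confidence"] > 0.5:
        return "P2"  # investigate during the shift
    return "P3"      # batch review

print(triage({"asset_criticality": "high", "confidence": 0.9}))  # P1
print(triage({"asset_criticality": "low", "confidence": 0.2}))   # P3
```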

Analyst firm Gartner forecasts that 75 percent of security software tools will incorporate prescriptive analytics based on heuristics, AI or ML algorithms by 2020. However, it will be a few more years before we see AI come into its own in the security space.

What to expect from AI

When it arrives, AI will provide advanced detection, qualification and analysis alongside remediation. For remediation, however, it will need to know categorically what it is and is not allowed to do, for example, in which contexts it may remediate. Fully automated remediation must make sure that its impact won’t be worse than the attack itself.
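One simple way to express such boundaries, sketched here with invented action and context names, is an explicit allowlist that gates every automated action and escalates anything outside it to a human.

```python
# Illustrative guardrail: automated remediation runs only when the
# action/context pair is explicitly allowed (names are hypothetical).
ALLOWED_ACTIONS = {
    ("block_ip", "perimeter_firewall"),
    ("quarantine_file", "endpoint"),
}

def remediate(action: str, context: str) -> bool:
    """Execute an action only inside its approved context."""
    if (action, context) not in ALLOWED_ACTIONS:
        print(f"Refusing {action} in {context}: escalating to a human")
        return False
    print(f"Executing {action} in {context}")
    return True

remediate("block_ip", "perimeter_firewall")  # allowed -> runs
remediate("shutdown_server", "production")   # not allowed -> escalates
```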

At Orange Cyberdefense, we are working on two different training systems, one for detection and one for remediation, to address this point. Why? Because remediation needs 100 percent accuracy every time, or it can have serious, uncontrolled side effects. In detection, by contrast, a false positive or even a missed detection is unwelcome but not potentially catastrophic.

We have a long journey to travel, however, before we get to AI-controlled cybersecurity: machines talking to machines, with fully automated detection and remediation. This will bring cultural change, altering contractual and operational models. It is something we are preparing our customers for now.

Enterprises will need to accept that cybersecurity decisions are being made by machines. They will need to ask themselves if they trust machines to make decisions on their behalf. This, without doubt, is one of the biggest challenges for AI.

In addition, it takes time to train AI systems. Customers will therefore need to trust partners with access to their data so that algorithms can be trained to deliver the security they demand.

AI will fundamentally change cybersecurity operations. Jobs and the expertise required will be different. Automation and interaction between humans and machines will need to be fully supported by enterprises, and it is something they will need to adapt to sooner rather than later.

Discover the six steps you need to take to get on top of cyber threats.

Rodrigue Le Bayon

Rodrigue Le Bayon is Head of SOC at Orange Cyberdefense. He has spent his entire career within the Orange Group and has been helping to develop Orange’s cybersecurity activities since 2008.