Over the past few years, we have seen artificial intelligence move from the shadows into the mainstream. From the smart music speakers in our front rooms to the digital assistant on our phones in our pockets, we probably interact with or are guided by AI on a daily basis – often without awareness of its impact or influence.
The ability of a system to rapidly classify an input, and to learn both from experience and by example, means that security vendors have been quick to use AI to underpin their threat-analysis processes. This has led to increased accuracy, faster threat recognition and an overall reduction in risk, even when pitted against the best human-crafted threats.
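To make the "classify and learn by example" idea concrete, here is a minimal, hypothetical sketch of a detector that learns from labelled events and then classifies new ones. The feature names, values and labels are invented for illustration; real products use far richer models.

```python
# Hypothetical sketch: learn per-class centroids from labelled security
# events, then classify new events by nearest centroid. All numbers
# and feature names are illustrative assumptions, not real telemetry.

def train(examples):
    """Compute a per-class mean (centroid) from labelled feature vectors."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest (squared Euclidean)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Features: [requests_per_second, failed_logins, payload_entropy]
training = [
    ([2.0, 0.0, 3.1], "benign"),
    ([1.5, 1.0, 2.9], "benign"),
    ([90.0, 25.0, 7.8], "malicious"),
    ([120.0, 40.0, 7.5], "malicious"),
]
model = train(training)
print(classify(model, [100.0, 30.0, 7.6]))  # prints "malicious"
```

The point is not the algorithm itself but the property the article describes: once trained, the system classifies each new input in microseconds and improves simply by being shown more labelled examples.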
The very same qualities that led security vendors to adopt AI have made the technology attractive to cybercriminals: an attacking system that can rapidly classify its target, learn from its failures and exploit its successes, all without human interaction, is a perfect storm for any attacker.
AI vs AI
This leads to a very strange and unknown scenario in which attack AIs are pitched against detection AIs. Unlike a human, whose choices are guided by experience, skill and bias, an AI will often deploy strategies that are unexpected. If we think of the headline AI victories over humans, such as the 4-1 defeat of Go grandmaster Lee Se-dol by Google’s AlphaGo or IBM’s Deep Blue victory over Garry Kasparov, we see that AI play can be highly unpredictable, sometimes making a seemingly reckless or oversimplified move that no right-minded human would consider. This left-field approach is then built on as part of a larger winning strategy.
As AI is pitched against AI, the battleground may become a place humans barely understand, with attack and defence strategies we cannot follow; as a result, knowing who is winning will be almost impossible. This to-ing and fro-ing may, in fact, never end, leading to a continual battle. Even when we think the battle is won, that may simply be the next stage in the attacker’s strategy, with the real battle being fought on a completely different front.
What should we do?
It is therefore important that we adopt security strategies that do not fully remove the human element. We need multiple systems that provide enough information, from enough sources, for us to determine whether a threat has in fact become a breach, so that the impact of that breach can be understood.
Once again, the assessment process is highly likely to be underpinned by some kind of AI-based decision engine that highlights possible threats that might ordinarily be missed by the human eye. However, if an attacking AI suspects it is being monitored by a defensive AI, it will likely adopt a strategy designed to evade detection.
You may ask how we should operate in this new AI frontier. In my opinion, we need to take a pragmatic approach that assumes one or more of our defences have already been breached and the intruder is well hidden. We must also validate all activity and every attempt to access systems and data, rather than depending on a single check at a single point in time.
In short, we need to adopt a zero trust model and only allow access once the burden of proof shows that the request is valid and the response will be equally uncorrupted.
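A minimal sketch of what that zero trust gate might look like in practice: every request is evaluated against several independent checks, and access is granted only if all of them pass. The check names, tokens and device identifiers below are hypothetical placeholders, not a real policy engine.

```python
# Hypothetical zero trust gate: no request is implicitly trusted, no
# matter where it originates. Every access attempt must pass every
# check. All identifiers and policies here are invented placeholders.

def valid_token(request):
    """Is the caller presenting a known credential? (placeholder allow-list)"""
    return request.get("token") in {"tok-alice", "tok-bob"}

def known_device(request):
    """Is the request coming from a registered device?"""
    return request.get("device_id") in {"laptop-7", "phone-3"}

def within_policy(request):
    """Does the requested action fall inside the caller's allowed scope?"""
    return request.get("action") == "read" and request.get("resource") == "reports"

CHECKS = (valid_token, known_device, within_policy)

def authorise(request):
    """Grant access only if *every* check passes; deny by default."""
    return all(check(request) for check in CHECKS)

granted = authorise({"token": "tok-alice", "device_id": "laptop-7",
                     "action": "read", "resource": "reports"})
print(granted)  # prints True: all checks pass
```

The design choice that matters is the deny-by-default `all(...)`: a single failed check, at any point in time, is enough to refuse the request, which is the burden-of-proof posture the article argues for.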