AI Security vs AI Cybercrime – A Game of Cat and Mouse

With cybercrime continuing to rise and become ever more complex, Philip Bindley, Managing Director of Cloud and Security at Intercity, discusses how artificial intelligence (AI) is being ‘weaponised’ by cybercriminals.

Earlier this year, the UK government released the results of its seventh Cyber Security Breaches Survey, exploring cyber security policies and the threat landscape for businesses of all sizes. The report found that 39% of UK businesses identified a cyber-attack in the last 12 months, with phishing named as the most common threat, reported by 83% of the businesses that identified an attack.

The creation of phishing and spear-phishing attacks, where artificial intelligence is used to craft the content of an email to make it look more genuine, is just one example of how cybercriminals have weaponised AI to infiltrate organisations and raise the cyber security threat level.

These increasingly sophisticated emails often use AI to emulate the tone of voice and mimic the content of previously sent emails. They may even include a topic that is currently being discussed within the organisation – perhaps a new product or service – which someone from outside the organisation wouldn’t, or at least shouldn’t, know about. The purpose is to make these emails, and any action required of employees, as convincing and realistic as possible, encouraging someone to act and giving cybercriminals access to the organisation’s wider systems and data.

Another example is intelligent malware: attackers alter AI algorithms to create malware that learns how to bypass the security tools used by an organisation in a way that doesn’t look suspicious. Over time, this technique allows cybercriminals to slowly infiltrate an organisation and cause harm.

How can organisations use AI to fight back? 

Traditional mechanisms for identifying cybersecurity threats have often relied on recognising a pattern or a known threat. AI can augment that in a number of ways by learning what “normal” behaviour looks like and creating a baseline for the kinds of events that occur in everyday operations. AI can then flag events that fall outside the norm to make users aware of a potential threat.
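
To make this concrete, here is a minimal, hypothetical sketch in Python of the baseline-and-flag approach. The event counts and the three-standard-deviation threshold are illustrative assumptions, not any particular product’s implementation.

```python
from statistics import mean, stdev

# Hypothetical hourly counts of outbound connections from one device,
# collected during a "learning" period of normal operations.
baseline = [42, 39, 45, 41, 40, 44, 38, 43, 46, 40]

def is_anomalous(observed: float, history: list[float], threshold: float = 3.0) -> bool:
    """Flag an observation that falls more than `threshold` standard
    deviations away from the learned baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > threshold * sigma

# 40 connections per hour looks normal; 400 is flagged for review.
for count in (40, 400):
    print(count, "anomalous?", is_anomalous(count, baseline))
```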

However, this method of identifying threats, such as a piece of ransomware, relies on the AI spotting activity that does not normally occur – for example, a device trying to communicate with a system based in a different country, or large amounts of data being sent to a destination that does not normally receive such payloads. Because unusual activity is not always malicious, this approach can generate false positives and is not, on its own, a reliable way to confirm true threats.

Visibility and vigilance are key. AI-based technologies help by allowing organisations to spot activity that might not in itself signal a breach but could be the precursor to something more serious. Here are some notable examples:

• Email scanning: The general consensus is that traditional methods of email scanning are not always effective at identifying potentially malicious emails. AI email scanning, however, is forward-looking and can judge whether or not an email is potentially malicious based on its content and tone, and the actions it is trying to solicit (a toy sketch of this idea follows the list below).

• Antivirus products: AI in antivirus products is predominantly designed to detect anomalies, which creates a challenge in finding the right balance between the ‘machine’ making all the decisions and the level of human intervention.

Overzealous technology could potentially flag and block legitimate traffic and programs. A blend of AI and human oversight is arguably the best way forward. Endpoint Detection and Response (EDR) is the favoured approach for combining the two: AI-driven technology deployed on the endpoint protects against and flags known threats, and human analysts then diagnose the situation further and choose the best course of action.

• Machine Learning (ML): ML provides a huge advantage when looking to spot behaviour that is out of the ordinary. In security products, ML is designed to learn how a system normally behaves under various conditions, helping the AI to sharpen its anomaly-detection capabilities.

However, ML is less helpful when it comes to detecting a potential piece of malicious code or a suspicious email based on patterns or acquired knowledge of what something bad might look like. That said, both networks and endpoints can benefit from a layer of ML that builds a picture of what normal network traffic or device activity looks like, and can report abnormalities, or even block traffic and access to systems, when something unusual is detected.

• Natural language processing (NLP): NLP forms another layer in the security paradigm, making the detection of malicious code easier. There are often patterns in the way harmful code is written that point to a likely threat based on the attacker’s approach. Common patterns in the language or code can also point to the individuals or groups creating the malware, identifying the attacker’s ‘voice’ and potentially the individual behind the attack (see the second sketch after this list).
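
As a toy illustration of the email-scanning idea above, the sketch below trains a simple bag-of-words classifier with scikit-learn. The training emails and labels are invented for the example; a production scanner would draw on far richer signals, such as tone, solicited actions and sender history.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data: 1 = malicious, 0 = legitimate.
emails = [
    "Urgent: verify your account password immediately via this link",
    "Your invoice is overdue, click here to avoid suspension",
    "Minutes from yesterday's product planning meeting attached",
    "Reminder: team lunch on Friday at noon",
]
labels = [1, 1, 0, 0]

# TF-IDF features feed a naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

suspect = ["Please confirm your password urgently using the link below"]
print(model.predict(suspect))        # -> [1], i.e. flagged as suspicious
print(model.predict_proba(suspect))  # confidence scores for triage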
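
Similarly, as a rough sketch of the ‘attacker’s voice’ idea, the snippet below compares character n-gram profiles of code fragments using cosine similarity. The fragments are invented for illustration; real stylometry uses far richer features than raw n-gram counts.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented code fragments: two "known" samples attributed to a group,
# plus one new sample we want to compare against them.
known_samples = [
    "xor_key=0x5A\nfor b in data: out.append(b ^ xor_key)",
    "k=0x5A\nres=[c ^ k for c in payload]",
]
new_sample = "key=0x5A\ndecoded=[x ^ key for x in blob]"

# Character 3-gram counts act as a crude stylistic fingerprint.
vec = CountVectorizer(analyzer="char", ngram_range=(3, 3))
matrix = vec.fit_transform(known_samples + [new_sample])

# Similarity of the new sample to each known sample.
scores = cosine_similarity(matrix[-1], matrix[:-1])
print(scores)  # higher values suggest a closer stylistic match
```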

What does this mean for the future of cybersecurity? 

There is a constant battle between cybercriminals and cyber-defenders, often with humans and machines working together to try to outsmart each other. We are moving into an era where, due to the sheer volume and complexity of these attacks, the more that can be automated and continually improved through AI, ML and NLP, the more defences will improve. On the other side of the fence, however, cybercriminals will be using the same technological advancements to further their cause, which in turn leaves each side constantly trying to stay one step ahead of the opposition.

Keeping ahead of cybercriminals will continue to be the game of cat and mouse it always has been. However, the playing field is only going to get bigger. The move towards automation, with autonomous vehicles and smart cities, will dramatically increase the attack surface. This only serves to underline the continued need for AI, but it also shows how the stakes are likely to get even higher.
