Cybersecurity strategies need to change in order to address the new issues that Machine Learning (ML) and Artificial Intelligence (AI) bring into the equation. Although those issues have not yet reached crisis stage, signs are clear that they will need to be addressed – and soon – if cyberattackers are to be prevented from obtaining a decided advantage in the continuing arms race between hackers and those who keep organizations’ systems secure.

ML and AI can magnify existing vulnerabilities and open the door to new attack strategies. At the same time, though, they offer new tools to help organizations secure their systems from incursions from those with malicious intent. That means that the battlefield in this arms race is rapidly shifting, and organizations will need to shift their strategies in order to keep up.

Unfortunately, top leaders have been slow to recognize this shift. In fact, many remain two steps behind instead of merely one, treating these technologies as little more than useful enhancements to existing systems rather than the game-changers that they are. As Ivan Novikov points out in a 2018 Forbes article:

It would be foolish to assume that attackers and intruders would forgo such an effective tool as AI to make their exploits better and their attacks more intelligent. It’s especially true now when it’s so easy to use so many machine learning technologies out of the box, leveraging open-source frameworks like TensorFlow, Torch or Caffe.

Thus, organizations will need to incorporate ML and AI into their cybersecurity strategies to combat the increasingly sophisticated tools that cyberattackers could adopt. Beyond that, the entire cybersecurity paradigm will need to shift to effectively counter the changes in attack strategy that these technologies bring.

An ever-expanding attack surface

The stakes are too high to risk falling behind in the battle against cyberattackers. Our world is highly interconnected digitally. Industrial control systems (ICS), Smart City systems and the Internet of Things (IoT) offer enormous benefits to businesses, governments and consumers.

Unfortunately, all those connected devices also increase the attack surface that cyberattackers can use to compromise systems. The idea of using such devices to cause physical harm or damage might seem remote. Why would attackers want to hack into the systems that offer convenience to our lives? The answer is simple. Disrupting systems that we rely on heavily could send a powerful statement and even cause panic if it succeeded on critical enough systems on wide enough scales.

A decade ago, the Stuxnet worm set the Iranian nuclear program back years by destroying Iranian uranium-enriching centrifuges despite the seemingly impenetrable security at the facility. Power grids – and, especially troubling, nuclear power plants – have been penetrated by hackers.

Hackers have achieved small-scale breaches of water distribution facilities and dams. Vulnerabilities have been demonstrated for cyber-connected cars, heart monitors, insulin pumps, mass transit systems and much more.

A massive disruption of the internet was achieved in 2016 by exploiting vulnerabilities in hundreds of thousands of internet-connected digital cameras and digital video recorders (DVRs) and using them in a massive DDoS attack on prominent websites. The 2017 WannaCry ransomware shut down critical equipment at hospitals around the world.

A tepid response

Unfortunately, too many organizations are more reactive than proactive when it comes to cybersecurity. Many ICSes rely on little more than “security by obscurity” to deter hackers. The original thought was that no one would be interested in hacking into targets that were of interest only to the systems’ stakeholders. The combination of a system perceived as low value to hackers and protocols unique to that system was deemed enough to make such systems more trouble than they were worth to attackers.

As those systems are increasingly connected to the internet, however, protocols used to access them increasingly incorporate “off the shelf” connectivity tools, rendering their protocols more widely used and more familiar to hackers. As ransomware attacks have demonstrated, systems compromised by attacks may not even be specifically targeted by the attacker. They may merely contain a known vulnerability. Such vulnerabilities are increased when system stakeholders consider their systems secure until after the unthinkable occurs.

Neither are systems secure merely because the consequences of hackers taking control of them are minimal. If hackers can inject false data into the systems that feed data into control systems, they can instigate control errors that are just as damaging as if they had gained control of the control system itself. For example, if hackers intercepted pressure readings from a pipeline monitoring system and replaced them with data indicating that the pressure was too low, the monitoring system – or the humans who control it – could mistakenly raise the pressure to the point where the pipeline bursts.
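As a concrete illustration of one countermeasure, the sketch below shows a minimal plausibility filter that rejects sensor readings that fall outside physical limits or jump implausibly between samples, so spoofed values are flagged rather than fed straight into control logic. The pressure limits, step threshold and function names are illustrative assumptions, not values from any real pipeline system.

```python
from collections import deque

# Illustrative plausibility limits for a hypothetical pipeline pressure sensor.
MIN_PRESSURE_KPA = 200.0
MAX_PRESSURE_KPA = 900.0
MAX_STEP_KPA = 50.0        # largest physically plausible change between samples

recent_readings = deque(maxlen=10)

def is_plausible(reading_kpa: float) -> bool:
    """Reject readings outside physical limits or with implausible jumps."""
    if not MIN_PRESSURE_KPA <= reading_kpa <= MAX_PRESSURE_KPA:
        return False
    if recent_readings and abs(reading_kpa - recent_readings[-1]) > MAX_STEP_KPA:
        return False
    return True

def ingest(reading_kpa: float) -> None:
    if is_plausible(reading_kpa):
        recent_readings.append(reading_kpa)
    else:
        # Flag for human review instead of feeding it straight into control logic.
        print(f"ALERT: suspicious pressure reading {reading_kpa} kPa ignored")
```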

Add to this the looming threat that AI-enhanced hacking tools could make hackers’ efforts even more effective, and the situation is sobering. Consider the findings of Accenture’s 2018 State of Cyber Resilience study, as reported in a Forbes article:

Accenture surveyed 4,600 enterprise security practitioners representing companies with annual revenues of $1 billion or more in 15 countries. 83% of respondents to the survey agree that advanced technologies are essential and they would commit funding to them if they could. But only 40% are investing in AI, machine learning, and automation technologies to improve their security defenses.

Many organizations may be spending too much on technologies that do not minimize the impact of cyber attacks. … By better analyzing data and applying advanced threat intelligence, organizations can start to anticipate threats and adopt a more proactive approach to defensive strategies.

Instead of spending in areas that improve defenses against the threats that are already on the horizon, the majority of organizations are spending on strategies that ML and AI are rapidly rendering obsolete. This massive disconnect between what security practitioners in major enterprises think and what their enterprises are doing suggests that many organizations are choosing to fall behind in the arms race. In doing so, they put themselves in jeopardy.

Increased cybersecurity threats from AI-enabled hacking tools

A 2016 competition clearly demonstrated what AI and ML can do either to compromise or to better protect organizationsโ€™ systems. The DARPA Cyber Grand Challenge gathered seven top cybersecurity teams with the goal of developing advanced, autonomous systems that could detect and patch software vulnerabilities before hackers could find and exploit them. Teams incorporated ML and AI into their cybersecurity tools to enable them to identify and patch vulnerabilities quickly. They also probed their competitorsโ€™ tools to find and exploit those toolsโ€™ vulnerabilities, thereby eliminating those competitors.

Ultimately, the competition demonstrated the effectiveness of ML and AI in enhancing cybersecurity. Ironically, it also showed how effective ML and AI can be in enhancing hacking tools. Corey Nachreiner, CTO of network security firm WatchGuard Technologies, said of the competition:

While this was a research tournament to help the ‘good guys,’ the contest proved that machines can automatically find and exploit new vulnerabilities rather quickly. In other words, it illustrated one way malicious threat actors might leverage AI for an attack as well as how defenders can leverage it for defence.

The first documented ML-aided cyberattack was detected in India in early 2017. Malware inserted into the network used ML to observe user behavior patterns and mimic them. That made it harder for traditional security tools to detect the malware as it scanned the network for further vulnerabilities. The malware also learned the tone and writing style of human users so it could compose emails similar enough to individuals’ styles to make its spearphishing emails look legitimate enough to fool even the account owner’s closest associates.

AI-aided hacking tools are already available on the Dark Net. Some learn user behavior for spearphishing purposes, like the cyberattack in India mentioned above. Others mimic well-known browsers to submit multiple login requests in a way that makes them look like they are coming from multiple computers instead of just one.

As long as non-AI-aided phishing techniques prove effective enough for cybercriminals to succeed with them, attackers will continue to avoid the extra cost and time that ML and AI tools require to set up. That avoidance should not lull organizations into complacency about securing their systems from such tools, though. As non-AI-aided techniques grow less successful, we can expect cybercriminals to turn to more advanced ML options. These options could conceivably even add information mined from targets’ social media and online shopping accounts to enhance their efforts. Jonathan Sander, Vice President of product strategy at security management firm Lieberman Software, says:

Imagine if [machine learning-enhanced hacking software] could predict our likely answers to security questions in order to reset passwords for us automatically to hijack accounts without having to steal the data from the source. Imagine if it could even text us and pretend to be our kid asking for the Netflix password because they forgot it.

Such hacking software is well within the realm of possibility. Jeremy Straub, Associate Director of the North Dakota State University Institute for Cyber Security Education and Research adds:

AI … could help human cybercriminals customize attacks. Spearphishing attacks, for instance, require attackers to have personal information about prospective targets, details like where they bank or what medical insurance company they use. AI systems can help gather, organize and process large databases to connect identifying information, making this type of attack easier and faster to carry out. That reduced workload may drive thieves to launch lots of smaller attacks that go unnoticed for a long period of time – if detected at all – due to their more limited impact.

AI systems could even be used to pull information together from multiple sources to identify people who would be particularly vulnerable to attack. Someone who is hospitalized or in a nursing home, for example, might not notice money missing out of their account until long after the thief has gotten away.

In addition, an AI-aided attack would not need to wait for human instructions when it encounters resistance, or when it finds that a previously exploitable vulnerability has been patched. It can automatically scan for new vulnerabilities or alter its approach to better conceal its presence once it is identified and blocked.

This could greatly increase the speed of incoming attacks, beyond what current human-based defense strategies can handle. This would require an increase in AI-enhanced defense – and perhaps even retaliatory – tools, taking the arms race to yet another level.

A 2017 Wired article pointed to the imminent weaponization of AI. Cybersecurity researchers have already demonstrated such capabilities in an automated malware manipulation environment that rewrites its code whenever necessary to keep anti-virus software from discovering it, while still maintaining its malicious capacity. The article warns:

With several new such tools in development, and competitions fueling innovation, it is not hard to imagine that the next few steps in this evolutionary ladder can create an autonomous system that will adapt, learn new environments and identify flaws, which it can exploit. This will be a true game changer.

IBM cybersecurity researchers gave TechRepublic a glimpse of this future of AI-aided hacking in a recent article:

[W]hat happened in the last few years with AI becoming very much democratized, and very widely used, was that the attackers also started to study up on it, and use it to their advantage, and weaponize it.

At IBM Research, we developed Deep Locker, basically to demonstrate how existing AI technologies already out there in the open source can be easily combined with malware powered attacks, which are also being seen in the wild very frequently, to create entirely new breeds of attacks.

Deep Locker is using AI to conceal the malicious intent in benign unsuspicious looking applications, and only triggers the malicious behavior once it reaches a very specific target, who uses an AI model to conceal the information, and then derive a key to decide when and how to unlock the malicious behavior.

First of all, it can be any kind of feature that an AI can pick up. It could be a voice recognition system. We’ve shown a visual recognition system. We can also use geolocation, or features on a computer system that are identifying a certain victim. And then these indicators, we can choose whatever indicators there is, can be fed into the AI model, from which then the key is derived, and basically the decision is made on whether to attack or not.

This is really where many of these AI powered attacks are heading, to … bring a new complexity to the attacks. When we’re studying how AI can be weaponized by attackers, we see that their number of characteristics are changing compared to traditional attacks.

On the one hand, AI can make attacks very evasive, very targeted, and then they also bring an entire new scale and speed to attacks, with reasoning, and with autonomous approaches that can be built into attacks to work completely independently from the attackers.

Lastly, we see a lot of adaptability that is possible with AI; AI can learn and retrain on-the-fly what worked, what didn’t work in the past, and get [past] existing defenses. The security industry and the security community needs to understand how these AI powered attacks are being created, and what their capabilities are.

I’d like to compare this to a medical example, where we have a disease, and it’s mutating again, that we have AI powered attacks this time, and we need to understand what is the virus, what is the mutations, and where are its weak points and limitations in order then to come up with the cure or the vaccine to it.

AI-enhanced hacking’s effect on cybersecurity practices

The use of AI in hacking tools changes the face of cybersecurity. Given the risks that AI-enhanced cyberattacks create, organizations that hold huge storehouses of sensitive customer information may want to rethink their use of centralized servers. They may want to use them mainly to process the information but distribute sensitive data across multiple locations.

This would mitigate the kind of massive data breach, such as those at Target and Equifax, that has exposed the personal data of millions of customers once a single system was compromised. With data distributed across multiple secured systems, any successful breach would expose the personal data of fewer customers. Even better, it could render the stolen data less useful by fragmenting it over multiple servers.
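A rough sketch of the fragmentation idea follows, under the assumption that customer fields can be grouped and written to separately secured stores; the store names and field groupings are hypothetical, and a real deployment would add encryption and pseudonymous linking keys.

```python
# Hypothetical sketch: split a customer record across separate stores so that a
# breach of any single store exposes only a fragment of the profile.

FIELD_GROUPS = {
    "identity_store": ["name", "date_of_birth"],
    "contact_store": ["email", "phone"],
    "payment_store": ["card_token", "billing_zip"],
}

def fragment_record(customer_id: str, record: dict) -> dict:
    """Return one partial record per store, each linked only by an opaque id."""
    return {
        store: {"customer_id": customer_id,
                **{field: record[field] for field in fields if field in record}}
        for store, fields in FIELD_GROUPS.items()
    }

fragments = fragment_record("cus_8f3a", {
    "name": "A. Example", "date_of_birth": "1990-01-01",
    "email": "a@example.com", "phone": "555-0100",
    "card_token": "tok_demo", "billing_zip": "00000",
})

# Each value in `fragments` would be written to a different secured system;
# in practice the linking id should be a keyed pseudonym, not a raw identifier.
for store, partial in fragments.items():
    print(store, partial)
```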

Another current security practice that will become more vulnerable to AI-aided hackers is knowledge-based authentication (KBA). AI-enhanced hacking tools could pull public and private data from social media, marketing data, credit reports and transaction history to build profiles on targeted individuals. They could then use those profiles to guess the answers to static security questions such as date of birth, mother’s maiden name, pet names and more, getting past those KBA protections and into individuals’ accounts.

Thus, Know Your Customer (KYC) practices will need to extend beyond account-opening processes. Elements of KYC will need to be applied to online transactions to ensure that each transaction is being performed by the authorized person and not by attackers.

Countries and organizations are increasingly moving transactions and identification processes to cyberspace. While this is convenient, it doesn’t take into account the fact that vast numbers of consumers are still far from tech savvy. When hackers have ML- or AI-aided hacking tools at their disposal, such consumers become even easier targets for identity theft.

This lack of tech savviness on the part of customers will force organizations to take even greater precautions to ensure that their customers remain safe. One such approach is that of the mobile banking app Monzo, which handles online transactions by requesting a photo of the customer’s ID accompanied by a short self-video, then verifying through facial recognition that the person in the video matches the ID. Such efforts to practice KYC throughout the life of the account will be necessary to defeat the more sophisticated identity theft opportunities that AI opens for cybercriminals.
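Below is a heavily simplified sketch of the matching step in such a flow, using the open-source face_recognition Python library as a stand-in. This is not Monzo’s implementation; the file names are hypothetical, and liveness detection and document-authenticity checks are deliberately omitted.

```python
# Simplified sketch: compare the face on a submitted ID photo with a frame
# extracted from the customer's selfie video. Real KYC flows also perform
# liveness detection, document checks and secure handling of the images.
import face_recognition

id_image = face_recognition.load_image_file("id_photo.jpg")          # hypothetical file names
selfie_image = face_recognition.load_image_file("selfie_frame.jpg")

id_encodings = face_recognition.face_encodings(id_image)
selfie_encodings = face_recognition.face_encodings(selfie_image)

if not id_encodings or not selfie_encodings:
    print("No face found in one of the images; ask the customer to retry.")
else:
    match = face_recognition.compare_faces(
        [id_encodings[0]], selfie_encodings[0], tolerance=0.6
    )[0]
    print("Verified" if match else "Verification failed; escalate to manual review.")
```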

Using AI to enhance cybersecurity

Hackers are not the only ones turning to ML and AI to enhance their efforts. Cybersecurity professionals are enlisting automation and machine-powered analysis, too, in their efforts to secure systems.

Accenture’s 2018 State of Cyber Resilience study mentioned earlier identifies “automated orchestration capabilities that use AI, big-data analytics and machine learning to enable security teams to react and respond in nanoseconds and milliseconds, not minutes, hours or days” as one of the “breakthrough technologies that can make a difference” in securing organizations’ networks. One example of this is the cybersecurity correlation automation (CSCA) analytics platform that Wells Fargo & Co. has been using since 2015:

The system uses open source technologies to ingest, analyze and visualize data coming from a variety of internal and external information sources, including server configuration data to social media.

Machine learning is used to compare incoming data against policy- and behavioral-based norms, then adjust the behavior model based on the new information. It uses a technique known as graph analytics to show relationships among disparate data points, spot unusual activity, then score each problem based on its level of risk.

This automates routine analysis and allows human analysts to focus on only those service tickets that require human insight. Rich Baich, Wells Fargo’s chief information security officer, says that this has improved the quality of analysts’ reports and other products by 30 percent according to internal metrics.
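The pattern described above – comparing incoming events against behavioral norms and assigning each a risk score for triage – can be sketched generically with an off-the-shelf anomaly detector. The example below uses scikit-learn’s IsolationForest with made-up features and data; it is a toy illustration of the approach, not Wells Fargo’s platform.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed per-event features, e.g. [logins_per_hour, megabytes_out, distinct_hosts_contacted].
baseline_events = np.array([
    [3, 12.0, 2], [4, 10.5, 2], [2, 9.0, 1], [5, 14.0, 3], [3, 11.0, 2],
    [4, 13.0, 2], [3, 10.0, 1], [2, 12.5, 2], [5, 11.5, 3], [4, 9.5, 2],
])

# Fit the model to baseline ("normal") activity observed during a quiet period.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline_events)

new_events = np.array([
    [4, 11.0, 2],      # resembles normal behaviour
    [40, 950.0, 60],   # exfiltration-like burst
])

# score_samples: lower values are more anomalous. Convert to a rough 0-100
# triage score within this batch so analysts see the riskiest events first.
raw = model.score_samples(new_events)
spread = float(raw.max() - raw.min()) or 1.0
risk = (raw.max() - raw) / spread * 100
for event, score in zip(new_events, risk):
    print(event, round(float(score), 1))
```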

This application of machine learning to cybersecurity has been lauded by some experts and criticized by others. The criticism is warranted when machine learning is treated as a self-contained solution to cybersecurity. That is because, by itself, machine learning will always lag behind the adaptations that attackers make when their efforts meet resistance.

When ML-enhanced cybersecurity systems learn one attack approach, hackers simply develop a new one. And if systems are designed to cover this possibility by treating any activity that is at all out of the ordinary as an incursion, they can flag such a great number of aberrations that the humans who respond to the alerts start to experience “alert fatigue.” They examine flagged actions more carelessly, skeptical of the many cries of “wolf” that the system continually supplies.

What has proven more successful than the pure machine-learning approach to cybersecurity is the one developed by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and machine-learning startup PatternEx. Their AI cybersecurity platform – called AI2, or AI “squared” – works together with human analysts to predict cyberattacks. A CBS News article describes it:

This system was tested on 3.6 billion pieces of data, or “log lines,” that were produced by millions of users over a three-month period. AI2 sifts through all the data and then clusters them into patterns through unsupervised machine learning. Suspicious patterns of activity are sent over to human analysts who confirm whether or not these are actual attacks or false-positives. The AI system then takes this information and includes it in models to retrieve even more accurate results for the next data set – so it gets better and better as time goes on.

Their success rate of 85 percent is about three times better than what systems that rely solely on machine analysis have accomplished, and it produces five times fewer false positives. One of the developers, Kalyan Veeramachaneni, explained what led to AI2:

We looked at a couple of machine-learning solutions, and basically would go to the data and tried to identify some structure in that data. You are trying to find outliers and the problem was there were a number of outliers that we were trying to show the analysts – there were just too many of them. Even if they are outliers, you know, they aren’t necessarily attacks. We realized, finding the actual attacks involved a mix of supervised and unsupervised machine-learning. We saw that’s what worked, and that’s what was missing in the industry. We decided that we should start building such a system – machine-learning that also involved human input.
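A loose schematic of that loop, assuming numeric features have already been extracted from log lines: an unsupervised detector surfaces outlier candidates, a human analyst labels them, and the confirmed labels train a supervised model that scores the next batch. The analyst step is simulated here so the example runs end to end; none of this is the actual AI2 code.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
log_lines = rng.normal(size=(1000, 5))   # stand-in numeric features parsed from log lines

# Step 1: an unsupervised pass surfaces outlier candidates for human review.
detector = IsolationForest(contamination=0.02, random_state=0).fit(log_lines)
candidate_idx = np.where(detector.predict(log_lines) == -1)[0]

# Step 2: an analyst labels each candidate as a real attack or a false positive
# (simulated here by alternating labels purely so the example runs end to end).
analyst_labels = {int(i): (k % 2 == 0) for k, i in enumerate(candidate_idx)}

# Step 3: confirmed labels train a supervised model that sharpens the next pass.
X = log_lines[list(analyst_labels)]
y = np.array([int(v) for v in analyst_labels.values()])
classifier = RandomForestClassifier(random_state=0).fit(X, y)

# Step 4: the next batch is scored with the supervised model; new analyst
# feedback is folded back in, and the cycle repeats.
next_batch = rng.normal(size=(100, 5))
attack_probability = classifier.predict_proba(next_batch)[:, 1]
print(attack_probability[:5])
```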

Rethinking cybersecurity paradigms in the age of AI

This kind of AI use in cybersecurity is leading to dramatic shifts in the way that practitioners view cybersecurity. Traditionally, cybersecurity has been viewed from a perspective of trying to build a fortress around the network that hackers will find to be impenetrable. The problem with that paradigm is that cybersecurity practitioners have never been able to achieve that impenetrable perimeter.

Headline after headline about massive data breaches suffered by major corporations testifies to the utter failure of this approach. It takes only one vulnerability in the perimeter for hackers to gain access to the network. Once in, hackers generally spend months inside, exploring the network to find ways to achieve their goals. During those months, their presence usually goes undetected, because cybersecurity efforts were so focused on the perimeter.

This pattern of hackers penetrating networks and then spending months before any damage is noticed is borne out by my personal experience working with a wide variety of clients. When assessing their networks, my team and I rarely find a system that does not have evidence of unauthorized users having established some foothold in the network, even if no attack has yet been detected. Merely hardening the perimeter is not working.

As AI-enhanced hacking tools increasingly enter the equation, the fortress approach of trying merely to secure network perimeters will become even more insufficient. Hacking tools that learn on the fly how to overcome obstacles built against past hacking strategies will always remain one step ahead of defenders, because they will come up with approaches that defenders are not prepared to block.

Providing adequate cybersecurity will increasingly mean employing AI, too, in partnership with human security experts, to proactively stay ahead of hackers instead of reactively remaining one step behind. With AI, cybersecurity practitioners can add another layer of security in the interior of the network, searching for suspicious activity that suggests an intrusion.

Such an approach acts like the human body’s immune system, which seeks out and destroys harmful entities that have entered the body. Even if hackers penetrate the perimeter, AI-enhanced security in the interior can detect and eliminate intruders during the lengthy periods in which hackers have historically been able to roam the network freely.

Nicole Eagan, CEO of Darktrace, a global AI cyber defense company founded by Cambridge University mathematicians and ex-British spies, explains how AI in the interior of networks can help defend against even novel exploits – the “unknown unknowns,” as she calls them – that wouldn’t be detected by software that searches only for signs of past, known exploits:

The big challenge that the whole security industry and the chief security officers have right now is that they’re always chasing yesterday’s attack. That is kind of the mindset the whole industry has – that if you analyze yesterday’s attack on someone else, you can help predict and prevent tomorrow’s attack on you. It’s flawed, because the attackers keep changing the attack vector. Yet companies have spent so much money on tools predicated on that false premise. Our approach is fundamentally different: This is just learning in real time what’s going on, and using AI to recommend actions to take, even if the attack’s never been seen before. That’s the big transition that Darktrace is trying to get folks … to make: to be in the position of planning forward strategically about cyber risk, not reacting to the past. …

When we start working with companies, it changes their mindset about security. It gives them visibility they’ve never had before into the goings-on of the pattern of life of every user and device inside their network. It lets them see their network visually in real time, which is an eye opener. They also realize that you can catch these things early. The average attacker is in a network 200 days before real damage is done. You’ve got a lot of time.

We talk a lot about the human immune system. We’ve found it’s a very effective analogy because boards of directors can understand it, non-technical people can understand it, as well as deep technical people. We’ve got skin, but occasionally that virus or bacteria is going to get inside. Our immune system is not going to shut our whole body down. It’s going to have a very precise response. That is where security needs to get. It needs to become something that, like our immune system, is just in the background always running – I don’t have to think about it.

In the same way that the early AI-aided hacking tools are learning individual users’ behavior in an effort to mimic them and go undetected, AI-enhanced cybersecurity is moving in the direction of learning individual users’ behavior to better recognize the aberrations that may point to cyberattacks. We can expect this to evolve toward cybersecurity that works not just at the enterprise level but also at the level of individual users, a vision that ML and AI could make possible.
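A minimal sketch of what user-level behavioral baselining can look like, assuming a single illustrative feature (data sent per session) and a simple z-score test; production systems would track many features and use far more robust statistics.

```python
from collections import defaultdict
from statistics import mean, stdev

# Per-user history of an illustrative feature: megabytes sent per session.
history = defaultdict(list)

def record_session(user: str, mb_sent: float) -> None:
    history[user].append(mb_sent)

def is_aberrant(user: str, mb_sent: float, z_threshold: float = 3.0) -> bool:
    """Flag a session that deviates sharply from this user's own baseline."""
    past = history[user]
    if len(past) < 10:                 # not enough history to judge yet
        return False
    mu, sigma = mean(past), stdev(past)
    return sigma > 0 and abs(mb_sent - mu) / sigma > z_threshold

for mb in [4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 4.2, 5.1, 4.0, 4.6]:
    record_session("alice", mb)

print(is_aberrant("alice", 4.5))     # within this user's normal range -> False
print(is_aberrant("alice", 250.0))   # far outside the baseline -> True
```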

The continuing arms race

We can expect cyberattacks to grow more sophisticated in the use of ML and AI. Neither side will ever be able to claim final victory. Both sides will continue to adapt to the strategies of the other. Steve Grobman, CTO of McAfee, states:

With cybersecurity, as our models become effective at detecting threats, bad actors will look for ways to confuse the models. It’s a field we call adversarial machine learning, or adversarial AI. Bad actors will study how the underlying models work and work to either confuse the models – what we call poisoning the models, or machine learning poisoning – or focus on a wide range of evasion techniques, essentially looking for ways they can circumvent the models.

The most effective defense will be a combination of AI to sort through data and flag it for human analysis, with that human analysis feeding back into AI’s machine learning systems to improve its future predictions. This ongoing circle will provide the best defense for critical cybersystems.

It is essential that organizations look to these new paradigms and new strategies to defeat the shifting focus of cyberattacks. With our world growing ever more cyberconnected, the stakes grow higher. This new battlefield in the AI cybersecurity arms race must not be left solely in the hands of the attackers.

Marin Ivezic

For over 30 years, Marin Ivezic has been protecting critical infrastructure and financial services against cyber, financial crime and regulatory risks posed by complex and emerging technologies.

He held multiple interim CISO and technology leadership roles in Global 2000 companies.