A growing number of today’s entertainment options show protagonists battling cyberattacks that target the systems at the heart of our critical infrastructure, systems whose failure would cripple modern society. It’s easy to watch such shows and dismiss their plots as something that could never happen. The chilling reality is that those plots are often based on real cyberthreats that have already happened, are already possible, or are dangerously close to becoming reality.
Cyberattacks occur daily around the world. Only when one achieves sufficient scope to grab the attention of the news media – such as the WannaCry ransomware attacks of May 2017 – does the public get a brief glimpse of how widespread vulnerabilities are. Those of us who are actively involved in strengthening cybersecurity see the full scope of the problem every day.
Our modern world of cyber-physical systems
Our lives increasingly revolve around Cyber-Physical Systems (CPSes). That term goes much deeper than you might think. It’s not simply a matter of computers controlling large mechanical systems, as is the case with industrial control systems (ICS). Today’s CPSes, such as the Internet of Things (IoT), integrate computational devices into an increasing range of everyday physical objects and even biological systems.
Picture the power plant or water plant that provides your electricity and water. Those systems have single-purpose computers embedded at each switch or valve. Each computer monitors system conditions and determines whether to open or close that switch or valve to keep that part of the system running optimally.
They monitor and control systems at a level that humans would find too granular and too tedious to warrant their undivided attention. They also send a constant stream of data upward in the system to provide actionable information to more complex computers that control larger parts of the process.
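In rough pseudocode terms, the kind of single-purpose control loop described above can be sketched as follows. This is a deliberately simplified illustration, not the software of any real plant; the sensor names, pressure thresholds and telemetry format are all hypothetical:

```python
# Illustrative sketch of an embedded valve controller's decision step.
# Thresholds and field names are invented for this example.

def control_step(pressure_kpa, valve_open, low=200.0, high=350.0):
    """Decide whether to open or close a relief valve based on one
    sensor reading, and emit a telemetry record for the supervisory
    systems further up the chain."""
    if pressure_kpa > high:
        valve_open = True     # relieve excess pressure
    elif pressure_kpa < low:
        valve_open = False    # hold pressure in the system
    # Otherwise leave the valve as-is: the reading is in the normal band.
    telemetry = {"pressure_kpa": pressure_kpa, "valve_open": valve_open}
    return valve_open, telemetry

# The device runs this step continuously, far faster and more
# consistently than a human operator could.
state = False
for reading in [180.0, 260.0, 390.0]:
    state, record = control_step(reading, state)
```

The point of the sketch is the division of labor: the tight local decision happens on the device, while the telemetry record flows upward to more capable computers that see the wider process.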
Or, let’s bring this closer to home. Let’s say you have a pacemaker or heart monitor or insulin pump to make up for the shortcomings of your heart or pancreas. In such a case, your body has become part of a CPS, with a mechanical device, guided by an embedded device, monitoring and automatically compensating for your organs’ limitations.
Here, too, the internal components are part of a larger system. They report their data to systems controlled by your doctor, who can monitor your condition remotely and adjust your devices if needed.
CPSes are increasingly prevalent in all aspects of modern life. If you drive a car with the latest safety features, those features monitor traffic and apply the brakes if they detect a possible collision. CPSes control the way your appliances operate. They work behind the scenes of your city’s traffic system to monitor traffic flow and time traffic lights to minimize gridlock. They operate in virtually every aspect of your life – often without you even realizing it.
With the spread of connected devices through all aspects of daily life comes increased vulnerability. These devices are designed to communicate and, as such, can potentially be compromised through cyber-kinetic attacks.
Such cyber-initiated attacks have already caused physical damage to power plants, gas pipelines, water facilities, emergency notification systems, apartment buildings, transit systems, factories and more. Researchers, including my own teams, have also demonstrated the potential for determined hackers to hack into the systems of – and even take limited control of – the more recent models of cyber-enhanced automobiles, drones, or digital railways.
Why cyber-physical systems are vulnerable
With the growing move toward connecting more and more formerly standalone pieces of equipment to cyberspace, that equipment has become vulnerable. The motivation for connecting them is sound – cyber-enabling equipment and devices helps them work together more efficiently, gathers more relevant data about their interactions and expands their potential functionality.
That ability to communicate, however – if left unprotected – provides a potential entry point for unauthorized parties to hijack the device.
- The Stuxnet worm destroyed uranium enrichment centrifuges in an Iranian nuclear power plant.
- Security flaws in consumer electronics devices enabled the 2016 attack on major U.S. websites that was dubbed “the attack that brought down the internet,” albeit only for a day.
- A bored Polish teenager took control of a city’s tram system in 2008 and carelessly rerouted trams into crashes that caused multiple rider injuries.
- In 2000, an Australian wastewater engineer took remote control of parts of the wastewater equipment of the town that had terminated him and, over a period of weeks, released hundreds of thousands of liters of raw sewage into lakes and rivers throughout the town before his involvement was discovered.
- Multiple hospitals had to shut down critical equipment or postpone operations not only during the WannaCry ransomware attack, but also in scattered ransomware attacks in the months that preceded it.
These are just a small sample of documented cyber-kinetic attacks. I’ve been tracking many more of the key historic cyber-kinetic incidents and attacks here. You would think that such incidents would motivate improved security for this ever-expanding web of interconnectedness, but that has not been the case.
Wishful thinking and denial
Connecting every key component of a particular physical process to computer monitoring and control offers greater efficiencies for the process. Making that data available on an open network offers those controlling the process the convenience of having the data they need at their fingertips no matter where in the world they are. Motives are the same whether the physical process is a manufacturing process, temperature measurement and control, a chemical process, traffic control, adjustment of abnormal heart rhythms, or a myriad of other options.
Thus, use cases consistently favor connecting more devices and increasing accessibility. Building more comprehensive cyber-connections becomes the chief priority and security is overlooked.
As physical processes are increasingly being monitored or controlled by embedded computational devices, those physical processes become hackable in the same way as the embedded devices controlling them.
Security of such CPSes is often considered to be effectively covered merely by tossing out the industry adage “security by obscurity.” The term implies that the system’s design is sufficiently different from other companies’ systems that no hacker would bother spending the time to figure out how to compromise it.
The fact that security systems of multiple factories, utilities, smart buildings, connected vehicles and even nuclear power plants have been breached demonstrates that adage to be wishful thinking.
Determined hackers have shown a willingness to attack any system in which they can find a vulnerability. In fact, when we assess the security of industrial operations, we rarely find a system that hackers have not already infected with some type of malware or backdoor that they could use at any time to inflict further damage.
A similar form of denial applies to the health-preserving technologies described earlier – the implanted medical devices like pacemakers, defibrillators, heart monitors and insulin pumps. Here, too, use cases encourage connecting them to the cyberworld. What could be better than having such devices feed ongoing data to medical personnel and alert them of problems before those problems become serious?
Such devices undergo rigorous testing to ensure that they function as designed. That, however, is as far as testing goes. Device testing does not take into consideration the possibility of some third party gaining access to a device and causing it to malfunction.
To date, no case has been documented of such sabotage. That, however, doesn’t prove that such sabotage has never happened. Unfortunately, if such sabotage ever occurred, it would be almost impossible to identify that it was sabotage instead of a simple device malfunction.
Yet this vulnerability was considered a real enough threat that when then-U.S. Vice President Dick Cheney had a defibrillator implanted in his chest in 2007, his doctors disabled its remote functionality as a precaution against a potential assassination attempt. Despite this awareness a decade ago, testing the cybersecurity of implanted devices remains overlooked by most medical device manufacturers.
The challenges of securing critical systems
Outright failure to test security of connected devices is not the only problem. Providing security for CPSes is far more complex than providing security for traditional, information-only systems. If something goes wrong when testing security of an information-only system, the worst that happens is that people lose access to the system’s data until the problem is fixed. But when systems control functions that could mean life or death for people, even a brief failure could be catastrophic.
Past cybersecurity attention focused primarily on three aspects: maintaining data confidentiality, integrity and availability, with the strongest focus on confidentiality. Connecting devices that control aspects of our physical world to cyberspace requires that greater focus land on integrity and availability.
When dealing with systems that affect our physical world, keeping outsiders from discovering what data these devices are processing is far less important than keeping outsiders from changing the data to make the system err in what it does or, even more important, keeping outsiders from blocking data so the system completely fails to provide its essential services.
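The integrity concern can be made concrete. One standard countermeasure is for a device to authenticate each telemetry message with a keyed hash (HMAC) over a shared secret, so a receiver can detect data that was altered in transit. The sketch below uses Python’s standard `hmac` module; the key and message format are hypothetical, and real deployments would also need secure key provisioning and replay protection:

```python
import hashlib
import hmac

# Hypothetical shared secret; real systems need secure key management.
SECRET_KEY = b"illustrative-shared-secret"

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag the device attaches to each message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time, so tampered
    messages are rejected without leaking timing information."""
    return hmac.compare_digest(sign(message), tag)

reading = b"pressure_kpa=260"
tag = sign(reading)
verify(reading, tag)                  # authentic reading: accepted
verify(b"pressure_kpa=120", tag)      # altered reading: rejected
```

A scheme like this addresses integrity without addressing confidentiality at all, which mirrors the shift in priorities the paragraph above describes.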
Connecting critical physical systems also adds more elements to this traditional three-element paradigm of security concerns. Control of the system is not an issue when it comes to traditional information systems. Outsiders gain no benefit from wresting control of the system away from its administrators. Leaving vulnerabilities that allow outsiders to take control of a connected vehicle or an implanted medical device, on the other hand, could be fatal.
Similarly, with a traditional information system, the introduction of fake data may be a minor inconvenience to the authorized users. But fake information that says that the water pressure on a dam is much less than it really is could cause the system not to take the proper action, putting the dam at risk of collapse.
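One partial defense against fake sensor data of this kind is a plausibility check: reject readings that fall outside the sensor’s physical range, or that change faster than the physical process possibly could between samples. The thresholds below are invented for illustration; real limits would come from the plant’s engineering specifications:

```python
def plausible(reading, previous, lo=0.0, hi=800.0, max_delta=50.0):
    """Accept a water-pressure reading only if it is within the
    sensor's physical range and has not jumped faster than the
    process could plausibly change between samples.
    All limits here are illustrative, not from any real dam."""
    if not (lo <= reading <= hi):
        return False  # outside what the sensor can physically report
    if previous is not None and abs(reading - previous) > max_delta:
        return False  # implausible jump: suspect injected data
    return True

plausible(300.0, 290.0)   # normal drift: accepted
plausible(100.0, 300.0)   # sudden drop, as in the dam scenario: flagged
```

Checks like this cannot prove data is genuine, but they raise the bar: an attacker must now forge values that stay consistent with the physics of the process, not merely values the software will parse.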
Finally, with a traditional information system, no risks ensue from installing security protocols that create delays for authorized users in gaining system access. When dealing with security for a remote device to which a medical professional needs quick access in a medical emergency, though, creating a workable balance between security against unauthorized users and ease of access for authorized ones can be a matter of life or death.
Rethinking our security approaches
The continued growth of CPSes as an integral part of our physical well-being forces not only security professionals, but all stakeholders in our journey into a highly connected world to rethink traditional security concepts and solutions. Security must not take a back seat to rushing new technologies to market as quickly as possible. Hoping that past security approaches or – worse yet – blind, wishful thinking will prevent the disasters that inadequate security can bring is not an option.
Ignoring the reality of vulnerabilities will not restrict them to the realm of fiction. The threats are real. Many have already occurred. Many others are not far removed from dominating our news instead of our entertainment. Only by recognizing the new challenges that our connected world poses and coming together to address them will we be able to make our leap into this new way of life secure and safe, and get the fullest benefits from it.
Originally published on HelpNetSecurity on December 15, 2017.