
These Three Organizational Vulnerabilities Put Critical Infrastructure At Risk

By Scott King, Senior Director of Security Advisory Services for Rapid7

When it comes to the cybersecurity functions and people who manage risk within critical infrastructure, it’s important to have a holistic understanding of the most prominent vulnerabilities and their potential impact, as well as the opportunities for risk reduction that come from organizational culture and alignment.

The cybersecurity of critical infrastructure is a shared responsibility of system operators, control engineers, information technology staff, and cybersecurity professionals, among others. This shared ownership is incredibly helpful, but it also creates challenges that can result in unmanaged risk, including differing perspectives on operational priorities, interpretations of compliance requirements, and vastly different views on risk itself.

Vulnerability can be introduced in a few ways. The first occurs at the seams between business units: specifically, at hand-off points where no responsibility assignment matrix has been established. The second happens when committees cannot agree on a direction; the resulting ambiguity cascades down to employees lower in the hierarchy and misaligns the goals that are supposed to support company objectives. The third is a byproduct of regulatory compliance, specifically in an era of multiple similar standards and the translation between how they are written and how they are implemented.

Hand-Off Points

Most critical infrastructure facilities are designed, built, and transitioned into production by third parties. Often, when constructing new assets such as power plants, substations, and manufacturing facilities, the third party will leverage the same designs approved by the original equipment manufacturer at each installation. This not only includes the specifications of the facilities themselves, but also the cyber systems used to control and monitor the critical infrastructure they house.

This leaves the exact same footprint of network devices, servers, and control systems at every site, creating a well-known map any attacker can follow. Additionally, many manufacturers of large industrial machinery provide remote monitoring of the asset, as well as remote control from their own centralized control room. That connectivity creates a backdoor into the control network and the systems running the operations.

When workflows do not document the specific hand-offs, even basic functions like systems administration introduce significant vulnerability. An adversary working from a known blueprint now faces less complexity to achieve compromise, and the attack surface expands to include the people and the gaps between teams.

For example, a malicious actor could target a published request for proposal (RFP). RFPs often contain detailed descriptions of how systems work, who is responsible for them in production, why the project has been approved for bidding, and what the company wants to accomplish. With this information, an attacker can build a targeted social engineering campaign that exploits the weaknesses inferred from the RFP. Countering this threat is seemingly simple: avoid publishing significant depth in the RFP, and instead provide details through vetted scoping calls with vendors. Still, many RFPs are published with a great deal of sensitive information included.
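One lightweight mitigation is to screen RFP drafts for the kinds of details attackers look for before publication. The sketch below is purely illustrative: the patterns, the internal domain suffixes, and the sample draft text are assumptions for the example, not a complete or production-ready screen. A real review process would tune the patterns to the organization's own naming conventions and asset inventory.

```python
import re

# Illustrative patterns only: internal hostnames, IP addresses, and
# named products with versions are common sensitive details in RFPs.
SENSITIVE_PATTERNS = {
    "hostname": re.compile(r"\b[a-z][\w-]*\.(?:corp|internal|local)\b", re.I),
    "ip_address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "vendor_version": re.compile(
        r"\b(?:Windows Server|RHEL|Java|SCADA)\s+\d[\d.]*\b", re.I
    ),
}

def screen_rfp(text: str) -> dict[str, list[str]]:
    """Return each category of potentially sensitive detail found in a draft."""
    findings = {}
    for label, pattern in SENSITIVE_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = matches
    return findings

# Hypothetical RFP excerpt for demonstration.
draft = (
    "The historian at hist01.corp (10.20.30.40) runs Windows Server 2008 "
    "and is administered by the plant IT team."
)
print(screen_rfp(draft))
```

Anything the screen flags is a candidate to move out of the public document and into a vetted scoping call instead.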

Decision by Committee

Decision by committee occurs all the time. One business unit wants to go one way, and another wants to go the other. This is a standard condition in almost every industry. However, when making tactical decisions on how to design, integrate, and operate critical infrastructure, it can create vulnerabilities when the left hand doesn’t know what the right hand is doing.

We can break system design down into three functions: architecture, engineering, and operations. Each of these critical business functions has its own generally well-defined set of responsibilities. Effective architects understand the business and where technology can improve efficiency. Engineers understand much of what architects do, but with a far deeper view of how to make the technologies work together. Operations has a deep understanding of how the technology works and knows exactly where the problems are.


Cyber vulnerability can be reduced when these three functions agree on problems and are equally invested in the outcomes.

When bringing these three functions together to develop a new system design, improve an existing one, or discuss what is and isn’t working well, it’s critical to agree on the problems and how they map to the business needs the technology serves. Without that alignment, trust erodes, resentment builds, and silos form, which is incredibly detrimental to the overall cybersecurity of the technology.

An example of how this creates unmanaged cyber risk is the design of a new software application used to house telemetry data. An all-too-common scenario is software built against old, outdated versions of Java. While Java is an easy-to-learn programming language, developers need to ensure they are coding to current versions and not designing in antiquated functions that depend on an older Java Runtime Environment. When easy-to-exploit Java vulnerabilities are introduced into a control system network, it again lowers the bar an adversary must clear once their code is inside that network.
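One way operations teams can keep this risk visible is a simple runtime-version audit across the asset inventory. The sketch below is a minimal illustration, not any real tooling: the inventory, the asset names, and the choice of Java 11 as the minimum supported version are all assumptions made for the example. It does handle the quirk that legacy Java versions report as `1.x` while modern releases report the major version directly.

```python
def parse_major(version: str) -> int:
    """Extract the Java major version from either the legacy '1.8.0_292'
    scheme or the modern '11.0.2' scheme."""
    parts = version.split(".")
    return int(parts[1]) if parts[0] == "1" else int(parts[0])

def flag_outdated(inventory: dict[str, str], minimum: int = 11) -> list[str]:
    """Return assets whose Java runtime is older than the supported minimum."""
    return [asset for asset, ver in inventory.items() if parse_major(ver) < minimum]

# Hypothetical inventory mapping asset name to its reported Java version.
inventory = {
    "historian-app": "1.8.0_292",   # legacy Java 8 runtime
    "telemetry-api": "17.0.1",      # current LTS release
    "hmi-gateway": "1.7.0_80",      # Java 7, long past end of life
}
print(flag_outdated(inventory))  # → ['historian-app', 'hmi-gateway']
```

Feeding a report like this back into the architecture and engineering conversation turns "we think some systems run old Java" into a concrete, shared list of problems to align on.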

Compliance

Compliance obligations are enforceable and can lead to real-world impact, especially when it comes to operating critical infrastructure. There are cyber-focused business and financial controls originating from SOX, payment processing controls from PCI, and critical infrastructure controls coming from NERC CIP and TSA, as well as other cyber and privacy standards like HIPAA and GDPR. This creates a logistical minefield of different control mappings, overlapping standards, and the self-assessed applicability to technology systems.

One of the hardest, most stressful things to do is prepare for an audit. Whether the compliance framework digs deep into cybersecurity or just touches the surface, the people doing the preparation are highly motivated and very focused. During the time between audits, those same people are constantly gathering evidence and ensuring the compliance records are kept up-to-date.

Now, introduce multiple compliance obligations, each with its own narrowly focused applicability to technology systems, and you immediately create one inventory of systems that falls under compliance and another that does not. The latter is often left to whatever cyber controls exist natively or are implemented by an astute administrator; what this approach lacks is a base level of protection applied throughout. A better approach is to map all cyber compliance obligations to a recognized framework, so that compliance becomes a byproduct of following best practices.
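The baseline-first idea can be made concrete with a small model: every system gets the baseline controls, and framework-specific controls are layered on top only for in-scope systems. The framework names below are real, but the control identifiers, scopes, and system names are invented for illustration; a real mapping would draw on the organization's chosen framework and compliance registers.

```python
# Baseline controls applied to every system, regardless of compliance scope.
BASELINE = {"asset-inventory", "patching", "mfa", "logging"}

# Hypothetical framework-specific controls and the systems they apply to.
FRAMEWORK_CONTROLS = {
    "NERC CIP": {"esp-segmentation", "cip-access-review"},
    "PCI DSS": {"cardholder-segmentation", "quarterly-scans"},
}
FRAMEWORK_SCOPE = {
    "NERC CIP": {"ems-server", "substation-rtu"},
    "PCI DSS": {"payment-gateway"},
}

def controls_for(system: str) -> set[str]:
    """Every system gets the baseline; in-scope systems add framework controls."""
    controls = set(BASELINE)
    for framework, scope in FRAMEWORK_SCOPE.items():
        if system in scope:
            controls |= FRAMEWORK_CONTROLS[framework]
    return controls

# A system outside every compliance scope is still covered by the baseline,
# rather than being left to whatever protections happen to exist.
print(controls_for("office-fileserver"))
print(controls_for("ems-server"))
```

The key property is that no system ever resolves to an empty control set: the out-of-scope inventory inherits the baseline instead of falling through the cracks.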

Multiple competing compliance objectives create vulnerabilities when systems are left exposed because technology teams do not apply best practices to systems with no organizational compliance requirements. This again lowers the bar an adversary must clear to compromise those systems. The best counter is a best-practices approach to cybersecurity in which technology requirements and configurations are clearly defined and balanced between usability and security.

Managing cyber risk is no different than managing other types of risk. It simply must be a team sport where people work together, talk to one another, align on problems that create risk, create shared views on risk tolerance, and, above all else, hold each other accountable.

As the security of critical infrastructure continues to gain more time in the spotlight, we cannot forget about the fundamentals of working together to solve common problems. Technology alone will never be the solution; only by people working together will risk be managed and exposure to cyberattacks reduced.

Scott King

Senior Director of Security Advisory Services for Rapid7

Scott King is the senior director, Security Advisory Services for Rapid7. Scott has over 20 years of professional work experience in the IT and cybersecurity fields. Scott has worked extensively in the energy industry, DoD, state governments, technology companies, and manufacturing companies.
