Amidst this week’s rush to purchase gas, the details of, and lessons underlying, the Colonial Pipeline ransomware attack may look like yet another example of apparent enterprise indifference to well-known security practices.
However, Colonial’s choice to shut down the operational technology (OT) infrastructure, which halted fuel delivery, was more an abundance of caution than a direct response to the attack. The complex interdependencies between the Colonial enterprise IT and critical infrastructure OT networks created the much-touted “lack of visibility” that turned a ransomware attack into a state of emergency.
Still, enterprise IT teams can take several valuable lessons from this attack to strengthen their cybersecurity resiliency.
What really happened with Colonial Pipeline?
The shortest answer here is that the cybercriminal organization DarkSide deployed a ransomware attack against the Colonial Pipeline enterprise IT networks and systems. In recent years, ransomware groups have moved away from the traditional “just encrypt everything” method in response to more robust corporate business continuity and disaster recovery programs. Enhanced data backup capabilities meant organizations were no longer crippled by encryption alone. Thus, cybercriminals evolved their methodologies to both encrypt and steal sensitive data.
For Colonial, the enterprise IT attack was similar to others that have occurred over the past year. DarkSide encrypted sensitive information, stole it, gave “proof of life” by releasing some details, and requested the ransom payment.
The difference between Colonial and commercial ransomware attacks was the interconnected nature of the pipeline OT and enterprise IT. Colonial chose to shut down the OT stack because they were unable to definitively prove that the malicious actors had been contained to the IT networks.
The Post-Attack Lessons
At first glance, these post-attack lessons appear repetitive. The minute a breach occurs, people start asking, “why haven’t these organizations learned their lesson yet?” and “how can they keep letting this happen?” Meanwhile, security professionals keep pointing out, “the infrastructures are too complex” and “it’s the end users’ fault for clicking on that phishing email!”
Like many other things, the truth lies somewhere between these two extremes.
Complexity of Enterprise IT
Think about the average home network. Most homes have at least one of each of the following:
- A laptop/desktop computer
- A smartphone
- A tablet
- Connected television
- “Virtual assistant,” like a Google Home, Alexa, or other similar device
That’s five technologies in a single home. Now consider the number of applications each of these connects to, including:
- Multiple social media accounts
- Shared drives
- Multiple email accounts
- Multiple streaming services
- Business software
- Internet browsers
For a single home, that means five devices connecting to ten or more applications each, or at least fifty possible access points malicious actors can compromise. A household with two people may come close to doubling that number.
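The arithmetic above can be sketched directly. The device and application counts are the article’s illustrative figures, not measurements:

```python
# Back-of-the-envelope attack-surface math for a home network.
# Figures are the illustrative numbers from the text, not measurements.
devices_per_person = 5   # laptop, smartphone, tablet, TV, virtual assistant
apps_per_device = 10     # social media, email, streaming, shared drives, ...

single_household = devices_per_person * apps_per_device
print(single_household)  # 50 possible access points

# A two-person household roughly doubles the device count,
# and with it the number of points an attacker could probe.
two_person_household = 2 * devices_per_person * apps_per_device
print(two_person_household)  # 100
```

Every one of those connections is a potential entry point, which is why the enterprise-scale numbers that follow matter.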
Now, consider the data around enterprise technology stacks:
- 5,000: average number of employees in a large enterprise
- 690: number of distinct cloud applications used by organizations with 500-2,000 employees, according to Netskope
- 30%: share of organizations mixing on-premises and in-cloud/Software-as-a-Service (SaaS) storage for sensitive data, according to Flexera
Every single network connection point is a place where threat actors can gain access. The reality for most organizations is that they think they know what they have, but they can never be entirely sure.
Need to Segment Networks
The answer many security professionals will give is that enterprise IT needs to segment networks more rigorously. Network segmentation uses firewalls or other network devices to control which traffic is allowed into or out of a network. Other professionals argue that organizations should go further, using segmentation to air-gap networks or to create one-way tunnels for data to travel on.
A good way to think about these concepts: network segmentation is like having moats around data castles. Data can cross the moat only when the king lets down the drawbridge, allowing movement into and out of that one castle. Air-gapping, meanwhile, is like a bridge that allows only exit or only entry, not both.
In other words, malicious actors need to cross a digital drawbridge to get across the IT moat. If the organization segments its networks, malicious actors only get into one data storage location and cannot move to a different one. Air-gapping means they can only travel into the castle, not out of it.
While network segmentation sounds simple, the reality is often complex. A few things the enterprise needs to consider when segmenting networks:
- Know all the data, devices, users, and applications on each network
- Analyze the risk level for each data category, device, user, and application
- Set access policies for all users, devices, and applications
- Understand data and traffic flows across applications and networks
- Limit application-to-network and application-to-application access
- Maintain availability of all networks and applications
- Ensure appropriate network performance and speed
While network segmentation is absolutely necessary for securing the digital infrastructure, these seven factors must be managed across 5,000 enterprise employees, 690 applications in use, and a mix of on-premises and cloud resources.
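The checklist above effectively describes an allow-list: every user, device, and application is mapped to the segments it may reach, and anything unmapped is denied by default. A minimal sketch of that idea, with hypothetical segment names and a toy policy table:

```python
# Minimal network-segmentation policy check: an explicit allow-list of
# which sources may reach which segments. All names are hypothetical.
SEGMENT_POLICY = {
    "hr-app":      {"hr-data"},        # HR app reaches only HR data
    "billing-app": {"billing-data"},   # billing app reaches only billing data
    "ot-gateway":  set(),              # OT side accepts no enterprise IT traffic
}

def may_connect(source: str, target_segment: str) -> bool:
    """Default-deny: allow only pairs explicitly listed in the policy."""
    return target_segment in SEGMENT_POLICY.get(source, set())

print(may_connect("hr-app", "hr-data"))        # True
print(may_connect("hr-app", "billing-data"))   # False: blocked at the "moat"
print(may_connect("unknown-host", "ot-gateway"))  # False: unlisted source denied
```

Real segmentation lives in firewall rules and network hardware rather than application code, but the logic is the same: an actor who compromises one castle still faces a closed drawbridge at the next.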
Creating an appropriately segmented network means using technologies like software-defined wide area networking (SD-WAN), which routes users and devices to the right networks based on a set of attributes. For example, SD-WAN directs users inside the network firewall to one network, or sends users connecting remotely through a more secured network.
Moving toward a zero-trust model, where every user and device needs to be authenticated prior to accessing a network internally or remotely, is another way that organizations can enhance their security in response to increasing malware risks.
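The zero-trust decision can be sketched as a gate that authenticates and authorizes every request while deliberately ignoring network location. The tokens, users, and resource names below are hypothetical:

```python
from dataclasses import dataclass

# Toy zero-trust gate: every request must carry a valid identity and be
# authorized for the specific resource. All names are hypothetical.
VALID_TOKENS = {"token-alice": "alice", "token-bob": "bob"}
PERMISSIONS = {"alice": {"payroll-db"}, "bob": {"build-server"}}

@dataclass
class Request:
    token: str
    resource: str
    from_internal_network: bool  # deliberately unused: location grants nothing

def allow(req: Request) -> bool:
    user = VALID_TOKENS.get(req.token)                   # authenticate
    if user is None:
        return False
    return req.resource in PERMISSIONS.get(user, set())  # authorize

print(allow(Request("token-alice", "payroll-db", True)))    # True
print(allow(Request("token-alice", "build-server", True)))  # False
# Being "inside" the firewall does not help an unauthenticated actor:
print(allow(Request("forged-token", "payroll-db", True)))   # False
```

The key design choice is that `from_internal_network` never enters the decision: in a zero-trust model, a ransomware operator who lands inside the perimeter gains no implicit access.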
However, again, every single one of these options becomes more difficult as the organization’s size increases.
Need to Enhance Defensive Capabilities
Cybersecurity is similar to any other team sport. Organizations have their offensive “red teams” and defensive “blue teams.” Consider cybersecurity like a soccer game. Red teams are on the field looking to get the ball past the goalie. Blue teams are trying to anticipate the opposition’s next move, with the goalie as the last line of defense.
The primary problem in trying to defend against ransomware and other types of attacks is that malicious actors keep changing their methodologies. This might sound like “vendor lingo,” but sometimes even a cliche holds some truth.
Although most cybercrime organizations, ransomware strains, and malware follow patterns of behavior, they continuously change the order of operations. Just as a soccer team’s wing, striker, and midfielder pass the ball differently throughout a game, threat actors use similar tactics in a different order from one attack to the next.
Blue teams need consistent, relevant training and experience to protect their enterprise IT goals. However, many lack the ability to recreate attack paths seen in the wild so that they can validate their tools and processes. According to Bryson Bort, CEO and Founder of SCYTHE, a threat emulation platform, “Defensive teams need the tools that give them the training to make them successful. By democratizing the process and sharing TTPs, they can create their own exercises responding to these new methodologies so they have experience necessary to respond to them.” Simply running pre-programmed exercises is not enough.
Blue teams, the last line of defense, can only protect the enterprise from ransomware and threats when they have the ability to fine-tune their tools for enhanced detection. The 2020 State of Security Operations report notes that enterprise security teams receive more than 11,000 alerts every day and spend nearly 70% of their time investigating, prioritizing, and responding to them.
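A rough calculation shows how little time that leaves per alert. The alert volume and the 70% figure come from the report cited above; the team size and shift length below are assumptions for illustration only:

```python
# Rough triage math using the 2020 State of Security Operations figures
# (11,000 alerts/day, ~70% of analyst time spent handling them).
# Team size and shift length are hypothetical assumptions.
alerts_per_day = 11_000
triage_share = 0.70

analysts = 10          # assumed team size
hours_per_shift = 8    # assumed shift length

triage_seconds = analysts * hours_per_shift * 3600 * triage_share
seconds_per_alert = triage_seconds / alerts_per_day
print(round(seconds_per_alert, 1))  # 18.3 seconds per alert
```

Under those assumptions, a ten-person team gets well under half a minute per alert, which is why untuned tools that flood defenders with noise leave real attacks buried.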
Organizations need to better enable their blue teams so that they can get better alerts. Throwing more technology at security has not yet produced the intended results. Many organizations lack a way to test their alerting capabilities against new attack methods prior to being attacked. This means that the tools their defenders use are not optimized to respond to new attacks. In other words, they’ve put the goalies in the box without giving them the training and equipment.
Organizations need to consistently train their blue teams so that defenders have the ability to respond appropriately. They need to continuously validate their detection and investigation processes to ensure that they have optimized tools in ways that enable their defenders. Running exercises on a regular basis is one way to do this. Annual penetration tests only give point-in-time visibility. They lack the agility necessary to provide assurance over processes and technologies.
The technologies matter, but they only matter insofar as they enable the people using them. People will always be the first line of offense and last line of defense.
Resilience in the Face of Ransomware
No organization is immune to ransomware. From the largest enterprise to the smallest “mom and pop shop,” malicious actors will continue to target sensitive data. Five years ago, professionals would argue “trust but verify.” However, the changing digital nature of business operations, the reliance on data, and the drive for cloud efficiencies are changing the message to “never trust, always verify.”
While it might sound like sowing fear, uncertainty, and doubt, the reality of modern business is that systems will be infiltrated and data will be exfiltrated. Every organization needs to assume that it has been breached or will be breached in the near future. The focus needs to turn toward resiliency. Mitigating risk by limiting movement within networks and enabling defenders to contain a breach faster will be the way to undermine threat actors and protect data.
What do you think of the Colonial Pipeline attack? What can enterprise IT do to avoid another Colonial Pipeline incident? Please share your thoughts on any of the social media pages listed below. You can also comment on our MeWe page by joining the MeWe social network.
Karen Walsh – CEO and Founder of Allegro Solutions, is a data-driven compliance expert focused on cybersecurity and privacy who believes that securing today’s data protects tomorrow’s users. Karen has been published in the ISACA Journal, and her experience in cybersecurity centers around compliance. Her work includes collaboration with security analysts and ghostwriting for C-suite-level security leaders across a variety of internal and external vulnerability monitoring solutions. As a lawyer, she is deeply knowledgeable about security and privacy laws and industry standards, including GDPR, CCPA, and ISO. She is currently under contract with Taylor & Francis and is writing a book about cybersecurity for small and midsized businesses.