Niel Nickolaisen, an IT consultant and director of Enterprise Integrations at Utah State University, provides a front-row view of an increasingly common type of malicious cyberattack – and a company’s effective response.
Last year, the leaders of a global enterprise brought me on as an advisor to help them assess systems modernization options. Then, on the Saturday of a three-day weekend, the company’s IT team began receiving service desk reports that some system functionality was suddenly unavailable. Within 20 minutes, the team discovered that the entire virtualization environment had been corrupted and found a README file detailing the encryption, data exfiltration, and ransom demand.
The cybercriminals had exploited vulnerabilities that the company knew about but had deliberately decided not to address because of the cost of remediation. The attack affected the company’s primary workloads — Enterprise Resource Planning (ERP), Human Capital Management (HCM), payroll, and analytics. In addition, the attackers exfiltrated data — including a host of sensitive employee and customer information — that existed on vulnerable legacy file shares.
My work as a consultant stopped immediately, and the company’s leaders enlisted me to lead the attack response. I had a front-row seat to the incident, from the moment the attack was first detected to its ultimate resolution.
While it was not what I envisioned when I agreed to the engagement, it was an invaluable experience. The lessons I learned along the way may be helpful for any IT leader, since it’s a matter of when, not if, any of us will find ourselves dealing with the ramifications of a ransomware attack.
Response, Recovery, and Restoration Actions After the Attack
The hours and days following the discovery of a cyber incident are a critical period during which the victimized company can contain the damage, gather evidence, and implement its response plan.
Within the first few hours, we:
- Took all systems offline to prevent the attack from spreading.
- Contacted the company’s cyber forensics partner, which immediately began its investigation.
- Began evaluating the condition of the data backups for quality and integrity.
- Increased bandwidth to the data center and ordered the additional networking equipment needed to download massive amounts of backed-up data.
- Decided to rebuild every system from scratch in case the attackers had left behind dormant malware that scans might miss.
Within 24 hours, we:
- Notified employees that there had been an issue at the data center, impacting all services.
- Created an air-gapped environment in which we could (1) safely restore services long enough to extract system log files for forensics and (2) ultimately rebuild the systems.
- Made solid progress on forensics.
- Tested and validated the condition of the data backups.
- Prepared the company to switch to manual transaction processing. We would load these transactions once systems were recovered.
- Changed all system and administrator credentials and put in place a process for the monitoring and rotation of such credentials.
- Established regular meetings: twice daily for the incident response team; a daily recovery team meeting; and a daily update to executive leadership, the board of directors, legal counsel, and cyber insurance company.
During the first week, we:
- Started rebuilding and testing systems, prioritizing those necessary for business operations (the systems that generated and accounted for orders).
- Began the data backup download process. (If this recovery and restoration process were successful, the company would not need to pay off the attackers for decryption keys.)
- Engaged a third party to begin negotiations with the bad actors.
- Continued the forensics investigation.
During the second week, we:
- Zeroed in on the attack vector: the deprecated operating systems running the company’s legacy systems.
- Continued to download system data so that we could restore operations.
- Launched a team to determine the start-up sequence for enterprise systems like ERP (planning for the entry of manually generated transactions to ensure that system data was correct before resuming system transactions).
- Continued negotiations with the attackers.
During the third week, we started to bring systems back online, until we discovered that some depended on data stored on the file shares. Because the file share data was in a single folder structure, it was not possible to target the download of just the necessary files. Nothing was available until everything was available, which would take several days. Meanwhile, we finalized the forensics incident report. And we continued the ransom negotiations.
By early in the fourth week, everything was back online and operations returned to normal. We finished the ransom negotiations and paid only what was required to ensure that we did not lose any data – an important commitment we had made to the company’s employees and customers. Ultimately, the company paid the bad actors a fraction of the original ransom demand. We also finalized an updated cybersecurity plan (with budget and timeline) to reduce the company’s risk profile, presenting it to the board of directors and owners.
Post-Mortem: The Good, The Bad, and the Ugly
Like many companies, the firm had done a number of things right, but some weaknesses in its approach left it open to ransomware thieves. Taking a page from the classic Sergio Leone spaghetti Western film, I’ll share the good, the bad, and the ugly — in reverse order.
The Ugly: Legacy Systems Left Vulnerable.
Because the company had been teetering on the edge of financial distress, there were a number of decisions (or lack thereof) that the bad actors could use to their benefit. First, there were the legacy systems running on outdated operating systems — one of them public-facing.
The attackers exploited this vulnerability to get into the network and then exploited a vulnerability in another outmoded operating system to capture system credentials. Keys to the kingdom in hand, the attackers patiently reviewed and exfiltrated the file share data and encrypted critical systems. In addition, the exfiltrated file shares (containing sensitive data) existed in a single, immense folder structure, which delayed service restoration.
The Bad: Gaps in Cybersecurity Defenses.
Although the company had invested in some robust cybersecurity tools, there were significant gaps in capabilities, including event correlation and data exfiltration monitoring. In addition, while the company’s cybersecurity staff was skilled, there were not enough people to provide round-the-clock coverage in addition to their other duties.
In addition, the company’s critical workloads were running in a co-location data center with limited network bandwidth, which created a logjam during data restoration.
Likewise, there were a number of personnel bottlenecks that delayed recovery. Work to rebuild systems was slowed by the fact that people on the team had to do crazy things like sleep. There were just not enough people available to handle a rapid and complete system rebuild.
The system backup schedule was also inconsistent. The company backed up some systems daily, others weekly, and still others only monthly. Backups that weren’t current slowed the restoration of those systems.
The Good: Fast Action After Incident Detected, Solid Data Backup.
When the IT team realized there was an issue, it immediately contacted its cyber forensics partner to begin the investigation. The forensic team’s experience with ransomware was incredibly valuable, since no one on the company’s team had ever dealt with such a situation.
The company’s approach to data backups was solid. It had invested in a high-quality backup tool and kept a copy of the backups in an immutable storage location. This was key to operational recovery and significantly reduced the size of the ransom, since no decryption key was required.
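The article doesn’t name the backup product the company used, but the “immutable copy” idea is worth making concrete. Below is a minimal sketch of one way to get that property, using AWS S3 Object Lock via boto3; the bucket name, 30-day retention window, and file name are illustrative assumptions, not the company’s actual configuration.

```python
# Minimal sketch: keep an immutable copy of backups using AWS S3 Object Lock.
# Assumptions: AWS credentials are already configured, and the bucket name,
# 30-day retention window, and file name below are illustrative only.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-immutable-backups"  # hypothetical bucket name

# Object Lock must be enabled when the bucket is created.
# (Outside us-east-1, also pass CreateBucketConfiguration with a LocationConstraint.)
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Default retention: for 30 days, no one -- including an attacker holding
# administrative credentials -- can delete or overwrite the stored objects.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Every backup copied into the bucket now inherits the default retention.
with open("erp-backup.bak", "rb") as backup_file:  # hypothetical backup image
    s3.put_object(Bucket=BUCKET, Key="erp/erp-backup.bak", Body=backup_file)
```

The design point is that deletion protection is enforced by the storage layer itself, so even an attacker holding administrative credentials cannot destroy the backup copies before the retention period expires.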
Within just a few hours, the team was talking with the executive team, board of directors, company owners, its cyber insurance company, and external counsel. This communication was frequent and honest, and it fostered better decision-making during incident response and recovery.
Just as importantly, there was no finger-pointing after the attack. I have found that as soon as someone is blamed for a problem, that person becomes defensive, diverting their energy to self-protection rather than incident remediation. During difficult times, it’s essential that everyone is focused on resolution and restoration.
Lessons Learned – and Tips for the Future
My experience leading the response to this ransomware attack was a great opportunity to fully understand how best to navigate one.
My most important lesson was related to the ransomware business model. The negotiations with the attackers made clear what a ransom payment actually buys. In exchange for the ransom, the attackers offered:
- The decryption key
- To describe their method of attack
- To prove that they had not released the exfiltrated data and had destroyed it
- A guarantee that they would never attack the company again
As soon as we were confident that we could recover the systems and did not need the decryption key, the ransom demand dropped significantly. The attackers believed the cost of not being able to operate (e.g., take orders, provide services) was much larger than the cost of any data loss.
Based on that insight and my observations about what went well and what didn’t leading up to and after the incident, I can offer the following tips for other IT leaders to better gird their own organizations against ransomware attacks:
Consider the system restoration timeline. How long will it take to restore systems and data to pre-attack conditions, using current processes and technologies? Is that timeframe acceptable? In a ransomware incident, time is money. Minimizing the time to system recovery is the goal.
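A hedged back-of-the-envelope calculation makes the point; the 50 TB data volume and 1 Gbps link below are assumptions for illustration, not the company’s actual figures.

```python
# Minimal sketch: estimate the raw transfer time for restoring backups.
# The data volume and link speed are illustrative assumptions.
data_tb = 50        # total backed-up data to download, in terabytes
link_gbps = 1.0     # effective network throughput, in gigabits per second

terabits = data_tb * 8                  # terabytes -> terabits
seconds = terabits * 1000 / link_gbps   # terabits -> gigabits, then divide by Gbps
print(f"Raw transfer time: ~{seconds / 86400:.1f} days")  # roughly 4.6 days here
```

If the raw transfer alone takes days, the rebuild, validation, and manual-transaction catch-up all stack on top of it, which is exactly the logjam this company hit.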
Analyze, classify, and actively manage data that exists in less secure locations like file shares. In this situation, the company’s file shares had become a dumping ground for all types of sensitive data. Planning and launching a one-time data discovery, classification, and management project may be too daunting. Instead, create an ongoing process for the continuous discovery, classification, and management of data – particularly unstructured data. There are now AI-assisted tools that can help.
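As a minimal sketch of what that ongoing process can look like, the script below walks a mounted file share and flags files containing common sensitive-data patterns. The mount point, file types, regular expressions, and CSV report are all assumptions for illustration; they are not the tools or patterns the company actually used.

```python
# Minimal sketch: discover and classify sensitive data sitting on a file share.
# Assumptions: the share is mounted at SHARE_ROOT, and the patterns below
# (SSN-like and card-number-like strings) are illustrative, not exhaustive.
import csv
import re
from pathlib import Path

SHARE_ROOT = Path("/mnt/fileshare")          # hypothetical mount point
REPORT = Path("classification_report.csv")   # hypothetical output file

PATTERNS = {
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_like": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

TEXT_SUFFIXES = {".txt", ".csv", ".log", ".md"}  # only scan plain text in this sketch


def classify_file(path: Path) -> list[str]:
    """Return the names of the patterns found in the file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    return [name for name, rx in PATTERNS.items() if rx.search(text)]


def scan(root: Path) -> None:
    """Walk the share and write one CSV row per file with findings."""
    with REPORT.open("w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "findings"])
        for path in root.rglob("*"):
            if path.is_file() and path.suffix.lower() in TEXT_SUFFIXES:
                findings = classify_file(path)
                if findings:
                    writer.writerow([str(path), ";".join(findings)])


if __name__ == "__main__":
    scan(SHARE_ROOT)
```

Run on a schedule, even a simple report like this shows where sensitive data is accumulating so it can be moved, protected, or deleted before an attacker finds it.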
Assess every data location in terms of data recoverability. Recovery was slow because of the way the unstructured data was stored – in a big blob on a file share. Storing essential operational data in its own, separately restorable location would have saved the company days.
Make data management a primary candidate for modernization. We should treat data modernization just like we treat system modernization: build a list of data modernization candidates and a work plan to deliver it, and make that part of our ongoing work. This not only mitigates the risk and impact of ransomware attacks but also enables AI applications.
Don’t overlook low- and no-cost cyber risk mitigation strategies. The company had decided that some cyber mitigation opportunities were too expensive but overlooked some low- and no-cost actions that would have reduced its risk profile. For example, it could have managed administrative credentials more tightly and put simple access controls (like MFA) in front of the public-facing systems with the known vulnerabilities.
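As one hedged example of the low-cost end of that spectrum, the sketch below audits an exported list of administrative accounts and flags credentials that are overdue for rotation. The CSV export, its column names, and the 90-day threshold are hypothetical; substitute whatever your directory or password vault can produce.

```python
# Minimal sketch: flag administrative credentials that are overdue for rotation.
# Assumptions: admin_accounts.csv is a hypothetical export with "account" and
# "last_rotated" (YYYY-MM-DD) columns; the 90-day policy is illustrative.
import csv
from datetime import date, datetime

MAX_AGE_DAYS = 90  # illustrative rotation policy


def overdue_accounts(export_path):
    """Return (account, age_in_days) pairs whose credentials exceed MAX_AGE_DAYS."""
    today = date.today()
    flagged = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            last_rotated = datetime.strptime(row["last_rotated"], "%Y-%m-%d").date()
            age = (today - last_rotated).days
            if age > MAX_AGE_DAYS:
                flagged.append((row["account"], age))
    return sorted(flagged, key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    for account, age in overdue_accounts("admin_accounts.csv"):
        print(f"{account}: credential is {age} days old and should be rotated")
```

None of this requires new spending; the hard part is making the review a routine habit rather than a one-time cleanup.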

Written by Niel Nickolaisen
Niel Nickolaisen is Director of Enterprise Integrations at Utah State University and is leading the implementation of the processes and systems to enable comprehensive constituent lifecycle management at the university. The co-author of The Agile Culture: Leading Through Trust and Ownership and Stand Back and Deliver, he is an advisor to several technology start-ups and sits on the board of a start-up accelerator. Previously, Niel held several technology and operational executive positions. Nickolaisen has an MBA from Utah State University, an M.S. in engineering from MIT, and a B.S. in physics from Utah State University.