Incident simulation: practice makes perfect

Best practice

08:00 Friday, 17 September 2021

UK Cyber Security Council

When we develop a new piece of software, we test it before going live. When we configure a new firewall, we test it before putting it into production. When we make a change to a key system, we test it before letting the users back in. Similarly, when we write a new policy or procedure we should test it - and this is particularly important for our incident response plan.

The moment someone invokes an incident response, we are in a high-stress environment. By definition we have something very bad going on, we do not know its magnitude, the underlying cause is not yet known and the extent of the potential damage is anyone’s guess. A cyber security attack may well involve breaches of both personal data and sensitive company data, and could well involve the attacker having privileged access to some or all of our systems.

An organisation’s incident response plan should detail the initial steps of the response, because they seldom vary, along with key information such as call cascade lists and the related contact details. It will also detail roles and responsibilities, reporting requirements, stakeholders who must be updated and authorities that you may have to communicate with (for example the police, the Information Commissioner or a regulator). These elements are straightforward to test - verifying the phone numbers on the cascade list, for example, takes minutes and can be done frequently. What’s harder to test are the elements of an incident response that cannot be pre-planned - the elements where how the team acts depends largely or entirely on the circumstances of the incident.
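The routine checks on a cascade list can even be partly automated. As a minimal sketch - the data format, role names and phone pattern here are all hypothetical, and a shape check is no substitute for actually ringing the numbers periodically - a short script can at least flag missing, malformed or duplicated entries:

```python
import re

# Hypothetical cascade list: (role, name, phone). In practice this would be
# loaded from wherever the incident response plan keeps its contact details.
CASCADE = [
    ("Incident Manager", "A. Example", "+44 20 7946 0000"),
    ("Comms Lead", "B. Example", "+44 20 7946 0001"),
    ("CISO", "C. Example", ""),  # deliberately empty, to show a failure
]

# Loose sanity check only: a leading digit (optionally prefixed with +),
# then 8-14 further digits or spaces. Not full E.164 validation.
PHONE_RE = re.compile(r"^\+?\d[\d ]{8,14}$")

def check_cascade(entries):
    """Return (role, problem) tuples for entries failing basic checks."""
    problems = []
    seen = set()
    for role, name, phone in entries:
        phone = phone.strip()
        if not phone:
            problems.append((role, "missing phone number"))
        elif not PHONE_RE.match(phone):
            problems.append((role, "malformed phone number"))
        elif phone in seen:
            problems.append((role, "duplicate phone number"))
        seen.add(phone)
    return problems

if __name__ == "__main__":
    for role, problem in check_cascade(CASCADE):
        print(f"{role}: {problem}")
```

Run against the sample list above, this would flag only the CISO entry; whether the remaining numbers still reach the right people is something only the periodic live test of the cascade can confirm.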

There is an entire industry built around simulations of serious incidents, and cyber breach incidents form a major chunk of what the firms in this industry do. While one could potentially run simulations internally, the value would be limited as some of the individuals who would normally be part of the response team would instead be occupied running the exercise. For best effect, then, it makes sense to employ a company that does it for a living.

Good simulations can accurately be described as “not cheap”. At the entry level are basic “desktop” exercises, where the organiser presents scenarios and the response team describe what action they would take; at the other end of the scale are full simulations where the team is given access to SIEM tools, service desk ticketing systems, email servers and the like configured in a “sandbox” environment with synthetic data, and the host company has staff who call the team masquerading as members of the Press or the police force and asking realistic questions at appropriate moments.

Exercises generally start off with a simple scenario and then grow over time, with the organisers adding complexity layer by layer. A simple ransomware attack is followed up by a report that the online shop is running slowly. A local firm then gets in touch to say their office is flooded and asks if they can use some of your seats in the business continuity suite. Soon afterwards the Security Operations Centre tells you that some customers’ personal data has been discovered on the Dark Web by a member of the public. Then the local paper calls to ask for a comment on a news story they are about to publish about your company having told the abovementioned local firm to get lost when they asked to share the BC suite. The exercises last for several hours and are designed to make people work together and make decisions for which there is usually no right answer.

For this reason, no incident response team ever comes out of a simulation exercise with a perfect score because there always has to be compromise: for example, while you can contain ransomware by taking your servers offline, this brings the business to a halt and impacts revenues. And no incident response team works perfectly because there will always be a raft of different personalities among people who, in many cases, seldom work particularly closely with each other day to day but who have suddenly been packed into a small room and told to work together to make a number of difficult decisions in an ever-changing scenario.

But the benefits of simulations are twofold.

First, the organisers will have run the scenario - or one like it - for other organisations, or perhaps (if you’re a large enterprise) for the cyber teams of other regions. In the wash-up exercise they will walk through the approach you took and the decisions you made, and can put everything in the context of how others behaved: it might make you think differently next time.

Second, and most importantly, no matter how effective (or ineffective) your response was, you will learn from it. Even simple exercises yield plenty of lessons: areas where you could have made a better choice (so you can do better next time) and areas where the approach you took worked really well (and hence you’ll have a good starting point for next time).

The more incident responses you experience, the better you get at incident responses. And although simulations come at a cost, this is probably preferable to waiting until the next real crisis for your next learning opportunity.