If a user makes a mistake, ask: What could the company have done better?
12:00 Saturday, 21 November 2020
UK Cyber Security Council
One of the most important elements of cyber security is user training. Anyone in the security industry will have experienced improvements in user behaviour after introducing a comprehensive, ongoing awareness training campaign.
And the statistics back it up: one survey revealed, for example, that 90% or more of successful attacks are the result of phishing campaigns (which rely on the user falling for a bogus email and inadvertently taking an action that lets an intruder in). If training helps our staff make fewer such mistakes, that can only be a good thing.
The problem is, however, that with all the training in the world, people will make mistakes. This is exemplified in another piece of research, which found that even when people had been given training and were aware that clickable links in inbound emails can pose a threat, 56% of them clicked on them anyway.
At a human level, what should we do about someone who causes an infection? In an HR sense, would this constitute gross misconduct and be a sackable offence?
Let us look at another example. Say a call centre operator is persuaded by a caller, who is masquerading as someone else, to divulge sensitive information relating to the person they are pretending to be – and that the operator did so because they failed to follow the correct procedure for thoroughly identifying the caller. On the surface this also sounds like an offence that should involve HR.
It is perfectly reasonable in both cases to have disciplinary action as one of the potential outcomes. Before exercising that option, though, we should first consider whether it is fair to do so.
Let us take the second example first. The operator in this example will almost certainly have been using a CRM system of some sort. Why, then, was he or she able to divulge sensitive data without having properly authenticated the caller? Just as the CRM system will have taken the operator through a fairly rigorous authentication process to log in, common sense would suggest that it should enforce a solid, mandatory and thorough caller identification step each time a customer call comes in.
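To illustrate the principle, here is a minimal sketch of what "enforce" could mean in software: sensitive fields stay locked until the verification step has passed, so the operator cannot divulge them even by mistake. All names here (CustomerSession, verify_caller, the field names) are hypothetical, not taken from any real CRM product.

```python
# Hypothetical sketch: sensitive customer data is gated behind a mandatory
# caller-verification step. The system, not the operator, decides whether
# the data can be read out.

SENSITIVE_FIELDS = {"date_of_birth", "address", "account_balance"}


class CustomerSession:
    """One customer call; every session starts unverified (locked)."""

    def __init__(self, record):
        self._record = record
        self._verified = False

    def verify_caller(self, answers, expected):
        # e.g. memorable word plus chosen digits of a PIN; here a simple
        # exact match stands in for the real checks.
        self._verified = all(answers.get(k) == v for k, v in expected.items())
        return self._verified

    def get_field(self, name):
        # Non-sensitive fields are readable; sensitive ones require
        # verification first, and the refusal is enforced in code.
        if name in SENSITIVE_FIELDS and not self._verified:
            raise PermissionError(f"caller not verified; cannot read {name!r}")
        return self._record[name]
```

With a design like this, "the operator forgot to check" simply cannot result in a disclosure: `get_field("date_of_birth")` raises an error until `verify_caller` has succeeded for that session.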
Now let us go back to our phishing example. If one of our users is part of the 56% of people who clicked a link despite the training, the first question we should be asking ourselves is whether it is acceptable that the phishing email reached the user in the first place. Are our spam filters up to scratch – if we have any at all? Do our mail servers use technologies such as DKIM and SPF to defend against forged inbound email?
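As a small illustration of the SPF part of that question: a receiving mail server looks up the sending domain's SPF TXT record and, among other things, checks how the policy ends – "-all" tells receivers to reject mail from unauthorised sources outright, while "~all" only soft-fails it. The sketch below inspects a record string directly (a real check would fetch it with a DNS TXT lookup and evaluate each mechanism, per RFC 7208); the example records are illustrative.

```python
# Illustrative only: find a domain's SPF policy in its TXT records and
# report the final "all" qualifier, which says what receivers should do
# with mail from sources the record does not authorise.

def spf_policy(txt_records):
    """Return the SPF 'all' term (e.g. '-all', '~all'), or None if no SPF record."""
    for rec in txt_records:
        if rec.startswith("v=spf1"):
            for term in rec.split()[1:]:
                if term.endswith("all"):
                    # '-all' = hard fail, '~all' = soft fail,
                    # '+all' / 'all' = effectively allow anything
                    return term
    return None  # domain publishes no SPF record at all


print(spf_policy(["v=spf1 include:_spf.example.net -all"]))  # -all
print(spf_policy(["just an unrelated TXT record"]))          # None
```

A domain with no SPF record at all (the `None` case) is exactly the kind of gap that makes forged "from" addresses easy, which is why the question belongs to the organisation rather than the user.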
The same goes for user-led outbound data leakage. Few of us can claim never to have sent an email to the wrong recipient – either through carelessness or because the email application wrongly predicted the recipient based on the first few characters we typed. Many organisations disable this auto-complete feature, and others use machine-learning software to identify what it considers to be unusual behaviour. Security is, however, a risk-based world, and there are many organisations that have decided that, say, the convenience of auto-complete outweighs the risk of a breach. And this is absolutely fine – that is exactly what a risk-based approach is about – but if one's organisation has taken that approach, it would perhaps be a little unfair to fire someone for an inadvertent breach where there was no safety net.
In the event of a user-led breach, then, the first question we should ask ourselves is: what could we, the organisation, have done that would either have prevented this breach or at the very least made it less likely and/or reduced the potential impact?
If, with hand on heart, we are convinced that we could not reasonably have done more, and that the training we have provided was up to the job, then perhaps a disciplinary approach is a fair option.
If, on the other hand, we consider on reflection that we could have done better, the ball is in our court. This doesn't mean, of course, that we should expect ourselves – or be expected – to spend extravagantly on systems and services that would give a modest improvement for an enormous cost. But if there are things that, if we are being honest with ourselves, we could have done reasonably inexpensively and straightforwardly, then the answer is not to punish our staff, but to do better ourselves.