Do we react too quickly?
08:30 Monday, 10 January 2022
UK Cyber Security Council
This article is being written about a week after the Log4shell vulnerability became probably the most widely trumpeted security issue of recent years - way more so than even the Colonial Pipeline attack and the likes of Kaseya and SolarWinds.
Dealing with the call from your CEO who just read about a critical vulnerability, or the notice you spotted on your threat intelligence service, or even the email from the raft of ambulance-chasing vendors who've spammed everyone they know in the hope of some business, is an acquired skill.
What do we do when we get the announcement? Call in the troops and start checking versions, checking settings, upgrading libraries, communicating with the executive team and the board. But why?
In part it's because the vulnerability came out with a severity score of 10 on the CVSS (Common Vulnerability Scoring System) scale. The CVSS combines the critical factors that relate to vulnerabilities, including the ease of exploitation, the extent to which exploitation can impact the confidentiality, availability and integrity of the company's data, and how locally or widely its effects can spread.
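To make that combination of factors concrete, here is a minimal sketch of the CVSS v3.1 base-score calculation. The metric weights and rounding rule follow the published v3.1 specification, and the vector shown at the end is the published one for Log4shell (CVE-2021-44228); treat this as an illustration rather than a replacement for an official calculator.

```python
# Sketch of the CVSS v3.1 base-score calculation.
# Metric weights as defined in the v3.1 specification.
ATTACK_VECTOR = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
ATTACK_COMPLEXITY = {"L": 0.77, "H": 0.44}
PRIV_REQUIRED = {  # the weight for Privileges Required depends on Scope
    "U": {"N": 0.85, "L": 0.62, "H": 0.27},
    "C": {"N": 0.85, "L": 0.68, "H": 0.50},
}
USER_INTERACTION = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}  # Confidentiality/Integrity/Availability

def roundup(x: float) -> float:
    """Round up to one decimal place, as defined in the v3.1 spec."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, scope, c, i, a) -> float:
    # Impact sub-score: how badly C, I and A can be hurt.
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope == "U":
        impact = 6.42 * iss
    else:  # Scope changed: effects spread beyond the vulnerable component
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    # Exploitability sub-score: how easy the vulnerability is to trigger.
    exploitability = (8.22 * ATTACK_VECTOR[av] * ATTACK_COMPLEXITY[ac]
                      * PRIV_REQUIRED[scope][pr] * USER_INTERACTION[ui])
    if impact <= 0:
        return 0.0
    total = impact + exploitability
    if scope == "C":
        total *= 1.08
    return roundup(min(total, 10))

# Log4shell (CVE-2021-44228): AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "C", "H", "H", "H"))  # 10.0
```

Network-reachable, trivially exploitable, no privileges or user interaction needed, scope changed, and total loss of confidentiality, integrity and availability: that is how a 10 happens.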
But enough about how CVSS works, and back to the skills element: how should we be dealing with these issues from a cyber security point of view? By this we mean let's set aside the incident response elements of reacting to important new security issues - as we've said a number of times in the articles on this site, incident response is a science and a skill in its own right - and ask: when we have others managing the incident for us, what are we as subject matter experts doing? Are we, as suggested earlier, checking versions, checking settings, upgrading libraries and so on?
Well, eventually we might be. But before we do, we should be considering the information that has led us to be aware of the issue. We should be closing our ears to the incident managers for a few minutes, reading what we're being told, and asking the questions: "So what?", and "Who says?".
Taking Log4shell as the example du jour, it scores the worst possible rating as calculated by the gurus of CVSS scoring. That's a big deal … but is it a big deal to us? We're being told that it is, but are we actually running anything that's vulnerable? Do we have that package at all? It's easy to say to the incident team: yes, we run that package, and see the look of panic on their faces as if the world is about to end. We need them to trust us to go and check whether we're running the vulnerable version and whether it's configured, or being used, in a way that makes the vulnerability exploitable. Yes, it's a bad vulnerability, but who says it's bad for us?
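"Go and check whether we're running the vulnerable version" can be as simple as an inventory sweep. The sketch below is a hypothetical triage helper, not a complete scanner: it walks a directory tree for log4j-core jars with a plain numeric version in the filename (pre-release names like 2.0-beta9 are ignored for simplicity) and flags the CVE-2021-44228 range, 2.x up to and including 2.14.1. A real check would also have to look inside "shaded" jars that bundle log4j classes under another name.

```python
import re
from pathlib import Path

# Matches e.g. "log4j-core-2.14.1.jar"; pre-release suffixes are not handled.
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def find_log4j_jars(root):
    """Return (path, (major, minor, patch)) for every log4j-core jar found."""
    hits = []
    for path in Path(root).rglob("log4j-core-*.jar"):
        m = JAR_PATTERN.search(path.name)
        if m:
            hits.append((path, tuple(int(g) for g in m.groups())))
    return hits

def is_vulnerable(version):
    """True for 2.x releases at or below 2.14.1 (the CVE-2021-44228 range)."""
    major, minor, patch = version
    return major == 2 and (minor, patch) <= (14, 1)
```

Only if this sweep actually turns something up does the conversation with the incident team move on to configuration and exploitability.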
As for "so what?"… your average vulnerability has a lot of caveats to go with it. And the main caveat is: if a bad actor can't actually communicate - directly or indirectly - with the vulnerable system, it makes the existence of that vulnerability largely null and void. So yes, even if we run the version of a product or service that has its front doors and windows wide open, the chances are we have other stuff in our world that will defend it from attack. If the attack can't get in through the firewall, it can't get to the vulnerable system and so can't do anything bad to it. Yes, of course it would be better if we weren't running that dodgy version but the whole point of defence in depth - multiple layers of security - is that it defends us against a hole in one layer by stopping the traffic at other layers.
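The defence-in-depth argument can be modelled very simply: an attack only matters if it traverses every layer between the attacker and the vulnerable system. The layer names and rules below are invented for the sketch and don't describe any real product.

```python
# Illustrative model of the "so what?" test: an attack is stopped at the
# first defensive layer whose rule rejects it.

def attack_reaches(layers, attack):
    """Traverse defence layers in order; return (reached, stopped_by)."""
    for name, allows in layers:
        if not allows(attack):
            return False, name  # stopped at this layer
    return True, None  # every layer let it through

# Example: a firewall that only permits inbound 443, in front of a network
# segment that only accepts traffic originating in the DMZ.
layers = [
    ("perimeter-firewall", lambda a: a["port"] in {443}),
    ("internal-segmentation", lambda a: a["source"] == "dmz"),
]
print(attack_reaches(layers, {"port": 389, "source": "internet"}))
# (False, 'perimeter-firewall')
```

The point of multiple layers is exactly this: the attack in the example never tests whether the target itself is vulnerable, because an outer layer drops it first.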
So yes, we do often react too quickly, because we need to learn - or even, if we've been in the industry for a while, re-learn - that something generally bad to the world is not necessarily specifically bad to us, and gain or regain the skill of dealing with it promptly, calmly and sensibly.