Patching is about more than Windows

Best practice

08:00 Friday, 22 October 2021

UK Cyber Security Council

We’ve all heard of Patch Tuesday. It’s the second Tuesday of each month, when our colleagues running our Windows servers and user endpoints take that month’s functionality and security updates from Microsoft and apply them across the estate. With the various tools at our disposal thanks both to Microsoft and to third-party software distribution vendors, the process is generally extremely painless and straightforward.

Why, then, do most companies make such a complete hash of patching everything else?

I used to work for a company that had cabinets and cabinets full of servers from a single market-leading vendor, and thanks to the law of averages, things sometimes stopped working properly. The uninitiated would pick up the phone to the vendor support line to report the fault, and the following conversation would ensue:

Support Person: “Have you sent us a DSA log?”

Our tech: “Errrr … no …”

SP: “OK, can you please run a DSA log and send it in; we’ll analyse it and get back to you”.

DSA was the Dynamic System Analysis; it would take some time to look at all the diagnostic elements of the server, network interface configs, RAID controllers, the lot. You sent it to the vendor and the inevitable response came: can you please update the firmware on <insert components here> and get back to us if the fault still exists.

And this is no criticism of this particular vendor – because if you, as a hardware manufacturer, have published software and firmware patches that address known issues, it’s perfectly fair to expect the customer to install them before complaining that the kit doesn’t work. On several occasions I’ve seen equipment fail when, had it been running the latest firmware, it would have kept right on humming. And believe me, it’s hard to explain to your management that the two-hour outage on their core billing system was entirely preventable and that the firmware patch had existed for over a year.

Why am I writing this in a security blog? Easy: remember that in the classic CIA security triad, the A is availability. Most of the time, low-level issues in firmware and drivers tend to cause crashes due to software bugs … but some of the time they expose vulnerabilities that can be exploited to deny access or even to break into systems. As an example of the former, I remember a brand of Unix server that one could cause to reboot, thanks to a funky bug, simply by issuing a “ping” command … but only if it had two Ethernet interfaces. And the most famous case of the latter is the iLO (remote administration) bug of recent years, which could be exploited simply by firing a request at the interface containing the character sequence “AAAAAAAAAAAAAAAAAAAAAAAAAAAAA” (which, by coincidence, was the exclamation of most server engineers when they read about the bug in the press).
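That iLO flaw was widely tracked as CVE-2017-12542 and reported as fixed in iLO 4 firmware 2.53, which makes it a neat illustration of the point: the defence is simply knowing what revision your kit is running. As a minimal sketch (the XML below mimics the kind of inventory record an iLO-style management interface can return; the sample values are made up for illustration), a version check is nothing more than parsing the reported revision and comparing it against the fixed release:

```python
# Sketch: decide whether a reported iLO 4 firmware revision predates the
# 2.53 release that fixed the "29 A's" authentication bypass
# (widely tracked as CVE-2017-12542). Sample data is illustrative only.

import xml.etree.ElementTree as ET

SAMPLE_XMLDATA = """<RIMP>
  <MP>
    <PN>Integrated Lights-Out 4 (iLO 4)</PN>
    <FWRI>2.50</FWRI>
  </MP>
</RIMP>"""

def firmware_revision(xmldata: str) -> str:
    """Pull the firmware revision string out of the XML inventory blob."""
    return ET.fromstring(xmldata).findtext(".//FWRI")

def is_vulnerable(revision: str, fixed_in: str = "2.53") -> bool:
    """Compare dotted revisions numerically, e.g. '2.50' < '2.53'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(revision) < as_tuple(fixed_in)

rev = firmware_revision(SAMPLE_XMLDATA)
print(rev, "vulnerable" if is_vulnerable(rev) else "patched")  # 2.50 vulnerable
```

Note the numeric comparison: naïve string comparison would wrongly rank a hypothetical “2.9” above “2.10”, which is exactly the sort of quiet bug that makes a compliance report lie to you.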

So, it’s valid for a security blog … what does it have to do with skills? Easy: most server engineers have been trained or educated in why and how to run their Windows patching. Most network engineers have a decent level of knowledge of how to (for example) upgrade their Cisco ASA firewall clusters’ firmware without downtime. But how many of us are taught to monitor firmware revisions across a vast variety of diverse equipment, to run a regime of upgrades across the entire estate whilst planning adequately to minimise unwanted downtime, or to admit to management, in a sufficiently convincing tone, that yes, we know the IT and security guys never bothered doing boring, time-consuming firmware upgrades on kit that has been running for years without them?
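None of that is conceptually hard; it just needs doing systematically. Stripped to its core (and with every device name, component type and version number below invented purely for illustration), a firmware regime boils down to comparing what the estate reports against a target baseline and scheduling whatever falls short:

```python
# Sketch of the core of a firmware-compliance check: compare the revision
# each device reports against a target baseline and list what needs work.
# All hostnames, component types and versions are invented for illustration.

TARGET_BASELINE = {          # component type -> minimum acceptable revision
    "raid-controller": (7, 10),
    "nic": (4, 2),
    "ilo": (2, 53),
}

ESTATE = [                   # what an inventory scan might report
    {"host": "db01", "component": "raid-controller", "revision": (7, 8)},
    {"host": "db01", "component": "ilo", "revision": (2, 55)},
    {"host": "web03", "component": "nic", "revision": (4, 2)},
]

def upgrade_backlog(estate, baseline):
    """Return (host, component) pairs running below the baseline revision."""
    return [
        (item["host"], item["component"])
        for item in estate
        if item["revision"] < baseline[item["component"]]
    ]

print(upgrade_backlog(ESTATE, TARGET_BASELINE))
# db01's RAID controller is the only component below baseline here
```

The hard part in real life is not the comparison but feeding it: getting every RAID controller, NIC and management processor to report its revision in the first place, which is precisely the skill the paragraph above says nobody teaches.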

Patching is not only important, then, but it’s also non-trivial to do right. Which means that someone who knows how to do it right stands head and shoulders above all those who don’t. And who wouldn’t want that advantage?