Trusting your users more

Opinion

08:00 Monday, 15 November 2021

UK Cyber Security Council

Users get a bad press. Wherever we turn, users are cited as being the primary threat to our organisations' cyber security - whether by falling for phishing attacks, sending email to the wrong recipient or unwittingly sharing data with people who are not entitled to see it. The result of this negative image is that we, as our organisations' cyber security specialists, take control away from the users. If we can't trust users not to make mistakes, we take away as many opportunities as we can for them to do so.

And this is sometimes the right thing to do - specifically, it is the right thing to do where we reduce cyber security risk without making anyone's job more difficult (or, even better, reducing the risk and making people's jobs easier). Take, for example, an organisation whose contact centre staff had a strict process for authenticating callers but which failed a social engineering test because the CRM application permitted users to access data regardless of whether the caller had been authenticated. The application was changed so that at the beginning of any call it prompted the user with the authentication questions, and would not permit data access until the right answers had been entered. User stress fell, as the contact centre staff knew they could no longer make a mistake, and productivity was not impacted.
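That change amounts to the application enforcing the control rather than relying on the agent's memory. As a minimal sketch of the idea - the class, method and field names here are hypothetical, not taken from any particular CRM product - the data-access layer can simply refuse to return a record until the authentication questions have been answered:

```python
# Minimal sketch: the application will not release customer data until the
# caller has passed the authentication questions. Names are illustrative only.

class CallSession:
    def __init__(self, expected_answers: dict[str, str]):
        self.expected_answers = expected_answers  # e.g. taken from the customer record
        self.caller_authenticated = False

    def authenticate(self, given_answers: dict[str, str]) -> bool:
        # The agent is prompted for the caller's answers at the start of the call.
        self.caller_authenticated = all(
            given_answers.get(q, "").strip().lower() == a.strip().lower()
            for q, a in self.expected_answers.items()
        )
        return self.caller_authenticated

    def get_customer_record(self, customer_id: str) -> dict:
        # Data access is blocked, not merely discouraged, until authentication succeeds.
        if not self.caller_authenticated:
            raise PermissionError("Caller has not been authenticated")
        return load_record(customer_id)  # stand-in for the real CRM lookup


def load_record(customer_id: str) -> dict:
    # Placeholder for the real data-access call.
    return {"id": customer_id}
```

The detail does not matter; what matters is that skipping the authentication step is no longer possible, because the system will not hand over data without it.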

Instances like the above are rare, though. Only in a minority of cases are we able to increase security without someone somewhere having more work to do. Most of the time we are adding work and delay when we add security to our systems. Approaches such as forcing strong passwords or adding Multi-Factor Authentication (MFA) to internet-facing systems cause inconvenience for users. And adding monitoring systems in the hope of identifying security flaws or unwanted behaviour imposes an overhead on the security or risk teams, as someone has to check the outputs of the monitoring system.

Let us rewind for a moment, though, to the first paragraph. We talked of not being able to trust users not to make mistakes. We did not, however, say that we don't trust the users. Yes, in a small minority of cases, security breaches resulting from our people's actions are planned and deliberate. But most of the time they are not - they are accidental and unwitting, and the users are mortified when the security team call on them.

Additionally, if an employee wants to make a deliberate attempt to, say, exfiltrate a mass of customer data for illicit use, he or she will probably find a way to do it. By all means use email filtering to check outbound emails for lists of customer details or credit card numbers, but most of the time it is simple to obfuscate the data prior to emailing it, so the robot on the email server will be entirely unaware that something unwanted is leaving the business. And anyway, unless you ban mobile phones entirely and police your office intensively there is always the potential for a bad-acting user simply to photograph data for exfiltration, and to decode it later using an Optical Character Recognition (OCR) tool.
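To illustrate how thin that protection is, here is a deliberately naive sketch of the kind of pattern-matching an outbound filter might do - the regular expression and function names are illustrative, not the behaviour of any real product - and how trivially encoding the data slips past it:

```python
import base64
import re

# Naive outbound-mail check: look for runs of 13-16 digits that might be card
# numbers. This is a sketch of generic pattern matching, not any vendor's filter.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def looks_sensitive(body: str) -> bool:
    return bool(CARD_PATTERN.search(body))

plain = "Card: 4111 1111 1111 1111, expires 12/26"
obfuscated = base64.b64encode(plain.encode()).decode()

print(looks_sensitive(plain))       # True  - the filter catches the raw data
print(looks_sensitive(obfuscated))  # False - trivially encoded, it sails through
```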

Working on the premise that our users are predominantly honest and trustworthy, we can use these values to our - and their - advantage.

To do this, we need to understand what part of a mistake is the bad part. This may sound odd, because it is easy to convince oneself that the whole of a mistake is bad - it's a mistake, after all, and most of us have been brought up believing that getting things wrong is a bad thing. The late Sir Ken Robinson famously said, in a talk about education stifling creativity: "We stigmatize mistakes". Sir Ken was talking on a slightly different subject from the one covered by this paper, but what he said is universally correct.

In fact, we make mistakes all the time in our day-to-day lives. Running out of tea bags. Pulling the wrong debit card out of our pocket and having the transaction declined. Forgetting to get the washing in before leaving for work despite the forecast predicting rain. And there is little we can do to prevent such mistakes aside from telling ourselves to be more careful and thoughtful in future. But this is not a big deal because the outcome is (unless we are particularly unlucky in the first example) not particularly severe - usually embarrassment at the declined transaction or annoyance that we will have to drink coffee for a change. And this - the outcome - is what has the potential to be the "bad part" of a mistake.

Let us consider a potentially more dangerous mistake we might make in our day-to-day lives. Most of us have, when driving somewhere, become distracted and drifted over the white centre line of the road. Once in a while it ends catastrophically, but ninety-nine times out of a hundred we get away with it - unless a police officer spots what we have done and books us for driving without due care.

The potential for punishment is a deterrent, of course. While we drive carefully because we don't want to hurt ourselves or others, part of our motivation is that we know there is the potential for punishment if we make a mistake. And the same applies to cyber security: users exercise a good deal of care in how they work because they want to do things right and they are keen not to harm their customers through (for example) personal data breaches. But they also know that their company's policies state that they can be disciplined - perhaps even fired - for making a security-related mistake.

But what if we could adjust our approach and focus not on the action of the user but on the actual outcome of that action?

Let's look at our distracted driver again. We could prevent catastrophe by taking away drivers' licences - but this would be disproportionate as the majority of the population will have made the same mistake at some point. What if, rather than preventing the action - becoming distracted and drifting off our intended line - we do something to prevent the bad outcome? And this is exactly what has happened: the motor manufacturers have introduced technology that can detect when you are drifting toward the edge of your lane and give you a visual and audible warning. Is this an invitation to drive less carefully? In some cases very probably, but the evidence shows that "lane departure warning" systems have a significant aggregate benefit: according to a US survey, about 85,000 accidents were avoided in 2015 thanks to such systems.

Back to cyber security, then, and we are beginning to see this approach in the security systems we use in our organisations. Examples include the Data Leakage Prevention (DLP) tools from the likes of Egress and Tessian, which (among other things) examine the email messages users are sending - after they have hit "Send" - and warn of potential problems. "Auto-complete" errors are the most common issue: the user types the first few letters of the intended recipient's email address, and the email program automatically fills in the rest of the address based on whom the user has emailed in the past. And sometimes it picks the wrong one. DLP tools analyse historic email behaviour for each user and are able to flag messages that look "wrong" in some way.

But the important point here is that the tool does not flag it to the security team, or to the risk team: it flags it to the user. The DLP tool - a plug-in in the email program - kicks in when the user hits "Send", and if its algorithm sees something suspicious it pauses the sending of the email and pops up an alert on the user's screen. The user is told that, say, one of the recipients' email addresses has a different domain from the others, or that the "fred.smith@..." in the recipient list is not the one you usually send to but a namesake at a different company.

And this is the key element: the user gets the warning, and the user has the option to hit "Send anyway" (or, of course, to hit "Don't send" and to edit the recipient list).
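The underlying check need not be sophisticated. The sketch below is illustrative only - it is not how Egress, Tessian or any other vendor actually implements their products - but it captures the shape of the idea: compare the recipients of an outgoing message with what the sender normally does, warn on anything unfamiliar, and leave the final decision with the user:

```python
# Illustrative sketch of a "misdirected email" check in the spirit of DLP
# send-time warnings. Function names and logic are assumptions, not a vendor API.

def domain(address: str) -> str:
    return address.rsplit("@", 1)[-1].lower()

def flag_suspicious(recipients: list[str], history: set[str]) -> list[str]:
    """Return recipients that look 'wrong': never emailed before, or on a
    different domain from everyone else on the message."""
    domains = [domain(r) for r in recipients]
    warnings = []
    for r, d in zip(recipients, domains):
        if r.lower() not in history:
            warnings.append(f"{r}: you have never emailed this address before")
        elif domains.count(d) == 1 and len(set(domains)) > 1:
            warnings.append(f"{r}: different domain from the other recipients")
    return warnings

# The crucial design choice: warn the user, then let the user decide.
def on_send(recipients: list[str], history: set[str]) -> bool:
    warnings = flag_suspicious(recipients, history)
    if not warnings:
        return True  # nothing unusual - send immediately
    print("This message looks unusual:")
    for w in warnings:
        print("  -", w)
    return input("Send anyway? [y/N] ").strip().lower() == "y"
```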

Trusting the user in this way is incredibly powerful, because everyone wins. In our DLP example the users receive the occasional pop-up message, but they don't mind because they feel that, on balance, the DLP tool "has their backs" - and the moment the tool saves them from a bad outcome such as a personal data breach, they become fans. And the security team are happy because users are suffering fewer bad outcomes, with next to zero effort required on their part.

We need, however, to promote this culture of trust. By default, we do not trust our staff - even before they join the company we check their references and determine whether they have a criminal record. We would not dream of taking an individual's word that they are not an axe murderer and that they really have worked where they say they have. But this lack of trust at the beginning of an employee's engagement gives us all the more reason to trust them once they have joined.

As security specialists, trusting staff to behave securely is a leap of faith. But it is a leap we have to take, because with the right help and the right tools, trusting our users allows them to help us in our quest for security.