Phil Venables

Human Error

Human error is not an explanation; rather, it is something to be explained. In analyzing and learning from incidents, not just security incidents, you should never be satisfied with anyone closing out a post-mortem or issue as having a root cause of human error. In every incident or close-call I’ve ever worked on where the root cause was assigned to human error, it actually turned out, when you looked at it more deeply, to be a situation where the humans were in fact performing heroics. These were heroics in the face of a poorly designed environment, a terrible user interface, system operation issues or other factors. In reality, what was amazing in all these cases was not that there was the occasional error but that there weren’t a lot more - a hell of a lot more. The humans weren’t the problem, they were the saviors already working to mitigate broader root causes. Let’s look at some examples I’ve seen over the years.

1. Logical Circuit Breakers

There are many types of processes where "circuit breakers" operate to reduce the risk of runaway algorithms or other automation. Some common examples include virtual or physical server re-provisioning, imaging or reboots as well as automation to revoke access when there are changes in HR records, like someone's employment status.


I’ve heard of a pretty horrendous event where an HR system bug erroneously terminated a whole company and the identity and access management system then automatically tried to revoke 20,000+ employee IDs and privileges. This was only curtailed by a fast-fingered production support person. Many organizations build in circuit breakers to provide protection against exactly this type of issue. These circuit breakers are configured to automatically stop activities that appear anomalous, for example >5% of employees terminated in any one day, but also have overrides in case the activity is genuine. The problem can be, and I’ve seen a few different examples of this, that the circuit breaker is mistuned and noisy, so the human operators have built up muscle-memory to dismiss the alerts and constantly reset it. On the day there is an activity that really does need stopping, the operators just keep resetting the circuit breaker and disaster happens. The incident report then asserts this was human error in incorrectly resetting the circuit breaker - as opposed to blaming the system design and tuning that conditioned the humans to keep resetting it.
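
To make this concrete, here is a rough sketch in Python of the kind of circuit breaker described above. The class and field names are hypothetical and the 5% threshold is just the figure from the example; the point is that tripping the breaker forces an explicit, recorded override rather than a reflexive reset.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TerminationCircuitBreaker:
    """Halts bulk access revocation when the volume looks anomalous."""
    total_employees: int
    daily_threshold_pct: float = 5.0    # e.g. >5% of employees terminated in one day is suspicious
    terminated_today: int = 0
    override_reason: str | None = None  # set only after an explicit, recorded human review

    def allow_revocation(self, batch_size: int) -> bool:
        projected = self.terminated_today + batch_size
        pct = 100.0 * projected / self.total_employees
        if pct > self.daily_threshold_pct and self.override_reason is None:
            # Trip the breaker: stop the automation, page a human, leave an audit trail.
            print(f"BREAKER TRIPPED: {pct:.1f}% of workforce in one day - manual review required")
            return False
        self.terminated_today = projected
        return True

    def record_override(self, reason: str, approver: str) -> None:
        # Overrides carry a reason and an approver, so "just keep resetting it"
        # becomes visible and reviewable rather than a reflexive click.
        self.override_reason = f"{date.today()}: {reason} (approved by {approver})"

# Example: an HR feed erroneously marks the whole company as terminated.
breaker = TerminationCircuitBreaker(total_employees=20_000)
assert breaker.allow_revocation(100)         # normal attrition passes
assert not breaker.allow_revocation(19_900)  # the runaway batch is stopped
```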

2. Alert Fatigue

There is a similar situation that happens a lot in security monitoring and other observability situations. In fact, this has been covered as a factor in a number of well-known security breaches. In these cases the security systems were flashing red, so to speak, but the security operations analysts missed it, tuned it out, or failed to connect the dots. In hindsight, missing the one big red alert that showed a successful attack can be ascribed to human error. This is wrong, especially when there were, in parallel, hundreds of other seemingly big, but false positive or less critical, red alerts.


The real root cause here, among others, is the failure to absolutely prioritize significant events, and to effectively mask away non-critical events using automation or, ideally, autonomic security operations. It is also worth thinking about the tendency of teams to classify events as false positives rather than true positives, which causes some issues to be missed. I’ve seen occasions when people assert that a true positive which is simply not critical is a false positive. The problem with this approach is that a series of these kinds of “false positives” together represents the build-up to a serious event. The trick here is to only have false positives be actual false positives. In other words, to distinguish between false positives, true positives and varying degrees of criticality of true positives. Then, one of the most important security operations metrics is the false positive rate, simply because that is what creates the risk of noise which can crowd out the true positive signal.
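
As a small illustration of that distinction, here is a sketch (the category names are mine, not a standard) that keeps low-criticality true positives separate from genuine false positives and reports the false positive rate as a first-class metric.

```python
from collections import Counter
from enum import Enum

class Disposition(Enum):
    FALSE_POSITIVE = "false_positive"          # the detection itself was wrong
    TRUE_POSITIVE_LOW = "true_positive_low"    # real but low criticality - still NOT a false positive
    TRUE_POSITIVE_HIGH = "true_positive_high"  # real and critical

def false_positive_rate(dispositions: list[Disposition]) -> float:
    """False positive rate over closed alerts: only genuinely wrong detections count."""
    counts = Counter(dispositions)
    total = len(dispositions)
    return counts[Disposition.FALSE_POSITIVE] / total if total else 0.0

# Example: a week of closed alerts. Low-criticality true positives stay true
# positives so a build-up of them remains visible rather than being written off.
closed = (
    [Disposition.FALSE_POSITIVE] * 40
    + [Disposition.TRUE_POSITIVE_LOW] * 55
    + [Disposition.TRUE_POSITIVE_HIGH] * 5
)
print(f"False positive rate: {false_positive_rate(closed):.0%}")  # 40%
```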

3. Error Prone Business Workflows

Every organization is full of business workflows that are some combination of application, e-mail, data re-entry or transcription. Well run organizations have fewer of these, most have some, and poorly run organizations have lots. Inside these patchwork workflows are numerous opportunities for sensitive data to be leaked or mishandled, or for fraudulent transactions to slip through. Such issues may result in the specific targeting of an organization if it is known to be structurally unprepared to handle them. Again, post-mortems of incidents like a fraudulent payment being dispatched are often put down to human error. This is deemed human error just because the human made one mistake in the midst of a transcription between one system and another, had a confusing cue from another system about what was authorized, or many other possible factors. The real root cause, of course, is the system design which had no in-built guardrails for what was authentic, authorized and approvable. When you look at situations like this, the amazing reality is that there aren’t even more errors or problems. Incidentally, when these types of workflows are automated using so-called Robotic Process Automation (RPA), that approach can paper over the myriad of cracks that should really be redesigned end to end. Without such care, RPA effectively becomes “human error as a service."
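
For illustration, here is a minimal sketch of the kind of in-built guardrail this implies: a payment is checked for being authentic, authorized and approvable before it goes anywhere. The allow-list, limit and field names are hypothetical, not a prescription for any particular payments system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PaymentRequest:
    amount: float
    beneficiary: str
    requested_by: str
    approved_by: str | None

APPROVED_BENEFICIARIES = {"ACME Supplies Ltd", "Example Utilities Inc"}  # hypothetical allow-list
APPROVAL_REQUIRED_ABOVE = 10_000.00                                      # hypothetical limit

def guardrail_check(payment: PaymentRequest) -> list[str]:
    """Return reasons to block the payment; an empty list means it may proceed."""
    problems = []
    if payment.beneficiary not in APPROVED_BENEFICIARIES:
        problems.append("beneficiary is not on the approved list (authentic?)")
    if payment.amount > APPROVAL_REQUIRED_ABOVE and payment.approved_by is None:
        problems.append("amount exceeds the limit with no recorded approver (authorized?)")
    if payment.approved_by is not None and payment.approved_by == payment.requested_by:
        problems.append("requester and approver are the same person (separation of duties)")
    return problems

# Example: a transcription typo in the beneficiary, approved by the same person
# who keyed it, is caught by the system rather than relying on a human to notice.
print(guardrail_check(PaymentRequest(250_000.00, "ACNE Supplies Ltd", "alice", "alice")))
```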

4. Poor Confirmation Dialogs

This is a classic. Interfaces to critical systems where there are pop-up/modal dialog boxes to confirm an action, e.g. “Do you really want to approve a transaction for $1,000,000,000.00? Yes / No”. Again, occasional data entry issues or incorrect approval dismissals are blamed as human error until you look at the fact that operators have built muscle memory to keep clicking Yes. There are some interesting innovations I’ve seen, from simply randomizing the Yes / No box positions, to varying the relative mouse movement speed as you approach the visual element that would approve a critical transaction (so it feels like harder work to get to it), all the way through to some interesting experiments in sensory feedback like haptic mouse vibration, system alert sounds or other warnings.
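
In the same spirit as randomizing the Yes / No positions, here is a small console sketch of breaking that muscle memory: the confirming action changes every time, so a reflexive “yes” cannot succeed. This is just one way of adding friction, not a recommendation of these exact mechanics.

```python
import secrets

def confirm_critical_action(description: str) -> bool:
    """Require a randomized confirmation phrase so a reflexive 'Yes' cannot succeed."""
    token = secrets.token_hex(2)  # e.g. 'a3f1' - different on every prompt
    expected = f"APPROVE-{token}"
    print(f"About to: {description}")
    answer = input(f"Type '{expected}' exactly to proceed, anything else to cancel: ")
    return answer.strip() == expected

if __name__ == "__main__":
    if confirm_critical_action("approve a transaction for $1,000,000,000.00"):
        print("Proceeding, after a deliberate rather than reflexive confirmation.")
    else:
        print("Cancelled.")
```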


There are other related issues, often in financial services, where rate or exchange fields need to be entered and the underlying semantics of the numbers change between systems, e.g. in one system you enter a percentage, in another you enter basis points (1/100th of a percent). This is where significant user interface consistency work is needed, with particular attention to incompatible data entry semantics that exist within specific work groups.
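
Since 1 basis point is 1/100th of a percent, one mitigation is a small normalization layer that forces each system to declare the unit of the field being entered. A sketch of that idea, with illustrative unit names:

```python
from decimal import Decimal

# Conversion factors to a single canonical representation: a plain decimal fraction.
UNIT_TO_FRACTION = {
    "percent": Decimal("0.01"),         # 1%  == 0.01
    "basis_points": Decimal("0.0001"),  # 1bp == 0.0001, i.e. 1/100th of a percent
    "fraction": Decimal("1"),
}

def to_fraction(value: str, unit: str) -> Decimal:
    """Normalize a rate entered in a system-specific unit to a decimal fraction."""
    return Decimal(value) * UNIT_TO_FRACTION[unit]

# The same economic rate entered in two systems with different field semantics:
assert to_fraction("25", "basis_points") == to_fraction("0.25", "percent")
print(to_fraction("25", "basis_points"))  # Decimal('0.0025')
```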


5. E-Mail Data Leakage

Another common one is blaming people for misdirecting sensitive content in e-mail. But the underlying issue is really that the organization has failed to take steps to protect employees from themselves. This ranges from routinely tolerating workflows in which sensitive information lives in e-mail in the first place, to not warning people (although see point 4) that they are about to e-mail something externally, to permitting e-mail address book auto-complete without some type of flow restriction to protect employees from common misdirection errors.
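
A minimal sketch of the kind of outbound check this points at, flagging external recipients and sensitive attachments before send. The domain list is hypothetical, and a real implementation would sit in the mail gateway or client rather than in a script like this.

```python
INTERNAL_DOMAINS = {"example.com"}  # hypothetical corporate domains

def outbound_warnings(recipients: list[str], has_sensitive_attachment: bool) -> list[str]:
    """Return warnings to show the sender before the message leaves the organization."""
    warnings = []
    external = [r for r in recipients if r.split("@")[-1].lower() not in INTERNAL_DOMAINS]
    if external:
        warnings.append(f"External recipients: {', '.join(external)}")
    if external and has_sensitive_attachment:
        warnings.append("A sensitive attachment is about to leave the organization - confirm or block")
    return warnings

# Example: address book auto-complete picked a look-alike personal address.
print(outbound_warnings(["alice@example.com", "alice.smith@gmail.com"],
                        has_sensitive_attachment=True))
```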


6. Phishing

I probably don’t need to even discuss this. But for completeness, yes, it's blaming users for clicking on phishing links which subsequently cause security incidents in some way. There’s a special place in hell for post-mortems that blame users for big security incidents because they clicked one link. If there is an issue where the cause was clicking on a link then the diagnosis should not be human error and the infliction of more training and phishing tests, rather it should be to question the lack of defense in depth and what needs to be improved such that clicking anything isn’t harmful.

7. Software Vulnerability

Ironically, software development is accepted to be so hard that it’s rare that developers are maligned for vulnerabilities, although there have been a few cases where the software error was sufficiently naive or otherwise egregious that people blamed them anyway. Again, if an incident is ascribed to a programming error then that isn’t enough. The real root cause could be a combination of a lack of tool / IDE automation to detect security or other issues when the software is developed, a need for improvements in the analysis (static, dynamic, fuzzing or otherwise) at build time, all the way through to the provision of well-architected and reviewed frameworks that eliminate common errors.


8. Production Maintenance Errors

Many of us are familiar with events when developers, operators, DBAs or SAs issue the wrong command in production that drops a database or reboots a server. I once IPL’d (rebooted) a whole production mainframe instead of the test partition I thought I was working in. All of these incidents and outages are human error. But the real root cause is the environment the humans work in that made these errors possible. You shouldn’t be routinely tinkering in production in the first place, and any necessary support access should be mediated through layers of access review and subject to menu-driven interfaces to reduce the scope for command line error. If for some reason you can’t put in place those more deeply structural mitigations then other approaches can help. I saw one system reduce the scope for error by making people do a quick math problem (adding numbers) to engage a different part of their brain and make them stop and think about what they are doing. Other techniques include coloring a window differently and, in the physical world, putting floor markers or barriers around sensitive areas.
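
The quick math problem idea translates directly into a wrapper around destructive operations. Here is a rough sketch; the command, database name and prompt are made up, and the real mitigation remains keeping routine human hands out of raw production access in the first place.

```python
import random

def arithmetic_gate() -> bool:
    """Force a brief context switch before a destructive action by asking a simple sum."""
    a, b = random.randint(10, 49), random.randint(10, 49)
    answer = input(f"To continue, solve: {a} + {b} = ")
    return answer.strip() == str(a + b)

def drop_database(name: str) -> None:
    print(f"(pretend) DROP DATABASE {name};")

if __name__ == "__main__":
    target = "orders_production"  # hypothetical database name
    print(f"You are about to drop '{target}' in PRODUCTION.")
    if arithmetic_gate():
        drop_database(target)
    else:
        print("Wrong answer - action aborted. Stop and check which environment you are in.")
```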

9. Web Site Uploads

Related to the business workflow and e-mail issues, there’s another pattern of incident where people are blamed for sending data outside the organization to external web sites to perform basic functions. There was a spate a few years ago in various organizations where PDF to Microsoft Office converter web sites were used, and some of them weren’t particularly careful with the data that was sent to them. There are many different types of web sites/services and this is part of the whole “shadow IT” pattern. If there’s a security breach as a result of this then it is (or was) usual to blame the employee, who was often just trying to get their job done. The real error was the failure to learn, when this situation was occurring, to quickly provide internal tooling and to redirect those types of web sites to that tool or to one reviewed and preferred web site - or even to go one step further and embed the web site’s capability more fundamentally into the employee's workflow.

10. Connecting Compromised Devices

Another seemingly common pattern, thankfully less so with zero trust user access and other controlled/mediated remote access methods, was to blame employees for connecting an unpatched and/or compromised laptop to the corporate network and then to ascribe the consequent security incident to human error. Of course, the reality here is that the root cause is the fact they could do this without the interdiction of the security system.
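
For illustration, here is a sketch of the kind of device posture gate a zero trust approach puts in the path. The posture attributes and decisions are simplified placeholders for what a real access proxy would evaluate.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DevicePosture:
    managed: bool        # enrolled in device management
    patched: bool        # OS and agents up to date
    disk_encrypted: bool

def access_decision(posture: DevicePosture) -> str:
    """Gate access on device health instead of trusting whatever joins the network."""
    if not posture.managed:
        return "deny"
    if not (posture.patched and posture.disk_encrypted):
        return "quarantine"  # limited access until the device is remediated
    return "allow"

# Example: an unpatched but managed laptop gets remediation access, not the network.
print(access_decision(DevicePosture(managed=True, patched=False, disk_encrypted=True)))
```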


11. Excess Privilege

Finally, although I could probably list another 20 categories of human errors that really aren’t, there is the security incident that stems from someone having excess privilege, or from a break in separation of duties for critical approvals like high value payments. This can be pinned as human error on either the security privilege administrator or the manager / supervisor who is supposed to routinely review and re-certify privileges. In reality, many of these systems have poor user interfaces or otherwise don’t give the person doing the review the right cues to show what the discrepancies are. Even more fundamentally, there shouldn’t need to be such recertification in the first place: people’s access should be determined according to rules, roles and attributes, with that enforcement applied automatically in the privilege management systems.
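
A minimal sketch of what determining access from rules, roles and attributes can look like, so that entitlements are computed rather than accumulated and there is nothing for a human to recertify by hand. The departments, roles and entitlement names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Employee:
    user_id: str
    department: str
    role: str
    employment_status: str  # e.g. "active", "terminated"

# Hypothetical rules: entitlements are derived from attributes, never granted by hand.
ROLE_ENTITLEMENTS = {
    ("payments", "analyst"): {"payments.view"},
    ("payments", "manager"): {"payments.view", "payments.approve"},
}

def effective_entitlements(emp: Employee) -> set[str]:
    """Compute entitlements from current attributes; there is nothing to recertify by hand."""
    if emp.employment_status != "active":
        return set()  # leavers lose everything automatically
    return set(ROLE_ENTITLEMENTS.get((emp.department, emp.role), set()))

# Example: moving from manager to analyst drops the approval privilege immediately.
before = Employee("u123", "payments", "manager", "active")
after = Employee("u123", "payments", "analyst", "active")
print(effective_entitlements(before) - effective_entitlements(after))  # {'payments.approve'}
```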

_____________________________________________________________

So, let’s sum up what organizations can do to reduce the amount of human error they have to tolerate or deal with:

  • Conduct blameless postmortems for incidents or close-calls and do not permit human error to be used as an explanation - except where it illustrates collective, human decision errors in the design of the overall system.

  • Focus on the design of end to end business processes including applying systems and design thinking to reduce the scope for human error.

  • Put in place guardrails and automation.

  • Examine where fail-safe or fail-closed controls need to be put in place.

  • Establish protocols where alerts / circuit-breakers or other tunable logical controls are regularly reviewed for noise levels.

  • Empower employees to raise red-flags if they feel they are working in an error-prone situation.

  • Apply defense in depth so impact from human error is minimized and the blast radius reduced.

  • Design-in separation of duties and/or dual control for critical activities e.g. two-person code review, critical transaction approval like payments creation and payments release.

  • Finally, in physical safety critical systems look for analog complements to digital controls, for example dual-turn physical keys, or removing physical firing pins before a computer controlled weapons system can activate. Even in regular environments it’s useful to require periodic or activity-driven touching of your FIDO U2F key. As a more extreme example, I once worked on an oil drilling platform that had an analog back-up to the digital control system: when it detected strain in excess of what the digital system should have reacted to, it would physically cut power.

Bottom line: in any incident analysis do not permit a root cause of human error, and above all actually look at how the humans are adapting to avoid errors in the face of poor system design. Often you will find heroics that are keeping at bay what would likely otherwise be a much higher natural incident rate. Then apply systems and design thinking to make the environment free from the potential for human error, or otherwise less prone to it.
