Insider Threat - Blast Radius Perspective - Updated
Of the vast canon of insightful commentary that Dan Geer has produced over many years, one observation that especially stuck with me is his description of insider threat as the "illegitimate use of legitimate authority". I have found this to be a very useful and practical framing for sorting out what belongs in your insider threat program vs. all your other control programs.
Of course, I just had to draw up the grid of legitimate vs. illegitimate use and authority.
Which raises the question: what is legitimate use of illegitimate authority? Perhaps law enforcement or military action to interdict or investigate attackers.
Anyway, let's focus on insider threats, the management of which is a complex and often under-thought process. People who work on it appreciate the subtlety and difficult trade-offs; some who don't assume it is straightforward. Let's unpack it. First of all, this short post isn't going to come close to covering all aspects of well managed insider threat programs - there is excellent coverage of that from SIFMA and CERT. Grossly simplifying, there are three types of threats:
Trusted insiders who go bad over time due to disgruntlement or other reason (Progressive Insider Risks)
Trusted insiders who go bad immediately from some cue like coercion from an external actor (Instantaneous Insider Risk)
Infiltrators, i.e. external attackers who infiltrate the organization. These still act with legitimate authority that they have been granted. Infiltrators can often look like Instantaneous Insider Risks, so we'll just discuss the first two types. Note: one of the commercial benefits of effective insider threat risk management is that the same precautionary steps that thwart malicious intent often also protect against error and carelessness - this can be worth doing even if you don't consider yourself a significant target.
Progressive Insider Risks. As the name implies, these people go bad over time, perpetrating malicious actions that are usually small at first and then progressively larger. They can get caught by detecting some "disturbance in the force" (h/t @taylopet for this phrase in this context). Such detections can come from their activities (e.g. accessing more information, leaking data, small infractions, job performance issues) or changes in their behavior (e.g. changes in work patterns, personal circumstances, revealed work stresses). There will often be signals given off "left of boom" before they commit a more significant event. These can be used to intervene with discipline, but sometimes more helpfully as a trigger for support / counseling to address the root of the disgruntlement or other issues. The usual array of preventative and detective controls in place to mitigate many other risks is critical here: background checks, identity / access management, data rights management, data leakage prevention and detection, logging, anomaly detection, and so on.
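As a toy illustration of the anomaly detection piece (not any specific product - all names, data, and thresholds here are hypothetical), a minimal check might flag days where a user's record-access volume deviates sharply from their own baseline, one of the "disturbance in the force" signals mentioned above:

```python
from statistics import mean, stdev

def access_anomalies(daily_counts, threshold=2.0):
    """Flag days where a user's daily access count deviates more than
    `threshold` standard deviations from that user's own baseline.
    Returns a list of (day_index, count) tuples. Purely illustrative -
    real programs use far richer features than raw counts."""
    if len(daily_counts) < 2:
        return []
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        return []  # perfectly flat history: nothing stands out
    return [(i, c) for i, c in enumerate(daily_counts)
            if abs(c - mu) / sigma > threshold]

# A steady baseline followed by a spike - only the spike is flagged.
print(access_anomalies([10, 12, 11, 9, 10, 13, 11, 95]))  # [(7, 95)]
```

A real deployment would combine many such weak signals (access volume, off-hours activity, data movement) rather than rely on any single one.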
Instantaneous Insider Risks. As the name implies, these can happen without warning and without pre-signaling. As they say in the trade, "if you hear the boom they've already missed you." Arguably, as organizations' digital defenses improve and limit the reach of attackers, we will see the return of tactics that pre-date digitization: bribes, extortion, and coercing an employee into doing something nefarious with no warning. The key here is to reduce the blast radius of potential events. Specifically, enumerate job roles and determine, for each, how bad it would be if the person in that position went bad instantaneously. If the answer is beyond your risk appetite, then work needs to happen. This hard work includes designing interventions to adjust job roles to reduce blast radius, remembering this isn't just about theft or fraud - it could be destructive events. Interventions can include:
- Reducing access to what is reasonable for the role
- Further redesigning the role to require fewer privileges
- Adding separation of duties or multi-party control
- Adding circuit breakers to reduce scale of potential damage
- Creating means to fast undo actions
- Adding temporal breakers to delay invocation of activities (time to reverse)
- Adding time between progressions of activities (time to intervene)
- Prohibiting direct changes to environments (policy as code)
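Two of the interventions above - multi-party control and a temporal breaker - can be sketched together. This is a minimal illustration with hypothetical class and parameter names, not a production pattern: a sensitive action requires distinct approvers and a mandatory delay before it runs, leaving time to intervene or reverse.

```python
import time

class GuardedAction:
    """Wraps a sensitive action with two blast-radius controls:
    multi-party control (distinct approvers, no self-approval) and
    a temporal breaker (a mandatory delay between request and
    execution). Illustrative sketch only."""

    def __init__(self, action, required_approvers=2, delay_seconds=3600):
        self.action = action
        self.required_approvers = required_approvers
        self.delay_seconds = delay_seconds
        self.requester = None
        self.requested_at = None
        self.approvers = set()

    def request(self, requester):
        self.requester = requester
        self.requested_at = time.time()  # starts the delay clock

    def approve(self, approver):
        if approver == self.requester:
            raise PermissionError("requester cannot self-approve")
        self.approvers.add(approver)

    def execute(self):
        if len(self.approvers) < self.required_approvers:
            raise PermissionError("not enough distinct approvers")
        if time.time() - self.requested_at < self.delay_seconds:
            raise PermissionError("temporal breaker: delay not elapsed")
        return self.action()

# Usage: alice requests a destructive change; it only runs once two
# other people approve and the delay window has passed.
purge = GuardedAction(lambda: "archive purged", delay_seconds=0)
purge.request("alice")
purge.approve("bob")
purge.approve("carol")
purge.execute()
```

The point of the delay is that "time to intervene" is itself a control: even a fully approved action gives defenders a window to spot and stop something anomalous.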
Bottom line: many insider threat programs are tuned to detect progressive risks. It is important to also deal with hazardous instantaneous risks by limiting the blast radius of potential events. This has the adjacent commercial benefit of reducing error risk and increasing resilience.