Phil Venables

Security and Ten Laws of Technology 

Updated: Apr 21

There are many well-known, so-called laws of technology, Moore’s law being particularly emblematic. Let’s look at some of them and see what the security implications have been for each, and what might further develop as a result.

[Definitions of the laws are from Wikipedia or other linked sources.]


1. Moore’s Law

Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship linked to gains from experience in production.

We’re all very familiar with Moore’s law and the massive benefits to computing and society it has represented over the past decades. The mass digitization of pretty much everything, enabled by the diminishing cost of processing capability, has of course been amazing. However, that has inevitably fueled the rise of cyber-attacks exploiting this exponentially increased attack surface.


On the plus side, and much less talked about, is the security boon that has come from the radical scaling in that processing capability. This has enabled us to be less frugal in using up cycles for security functions. It has also helped through specialized instructions on chips that optimize the performance of cryptographic operations, without which the pervasive encryption we enjoy on many platforms would not be as cost effective as it is.
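
You can see this effect directly by benchmarking an authenticated cipher - a minimal sketch in Python, assuming the cryptography package is installed; on CPUs with dedicated AES instructions (e.g., AES-NI) this will typically run far faster than a pure software implementation could:

```python
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Encrypt 64 MiB of random data and measure throughput. Hardware AES
# support typically pushes this to multiple GB/s on modern CPUs.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)
data = os.urandom(64 * 1024 * 1024)

start = time.perf_counter()
aesgcm.encrypt(nonce, data, None)
elapsed = time.perf_counter() - start
print(f"AES-256-GCM throughput: {len(data) / elapsed / 1e6:.0f} MB/s")
```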


While silicon real estate is always under pressure for cost, power, performance and other factors, the scaling that Moore’s law describes has also enabled other security innovations. Such innovations support security isolation objectives further up the stack, from the hypervisor and virtual machine manager to the operating system. We also have other hardware innovations like confidential computing, which is enabling advances in the protection of AI systems.


However, on the downside, the effects described by Moore’s law have tapered off. As a result, the need for performance optimization in processors has led to techniques like speculative execution, which have inadvertently spawned a range of vulnerabilities like Spectre, Meltdown and others.


2. Murphy’s Law

Murphy's law is an adage or epigram that is typically stated as: "Anything that can go wrong will go wrong." In some formulations, it is extended to "Anything that can go wrong will go wrong, and at the worst possible time."

This is not a strict technology law, of course; it’s something that all walks of life experience. For security, it points to the extended disciplines of resilience and chaos engineering in the security and reliability domains. Building technology in the face of constant failure becomes even more important at scale, when even four or five 9s of reliability translates into constant point failures if you have millions of machines. So, engineering for such failure is vital. The best book on this concept comes from the experience of Google: The Datacenter as a Computer: Designing Warehouse-Scale Machines.
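
To make the arithmetic concrete, here’s a toy calculation (the fleet size is assumed purely for illustration):

```python
# Even excellent per-machine availability means failures are a constant
# at fleet scale - so design for failure rather than hoping to avoid it.
fleet_size = 1_000_000

for label, availability in [("four 9s", 0.9999), ("five 9s", 0.99999)]:
    expected_down = fleet_size * (1 - availability)
    print(f"{label}: ~{expected_down:.0f} machines down at any instant")
```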


A more specific angle on Murphy’s law for security is the need for continuous control monitoring. Most controls will degrade or otherwise fail over time unless constantly and deliberately sustained. Murphy’s law also dictates that the control that has failed will be the one you most needed to thwart an attack - if only because, by definition, you’ll notice that failure in a breach post-mortem. Most of those post-mortems are about why the control you thought was supposed to stop an attack didn’t (“yeah, we disabled it for this change and someone didn’t switch it back on”) as opposed to a genuinely new attack for which no-one had contemplated a control. My favorite Murphy’s law corollary is: "If you can think of four ways that something can go wrong, it will go wrong in the fifth way."
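
As a sketch of what continuous control monitoring can look like in practice - the control names and checks below are hypothetical placeholders; a real deployment would query each system’s actual API:

```python
import time
from typing import Callable

# Hypothetical controls: each check returns True only if the control is
# verifiably active right now, not just "was configured at some point".
CONTROL_CHECKS: dict[str, Callable[[], bool]] = {
    "mfa_enforced": lambda: True,        # e.g., ask the identity provider
    "edr_agent_running": lambda: True,   # e.g., ask the endpoint console
    "waf_blocking_mode": lambda: True,   # e.g., ask the WAF config API
}

def monitor_controls(interval_seconds: int = 300) -> None:
    """Re-verify every control on a fixed cadence; alert on any regression."""
    while True:
        for name, check in CONTROL_CHECKS.items():
            if not check():
                # In reality: page someone, open a ticket, block the change.
                print(f"ALERT: control '{name}' has failed or been disabled")
        time.sleep(interval_seconds)
```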


3. Conway’s Law

Conway's law describes the link between the communication structure of organizations and the systems they design. It is named after the computer programmer Melvin Conway, who introduced the idea in 1967. His original wording was: “Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.”

There are many off-shoots from this in the way products come out of any organization. It’s not just the big tech companies that this can be applied to, although the legendary diagram below does epitomize how many people think of this.

[Image: the legendary comic of big-tech org charts]

In my experience, Conway’s law affects most organizations, even those on the small end. It’s also worth thinking about this in the context of overall risk management and security alignment as opposed to how products are conceived and produced. 


Many security teams strive either to align themselves closely with product teams, or to go the full way and embed security engineers or create federated security teams within other teams. At first, security teams that do this mostly think the effort is to train, equip and otherwise support those embedded teams to get that local security job done. But, most of the time, the real effort is trying to harmoniously orchestrate the work of those teams across organization boundary lines. Let’s be honest, this mostly means dealing with organizational dysfunction to get it all working despite the org chart. I don’t mean this to sound too negative; sometimes the organization structure makes perfect sense for one design objective but doesn’t work for others - security included. I’m sure if we designed organizations purely to optimize for collaborative end-to-end security objectives we might break wider goals or impact other risks like resilience. Optimizing such things is a hard problem.


There are similar effects on risk governance where a cross-enterprise risk exists that requires coordination among departments to resolve. The classic example of this I’ve seen many times is a software vulnerability manifested in a shared library, where the update changes some functionality depended on by a different function - who for whatever reason can’t prioritize the testing and upgrade. So, all parts of the organization are “locked” because of the one - and it’s undesirable to maintain or fork versions of the library for each function. The organization design aspects of this come down to different degrees of “coordination tax” and incentives to get these types of risks mitigated.


4. Hyrum’s Law

Hyrum’s law asserts that with a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody. 

This is perhaps best illustrated by another one of the classic XKCD cartoons.

[Image: XKCD cartoon]

I’ve seen the security consequences of this on a number of occasions, often in concert with the consequences of Conway’s law. My favorite (if such a thing can be a favorite) example of one of these issues was many years ago in trying to upgrade the key length of a cipher suite on a well-known directory server. It had (I’m glossing over much detail here) inadvertently exposed a property not in its main standard APIs that had become depended on by a particular vendor’s NFS file system in a way that couldn’t support the change. The directory server vendor didn’t want to know about the problem as it wasn’t in their official APIs, and the NFS vendor was dragging their heels (“you’re the only customer asking for this” - even though we weren’t). Worse, further upstream a home-grown database system depended on some undocumented features in the NFS system for performance optimization (clever engineers) that was going to break when the NFS vendor did finally do the upgrade. So, there you have it: it took us a long time to upgrade the crypto despite huge effort and organization commitment. To any outsider we’d have looked like we didn’t care about security, because they couldn’t see the hidden dependency hell-scape we were living in. You could probably write a book full of similar examples.
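
The general pattern is easy to show in miniature (a contrived Python example, not the actual directory or NFS interfaces involved):

```python
# Documented contract: list_users() returns the users; order is unspecified.
def list_users() -> list[str]:
    # Implementation detail: happens to be sorted by account creation time.
    return ["alice", "bob", "carol"]

# Per Hyrum's law, someone will depend on that observable-but-unpromised
# behavior - and will break the day the provider changes the sort order.
newest_user = list_users()[-1]
print(f"'Newest' user: {newest_user}")
```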


5. Metcalfe’s Law

Metcalfe's law states that the financial value or influence of a telecommunications network is proportional to the square of the number of connected users of the system. The law is named for Robert Metcalfe and was first proposed in 1980, albeit not in terms of users but rather of "compatible communicating devices" (e.g., fax machines, telephones).

This law permeates pretty much everything we all work on these days, from social networks, B2C, B2B and complex IT environments through to supply chains. Network effects, scale-free networks with preferential attachment and other properties create a whole set of risks. These risks are typically manifested as concentration risk on particular components, services, products, vendors or physical infrastructure. In many cases they’re a hidden part of a supply chain network at the 4th- and 5th-party level beyond your own 3rd parties.


On the upside though, it does give us some opportunities to find the 80/20s when driving security upgrades. It also reveals “network”-based opportunities to slipstream control implementations across a wider enterprise. For example, I remember first implementing an enterprise single sign-on (SSO) system in the 2000s - when we were all first doing this kind of thing - and it was no easy path to get 5,000+ applications integrated across a major enterprise. The key intuition we had was not to target the SSO implementation at the most critical systems first, but rather at the most widely used systems. Most of these, frankly, weren’t that security critical. The reason we did this is that we wanted end users to experience the benefits of a little less password-entry friction, so they in turn would demand their IT teams get their apps on the SSO system. In this way we, the security team, didn’t really have to do a forced-march SSO implementation. Rather, we provided the tooling and APIs and let the development teams self-serve onboarding at the rate their users were demanding. If we’d driven this first by the 20% of systems that represented 80% of the risk criticality, we would not have realized a network effect and it would have taken 5X as long to get done. Rather, going after the mostly different 20% of systems that touched 80% of the workforce catalyzed a business-driven pull that got it done in ½X the time.
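
The sequencing decision looks something like this (system names and user counts are entirely made up for illustration):

```python
# Hypothetical application inventory: (name, distinct users, risk-critical?).
systems = [
    ("trading-platform", 2_000, True),
    ("timesheets", 90_000, False),
    ("payments-engine", 1_500, True),
    ("intranet-portal", 85_000, False),
]

# Onboarding to SSO by user reach, not risk criticality, is what creates
# the network effect: users feel the benefit and pull their IT teams in.
for name, users, critical in sorted(systems, key=lambda s: s[1], reverse=True):
    print(f"{name:<20} {users:>7,} users  critical={critical}")
```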


6. Wirth’s Law

Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster. The adage is named after Niklaus Wirth, a computer scientist who discussed it in his 1995 article "A Plea for Lean Software".

Despite Moore’s law we manage to develop software that more than soaks up hardware scaling. I remember developing on IBM mainframes, which were actually pretty cool. What I most remember about that was running a 1,000+ concurrent-user transaction management system on a mainframe that had less processing power than a Pentium, with 8MB of memory and not a lot of disk. But thanks to co-processing (front-end comms processors, disk channels, etc.), an efficient O/S even with virtualization (MVS on VM/CP), and judicious use of efficient programming and transaction monitors, we could squeeze a lot out of every instruction cycle.


On the negative side, a big consequence of Wirth’s law is software bloat, which has in many cases created an unnecessarily large attack surface with all the commensurate security vulnerabilities.


7. Cunningham’s Law

Cunningham's law states: "The best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer." This refers to the observation that people are quicker to correct a wrong answer than to answer a question. According to Steven McGeady, Cunningham advised him of this on a whim in the early 1980s, and McGeady dubbed this Cunningham's Law.

Many of us deal with a related effect of this: some parts of the security community are pretty adept at critiquing laudable efforts to improve security without providing much in the way of alternatives themselves. Now, yes, we should all be a bit more thick-skinned, but Cunningham’s law plays out unintentionally all the time.


8. Hyppönen’s Law

Hyppönen’s Law is stated as “If it’s smart, it’s vulnerable”. 

Mikko’s law is obviously true and, yes, deliberately obvious. I especially like this law because it reminds us to think about rationing how smart things are. Now, this might sound odd - wouldn’t we want everything to be as smart as possible? Well, no. I don’t necessarily want my toaster to be that smart.

Beyond this, ensuring that the basic functions of devices can be sustained without connectivity, or with user-selected degrees of smarts, would not only provide some element of long-term reliability but would also cut out a whole bunch of potential attack staging points and future botnets.
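
A toy sketch of that design principle - the core function never depends on connectivity, and the “smarts” are strictly opt-in:

```python
class Toaster:
    """Core function works with zero connectivity; smart features are opt-in."""

    def __init__(self, cloud_features_enabled: bool = False):
        # The user, not the vendor, selects the degree of "smart".
        self.cloud_features_enabled = cloud_features_enabled

    def toast(self, seconds: int) -> None:
        # The basic function must never depend on the network.
        print(f"Toasting for {seconds} seconds")

    def sync_usage_stats(self) -> None:
        if not self.cloud_features_enabled:
            return  # disabled smarts = fewer attack staging points
        print("Uploading usage stats...")  # the only network-touching path

Toaster().toast(120)  # works forever, even if the vendor's cloud does not
```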



9. Kryder’s Law

Kryder's law describes the observation that magnetic disk areal storage density was, at the time, increasing at a rate exceeding Moore's law - a pace much faster than the two-year doubling time of semiconductor chip density posited by Moore's law.

Talking of attack surface, a related concept is how much data we’re keeping - simply because we can, largely thanks to Kryder’s law. With all this data come ever-increasing efforts to protect it. There are some good efforts in many organizations to minimize the data captured or retained in various services. On the plus side, we get to keep more logs and other telemetry that is useful for security purposes.
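
A back-of-the-envelope view of why “keep everything” became the default (the ingest rate and storage price below are assumptions, not quoted figures):

```python
# Toy numbers showing how cheap storage makes indefinite retention easy.
ingest_tb_per_day = 1.0
retention_days = 365
cost_per_tb_month = 20.0  # assumed object-storage price in USD

steady_state_tb = ingest_tb_per_day * retention_days
monthly_cost = steady_state_tb * cost_per_tb_month
print(f"Steady state: {steady_state_tb:.0f} TB at ~${monthly_cost:,.0f}/month")
```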


10. Venables’ Law

Venables’ Law can be stated as “Attackers have bosses and budgets too”. 

Sorry for the arrogance of putting my own “law” in here. While I might possibly have been the first to articulate this concept using this expression, I’m certainly not the only person to have contemplated, decades ago, that attackers have constraints and are mostly rational in the face of those constraints. And where there are constraints, there are opportunities for defenders to mess with the attackers to impose cost or deter action. Constraints might be economic: attackers have budgets and limited time to conduct certain attacks before having to move on to other things. Other constraints might be the power structure they live within (their bosses) that drives a desire to perform, and to not get caught and embarrass the team or regime. Or a constraint might be personal: perhaps you quite enjoy your annual vacation in Europe, and that pesky indictment, sanction or other public outing puts a damper on your life in many ways. All these constraints can be amplified by defenders: by making defense more effective, by increasing the likelihood of attackers being caught, through to “defending forward” to sow discord among groups of attackers, their networks or their management structures. This is why cyber-deterrence is a more nuanced topic than simply raining fire down on attackers in the form of penalty; you also get to use futility (raise the cost of attacks), dependency (the attack also impacts the attacker somewhere in their network or supply chain), or counter-productivity (the longer-term consequences of attacker behavior negate their own strategic objectives).
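
A toy expected-value model of this dynamic (every number below is invented purely for illustration):

```python
# If attackers are rational and budget-constrained, an attack only makes
# sense while its expected payoff exceeds its cost.
payoff = 500_000           # value to the attacker of one successful attack
cost_per_attempt = 20_000  # tooling, infrastructure, operator time
p_success = 0.10           # chance an attempt succeeds undetected

ev = p_success * payoff - cost_per_attempt
print(f"Expected value per attempt: ${ev:+,.0f}")  # +$30,000: worth doing

# Defender levers: raise the attacker's cost (futility) or cut their
# success probability; either can flip the economics negative.
p_hardened = 0.03
print(f"After hardening: ${p_hardened * payoff - cost_per_attempt:+,.0f}")
```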


So from this law, our goal as defenders is to wreck the economics of attacks and wreck the power structures of attackers’ human and organizational networks. The adage that attackers only have to be right once while defenders have to be right all the time can and should be reversed: attackers have to be right (undetected) all the time, while defenders only have to find them once. Of course, to actually make that happen and truly impose those costs you’ve got to have a pretty mature approach to prevention, detection, response, recovery and transparency about what happened. The transparency part (maybe in some closely held way) is what will drive the digital immune system to destroy attacker advantage and bring the disinfectant of sunlight onto their objectives and power structures.


Bottom line: of course these are not laws in the strictest sense, but these adages are nevertheless useful for encoding some of our body of knowledge. Like megatrends, they are important to pay attention to so you can “ride” them to your advantage - but perhaps even more so to make sure you’re not positioned against the unstoppable forces they represent.
