Phil Venables

Research Challenges in Info/Cybersecurity - Part 2: “Carbon”

This is the second part of the post from 2 weeks ago, which explored research challenges in Info/Cybersecurity related to systems: “silicon”. This time we'll focus on the human aspects: “carbon”.


Security is an emergent property of a complex system. One of the drivers of such emergence is the right mix of behaviors from the people that comprise that system. People are often resistant to behavioral coercion, so techniques are needed to stimulate, capture and promulgate good personal and system-wide behaviors.


As I see it, there are 5 main challenges on the human front of improving our collective information security posture. These are not just at an individual level; in fact, most are about organizations and systems of organizations, whether they are companies, societies or other constructs. As ever, I suspect this list is far from complete, and it is remarkable how much research has flourished in the past decade on the human aspects of security, from human factors work to economics and behavioral psychology.

1. Economics and Incentives

Many of you who have worked in any organization, especially a large one, know that a big part of the security role is to align incentives. Part of this is to seek adjacent benefits, so that risk-reducing activities don’t have to be imposed top-down but develop naturally as a consequence of how the work is undertaken. There has been significant progress here from joint efforts to bring together multi-disciplinary research from computer science, economics and other disciplines. WEIS, the annual Workshop on the Economics of Information Security, is a shining example of this. Its archive is full of useful, insightful and practical research.


Still, we have opportunities for further work to bridge the management of various risk types, including cyber, particularly on how risk transmission occurs within organizations and across global systems and supply chains. Great work is emerging at various universities, not least the NYU Stern School of Business Volatility and Risk Institute (full disclosure: I’m on the Advisory Board, so I’m biased).


2. Behavioral Science and Human Factors


This is another area where there has been tremendous progress across a range of risk types, especially in human factors and design considerations that encourage more secure interactions with systems because of, rather than in spite of, their design. The area has also benefited from behavioral economics research, particularly nudge theory, which has led to various incarnations of so-called nudge units. I know a number of organizations have seen tremendous security risk reduction from the work of their own nudge units. There is even a NudgeStock, which is truly fascinating. The Human Risk podcast is well worth subscribing to as well.

One thing that comes through in all of this is worth remembering: human error is something to explain, not an explanation in itself.


3. Risk Ontology, Measurement and Metrics

One area where I think there is still much to be done is the development of risk ontologies, taxonomies and methods to quantify risk. There is a lot of practical work going on in consultancies and organizations using methods such as FAIR. One very promising research area is Bayesian networks applied to operational risk (including cybersecurity risk), and this book is a great summary of that with links to current research.
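To make the Bayesian-network idea concrete, here is a minimal hand-rolled sketch: two parent variables (an active phishing campaign and a degraded email control) feed a conditional probability table for an incident, and Bayes’ rule inverts the model to ask which cause is likely given an observed incident. All probabilities are invented for illustration, not drawn from any real loss data.

```python
from itertools import product

# Illustrative priors and conditional probability table (all numbers invented).
P_ATTACK = {True: 0.3, False: 0.7}        # phishing campaign active?
P_CONTROL_WEAK = {True: 0.1, False: 0.9}  # email filtering degraded?
# P(incident | attack, control_weak)
P_INCIDENT = {
    (True, True): 0.40,
    (True, False): 0.05,
    (False, True): 0.01,
    (False, False): 0.001,
}

def p_incident() -> float:
    """Marginal P(incident), summing over all parent states."""
    return sum(P_ATTACK[a] * P_CONTROL_WEAK[c] * P_INCIDENT[(a, c)]
               for a, c in product([True, False], repeat=2))

def p_control_weak_given_incident() -> float:
    """Posterior P(control weak | incident) via Bayes' rule."""
    joint = sum(P_ATTACK[a] * P_CONTROL_WEAK[True] * P_INCIDENT[(a, True)]
                for a in [True, False])
    return joint / p_incident()
```

Real tools (and the research this section points to) handle much larger graphs, parameter learning and sensitivity analysis, but the core mechanic — priors, conditional tables and posterior updating — is the same: observing an incident should raise our belief that a control is weak well above its prior.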

4. Human Readable Policy Expression


In Part 1 of this post we discussed the need for improvements in distributed policy specification and enforcement. There is a related topic: how to drive such specification in human-readable form, so that participants in an organization can match risk management or policy intent to the policy that will be encoded in machine-readable and automatically enforceable form. There has been a lot of research using various graphical interfaces, but I’ve yet to see progress on mapping organizational risk intent in ways that can be understood and reasoned about by non-specialists and then step-wise encoded into machine-readable form.
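As a hypothetical sketch of what that pairing might look like, consider keeping the human-readable intent and the machine-enforceable predicate side by side in a single rule object, so a non-specialist can review and reason about the former while tooling enforces the latter. All names and fields here are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Resource:
    """A simplified stand-in for something the policy governs."""
    name: str
    encrypted_at_rest: bool
    publicly_readable: bool

@dataclass
class PolicyRule:
    intent: str                            # human-readable risk intent
    predicate: Callable[[Resource], bool]  # machine-enforceable encoding

RULES: List[PolicyRule] = [
    PolicyRule(
        intent="Customer data stores must be encrypted at rest.",
        predicate=lambda r: r.encrypted_at_rest,
    ),
    PolicyRule(
        intent="No data store may be publicly readable.",
        predicate=lambda r: not r.publicly_readable,
    ),
]

def violations(resource: Resource, rules: List[PolicyRule]) -> List[str]:
    """Return the human-readable intents the resource fails to meet."""
    return [rule.intent for rule in rules if not rule.predicate(resource)]
```

A failing resource then reports back in the language of the intent rather than the encoding, which is the direction of travel argued for here.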


5. Systems Thinking / Fail-Secure / Fail-Safe Design Principles [Cyber / Physical Control System Synchrony]

Finally, there is a rich field of systems thinking and control theory that helps us understand and manage risks in complex systems. I’d recommend this book as a great introduction to the concepts. But this is a wide field of study that is much referenced and discussed yet actually under-utilized in information and cybersecurity. To me, it is the most promising future line of research.

Bottom line: I think we all realize we cannot do the technical work of security without the human work of ensuring our employees, users and customers are able to deal with our constructions. We need to ensure our own organizations and ecosystems are designed to reinforce appropriate risk management. Today, much of that vital work is folklore-driven; some of the folklore is good, but most is haphazard. Great work is happening, but a lot more needs to be done before we have anywhere near a reasonable theory of the “carbon” side of risk management that can support and reinforce the “silicon” side.

