Crucial Questions from Governments and Regulators
In this fourth and final post in the Crucial Questions series, I'm going to focus on questions from governments and regulators. This builds on the topics covered in the earlier posts.
This has been the most challenging to write because there is not such a clear set of natural themes, but I've extracted some of the most common questions I come across directly or indirectly. The prior disclaimers apply even more here: these are not necessarily the questions I think governments and regulators should be asking, but they are what is often asked. Again, these are not meant to be collectively exhaustive and are not in ranked order.
Disclaimer: these are personal perspectives written on my personal blog and are not in any way meant to convey the official perspectives of any of my professional or advisory board affiliations and should not be quoted or used in that context.
1. Risk Tradeoffs
How do we think about the trade-offs between security, resilience, privacy and other risks?
This is a complex topic. The more thoughtful governments and regulators understand that driving harder on security can impact resilience or privacy, and that driving harder on privacy can limit the ability to observe or deal with certain types of security events. Things to consider include:
Examine the consequences of specific legislation or regulation designed for one risk-mitigating purpose and how that might affect other risks. Classically, here we see tension between privacy and security, especially where privacy rules limit visibility for security purposes.
Many of these natural tensions require some degree of trade-off, and the fact that one thing may impact another doesn't mean it's bad - it just means the trade-offs need consideration. It is also a trigger for innovation. It might be, in the example of privacy and security, that a different approach to security can be as effective while not impacting the ability to conform to privacy rules. Similarly, an adjustment to the precise framing of the privacy rule might preserve security capabilities without affecting the intent of that rule. Consideration of all consequences will result in a better portfolio of outcomes.
As society presses harder on security, privacy, resilience and other risks we will see more and more inter-dependencies, complications and trade-offs. We see this within organizations, and many have aligned, or are aligning, the teams that manage these risks more closely as a result - so that the trade-offs are internalized and become better handled. We should consider similar approaches across government and regulatory bodies to better internalize and handle those trade-offs.
2. Principle vs. Objective Standards
How stringent should we be in setting prescriptive standards for security?
This is a question that has been debated for decades in standards and regulatory arenas in every domain. Objective standards make for clearer goals and easier means of demonstrating and attesting to whether such standards are adhered to. But these same standards can't hope to cover all cases and all contexts, so principle-based approaches are often used. Principles, as opposed to explicit standards, can help organizations determine new and more effective ways to implement specific controls, and they can evolve in light of new technology or market developments. But they can also be hard to measure, and principles-driven approaches require significant access, expertise and experience from auditors and regulators to determine whether the principles are truly being adhered to. So, here are some thoughts on this space:
Technology innovations are constant, so any standard that prescribes a mandated configuration is going to be out of date almost immediately. But there is value in industry definitions of so-called benchmark configurations; the Center for Internet Security (CIS) is a good example of this. It is best to couple principle-based regulations with industry-driven standards. For example, a regulatory principle might be to ensure you adopt and maintain hardened O/S configurations using recognized industry configuration benchmarks with a risk-driven exception approval mechanism. Then the specific implementation would be, say, to adopt the CIS benchmarks and have an IT Risk Council (or equivalent) review specific exceptions. Showing completeness, or some exception approval process, to a regulator for this principle is more than likely to be satisfactory even if some circumstances dictate non-conformance to the particular benchmarks. The NIST Cybersecurity Framework, while not a regulatory standard per se, is a good example of this principles-to-controls-to-specific-standards approach.
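To make the principle-plus-exceptions pattern concrete, here is a minimal sketch in Python. The setting names and the benchmark values are hypothetical illustrations, not real CIS content, and the approved-exceptions set stands in for whatever risk council approval mechanism an organization actually runs:

```python
# Hypothetical hardened-OS benchmark settings (illustrative only, not CIS content).
BENCHMARK = {
    "ssh_root_login": "disabled",
    "password_min_length": 14,
    "auto_updates": "enabled",
}

def assess(observed, benchmark, approved_exceptions=frozenset()):
    """Return (conforming, findings): settings that deviate from the
    benchmark, excluding deviations covered by an approved exception."""
    findings = {
        key: observed.get(key)
        for key, expected in benchmark.items()
        if observed.get(key) != expected and key not in approved_exceptions
    }
    return (not findings), findings

host = {"ssh_root_login": "disabled",
        "password_min_length": 10,
        "auto_updates": "disabled"}

# Suppose the "auto_updates" deviation was accepted by the (hypothetical)
# IT Risk Council; it is excluded, but the password deviation still surfaces.
ok, findings = assess(host, BENCHMARK, approved_exceptions={"auto_updates"})
# ok is False; findings == {"password_min_length": 10}
```

The point of the sketch is the separation of concerns the text describes: the benchmark is an industry artifact that can change independently, while the exception list is a governance artifact owned by the risk process.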
Measurability is important. When writing regulations, either principle or standards based, it is important to define criteria for assessment, guides for examiners and other approaches for consistency. Some industries in some geographies have gotten quite good at this. The FFIEC IT Examination Handbooks are a good example. Now, I know some of my former colleagues in financial services might disagree with me, but this approach where there are handbooks written for examiners but made available to the regulated organizations works quite well. It manages to thread the needle between principles and standards with some degree of prescription. Most objectives are clear while giving examiners (and the examined) some latitude for reasonable debate in the context of their institutions. Now, of course, there can be some unreasonable debate depending on the experience of the examiner and the intransigence of those examined.
Governance frameworks are immensely useful. As mentioned, the NIST Cybersecurity Framework is a good example but there are many others: various ISO standards, flavors of SOC, and even some of the more in-depth constructs like COBIT, while a bit dated, can be used, in part, to good effect. If you and your regulators view these for what they are - a means of structuring appropriate professional management of risk, rather than a religion to blindly follow - then they can be effective.
Governments and regulators should be wary of the unintended consequences of laws and regulation, and should establish appropriate consultation periods with trade associations and regulated or governed institutions to avoid such unintended (negative) consequences. This is especially important for laws or regulations that are not directly security related. I've often wondered whether we should expect governments and regulators to include a mandatory statement in every proposal where the cybersecurity impact of such a law or regulation is explicitly considered.
Having ongoing consultation between governments, regulators and companies is useful for forward risk planning so adjustments to laws and regulations can be contemplated well in advance of when it becomes imperative to do so. There are plenty of good examples of government advisory boards and industry-specific arrangements. Again, financial services in the US has some good examples here with the ongoing partnership between the organization of regulators, FBIIC, and the financial institutions' counterpart, the FSSCC.
It's in everyone's interest to cooperate with regulators to establish an appropriate baseline of protection for an industry. Lax or missing risk frameworks can let new market entrants lower the bar, causing issues and eroding customer trust in ways that can impact a whole industry. Keeping a level playing field, and constantly raising that level, is in everyone's interest as long as it is done with a reasonable commercial partnership from the regulator. That takes immense understanding and collaboration from both sides.
3. Information Sharing
What is the right amount and type of information to share between the public and private sector to what effect?
This topic can get pretty tiring because the answer to pretty much every problem seems to be "let's do more information sharing" or "let's have more public/private partnership." It often reminds me of the South Park underpants gnome episode, except in this case constructed as Phase 1: Share Information and Phase 3: Cybersecurity is Fixed. But no one explains what Phase 2 is.
So, how to talk and think about information sharing:
The objective, means and desired results of information sharing should be explicitly stated. For example, is it to share threat intelligence so recipients can proactively respond, or is it to share best practices, vulnerabilities or other information? The nature, mechanisms and success criteria of such information sharing will determine the best approaches to take.
Capabilities are important. It is amazing how often organizations wish for more information sharing - for example, receiving more threat intelligence from a government agency - but have little or no ability to actually handle it, make sense of it and respond to it in the right way. Their gnomes have no ability to conceive or implement Phase 2. That makes it even more important, when developing the goals and means of information sharing, to have different types of sharing according to the relative maturities of the parties involved. A sharing "on ramp" for some organizations would be useful.
When thinking of sharing information that is currently classified: there was a time when our means of handling this was to keep getting more people cleared to see that information. The problem, for many commercial organizations, is that they have people who might not be clearable in certain countries. Even when they are, the people cleared to receive and act on such information need sufficient authority to direct action without being questioned - because they won't be able to share the underlying classified information as evidence of the need to act. So, the best approach often is to work hard to declassify information so it can be put in the hands of the many to enable action. This isn't as hard as often imagined, given that much of the classification level is determined by factors most organizations are unconcerned with. Most people now, when faced with a recommended control in response to some threat, will take action even if they can't see the sources and methods behind the intelligence that led to that recommendation. Of course, I'm simplifying here, but declassification is the ideal path.
Sometimes peer sharing between private organizations, like ISACs or ISAOs, is not as fluid as it should be. In many cases, organizations blame their lawyers. In my experience this isn't a great excuse. In some cases organizations just don't want to share and, rather than just saying that, they blame their lawyers. In other cases the organization has not asked its lawyers to work out suitable desensitization or other approaches to overcome the actual or perceived blocks to sharing. The more society can put in place incentives or approaches where we are biased to share by default, the better off we will be.
Even sharing vague information can be useful. I remember working with various technology vendors to figure out ways to get advance information of impending patches. Now, it was totally appropriate for them not to release information about patches ahead of time. But it was useful (and some did this) to give a vague indication of what technology area or product was affected. For example, while they couldn't share what patch was coming or which vulnerability it mitigated, the fact that it was coming to a certain product, say, an e-mail server, meant you could prime your e-mail engineering team to have people available to push the patch through alpha and beta to production quickly, and to postpone any other regular maintenance plans if needed.
For me, the most important area of focus across all forms of information sharing is to move beyond relationship-driven sharing. To be clear, trusted relationships between people in companies, and between companies and government, are important, and often the best information sharing happens in such contexts. But it cannot be the only way, because we then constantly have to rebuild sharing mechanisms when the people in those positions move or change roles. Rather, we also have to focus on building the pipes or rails for information sharing to occur irrespective of interpersonal relationships or particular tacit policies. Moving from "artisanal" to "industrial-scale" sharing is vital.
4. Supply Chain Risk
How should we think about 3rd party, 4th party or even deeper supply chain risks?
There are a lot of concerns about the impact of vendors upon regulated companies and what transparency exists down the extended supply chain across an array of risks. Here are some considerations:
Recognize, up front, in all discussions of supply chain risk just how fiendishly complex this space is both as a source of risk and as a means of risk transmission. This is one of the spaces where simple (often naive) approaches can look like they will work but hit the buzzsaw of complexity very quickly.
Regulators have direct oversight of the organizations they regulate through various means: examination, dialog, or consumption of third party audits, certifications or other evidence. However, everyone recognizes that risks flow up to those organizations through their supply chain. This supply chain is made up of a variety of different types of vendors, from connected services, software providers and outsourced IT to other types of business process outsourcing. Any reasonably sized organization could have hundreds or even thousands of vendors; some Fortune 100 companies have tens of thousands. Of course, these vendors (your 3rd parties) have vendors which could have risk impact, and those vendors have vendors as well, which can in turn be a flow of risk. So an organization has 3rd party risk, but also 4th, 5th (or more) party risk to consider.
Regulators, unless vendors are also regulated entities, won’t have the authority to get to these 3rd and especially 4th, 5th or other parties. In fact, organizations themselves might not be able to get to inspect their 4th and 5th parties, or even want to for practical reasons. The only proactive way to do this is to dictate the existence of a Third Party Risk Management (TPRM) program at your vendors, and require that they dictate such an approach onward to their 3rd parties and so on recursively down the tree of dependencies.
In reality, transparency of risk and issues down this supply chain can get muffled quite quickly and, despite best efforts, you will not likely get a quick flow of risk information to deal with issues. This is where creating your own transparency, as an organization or regulator, is important. There are various means to do this, but increasingly the only viable way is to build or source an extended supply chain graph. Such a graph (or map) of your supply chain down to the N'th parties lets you visualize the, perhaps surprising, extent of your risk. You can also use closed or open source information to tag that graph with known incidents, issues or other data to see the propagation of risk. It also lets you model scenarios beyond security, such as reliability or resilience, across that graph to see where you might need to intervene on particular risk build-ups. For example, as a regulator, seeing one vendor that is common to your entire industry may mean you would want to probe each of your directly regulated entities' TPRM programs about their level of diligence on that vendor. This is very hard to do, but there are companies that can help here, for example Interos. [Full disclosure: I am on the Board of Interos.]
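To make the graph idea concrete, here is a minimal sketch (with hypothetical vendor names and a deliberately simplified model) of walking an extended supply chain to label N'th-party tiers and check whether an incident at a deep-tier vendor can reach an organization through any dependency chain:

```python
from collections import deque

def tier_map(graph, org):
    """Breadth-first walk from an organization through its vendor
    dependencies, labeling each node with its shallowest tier
    (3rd party = tier 1, 4th party = tier 2, and so on)."""
    tiers = {org: 0}
    queue = deque([org])
    while queue:
        node = queue.popleft()
        for vendor in graph.get(node, []):
            if vendor not in tiers:   # keep the shallowest tier found
                tiers[vendor] = tiers[node] + 1
                queue.append(vendor)
    return tiers

def exposed(graph, org, incident_vendor):
    """True if an incident at incident_vendor can propagate up to org
    through some chain of dependencies."""
    return incident_vendor in tier_map(graph, org)

# Hypothetical supply chain: each key depends on the vendors it maps to.
supply_chain = {
    "AcmeBank":   ["CloudCo", "PayrollInc"],
    "CloudCo":    ["ChipMaker", "DNSHost"],
    "PayrollInc": ["DNSHost"],     # DNSHost is a shared 4th party:
    "DNSHost":    ["ChipMaker"],   # a concentration risk worth flagging
}

tiers = tier_map(supply_chain, "AcmeBank")
# tiers["DNSHost"] == 2: a 4th party reachable via two separate chains.
```

Real supply chain graphs are orders of magnitude larger and far messier than this, which is exactly the buzzsaw of complexity the text describes; the sketch only shows why even a basic graph traversal surfaces shared deep-tier dependencies that a flat vendor list hides.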
5. Cybersecurity Workforce
How do we ensure we are growing skills and jobs in our local market?
There are regular drum beats that we don’t have enough cybersecurity workers to meet the demand for such workers. This is mirrored in many government sponsored initiatives to create training, skills development or other certification programs. But as I’ve discussed in prior blogs, this is a more complex topic:
We need to view this as a supply and demand problem. There is a level of demand for cybersecurity professionals and a supply to meet that demand. The problem is that the supply is failing to meet demand.
Now, if you accept this is a supply and demand problem you are compelled to look more deeply, beyond aiming to only create more raw supply. You can look at the demand side and ask how demand can be reduced by various means of tooling, role distribution and other techniques. You can also look at the supply side and think creatively to enhance the productivity of existing people to meet that rising demand. If we need 10X more cybersecurity professionals then we can meet that demand by 10X’ing the productivity of the people we already have (I know this is a simplistic view as it fails to account for the distribution of supply, but you know what I mean).
On the supply side we also have to focus on the impediments to meeting additional supply goals such as insufficient entry-level positions or a failure to identify broad enough pools of diverse talent to draw from.
We also have to do more to embed education on cybersecurity, or, more likely, a context-specific set of topics, into the professional qualifications or degrees of other disciplines. Getting cybersecurity into every Computer Science, Technology or Business Systems degree would be transformational. It is just as important to get a cybersecurity or business information risk case study into every MBA program.
Finally, we have to stop thinking of cybersecurity roles as some kind of monolithic career. There is no typical "cybersecurity" role. We've evolved - especially if you think broadly across technology and business information risk - beyond such notions, and there is a much bigger tent of needed skills from engineering through to business risk. The more we view cybersecurity like other fields, the better off we will be, and the better we can optimize the use of each type of skill set to achieve our 10X productivity goal.

One analogy is the medical profession. It has no single archetype: there are doctors, nurses, research scientists, specialists, surgeons, radiologists, pathologists, medical receptionists, and so on. Everyone has their role, and each role is optimized around the scarcity and degree of expertise it requires. When you go for a doctor's visit you see a receptionist to update your details, a nurse to take your vitals, perhaps a phlebotomist for a blood sample and a radiologist for scans, and finally the doctor, who is equipped with the collection of that information. In many respects this structure permits career development, offers different levels of entry for different professional types and reserves the time of the scarcest resources for the work that most needs them.

Cybersecurity doesn't have to be much different. I've seen organizations structure their application security teams (to pick one example) in similar ways: junior, entry-level people work with application teams to set up build environments for scanning and analysis, run and respond to commodity scans and deal with straightforward issues, preserving the time of the "master" application security specialists to probe, review and discover more in-depth issues.
6. Nation State vs. Criminal Threats
How should we prioritize dealing with nation state threats vs. organized criminal threats against businesses?
This is often intertwined with the role of governments in defending a company's operations within its national boundaries. This is difficult given the intrinsically amorphous nature of global digital operations:
Organized criminal threats and nation-state threats continue to blend. Some nation-states are using criminal activity to fund themselves, others are using criminals either as mercenaries to help their cyber-operations or as cover for their goals.
For most organizations, organized crime is the biggest threat, and in many respects this is trans-national organized crime, so international law enforcement cooperation to interdict or respond to it is vital.
The reality is that organized crime is as much a national security issue as nation-state attacks. If organized criminal activity goes unchecked, organizations have fewer resources to apply to the necessary defense against nation-state threats. Also, cyber-criminal activities can have geo-political consequences given some of the destabilizing impacts they have.
7. Reporting and Transparency
How do we become aware of incidents affecting our domestic enterprises or those under our regulatory charge?
This is often related to the prior topic in that governments and regulators for their own reasons want to know whether companies (or indeed large segments of the populace) are being targeted or impacted by security events. There are different reasons for this and it brings up various subtleties:
There is inherent difficulty here: the organizations that have the visibility can't necessarily share it without compromising the expected privacy of the people or organizations that give them that vantage point.
So, aggregation/fusion and then synthesis of such data is needed to form the insights necessary to trigger appropriate responses.
This is another topic, like information sharing in general, where careful attention to the goals and objectives is needed to guide what activity can or should be done. It is worth looking more deeply at the inter-play between information sharing and necessary risk transparency. It will become progressively harder to do one without the other.
Bottom line: governments and regulators in all their various flavors have significant challenges. Laws and regulations can often be beneficial but can also be problematic when they spawn unintended consequences or hold back the innovation that is needed for us to rise above current security challenges. This is where the often vaunted public/private partnership is vital.