Phil Venables

Where the Wild Things Are: Second Order Risks of AI

Every major technological change is heralded with claims of significant, even apocalyptic, risks. These almost never turn out to be immediately correct. What often turns out to be riskier are the 2nd order effects that result from what is done with the new technology.

 

No matter what, we do have to care about AI risks. Many past technological warnings of disaster have been avoided precisely because we did care. But the bigger risks come with what comes after what comes next. This is inherently unpredictable, but that doesn’t mean we can’t try to foresee it, or at least look for warning signs. To paraphrase the thesis of Collingridge’s The Social Control of Technology: when a technology is in its infancy and could be controlled, we don’t understand its consequences; by the time we do, it is so widespread and entrenched that it is difficult to control.


Clearly, this is all worth paying attention to: not so that we become overly anxious about AI, but so that we can manage the risks, reap the massive rewards in safe and responsible ways, and be ready to mitigate the inevitably surprising second order risks.

 

Lessons from History

History has been punctuated by massive technological shifts. In each case, at their inception, predictions of disaster were made that did not come about, at least not in the timeframes or at the scale predicted. Rather, it was the 2nd order effects that had the more significant consequences. Let’s look at a few illustrative cases. To be clear, I’m in no way diminishing the 10, 100, perhaps 1000X benefits that outweigh the negatives of each technology; for the purpose of this exercise let’s just focus on the 2nd order risk effects:


  • The Steam Engine. There were immediate warnings of perpetual explosions and deaths from its use in transport. Instead it led to the industrial revolution and all that came from that. 


  • The Car. There were significant concerns about the amount of dust created on unpaved roads. Instead it led to wider forms of pollution and the reshaping of cities. 


  • Computing and the Internet. Microelectronics, IT and then the Internet were going to take everyone’s jobs. Instead they led us on a long path to incessant cybersecurity issues and a world in which (to quote Dan Geer) “every sociopath is your next door neighbor”. 


  • The Smart Phone. Smart phones, and cell phones before them, were going to give everyone cancer. Instead the smartphone exponentially amplified the nascent social media platforms and birthed all the challenges that came with them. 


  • Cryptocurrency. It was going to disrupt economies and eliminate fiat currency. Instead it gave us ransomware. 


Let’s look at the Internet a bit more. In the early days of the World Wide Web (the 1990s) it was dismissed by many as a fad. Many businesses were brought on-line by IT staff through skunk-works efforts. At the beginning of the Web we talked of the Information Superhighway and the idea of Internet TV (that is, the Internet through your TV, rather than what actually happened, where the Internet ate the TV). For threats, the big concerns were spam, on-line crime, people becoming socially isolated, cyber-bullying, the digital divide, and the eradication of jobs. Movies of the era (WarGames, The Terminator and such) depicted various AI-driven apocalyptic scenarios. 


This paranoia was healthy in that it catalyzed significant work to address these risks. In the 1990s we developed the SSL protocol, the ancestor of today’s protocols that encrypt most Internet traffic and enabled the explosion of e-commerce. We developed effective anti-spam technology. Jobs and infrastructure were created. In some cases infrastructure was over-built (remember the dot-com collapse of 2001), but even that laid the foundations of what came next. 


But we, inevitably, missed the 2nd order effects. The massive rise in organized crime came not from exploiting the Internet itself but from the weak identity models of businesses naively digitizing existing commerce. Hacking by criminals and nation states was exponentially boosted when Microsoft shoved TCP/IP into Windows 95 (as did other vendors with their products), with the effect that most computer systems that had relied on isolation for security were now connected to everything and everyone else. The standardized protocols and the resultant economies of scale created the Internet-of-things and made operational technology a reality, where not only is all IT connected but all infrastructure is connected to everything else. This in turn has enabled criminals and nations to use billions of insecure devices as “bot” armies to drive denial of service attacks, and to provide domestic staging posts where nation state cyber-operatives can hide in plain sight from US intelligence agencies constrained to operate internationally.


Amusingly, we were so worried about security getting too good that the US Government pushed hard to control cryptography with the Clipper chip and similar schemes, without predicting the 2nd order opportunity of spying through the insecure end points now on-line thanks to the unconstrained proliferation of connectivity.


Our inability to clearly see second order effects isn’t limited to security; it pervades all aspects of technology. A technology like SQL, which democratized the workforce’s ability to query and connect data sources, helped lead to just-in-time supply chains. There are more examples at wider scale in Benedict Evans’ annual presentation.



So, here we are with Generative AI (we had fewer fears, for some reason, about prior generations of AI applied as traditional machine learning) and the same situation is playing out. Everyone is concerned about many risks, some valid and some not so much, and is working to mitigate them. Some risks are overblown and may always be so; some are overblown for now but may materialize over the coming years or more. However, as with any prior technological change, we need to also look harder for the 2nd order risks. By definition, I don’t think we can reliably predict these, but we can at least be on the lookout for what comes after what comes next. 


Example Second Order Risks

The rest of this post is focused on possible second order risks from a digital / information technology perspective. But, you can also contemplate many other examples from biology, chemistry, and all the other sciences. 


One way of framing these types of analyses is to imagine a world where the positive is true, and ask, “What risks exist in such a world?” Then imagine a world where the negative is true, and ask the same question. For example, let’s assume generative AI is truly transformative for synthetic biology and pharmaceutical development. An extreme positive outcome is that the treatment of disease is transformed, lifespans are extended, and treatments are highly customized for each individual. In that world, what risks do we have? They could range from societal pressure to fund healthcare during the initial transition period when treatments are expensive, to social security strain from longer lifespans, to deadly diseases not being cured but becoming chronic (more people living, but impaired), all the way through to the difficulty of spotting side effects unique to individually tailored treatments. What steps do we need to take now to be at least directionally ready for that reality? On the negative side, imagine a world of biological terror with unique pathogens being constructed regularly. That world would have closed borders for people and restricted trade; what steps do we need to take to position for that version of reality?


1. Human Misunderstanding Mediated by AI (“Did we really agree to that?”) 

Imagine a situation where one person or organization uses AI to generate content or transactions which are then, in an uncoordinated way, consumed by another person or organization through their own AI intermediary. What are the collective consequences of this, especially at scale? For example, consider legal contracts and their intent. Imagine you have a generative AI assistant that helps you write a legal contract (call your intent A), and you send that contract (call its actual text C) to a person whose own AI assistant helps them parse and understand contracts sent to them (call their resulting understanding B).



There are some questions to ask here:


  • Does A = B? Does the encoding of the intent match the consequent decoding?


  • Does C = A and/or B? Does the actual contract, when independently parsed, match what was intended (A) and understood (B)?


  • Will C be the actual contract? Surely yes, and people will sign it on the assumption they have read it, when they probably haven’t. 


  • Will C be reviewed, and how? How will C be checked against both A and B?


Now imagine this for RFPs or any other complex document that people hate writing and equally dislike reading. What will the evolutionary pressure be on the way C is automatically generated in the face of “adversarial” generation between A and B? Will it still be English (or another common language), or will emergent encodings appear?
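To make the round trip concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical: `draft_contract` and `parse_contract` stand in for the sender’s and receiver’s AI assistants, and in this deterministic toy the checks trivially pass. The second order risk is precisely that real generative models give no such guarantee, and these equality checks become hard semantic-comparison problems.

```python
from dataclasses import dataclass

# Toy model of the A -> C -> B round trip. Each function is a hypothetical
# stand-in for a different AI system.

@dataclass(frozen=True)
class Intent:
    terms: frozenset  # normalized terms a party cares about, as (key, value) pairs

def draft_contract(intent_a: Intent) -> str:
    """Sender's assistant: encode intent A into contract text C."""
    return "; ".join(sorted(f"{k}={v}" for k, v in intent_a.terms))

def parse_contract(contract_c: str) -> Intent:
    """Receiver's assistant: decode contract text C into understanding B."""
    pairs = (item.split("=", 1) for item in contract_c.split("; ") if item)
    return Intent(frozenset((k, v) for k, v in pairs))

intent_a = Intent(frozenset({("delivery_days", "30"), ("late_penalty", "2%")}))
contract_c = draft_contract(intent_a)          # what actually gets signed
intent_b = parse_contract(contract_c)          # what the receiver understands

print(intent_b == intent_a)                    # Does A = B?
print(parse_contract(contract_c) == intent_a)  # Does C = A?
```

With real assistants, each of those checks is a question about meaning rather than string equality, which is why independent review of C against both A and B matters.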


Let's do the same for other inter-personal electronic communications, say e-mail. We will soon end up in more situations where a generative AI agent writes your communications for you. At the other end, the person you are sending them to will have an agent that summarizes (and perhaps even replies to) those communications. There will be pressure to write and optimize email so that the right message gets through the other person’s agent to reach the actual person.



Again, there will be evolutionary pressure that will change the nature of this intermediated communication. Now we have similar questions:


  • Does A = B? Do the sender's wishes match the receiver's synthesis? 


  • Does C = A and/or B? Or will the actual content even be understandable?  


We could develop similar situations for parties to a video conference running independent transcription, language translation and many other possibilities. For example, what happens when we all agree to an AI’s transcription of a meeting that most of us didn’t attend? What happens when the transcription or synthesis provided by our virtual attendee (whom we sent in our place) doesn’t agree with the moderator’s AI’s transcription of events? Will we need independent AI validators?


2. Complex Agent Interactions (“What are all these agents actually doing?”) 

It is inevitable in the medium term that more people and organizations will mediate activities through AI agents (or assistants). Our personal AI agents will interact with other people’s and organizations’ agents to coordinate events, mediate transactions, book vacations, evaluate and select products to purchase, coordinate medical treatments and monitor outcomes, and much, much more. 


There might be a small number of dominant personal agents due to network effects, but there will still be a range of “modules” [think apps] that people will buy and plug into their agents for specific tasks. Businesses will implement a plethora of different agents to interact with people and to conduct business to business transactions. So, this world will be full of agents with different underlying models, trained in different ways, and configured with different safety or reliability settings. 


We need to work to understand the emergent properties that might occur in such a world of trillions of independently interacting agents with powerful intelligence all primed to achieve competing goals. Evolutionary pressure might cause agents to change in unpredictable ways. Businesses and governments need to be better equipped to monitor system-wide effects. 


Some of the second order risks that might come from this could be:


  • Concentration risk / funneling. Optimal paths and services will be discovered quickly, leading to overwhelming traffic surges and massive gyrations of supply chains into and out of certain products or services. Financial flash crashes from conventional algorithms will look tame compared to this world of agents making purchases on behalf of people and businesses (see the toy simulation after this list).


  • Manipulation. Competitive and adversarial pressures will seek to manipulate agents to achieve desired outcomes through small adjustments in agent behavior with outsize network effects or by adversarial AI techniques (e.g. model poisoning).


  • Reproduction. Agents will have the ability to reproduce and replicate, and will introduce modifications to themselves under competitive (evolutionary) pressure. The emergent properties from this could be the path to a more distributed form of AGI (think ant colonies vs. ants).


  • Conflict. Indeed, such a diffuse agent-based world will make conflict look more like my “ant hill” vs. your “ant hill”.


  • Race conditions. Inadvertent creation of critical race conditions in sensor-driven, interdependent systems. 
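As a thought experiment for the funneling risk above, here is a deliberately crude Python simulation; every name and constant in it is invented purely for illustration. Each round, every agent greedily picks whichever service currently looks best, the winner degrades under the load, and the whole population stampedes to the runner-up.

```python
# Crude simulation of funneling: agents greedily pick the best-looking
# service, degrade it through congestion, then stampede elsewhere.
# All names and numbers are invented for illustration.

NUM_AGENTS = 10_000
BASE_QUALITY = {"svc_a": 0.90, "svc_b": 0.88, "svc_c": 0.85}
CAPACITY = 4_000  # requests a service can absorb before quality degrades

perceived = dict(BASE_QUALITY)  # the quality signal agents currently see
for step in range(6):
    best = max(perceived, key=perceived.get)
    # Every agent independently makes the same "optimal" choice.
    load = {s: (NUM_AGENTS if s == best else 0) for s in BASE_QUALITY}
    for s in BASE_QUALITY:
        congestion = max(0, load[s] - CAPACITY) / NUM_AGENTS
        perceived[s] = BASE_QUALITY[s] - congestion
    print(f"step {step}: all {NUM_AGENTS:,} agents chose {best}")
```

Even this trivial setup never settles: the population just oscillates between the top two services, an outcome no individual agent intends.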


3. Deskilling (“Most of us can’t smelt iron; that might be ok… or not.”) 

As we become ever more dependent on technology driven by the higher capabilities of AI, will we de-skill people (or, more correctly, will people up-skill away from prior skills)? Will it be possible to maintain the societal discipline to keep sufficient slack in the system, whether to sustain productivity when people’s AI agents fail or to have people preserve the skills they gave up to AI? It’s one thing for navies to retain the capability for manual navigation by sextant; it’s another to make skills back-up pervasive enough to provide some level of societal resilience. 


4. Everything Has an API (“You think customer service is bad now.”)

In a world of person-to-person, person-to-business and business-to-business agent interactions, every single thing will have an API (an Application Programming Interface, or perhaps an Agent Programming Interface). When your agent fails to mediate a task across a complex web of business agents/APIs and you call one of those businesses, they will have little idea of their role in the complex web of your agent’s goals. Or, more likely, their “complaints” API will struggle to deal with your query. 


5. Augmented Reality (“You mean manipulated and filtered reality?”)

Are we underestimating the impact that AR will have? Technologies currently in the oddball stage (glasses, visors, contact lenses) will become pervasive in the medium term as visual overlays. The second order impact here is what happens when your personal AI agent(s) are mediated through AR. What happens when an AI agent overlays, or filters out, the things it thinks you don’t want to see? What are the societal knock-on effects of this? What are the criminal and adversarial opportunities? 


6. AI Replacement of Humans in Dual Control Situations (“Open the pod bay doors HAL.”) 

Many critical controls in society rely on human dual control / separation of duties. There is insufficient research on the consequences of replacing Human 1 + Human 2 control with AI + Human, Human + AI or AI + AI in these scenarios. Scenarios ranging from medical checklist conformance to weapons release, financial transaction approval and flight safety have obvious potential risks. 
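To see why this needs research, consider a sketch of how the classic control is typically structured and what “independence” might have to mean once approvers are AI. This is a minimal illustration under my own assumptions; the `Approver` type and the model-family rule are invented for the example, not an established standard.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Approver:
    name: str
    kind: str          # "human" or "ai"
    model_family: str  # for AI approvers, the base model they derive from ("" for humans)
    decide: Callable[[dict], bool]

def dual_control(action: dict, a: Approver, b: Approver) -> bool:
    """Classic rule: an action proceeds only with two independent approvals."""
    if a.name == b.name:
        raise ValueError("approvers must be distinct")
    # Assumed extra rule: two AI approvers sharing a base model may fail in
    # correlated ways, quietly voiding the independence the control relies on.
    if a.kind == b.kind == "ai" and a.model_family == b.model_family:
        raise ValueError("correlated AI approvers defeat dual control")
    return a.decide(action) and b.decide(action)

# Hypothetical usage: a human paired with an AI reviewer.
human = Approver("alice", "human", "", lambda act: act["amount"] < 50_000)
ai = Approver("reviewer-1", "ai", "model-x", lambda act: act["risk_score"] < 0.7)
print(dual_control({"amount": 10_000, "risk_score": 0.2}, human, ai))  # True
```

The hard part is the correlated-approvers check: with humans, organizational independence is well understood; with AI, defining and verifying independence between two models is an open question.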


How to Start to Look for 2nd Order Effects

In my experience the best way to even try to get a handle on 2nd order effects is to look through several lenses:


  1. What does the technology want? In other words, what are the economics, incentives, and opportunities that will drive it? 

  2. What do the humans want? In other words, what do people (or businesses) want from it, from connectedness and growth to safety? 

  3. What does a world look like if all the positives come about? Then what are the risks in that world?

  4. What does a world look like if all the negatives come about? Then what are the risks in that world?

Scenario planning is a useful technique for this exercise. 


Bottom line: We should be appropriately cautious about AI, but not so cautious that we forgo the truly massive upside that bold but responsible use of this technology will give us across a range of fields. It’s healthy to have a societal-level debate about AI risks, as that is what will drive the mitigation of those risks so we can enjoy the benefits of this remarkable capability. But in doing so we need to be much more focused on the real risks that come, and have come in prior technological shifts, from the 2nd order effects. Ask: in a society reshaped by AI, what does that world look like? In that world, what risks will we face that we don’t face today? And what do we need to do to be prepared to mitigate those effects? If we’re not careful, that will be where the wild things truly are.
