Phil Venables

AI Consequence and Intent - Second Order Risks

There is a lot of good discussion, and there are emerging methods, for managing the risks of AI in various forms, from training data protection and model validation to safety harnesses for use and more. Daniel Miessler’s AI Attack Surface Map is a useful summary, and Google’s developing SAIF framework is good.


But we should all remember to explore the risks that will arise from the use of AI in particular contexts. There will be a broad set of operational risks to manage in AI-enabled business and other services that will be different from today's inherent risks. How we manage those will, of course, be important while we still get the tremendous benefit of adopting these amazing new technologies. In other words, there's AI risk itself, and then there are the second-order effects: the ways risks will shift in processes that make use of generative AI.


Let’s explore a couple of examples. You can likely imagine many more.


Contractual Intent

Imagine you have a generative AI assistant that helps you write legal contracts, and that you send the resulting contract to someone whose own AI assistant helps them parse and understand the contracts they receive. Call the intent the sender's assistant encodes A, the understanding the receiver's assistant decodes B, and the actual contract text that passes between them C.


There are some questions to ask here:


  1. Does A = B? Does the encoding of the intent match the consequent decoding?

  2. Does C = A and/or B? Does the actual contract, when independently parsed, match what A intended and B understood?

  3. Will C be the actual contract? Surely yes, and people will sign it, assuming they actually read it.

  4. Will C be reviewed, and how? In particular, how will C be checked against both A and B?
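
One way to make questions 1 and 2 concrete is to mechanize the comparison. The following is a minimal sketch, not a definitive implementation: DraftAgent, ReaderAgent, and Similarity are hypothetical stand-ins that would, in practice, wrap generative model calls, embedding comparisons, or human review, and the 0.9 threshold is an illustrative assumption.

    from dataclasses import dataclass
    from typing import Callable

    # Hypothetical stand-ins: in practice each would wrap a generative
    # model call, an embedding comparison, or a human review step.
    DraftAgent = Callable[[str], str]         # intent A -> contract text C
    ReaderAgent = Callable[[str], str]        # contract text C -> understanding B
    Similarity = Callable[[str, str], float]  # 0.0 (unrelated) .. 1.0 (equivalent)

    @dataclass
    class IntentCheck:
        a_vs_b: float  # does the decoded understanding match the original intent?
        a_vs_c: float  # does the contract text reflect the intent?
        b_vs_c: float  # does the understanding follow from the contract text?

    def check_intent(intent: str,
                     draft: DraftAgent,
                     read: ReaderAgent,
                     similar: Similarity,
                     threshold: float = 0.9) -> IntentCheck:
        """Generate C from intent A, derive understanding B from C,
        and score the three pairwise agreements."""
        contract = draft(intent)        # C: the artifact both parties sign
        understanding = read(contract)  # B: what the receiver's agent extracts
        check = IntentCheck(
            a_vs_b=similar(intent, understanding),
            a_vs_c=similar(intent, contract),
            b_vs_c=similar(contract, understanding),
        )
        if min(check.a_vs_b, check.a_vs_c, check.b_vs_c) < threshold:
            # Route to human review rather than silently signing C.
            print("Possible intent mismatch; flag for human review.")
        return check

Note the design choice hiding in question 4: who defines the similarity function, and who reviews the flagged cases, is exactly where the second-order risk lives.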


Now imagine this for RFPs or any other complex document that people hate writing and equally dislike reading.


What will be the "evolutionary pressure" on the way C is automatically generated in the face of “adversarial” generation between A and B? Will it still be English (or another common language) or will there be emergent encodings?
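
To see how that pressure might operate mechanically, consider a toy hill-climbing loop. This is a minimal sketch under one assumption: that the sender's agent can repeatedly query the receiver's agent (or a proxy for it) for a score. The rewrite and receiver_score callables are hypothetical stand-ins, not any real API.

    from typing import Callable

    def optimize_against_agent(message: str,
                               rewrite: Callable[[str], str],
                               receiver_score: Callable[[str], float],
                               rounds: int = 20) -> str:
        """Hill-climb a message against the receiving agent's scoring:
        keep any rewrite that the receiver's agent rates more highly."""
        best, best_score = message, receiver_score(message)
        for _ in range(rounds):
            candidate = rewrite(best)  # e.g. a model asked to "make this land better"
            score = receiver_score(candidate)
            if score > best_score:
                best, best_score = candidate, score
        return best

Nothing in this loop rewards the message staying in readable English; it rewards whatever the scoring agent responds to. Run at scale on both sides of a negotiation, that is exactly the kind of selection that could produce emergent encodings.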


Electronic Communications

Now consider e-mail, or other forms of electronic communication. We’re soon going to end up in more situations where you will have a generative AI agent writing your communications for you. At the other end, the person you are sending them to will have an agent to summarize (and perhaps even reply to) those communications. There will be pressure to write and optimize email so that the right message gets through the other person’s agent and reaches the actual person. Again, there will be "evolutionary pressure" that will evolve the nature of the intermediate communication. All this might even become akin to the expertise required for search engine optimization.


Now we have similar questions:

  1. Does A = B? Do the sender's wishes match the receiver's synthesis?

  2. Does C = A and/or B? Is the actual content even understandable?
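
The same harness sketched for contracts applies directly here. As a toy usage example, reusing check_intent from the sketch above, with crude token overlap standing in for real semantic similarity:

    import re

    # Toy stand-ins to exercise check_intent; real agents would be model calls.
    def draft(intent: str) -> str:
        return f"Dear team, {intent}. Regards."

    def read(message: str) -> str:
        return message.replace("Dear team, ", "").replace(". Regards.", "")

    def similar(x: str, y: str) -> float:
        # Crude token-overlap proxy; a real check needs semantic similarity.
        def tokens(s: str) -> set:
            return set(re.findall(r"[a-z0-9']+", s.lower()))
        xs, ys = tokens(x), tokens(y)
        return len(xs & ys) / max(len(xs | ys), 1)

    result = check_intent("ship the Q3 report by Friday", draft, read, similar)
    print(result)  # a_vs_b is 1.0; a_vs_c is diluted by the boilerplate

Even this toy shows the subtlety: A and B agree perfectly, but the naive metric penalizes the greeting and sign-off in C, so deciding what "equal" means is itself a risk decision.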

We could develop similar situations for parties to a video conference doing independent transcription, language translation, and many other possibilities. There are even more disturbing and amusing possibilities already being made into Hollywood movies.


Bottom line: AI has, and will continue to have, transformative benefits for society, and we need to continue to develop approaches for its safe and responsible use and development. In doing this we need to keep paying attention not only to the primary risks but also to the risks stemming from these second-order effects.
