I saw it happen once while watching the spectacle from a terrace with a nice glass of wine in hand. You can assume that anyone entering that traffic circle has a driver's license and knows the traffic rules. The problem with Place Charles de Gaulle is that upon entering the square, conditions suddenly become extremely complex, especially for a motorist accustomed to uncluttered intersections and traffic lights.
Chatbot between guardrails
Something similar is happening with AI agent technology. You start in the experimentation phase with a controlled environment and clear rules, but after implementation you end up in unexpectedly more difficult situations. While many companies still struggle to implement the technology properly, as a recent McKinsey report showed, there are also more and more successes to report. Organizations are already deploying agent technology to make interactions with customers much smarter, for example. Chatbots are then no longer irritating decision-tree-based helpers but genuinely customer-friendly assistants that can, for instance, guide a claim settlement online and answer customer questions meaningfully.
In such cases, it is often still a relatively simple agent: it communicates naturally with the customer via genAI, uses the resulting query to look something up in a knowledge system, initiates an action in a back-office application, and returns the result to the customer. Moreover, the designers have thought carefully about limiting the agent's authority and have laid down the rules and guardrails in their AI governance.
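Sketched in code, such an agent is little more than a loop with hard limits around it. The Python fragment below is a minimal, hypothetical illustration of that idea; every name in it (the allow-list, KnowledgeBase, BackOffice, the claim threshold) is an assumption for the sketch, not a specific product's API:

```python
# Minimal sketch of a simple, guardrailed customer-service agent.
# All names here (ALLOWED_ACTIONS, KnowledgeBase, BackOffice) are
# hypothetical illustrations, not a specific product's API.

ALLOWED_ACTIONS = {"lookup_policy", "register_claim"}  # authority limited by design
MAX_CLAIM_AMOUNT = 1_000  # guardrail: larger claims are escalated to a human

class KnowledgeBase:
    def lookup(self, query: str) -> str:
        # In practice: retrieval over policy documents, e.g. a vector search.
        return f"Policy details matching {query!r}"

class BackOffice:
    def register_claim(self, customer_id: str, amount: float) -> str:
        # In practice: a call into the claims system, behind its own access controls.
        return f"Claim of EUR {amount:.2f} registered for customer {customer_id}"

def handle_request(customer_id: str, message: str, amount: float) -> str:
    # In practice a genAI model turns the free-text message into a structured
    # intent; for this sketch a keyword check stands in for that step.
    intent = "register_claim" if "claim" in message.lower() else "lookup_policy"

    # Guardrail 1: the agent may only perform actions on the allow-list.
    if intent not in ALLOWED_ACTIONS:
        return "I can't help with that; let me connect you with a colleague."

    # Guardrail 2: anything beyond the agent's authority goes to a human.
    if intent == "register_claim" and amount > MAX_CLAIM_AMOUNT:
        return "This claim needs human review; a colleague will contact you."

    if intent == "register_claim":
        return BackOffice().register_claim(customer_id, amount)
    return KnowledgeBase().lookup(message)

print(handle_request("C-1024", "I want to file a claim for storm damage", 450.0))
```

The point of the sketch is that the guardrails sit outside the genAI component: whatever the model produces, the agent can only ever execute actions it is explicitly allowed to perform.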
Agents in trouble
After a successful implementation, ideas quickly bubble up about where else the agent technology could be used or how the chatbot's functions could be expanded. Soon there are more agents, giving each other commands and exchanging information. With each new agent, complexity increases, and with it the risk of unforeseen situations. Cooperating agents may, for example, make wrong decisions based on another agent's assumptions, or rely on incomplete data. As a result, they can unintentionally delete data or perform incorrect actions that disrupt processes. And because agents work at very high speed, a wrong action quickly causes far more damage than the same mistake made by a human.
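A toy example makes the mechanism concrete. In the hypothetical sketch below, a triage agent encodes an assumption ("dormant means safe to delete") and a cleanup agent trusts that flag without re-checking the data; neither is wrong in isolation, but the handoff destroys a record:

```python
# Hypothetical illustration of how one agent's assumption cascades into damage.
# Neither agent is wrong in isolation; the harm comes from the handoff.

records = {"C-1024": "active", "C-2048": "dormant"}

def triage_agent(customer_id: str) -> dict:
    # Encodes an assumption: "dormant" means "safe to clean up".
    return {"customer_id": customer_id, "cleanup": records[customer_id] == "dormant"}

def cleanup_agent(task: dict) -> None:
    # Trusts the upstream flag without re-checking the underlying data.
    if task["cleanup"]:
        del records[task["customer_id"]]  # irreversible without a rollback facility

# A dormant account that a human might simply have reactivated is deleted,
# and a fleet of agents would repeat this at machine speed across thousands
# of records before anyone noticed.
cleanup_agent(triage_agent("C-2048"))
print(records)  # {'C-1024': 'active'}
```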
Many agent risk mitigation tools focus on monitoring and preventing erroneous actions, but offer little or no help once things actually go wrong, Richard Cassidy, EMEA CISO at Rubrik, recently warned in an opinion piece for ComputerWeekly. His advice is to ensure, as when preparing for a cyberattack, that the IT systems agents interact with are capable of immediate rollback. That way, an erroneous agent action does not jeopardize business continuity. In addition, agents should generate enough log information that not only what went wrong but also why can be quickly identified.
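That pattern can be summarized as: snapshot state before every agent action, log enough context to reconstruct the cause, and roll back immediately when an action turns out to be wrong. The Python below is a minimal, hypothetical sketch of that pattern; the class and field names are illustrative, not Rubrik's or any vendor's API:

```python
# Hypothetical sketch: snapshot before every agent action, log who did what
# to which state, and roll back when an action turns out to be erroneous.
import copy
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent-audit")

class RecoverableStore:
    def __init__(self, data: dict):
        self.data = data
        self._snapshots: list[dict] = []

    def apply(self, agent: str, action: str, mutate) -> None:
        # Snapshot first, so an erroneous action can be undone instantly.
        self._snapshots.append(copy.deepcopy(self.data))
        # Log the actor, the action, and the state before the change,
        # so the cause of an incident can be traced afterwards.
        log.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "before": self.data,
        }))
        mutate(self.data)

    def rollback(self) -> None:
        # Restore the last known-good state.
        self.data = self._snapshots.pop()

store = RecoverableStore({"C-2048": "dormant"})
store.apply("cleanup_agent", "delete_customer", lambda d: d.pop("C-2048"))
store.rollback()   # undo the erroneous deletion
print(store.data)  # {'C-2048': 'dormant'}
```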
When the gendarme arrives
The unfortunate motorists on Place Charles de Gaulle have little use for such advice. When the gendarmerie arrives, they will have to rely mainly on the position of the dents in their cars, and perhaps the statements of bystanders, to clarify the circumstances of the accident. A rollback is not possible for them without a costly visit to the garage, and their reputation is permanently damaged. Don't let the arrival of agent technology damage your organization's reputation when there are still good protective measures to be taken.