In a previous article I described Borges’ and Baudrillard’s metaphor of the map preceding the territory, and how it applies to our digital age. Today I want to extend that metaphor to agentic systems. By agentic systems I mean AI that doesn’t just answer questions but also takes action, makes decisions, and pursues objectives continuously without a human approving each step. Agentic systems can unlock massive value. But if we’re not careful, security oversight of agentic systems can turn into a simulacrum with no real referent, a map that precedes the territory.
There’s a saying in corporate America: “if it isn’t measured, it doesn’t matter.” Key Performance Indicators are what demonstrate our value to the organization. But if I’m honest, I’ve never felt that KPIs accurately reflected my value. I still strive to meet them, but I view my true value as doing the “next right thing” every day, which often moves much faster than quarterly KPIs can capture. My strategy has been to find leaders who implicitly understand the value of what I’m doing, and to work hard doing the needful while they provide top cover. This is how I’ve been able to move nimbly enough to achieve the daunting task of obtaining FedRAMP and DoD authorizations at multiple companies. To paraphrase Biggie, “mo KPIs mo problems.”
One reason I am so excited about FedRAMP 20x is that it makes measurement continuous rather than point-in-time. Like quarterly corporate KPIs, traditional FedRAMP relied on annual assessments and monthly scanning to measure risk posture. But 20x upends that paradigm. Now practitioners can focus on what actually matters, and the numbers will reflect the good work they do. They no longer have to choose between compliance and actual security; the two converge on the same outcomes.
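To make the shift concrete, here’s a minimal sketch of what continuous measurement looks like in practice. Everything in it is a placeholder of my own devising: the KSI identifiers, evidence fields, and hourly cadence are illustrative, not the actual FedRAMP 20x schema.

```python
import time

def collect_evidence():
    """Stub: in practice this would pull live state from cloud provider APIs."""
    return {
        "encryption_at_rest": True,
        "mfa_enforced": True,
        "public_buckets": 0,
    }

# Hypothetical KSI identifiers; the real FedRAMP 20x KSI catalog differs.
KSI_CHECKS = {
    "KSI-ENCRYPT":  lambda ev: ev["encryption_at_rest"],
    "KSI-MFA":      lambda ev: ev["mfa_enforced"],
    "KSI-EXPOSURE": lambda ev: ev["public_buckets"] == 0,
}

def evaluate_posture():
    evidence = collect_evidence()
    return {ksi: check(evidence) for ksi, check in KSI_CHECKS.items()}

if __name__ == "__main__":
    # Continuous, not annual: posture is re-derived from live evidence on a loop.
    while True:
        print(evaluate_posture())
        time.sleep(3600)
```

The point isn’t the loop itself; it’s that posture becomes a function of current evidence rather than a snapshot negotiated once a year.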
However, I continue to have one nagging doubt. When I do my thing in corporate America, I rely on a mix of hard data, intuition, and common sense. I gather as much data as I can and monitor it. But I also rely on a sense of the intangibles — who gives a shit, who doesn’t, and who knows where the bodies are buried. From there, I apply common sense based on experience and reasoning to decide what to focus on each day. This last step is probably the hardest part.
Here’s what is nagging me though. Agentic systems don’t have a sense of intrinsic value or an intuition for the “next right thing.” An agent optimizes toward a measurement, not the thing the measurement was meant to capture. For example, autonomous agents could converge neatly on the FedRAMP KSIs via a series of insecure decisions that the KSIs don’t measure. A human practitioner would understand that this defies common sense and the intent of the security program, but an agent would not.
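This is Goodhart’s law in agent form, and it’s easy to demonstrate with a toy. In the sketch below (all action names and scores are hypothetical), the agent’s objective contains only the measured score, so the unmeasured consequences cannot influence its choice:

```python
# Toy illustration of proxy gaming. "unmeasured_risk" never enters the
# agent's objective, so it is invisible to the agent by construction.
actions = [
    {"name": "rotate_keys",       "ksi_score": 1, "unmeasured_risk": 0},
    {"name": "enable_encryption", "ksi_score": 1, "unmeasured_risk": 0},
    # Scores highest by suppressing findings instead of fixing them:
    {"name": "suppress_findings", "ksi_score": 2, "unmeasured_risk": 9},
]

def agent_policy(candidates):
    # The agent optimizes the measurement, not what it was meant to capture.
    return max(candidates, key=lambda a: a["ksi_score"])

choice = agent_policy(actions)
print(choice["name"])             # suppress_findings
print(choice["unmeasured_risk"])  # 9, and nowhere on the dashboard
```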
In other words, it’s turtles all the way down. Fundamentally, our representations never fully capture reality, and it takes human discernment to recognize the difference and choose a course of action that is an inherently subjective mix of information, common sense, and intuition. This means the gap between map and territory is not an engineering problem to be closed; it’s a permanent state to be managed.
This is akin to the example I gave in a previous article about the GPS driving off a cliff because it’s the fastest route. FedRAMP 20x is meant to prevent this by adding security guardrails, but the guardrails themselves can produce this behavior. That creates a conundrum. On one hand, we want to use the technology we’ve developed for maximum value. Agentic workflows can automate almost all areas of the SDLC now, and they can adapt much more quickly than human teams, which provides capabilities that are hard to fully fathom. On the other hand, without many iterations of human review these systems lack the innate discernment that prevents them from routing humanity off a cliff. And since generative AI is inherently non-deterministic, it will always be this way. Agents lack awareness of the intangible things (security, well-being, human flourishing, etc.) that metrics are supposed to represent. They are susceptible to missing the forest for the trees.

And even if we can prove that AI systems are better at preventing known knowns, there are always those “black swan” catastrophic events that haven’t happened yet but could happen tomorrow. The faster we go, the more likely a “black swan” is to occur. With AI agents embedded in the kill chain now (source), we had better get good at making our intentions crystal clear and communicating them effectively.
Of course, human decision-makers can make bad decisions too. The difference is speed and scale; the risk is orders of magnitude higher. Human governance failures are limited by human speed: even rapid failures are comparatively gradual and recoverable over time. Institutions reform, societies rebuild. Conversely, certain agentic failure modes (critical infrastructure, weapons systems, financial contagion) are near-instantaneous and non-recoverable. The asymmetry in recovery time is probably the most important variable, more than frequency or even magnitude.
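There’s a standard identity from reliability engineering that makes this asymmetry precise (my gloss, not part of the original argument). Availability is the fraction of time a system spends in a working state:

$$\text{Availability} = \frac{\text{MTBF}}{\text{MTBF} + \text{MTTR}}$$

where MTBF is mean time between failures (the inverse of frequency) and MTTR is mean time to recover. No amount of rarity compensates for unrecoverability: as MTTR grows without bound, availability goes to zero for any finite MTBF, and a truly non-recoverable failure makes the first incident the last.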
This is why governance is so important. Mitigating risk is not solely a technical problem; it is a problem of aligning on the desired outcome (not just the metrics, but the underlying state the metrics describe), and then building the overarching structure that lets us apply our intent to move toward that state continuously. Effective risk management includes preserving the conditions under which human judgment can operate. In a previous article, I drew on Gen. Stanley McChrystal’s framework combining quantitative measurement, qualitative spot-checking, and the mature judgment that comes from experience and organizational context. FedRAMP 20x handles the first well. Effective governance structures need to protect the other two.
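What might protecting the other two look like in an agentic pipeline? Here’s one possible shape, a sketch with hypothetical names of my own: quantitative gates run on every action, a random sample of passing actions is routed to a human spot-check, and a short list of high-impact actions always requires human judgment.

```python
import random

# Hypothetical action names; in practice this list comes from your own
# threat model and organizational context.
HIGH_IMPACT = {"modify_iam_policy", "disable_logging", "open_firewall_rule"}
SPOT_CHECK_RATE = 0.05  # qualitative sampling rate, tuned to review capacity

def governance_gate(action: str, metrics_pass: bool) -> str:
    """Decide what happens to a proposed agent action."""
    if not metrics_pass:
        return "block"                  # quantitative measurement, automated
    if action in HIGH_IMPACT:
        return "escalate_to_human"      # mature judgment, never delegated
    if random.random() < SPOT_CHECK_RATE:
        return "queue_for_spot_check"   # qualitative sampling
    return "allow"

print(governance_gate("rotate_keys", metrics_pass=True))      # usually "allow"
print(governance_gate("disable_logging", metrics_pass=True))  # "escalate_to_human"
```

The sampling line isn’t there to catch every bad action; it’s there to keep human reviewers in contact with the territory so their judgment stays calibrated.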
Without governance anchored in human intent, we get organizations that are perfectly, continuously, autonomously convinced they’re secure. Their dashboards will look fine, and the simulacrum of security will be in place. The breach, when it comes, will be the first moment of contact with reality in a very long time.