"Some controls just can't be automated."
I hear this often, and from people I respect who have spent years navigating compliance frameworks. The sentiment is not entirely wrong. There are certainly aspects of security programs that resist automation. But the problem with this statement is how often it's used as a blanket dismissal that shuts down the possibility of exploration before it begins.
In my view, things are both more nuanced and more interesting. Controls that seem inherently manual often have automatable components hiding in plain sight. The key is understanding what we're actually trying to measure and then building the technical infrastructure to capture that measurement when it matters. Sometimes the cadence is continuous, other times periodic, but either way the measurement happens automatically.
From Paper Trails to Data Pipelines
Consider AT-2, the awareness and training control from NIST 800-53. On the surface, this looks like the epitome of a "non-technical" control. It requires organizations to ensure personnel receive appropriate security awareness training and that privileged users get role-based training. Traditionally, this means someone manually compares training completion records against employee rosters, generates a report, and updates it quarterly or annually.
That approach worked in a pre-cloud era when organizations moved slowly and access management was relatively static. But in modern cloud environments where infrastructure changes constantly and engineers can spin up and assign access to production resources in minutes, quarterly attestations about who completed training last year tell you almost nothing about who has taken the training and has access right now.
Much more helpful is a live view of compliance status that answers the question authorizing officials actually care about: "Do the people who can access sensitive systems right now have the training they need?" Not "did they have training at some point in the past," but now, today, as the system exists at this moment.
CSPs participating in the FedRAMP 20x Phase 2 pilot are proving that this works by continuously pulling user data from identity provider APIs and comparing it against completion records from training platform APIs. I strongly encourage people to watch the public demonstrations where CSPs have shown this approach in action.
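The core of such a check is small. Here is a minimal sketch of the AT-2 comparison described above; the record shapes and field names are illustrative assumptions, not any real identity provider's or training platform's schema, and a real system would fetch these lists from the respective APIs rather than hardcode them.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical records as they might come back from an identity provider
# API and a training platform API. Field names are assumptions for this
# sketch, not a real vendor schema.
idp_users = [
    {"email": "alice@example.gov", "privileged": True},
    {"email": "bob@example.gov", "privileged": False},
]
training_completions = {
    "alice@example.gov": datetime(2024, 1, 15, tzinfo=timezone.utc),
}

def at2_violations(users, completions, max_age_days=365, now=None):
    """Return privileged users whose awareness training is missing or stale."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    violations = []
    for user in users:
        if not user["privileged"]:
            continue  # AT-2 role-based training applies to privileged users
        completed = completions.get(user["email"])
        if completed is None or completed < cutoff:
            violations.append(user["email"])
    return violations
```

Run on a schedule, or on every access-grant event, this answers "who has access right now without current training?" instead of "who had training at some point last year?"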
The Three Pillars of Compliance Automation
What makes this possible? Three technical capabilities that work together: APIs, policy engines, and GitOps workflows.
APIs provide the connective tissue. Every modern SaaS application exposes programmatic interfaces. Identity providers, HR systems, training platforms, ticketing systems, code repositories - all of them speak JSON and can be queried in real time. This means data that previously existed in silos, accessible only through manual export and reconciliation, can now flow freely between systems.
Policy engines like Open Policy Agent, Steampipe, and Kyverno provide the logic layer. They evaluate conditions, enforce rules, and make decisions based on data flowing through APIs. In my previous work on the FedRAMP Agile Delivery Pilot, which helped inform FedRAMP 20x, I identified four types of policies that work together: immutable policies that encode non-negotiable requirements, threshold policies that provide flexible boundaries, intelligent decision policies that categorize and route based on context, and escalation policies that identify when human judgment is needed. These four policy types cover almost every use case you need.
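To make the four policy types concrete, here is a sketch of how they might compose when evaluating a hypothetical change request. Real deployments would encode these rules in a policy engine such as OPA (in Rego); the dict fields, thresholds, and decision labels here are illustrative assumptions.

```python
def immutable_policy(change):
    # Immutable: a non-negotiable requirement, e.g. encryption at rest
    # can never be disabled.
    return change.get("encryption_at_rest", True) is True

def threshold_policy(change, max_open_ports=5):
    # Threshold: a flexible, tunable boundary.
    return len(change.get("open_ports", [])) <= max_open_ports

def decision_policy(change):
    # Intelligent decision: categorize and route based on context.
    if change.get("environment") == "production":
        return "requires-approval"
    return "auto-approve"

def escalation_policy(change):
    # Escalation: flag cases where human judgment is needed.
    return change.get("touches_auth_boundary", False)

def evaluate(change):
    """Compose the four policy types into a single verdict."""
    if not immutable_policy(change) or not threshold_policy(change):
        return "deny"
    if escalation_policy(change):
        return "escalate-to-human"
    return decision_policy(change)
```

The ordering matters: immutable and threshold checks act as hard gates, escalation short-circuits to a human, and only then does contextual routing apply.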
GitOps workflows provide the orchestration. Infrastructure as code (Terraform, CloudFormation, Ansible playbooks, etc.) and GitOps practices mean your entire technical environment is defined in version-controlled configuration files. When compliance requirements change, you update policies in git, and the system propagates those changes across your infrastructure. This creates an audit trail automatically while ensuring consistent policy enforcement.
The combination is powerful. You can take controls that appear manual and non-technical, decompose them into measurable components, and automate the measurement and reporting.
Beyond Awareness and Training
AT-2 is just the beginning. Look at CM-3, configuration change control, which requires you to document changes and get approval before implementation. Traditional approaches involve change advisory boards meeting weekly to review spreadsheets. Automated approaches encode approval requirements in CI/CD pipelines, validate changes against policy engines before merge, and automatically document everything in git commits and pull request threads. The CAB still meets, but they're reviewing exceptions and edge cases, not every single change.
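A CM-3 pre-merge gate like the one described above might look like the following sketch. A CI job would run it against pull-request metadata fetched from the repository API; the field names, the ticket-ID convention (CHG-NNNN), and the approval threshold are all assumptions for illustration, not any real platform's schema.

```python
import re

def cm3_merge_allowed(pr, required_approvals=2):
    """Enforce hypothetical change-control requirements before merge."""
    checks = {
        # Documented approval before implementation.
        "approved": len(pr.get("approvers", [])) >= required_approvals,
        # Every change must reference a tracked change request, e.g. CHG-1234.
        "change_ticket": bool(re.search(r"\bCHG-\d+\b", pr.get("title", ""))),
        # Automated validation must have passed.
        "ci_passed": pr.get("ci_status") == "success",
    }
    failed = [name for name, ok in checks.items() if not ok]
    return (len(failed) == 0, failed)
```

Changes that fail a check are blocked with a named reason, and only those exceptions ever reach the CAB's agenda.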
Or consider IR-4, incident handling, which requires you to track incidents from detection through resolution. Traditional incident management involves someone manually updating spreadsheets or creating tickets with status information. Automated approaches integrate your SIEM, ticketing system, and communication platforms so incident data flows automatically. Detection creates a ticket, response actions update it, timeline reconstruction happens programmatically, and post-incident reviews pull from actual system logs rather than reconstructed narratives.
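The IR-4 flow above can be sketched as a detection event that opens a ticket and response actions that append to a timestamped timeline. The in-memory dict stands in for a ticketing system; a real integration would call the SIEM's webhook and the ticketing platform's API, and all names here are hypothetical.

```python
from datetime import datetime, timezone

tickets = {}  # stand-in for a ticketing system's data store

def open_incident(alert_id, severity, description):
    """A SIEM detection event creates a ticket automatically."""
    tickets[alert_id] = {
        "severity": severity,
        "status": "open",
        "timeline": [(datetime.now(timezone.utc), "detected", description)],
    }
    return tickets[alert_id]

def record_action(alert_id, action, note):
    """Each response action appends a timestamped timeline entry."""
    ticket = tickets[alert_id]
    ticket["timeline"].append((datetime.now(timezone.utc), action, note))
    if action == "resolved":
        ticket["status"] = "closed"
```

Because every entry is timestamped at the moment it happens, post-incident timeline reconstruction is a query over the ticket rather than a memory exercise.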
The pattern repeats across control families. Personnel security controls that verify background investigation currency? Pull from HR systems and government background investigation databases. Physical security controls for data centers? Pull from badge access systems and video surveillance APIs. Supply chain risk management? Pull from software composition analysis tools and integrate with procurement systems. All of these examples are both objective in terms of validating the control outcome and achievable by organizations with in-house engineering expertise (presumably, cloud service providers!).
The Future: Interactive Compliance
Forward-thinking organizations are already demonstrating that the approaches I'm describing work today. But for these approaches to transform government-wide risk management, we need something more fundamental. We need a trust infrastructure.
Consider how certificate authorities and PKI enable secure communications at internet scale. No one manually verifies the identity of every website they visit. Instead, we built a trust infrastructure that makes identity verification automatic and ubiquitous. The infrastructure does the heavy lifting; browsers and servers just use it.
PKI isn't perfect, but it has worked well enough to enable relatively seamless online commerce for decades. We need similar infrastructure for cybersecurity compliance. Not just tools for individual agencies or CSPs, but an ecosystem that makes continuous compliance monitoring as natural as HTTPS.
What would this infrastructure look like? Here are some potential building blocks:
First, standardized data formats. OSCAL (Open Security Controls Assessment Language) is creating machine-readable formats for security controls and assessment results. When compliance data is standardized and machine-readable, it becomes portable across tools and organizations. This enables automated comparison, aggregation, and analysis at scales previously impossible. Government agencies will be able to integrate compliance data directly into their GRC tooling, creating truly interactive data feeds from cloud service provider infrastructure to agency authorizing officials and back again.
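The payoff of a standardized format is that one query works against any provider's data. The fragment below is deliberately simplified and modeled only loosely on OSCAL's assessment-results layer; it is not schema-complete, and the field names are illustrative rather than the actual OSCAL schema.

```python
# Simplified, OSCAL-inspired assessment data. Not schema-complete: real
# OSCAL assessment results carry UUIDs, metadata, and richer structure.
assessment = {
    "results": [
        {"control-id": "at-2", "status": "satisfied",
         "collected": "2025-06-01T00:00:00Z"},
        {"control-id": "cm-3", "status": "not-satisfied",
         "collected": "2025-06-01T00:00:00Z"},
    ]
}

def unsatisfied_controls(doc):
    """The same query works against any provider's standardized results."""
    return [r["control-id"] for r in doc["results"] if r["status"] != "satisfied"]
```

An agency GRC tool could run this kind of query across every CSP in its portfolio without bespoke parsers for each vendor's report format.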
Second, standardized control sets. Using standardized control sets to measure risk will enable apples-to-apples comparison of risks across multiple offerings, industries, or mission areas. But this is less effective when different systems get assessed using different baselines and methodologies. When assessment data is automated, standardized, and continuously updated, stakeholders can see real-time risk across a portfolio and make informed decisions about where to invest resources. FedRAMP, DoD SRG, CJIS, CIRCIA, and even industry-focused frameworks can map to each other and objectively measure specific outcomes in the same way even when the underlying technology differs, facilitating effective risk management across the enterprise.
Third, transparent governance. FedRAMP 20x has embraced the principle of building in public, which is excellent, and it should expand across the government: all policy development in public repositories, not closed-door meetings, so anyone can see proposed changes, provide feedback, and contribute improvements. Decisions should likewise include public justification with reference to community input.
The technology already exists and is being built today. What's missing is the infrastructure layer that would let this data flow from CSPs to government agencies and back again.
What Still Requires Humans
Policy decisions that involve balancing competing values require humans. When you're deciding whether to accept the risk of using a particular third-party service because it provides critical capabilities not available elsewhere, that requires understanding mission context in a uniquely human manner.
Incident response decisions during novel attacks require humans. When you're facing an intrusion that doesn't match known patterns and you need to decide whether to shut down systems or keep them running while you investigate, that requires judgment developed through experience. Automation can provide the data you need to make the decision, but it can't make the decision for you.
Creative problem-solving for architectural challenges requires humans. When you're designing a new system and need to figure out how to meet security requirements while maintaining operational effectiveness, that requires outside-the-box thinking. Policy engines can validate whether your design meets requirements, but they can't decide what ought to be designed.
The goal of compliance automation shouldn't be to remove human judgment but rather to position humans as the conductors and let automation handle shoveling coal into the engine. Let the machines do what machines do well and humans do what humans do well.
For Those Still Skeptical…
The skepticism around compliance automation is understandable. We've been burned by "automated compliance" tools that are vaporware and marketing hype. We're rightly suspicious of anything that promises to make compliance easy because we know security is genuinely hard.
But the alternative is unsustainable. The manual approach scales linearly while complexity scales exponentially. We cannot manually assess our way to security in cloud environments where infrastructure changes hundreds of times per day. The only path forward is building technical systems that embed compliance as an emergent property rather than a post-hoc attestation.
The controls that seemingly can't be automated probably can be, at least partially. The question we should be asking is: "What aspects of this control can we measure programmatically, and what aspects genuinely require human judgment?" The answer is usually that more can be automated than not, and the parts that require human judgment become clearer and more focused.
So the next time you hear "some controls just can't be automated," ask which specific aspect resists automation. There may well be a way to automate it.