
Building a Compliance Automation Pipeline in AWS for Less Than $5,000

The views and opinions expressed in this article are those of the author and do not reflect the views of any organization or employer.
As organizations migrate more of their infrastructure to the cloud and pay down technical debt from legacy cloud deployments, achieving and maintaining compliance has become increasingly complex and expensive. But it doesn't have to be. In this article, I'll share my experience implementing a compliance-as-code approach in AWS, focusing on automated security controls and continuous compliance monitoring. This solution typically costs less than $5,000 annually for a moderately sized SaaS application running on serverless infrastructure, and I'll break down exactly where those costs come from.

Cost Breakdown

Before diving into the technical details, let's address the $5,000 claim with a detailed breakdown of annual costs:

  • AWS Config: ~$2,400 ($200/month for recording config changes across accounts)
  • Security Hub: ~$1,200 ($100/month for centralized security findings)
  • GuardDuty: ~$600 ($50/month for threat detection)
  • CloudWatch Logs: ~$400 ($33/month for log aggregation)
  • Lambda Functions: ~$100 (minimal costs for serverless automation)
  • EventBridge: ~$100 (event routing for automation)
  • Secrets Manager: ~$180 ($15/month for secret rotation and storage)
  • Optional - ECR: ~$120 ($10/month for container image storage and scanning)
  • Optional - Fargate: ~$100 ($8.33/month for Fargate Container Insights monitoring)

Total: ~$4,980 annually for the core services, or ~$5,200 including the optional container services. These estimates assume a moderate workload with proper configuration of logging and monitoring. Your actual costs may vary based on scale and specific requirements.
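As a sanity check, the line items above can be summed programmatically. A quick sketch (figures are the estimates from the list above, not output from any AWS pricing API):

```python
# Annual cost estimates (USD) from the breakdown above
core_services = {
    "AWS Config": 2400,
    "Security Hub": 1200,
    "GuardDuty": 600,
    "CloudWatch Logs": 400,
    "Lambda": 100,
    "EventBridge": 100,
    "Secrets Manager": 180,
}
optional_services = {"ECR": 120, "Fargate Container Insights": 100}

core_total = sum(core_services.values())
full_total = core_total + sum(optional_services.values())
print(f"Core: ${core_total:,}/yr, with containers: ${full_total:,}/yr")
```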

The Foundation: Infrastructure as Code

Control Tower serves as the core AWS service for establishing a secure multi-account architecture. Begin by setting up Control Tower to create your landing zone, which automatically configures your management account, security account, and log archive account following AWS best practices. Use Organizations to implement a hierarchical structure with Organizational Units (OUs) for different environments (Dev, Prod, etc.) and workload types.

A key success factor is implementing strict naming conventions and tagging strategies from the start. Here's an example tagging strategy that satisfies most compliance frameworks:


const tagDefinitions = {
  Environment: ['dev', 'stage', 'prod'],
  CostCenter: ['product-*', 'infrastructure-*'],
  DataClassification: ['public', 'internal', 'confidential', 'restricted'],
  ComplianceFramework: ['NIST800-53', 'SOC2', 'PCI', 'HIPAA'],
  SecurityZone: ['dmz', 'private', 'restricted'],
  Owner: 'email@domain.com',
  AutomationExempt: ['true', 'false']
};

const tagValidation = new CustomResource(this, 'TagValidation', {
  serviceToken: tagValidationFunction.functionArn,
  properties: {
    requiredTags: Object.keys(tagDefinitions),
    tagPatterns: tagDefinitions,
    resourceType: 'AWS::S3::Bucket'
  }
});
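The CustomResource above delegates validation to a Lambda function whose implementation isn't shown. Here is a minimal sketch of the validation logic such a function might implement (the function name and return shape are my assumptions, not part of the CDK snippet):

```python
from fnmatch import fnmatch

def validate_tags(resource_tags: dict, tag_definitions: dict) -> list:
    """Return a list of violation messages for a resource's tags."""
    violations = []
    for key, allowed in tag_definitions.items():
        if key not in resource_tags:
            violations.append(f"missing required tag: {key}")
            continue
        value = resource_tags[key]
        # Allowed values may contain wildcards, e.g. 'product-*'
        patterns = allowed if isinstance(allowed, list) else [allowed]
        if not any(fnmatch(value, p) for p in patterns):
            violations.append(f"tag {key}={value} matches no allowed pattern")
    return violations
```

An empty list means the resource passes; the real Lambda would translate a non-empty list into a CloudFormation custom resource failure.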

The Pipeline: Configuration-as-Code Using Config

AWS Config is the backbone of your compliance monitoring. Here's a practical example of a conformance pack that implements NIST 800-53 AC-2 (Account Management) and AC-3 (Access Enforcement) controls:


Resources:
  NIST80053ConformancePack:
    Type: AWS::Config::ConformancePack
    Properties:
      ConformancePackName: nist-800-53-baseline
      TemplateBody: |
        Parameters:
          MaxPasswordAge:
            Type: String
            Default: "90"
        Resources:
          # AC-2: Account Management
          IAMPasswordPolicy:
            Type: AWS::Config::ConfigRule
            Properties:
              ConfigRuleName: iam-password-policy
              InputParameters:
                MaxPasswordAge: ${MaxPasswordAge}
              Source:
                Owner: AWS
                SourceIdentifier: IAM_PASSWORD_POLICY

          # AC-3: Access Enforcement
          S3PublicReadProhibited:
            Type: AWS::Config::ConfigRule
            Properties:
              ConfigRuleName: s3-bucket-public-read-prohibited
              Source:
                Owner: AWS
                SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED

          # Custom rule for session management
          MaxSessionDuration:
            Type: AWS::Config::ConfigRule
            Properties:
              ConfigRuleName: iam-role-max-session-duration
              Source:
                Owner: CUSTOM_LAMBDA
                SourceIdentifier: ${MaxSessionDurationFunctionArn}
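The custom MaxSessionDuration rule above points at a Lambda evaluator. A sketch of the compliance decision it might make (the one-hour limit and the function name are my assumptions; a real evaluator would also report results back via config.put_evaluations):

```python
def evaluate_session_duration(role: dict, max_seconds: int = 3600) -> str:
    """Return an AWS Config compliance value for an IAM role's session duration.

    Roles that omit MaxSessionDuration default to 3600 seconds (one hour).
    """
    duration = role.get("MaxSessionDuration", 3600)
    return "COMPLIANT" if duration <= max_seconds else "NON_COMPLIANT"
```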

Control Validation: Automated Remediation with Security Hub

Security Hub findings should trigger automated remediation where possible. Here's an example of a Lambda function that automatically remediates common security issues:


import boto3
from typing import Dict, Any

def handle_security_finding(event: Dict[str, Any], context: Any) -> None:
    finding = event['detail']['findings'][0]

    # Map rule identifiers to remediation handlers
    # (remediate_password_policy and rotate_access_keys follow the
    # same pattern as remediate_public_s3 below)
    remediation_actions = {
        'S3_BUCKET_PUBLIC_READ_PROHIBITED': remediate_public_s3,
        'IAM_PASSWORD_POLICY': remediate_password_policy,
        'EXPOSED_ACCESS_KEYS': rotate_access_keys
    }

    # Execute the appropriate remediation; how the rule identifier is
    # extracted depends on the product that generated the finding
    rule_id = finding['Types'][0].split('/')[-1]
    if rule_id in remediation_actions:
        remediation_actions[rule_id](finding)

    # Log the remediation for the compliance audit trail
    log_remediation(finding)

def remediate_public_s3(finding: Dict[str, Any]) -> None:
    s3 = boto3.client('s3')
    # S3 bucket ARNs have the form arn:aws:s3:::bucket-name
    bucket_name = finding['Resources'][0]['Id'].split(':')[-1]

    # Block all forms of public access on the bucket
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            'BlockPublicAcls': True,
            'IgnorePublicAcls': True,
            'BlockPublicPolicy': True,
            'RestrictPublicBuckets': True
        }
    )
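The handler above calls log_remediation without showing it. A minimal sketch (the record structure is my assumption) that emits a structured audit record; anything a Lambda prints to stdout lands in its CloudWatch Logs stream:

```python
import json
from datetime import datetime, timezone
from typing import Any, Dict

def log_remediation(finding: Dict[str, Any]) -> Dict[str, Any]:
    """Emit a structured remediation record for the compliance audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "finding_id": finding.get("Id"),
        "resource": (finding.get("Resources") or [{}])[0].get("Id"),
        "severity": finding.get("Severity", {}).get("Label"),
        "action": "auto-remediated",
    }
    # stdout is captured by CloudWatch Logs when running in Lambda
    print(json.dumps(record))
    return record
```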

Control Monitoring: Real-time Compliance Monitoring

Implement a comprehensive monitoring strategy using EventBridge rules that trigger based on compliance-related events. Here's an example that monitors for critical compliance violations:


{
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": [{
            "Severity": {
                "Label": ["CRITICAL"]
            },
            "Types": ["Software and Configuration Checks/Industry and Regulatory Standards/NIST-800-53"],
            "Compliance": {
                "Status": ["FAILED"]
            },
            "Workflow": {
                "Status": ["NEW"]
            }
        }]
    }
}
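To see what this pattern selects, here is an approximately equivalent predicate in Python (a sketch for illustration only; EventBridge performs this matching natively, and this version uses a substring check on Types where the pattern requires an exact match):

```python
def is_critical_nist_failure(finding: dict) -> bool:
    """Mirror the EventBridge pattern: a critical, failed NIST 800-53
    finding that has not yet been triaged."""
    return (
        finding.get("Severity", {}).get("Label") == "CRITICAL"
        and any("NIST-800-53" in t for t in finding.get("Types", []))
        and finding.get("Compliance", {}).get("Status") == "FAILED"
        and finding.get("Workflow", {}).get("Status") == "NEW"
    )
```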

GitOps Implementation

Implement a GitOps workflow that ensures all infrastructure changes go through proper security review. Here's an example GitHub Actions workflow that includes security scanning:


name: Infrastructure Security Validation
on:
  pull_request:
    paths:
      - 'infrastructure/**'
      - 'compliance/**'

jobs:
  security-validation:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for OIDC role assumption
      contents: read
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1

      - name: Setup Node.js
        uses: actions/setup-node@v4

      - name: Install dependencies
        run: npm install

      - name: Run Security Scan
        run: |
          # cfn-lint is a Python package; cfn_nag is a Ruby gem
          pip install cfn-lint
          sudo gem install cfn-nag

          # Scan synthesized IaC templates
          cfn-lint cdk.out/*.template.json
          cfn_nag_scan --input-path cdk.out

          # Validate against compliance rules
          ./scripts/validate-compliance.sh

      - name: Check for Sensitive Data
        uses: trufflesecurity/trufflehog@main
        with:
          path: "."
          base: ${{ github.event.pull_request.base.sha }}
          head: ${{ github.event.pull_request.head.sha }}

Container Security: Pre-hardened Images and Fargate

While many modern SaaS applications can run entirely on serverless infrastructure like Lambda, you may need containers for specific workloads. When using containers, AWS Fargate provides a serverless container runtime that maintains much of the operational simplicity of Lambda while supporting containerized workloads. Here's how to implement container security in your compliance pipeline:


// CDK code for a secure Fargate service
const taskDefinition = new ecs.FargateTaskDefinition(this, 'SecureTask', {
  cpu: 256,
  memoryLimitMiB: 512,
  executionRole: executionRole,
  taskRole: taskRole,
});

// Reference an existing secret in Secrets Manager
const dbSecret = secretsmanager.Secret.fromSecretNameV2(this, 'DbSecret', 'db-secret');

// Use hardened base image
const container = taskDefinition.addContainer('AppContainer', {
  image: ecs.ContainerImage.fromAsset('../app', {
    // Use multi-stage build with hardened base image
    file: 'Dockerfile.secure',
    buildArgs: {
      BASE_IMAGE: 'amazonlinux:2023-minimal'
    }
  }),
  logging: new ecs.AwsLogDriver({
    streamPrefix: 'secure-app',
    mode: ecs.AwsLogDriverMode.NON_BLOCKING
  }),
  environment: {
    NODE_ENV: 'production'
  },
  // Inject secrets at runtime instead of baking them into the image
  secrets: {
    DB_SECRET: ecs.Secret.fromSecretsManager(dbSecret)
  }
});

// Apply security group rules
const securityGroup = new ec2.SecurityGroup(this, 'ServiceSG', {
  vpc,
  description: 'Security group for Fargate service',
  allowAllOutbound: false  // Explicitly manage outbound rules
});

When using containers, implement these security controls:

  • Use hardened container images from AWS or a trusted third party, or maintain your own hardened base images that are regularly scanned and updated
  • Enable ECR image scanning and only deploy images with no HIGH or CRITICAL vulnerabilities
  • Use multi-stage builds to minimize the attack surface of your final container image
  • Enable Container Insights for Fargate for runtime monitoring
  • Use Secrets Manager to inject secrets at runtime instead of baking them into images
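The ECR scanning gate in the second bullet can be enforced in a deploy script. A sketch of the decision logic (the function name is mine; the counts dict mirrors the findingSeverityCounts map returned by ECR's DescribeImageScanFindings API):

```python
def image_is_deployable(severity_counts: dict,
                        blocked: tuple = ("HIGH", "CRITICAL")) -> bool:
    """Reject images whose ECR scan reports any HIGH or CRITICAL findings."""
    return all(severity_counts.get(level, 0) == 0 for level in blocked)
```

In practice the counts would come from something like boto3's ecr.describe_image_scan_findings(...)["imageScanFindings"]["findingSeverityCounts"], with the deploy aborting when the function returns False.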

Here's an example of a hardened Dockerfile using multi-stage builds:


# Build stage
FROM amazonlinux:2023 AS builder

# Install only necessary build dependencies
RUN dnf update -y && \
    dnf install -y nodejs npm && \
    dnf clean all

WORKDIR /build
COPY package*.json ./

# Install all dependencies (dev dependencies are needed for the build)
RUN npm ci

# Copy application code and build it (assumes a "build" script that emits dist/)
COPY . .
RUN npm run build

# Drop dev dependencies so only production modules are copied forward
RUN npm prune --omit=dev

# Final stage
FROM amazonlinux:2023-minimal

# Install runtime dependencies only (shadow-utils provides useradd)
RUN dnf update -y && \
    dnf install -y nodejs shadow-utils && \
    dnf clean all

# Create non-root user
RUN useradd -r -s /bin/false appuser

# Copy only necessary files from builder
WORKDIR /app
COPY --from=builder --chown=appuser:appuser /build/node_modules ./node_modules
COPY --from=builder --chown=appuser:appuser /build/dist ./dist

# Switch to non-root user
USER appuser

# Set secure defaults
ENV NODE_ENV=production \
    NPM_CONFIG_LOGLEVEL=error \
    NODE_OPTIONS='--max-old-space-size=2048 --max-http-header-size=16384'

CMD ["node", "dist/index.js"]

Add Config rules to monitor container compliance:


Resources:
  ContainerSecurityPack:
    Type: AWS::Config::ConformancePack
    Properties:
      ConformancePackName: container-security
      TemplateBody: |
        Resources:
          ECRImageScanningEnabled:
            Type: AWS::Config::ConfigRule
            Properties:
              ConfigRuleName: ecr-image-scanning-enabled
              Source:
                Owner: AWS
                SourceIdentifier: ECR_PRIVATE_IMAGE_SCANNING_ENABLED
          
          FargateInsightsEnabled:
            Type: AWS::Config::ConfigRule
            Properties:
              ConfigRuleName: fargate-insights-enabled
              Source:
                Owner: CUSTOM_LAMBDA
                SourceIdentifier: ${FargateInsightsFunctionArn}
          
          ContainerLoggingEnabled:
            Type: AWS::Config::ConfigRule
            Properties:
              ConfigRuleName: container-logging-enabled
              Source:
                Owner: CUSTOM_LAMBDA
                SourceIdentifier: ${ContainerLoggingFunctionArn}

Cost Optimization Tips

To keep costs under $5,000 annually while maintaining effective compliance:

  • Configure AWS Config to only record changes for in-scope resources
  • Prefer change-triggered over periodic rule evaluations, and scope rules to specific resource types, to reduce evaluation costs
  • Implement intelligent log filtering in CloudWatch to reduce storage costs
  • Use EventBridge rules to trigger Security Hub evaluations only when needed
  • Leverage AWS Organizations for volume discounts across accounts
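To size the biggest line item, AWS Config bills per configuration item recorded plus per rule evaluation. A rough estimator using those two drivers (the unit prices are hard-coded assumptions based on published pricing at the time of writing; verify against the current AWS Config pricing page):

```python
def estimate_config_monthly_cost(config_items: int, rule_evaluations: int,
                                 item_price: float = 0.003,
                                 eval_price: float = 0.001) -> float:
    """Rough monthly AWS Config cost from recorded items and rule evaluations."""
    return config_items * item_price + rule_evaluations * eval_price

# e.g. ~50,000 recorded changes and ~50,000 evaluations per month
monthly = estimate_config_monthly_cost(50_000, 50_000)
print(f"~${monthly:.0f}/month")
```

At those volumes the estimate lands near the $200/month Config figure in the cost breakdown, which is why trimming the recorder's resource scope is the first optimization to try.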

The key to success is starting with a well-planned tagging strategy and building automated enforcement from day one. This prevents technical debt and reduces the need for costly remediation efforts later.

If you followed the instructions in this article and would like to share your experience, please reach out via the Contact Me page!
