DevSecOps for DoD Mission Systems: A Practitioner's Guide to Pipelines That Pass STIG
How to build CI/CD pipelines that are fast enough for modern delivery and rigorous enough for an ATO review. Real configs, real tooling, no hand-waving.
The Two Audiences Problem
Every DoD DevSecOps pipeline has to satisfy two audiences simultaneously: engineers who want to ship fast, and auditors who want evidence that nothing slipped through. Build for only one and the other will shut you down.
The good news is these goals aren't opposed — they just require thoughtful architecture. This guide covers the pipeline patterns, tooling choices, and configuration that bridge the gap between modern software delivery and federal compliance requirements.
The Fundamental Tension
Commercial DevSecOps and DoD DevSecOps share a philosophy — automate security, shift left, continuous feedback — but they operate under different constraints:
| | Commercial | DoD |
|---|---|---|
| Deployment target | Your cloud account | An accredited environment you don't own |
| Change approval | PR review + merge | CAB + CCB + ISSO sign-off |
| Container base images | Whatever works | Hardened, DISA-approved (Iron Bank) |
| Vulnerability SLA | "We'll get to it" | 30 days (Critical), 90 days (High) |
| Logging retention | 30-90 days | 1 year minimum, often 5+ |
| Audit trail | Nice to have | Non-negotiable |
The real tension isn't between speed and security — that's a false dichotomy. It's between automation and auditability. Every automated action needs a paper trail. Every deployment needs evidence that the right checks ran. Every container needs a provenance chain back to source code.
A pipeline that deploys in 45 minutes but can't produce an audit report is useless in this world.
Architecture of a DoD-Ready Pipeline
Here's the pipeline architecture that satisfies both audiences:
┌─────────────────────────────────────────────────────────────┐
│ Developer Workstation │
│ git commit --signoff → signed commits with CAC/PIV cert │
└──────────────────────────┬──────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Git Repository (GitLab) │
│ Branch protection · Merge request approvals · CODEOWNERS │
└──────────────────────────┬──────────────────────────────────┘
│ webhook
▼
┌─────────────────────────────────────────────────────────────┐
│ Pipeline Orchestrator │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌────────────┐ │
│ │ Build │→ │ Scan │→ │ Test │→ │ Sign │ │
│ │ │ │ │ │ │ │ & Publish │ │
│ │ Kaniko │ │ Grype │ │ Unit + │ │ cosign + │ │
│ │ (no │ │ Trivy │ │ STIG │ │ SBOM │ │
│ │ Docker │ │ Semgrep │ │ InSpec │ │ Syft │ │
│ │ daemon) │ │ Checkov │ │ OPA │ │ Notary │ │
│ └──────────┘ └──────────┘ └──────────┘ └────────────┘ │
│ │
│ Evidence collected at each stage → Compliance Evidence Store │
└──────────────────────────┬──────────────────────────────────┘
│ signed image + SBOM
▼
┌─────────────────────────────────────────────────────────────┐
│ Artifact Registry (Harbor / Iron Bank) │
│ Signed images · SBOMs · Vulnerability reports · Provenance │
└──────────────────────────┬──────────────────────────────────┘
│ GitOps sync
▼
┌─────────────────────────────────────────────────────────────┐
│ ArgoCD (GitOps Deployment) │
│ Admission control: only signed images from approved registry │
│ OPA/Gatekeeper policies: enforce security constraints │
│ Deployment evidence → Compliance Evidence Store │
└─────────────────────────────────────────────────────────────┘
Let's walk through each stage.
Stage 1: Signed Commits and Branch Protection
Every commit must be signed — in the DoD world, that means GPG keys tied to a CAC (Common Access Card) or PIV credential. This proves who wrote the code, not just which account pushed it.
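Workstation setup varies, but assuming gpg is already configured to use the CAC/PIV smart card (typically via OpenSC and scdaemon), the git side is two settings; the key ID below is a placeholder:
# One-time setup on the developer workstation
git config --global user.signingkey <CAC_BACKED_KEY_ID>
git config --global commit.gpgsign true
Verification, though, has to happen server-side in CI: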
# .gitlab-ci.yml — verify commit signatures
verify-signatures:
stage: pre-check
script:
- |
# Verify every commit in the merge request is signed.
# %G? only reports G/U when the signer's public key is in the runner's keyring,
# so import the organization's trusted keys before this job runs.
# "|| true" keeps a fully signed history (grep matches nothing) from tripping set -e.
UNSIGNED=$(git log --format='%H %G?' origin/main..HEAD | grep -v ' G$' | grep -v ' U$' || true)
if [ -n "$UNSIGNED" ]; then
echo "ERROR: Unsigned commits detected:"
echo "$UNSIGNED"
echo "All commits must be signed with a CAC/PIV-backed GPG key."
exit 1
fi
echo "All commits verified."
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
Branch protection rules worth enforcing:
- Main branch: 2 approvals required, one from CODEOWNERS, all CI checks passing
- Release branches: ISSO approval required (tagged as a required reviewer in GitLab)
- Force push: Disabled everywhere — force-pushing main breaks audit trails
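These protections can be codified instead of clicked. A sketch using the GitLab API (the approval-rules endpoint requires a Premium or Ultimate instance; GITLAB_URL, PROJECT_ID, and GITLAB_TOKEN are placeholders):
# Protect main: no direct pushes, maintainers merge, force push disabled
curl --request POST --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  "${GITLAB_URL}/api/v4/projects/${PROJECT_ID}/protected_branches" \
  --data "name=main&push_access_level=0&merge_access_level=40&allow_force_push=false"

# Require two approvals on every merge request
curl --request POST --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  "${GITLAB_URL}/api/v4/projects/${PROJECT_ID}/approval_rules" \
  --data "name=two-reviewers&approvals_required=2"
Keeping these calls in a version-controlled bootstrap script makes the settings themselves part of the audit trail.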
A common mistake: enabling GPG signing but never actually verifying signatures in CI. Signing without verification is security theater. Make sure the check covers the entire merge request, not just the HEAD commit.
Stage 2: Build Without Docker (Kaniko)
In many DoD hardened environments, you can't run Docker. The daemon requires root, which violates the principle of least privilege. Kaniko builds container images from Dockerfiles without a Docker daemon or root privileges:
build-image:
stage: build
image:
name: gcr.io/kaniko-project/executor:v1.22.0
entrypoint: [""]
script:
- |
/kaniko/executor \
--context="${CI_PROJECT_DIR}" \
--dockerfile="${CI_PROJECT_DIR}/Dockerfile" \
--destination="${REGISTRY}/${CI_PROJECT_NAME}:${CI_COMMIT_SHA}" \
--cache=true \
--cache-repo="${REGISTRY}/${CI_PROJECT_NAME}/cache" \
--build-arg BASE_IMAGE=registry1.dso.mil/ironbank/opensource/nodejs:20 \
--snapshot-mode=redo \
--single-snapshot
The Dockerfile must start from an Iron Bank base image — commercial images will fail admission control in the target cluster:
ARG BASE_IMAGE=registry1.dso.mil/ironbank/opensource/nodejs:20
FROM ${BASE_IMAGE} AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Build is done: drop devDependencies so only runtime packages ship in the final image
RUN npm prune --production
# Production stage — minimal attack surface
FROM ${BASE_IMAGE}
WORKDIR /app
# Non-root user (STIG V-222425)
USER 1001
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=3s CMD curl -f http://localhost:8080/healthz || exit 1
CMD ["node", "dist/server.js"]
The Iron Bank timing problem: Iron Bank images are updated on DISA's schedule, not yours. When a Critical CVE drops, you may need to wait days for a hardened image update. A smart mitigation: maintain a staging pipeline that tests against Iron Bank nightly builds so you're ready to merge the moment the official release drops.
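One way to implement that staging pipeline is a scheduled job that rebuilds against whichever newer base tag you are tracking. IRONBANK_CANDIDATE_IMAGE below is a placeholder CI variable, not an Iron Bank convention:
canary-rebuild:
  stage: build
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  image:
    name: gcr.io/kaniko-project/executor:v1.22.0
    entrypoint: [""]
  script:
    - |
      /kaniko/executor \
        --context="${CI_PROJECT_DIR}" \
        --dockerfile="${CI_PROJECT_DIR}/Dockerfile" \
        --destination="${REGISTRY}/${CI_PROJECT_NAME}:canary-${CI_COMMIT_SHORT_SHA}" \
        --build-arg BASE_IMAGE="${IRONBANK_CANDIDATE_IMAGE}"
The canary image never ships; it exists so the test suite has already run against the new base by the time the official release lands.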
Stage 3: Scan Everything, Trust Nothing
Commercial pipelines typically run one scanner. DoD pipelines run several — each covering a different attack surface:
# Container image vulnerability scan
container-scan:
stage: scan
image: anchore/grype:latest
script:
- |
grype "${REGISTRY}/${CI_PROJECT_NAME}:${CI_COMMIT_SHA}" \
--output json \
--file grype-report.json \
--fail-on critical
# Human-readable report for the ISSO
grype "${REGISTRY}/${CI_PROJECT_NAME}:${CI_COMMIT_SHA}" \
--output table \
--file grype-report.txt
artifacts:
  when: always  # keep scan evidence even when --fail-on trips the job
  paths:
    - grype-report.json
    - grype-report.txt
  expire_in: 1 year # Audit retention requirement
# Static application security testing
sast-scan:
stage: scan
image: returntocorp/semgrep:latest
script:
- |
semgrep scan \
--config=p/owasp-top-ten \
--config=p/cwe-top-25 \
--config=p/secrets \
--json \
--output semgrep-report.json \
--error \
.
artifacts:
  when: always
  paths:
    - semgrep-report.json
  expire_in: 1 year
# Infrastructure as Code scan
iac-scan:
stage: scan
image: bridgecrew/checkov:latest
script:
- |
checkov \
--directory ./deploy/kubernetes \
--framework kubernetes \
--output json \
--hard-fail-on CRITICAL,HIGH \
  > checkov-report.json   # -o json writes to stdout; capture it as the evidence artifact
artifacts:
  when: always
  paths:
    - checkov-report.json
  expire_in: 1 year
# STIG compliance check
stig-check:
stage: scan
image: chef/inspec:latest
script:
- |
inspec exec ./compliance/stig-profile \
  --chef-license accept-silent \
  --reporter json:stig-report.json cli \
  --input-file ./compliance/inputs.yml
artifacts:
  when: always
  paths:
    - stig-report.json
  expire_in: 1 year
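What lives inside ./compliance/stig-profile is ordinary InSpec code. A sketch of one control's shape; the STIG ID, input name, and expected mode are illustrative rather than a real mapping:
# compliance/stig-profile/controls/config_permissions.rb
control 'V-XXXXXX' do
  title 'Application configuration files must be protected from unauthorized access'
  impact 0.5
  tag stig_id: 'V-XXXXXX'

  # app_config_path is supplied by compliance/inputs.yml via --input-file
  describe file(input('app_config_path')) do
    it { should exist }
    its('mode') { should cmp '0640' }
  end
end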
The key insight: scan results are evidence, not just gates. Every artifact gets stored for the full audit retention period (1-5 years depending on the system). The auditor doesn't just want to know you scanned — they want to see what you found and what you did about it.
A practical approach: create a compliance evidence store (an S3/MinIO bucket organized by pipeline run ID). Every scan report, approval record, and deployment manifest lands there with a timestamp and signature. When the auditor asks "show me the vulnerability scan for the March 15th deployment," you retrieve a signed JSON file in seconds. That kind of response time builds trust.
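Retrieval should be as scriptable as collection. With the bucket keyed by project and pipeline ID (the alias, project name, and run ID below are placeholders), answering that question is a couple of MinIO client commands:
# List the evidence bundles for a project, then pull one pipeline run's bundle
mc ls evidence-store/myapp/
mc cp --recursive evidence-store/myapp/184233/ ./audit-response/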
Stage 4: Sign, SBOM, and Publish
Every image that reaches the artifact registry needs three things:
- Signature (cosign) — proves it came from your pipeline, not someone's laptop
- SBOM (Software Bill of Materials) — every dependency, every version
- Provenance attestation — which pipeline built it, from which commit, with which base image
sign-and-publish:
stage: publish
script:
- |
IMAGE="${REGISTRY}/${CI_PROJECT_NAME}:${CI_COMMIT_SHA}"
# Generate SBOM
syft "${IMAGE}" -o spdx-json > sbom.spdx.json
syft "${IMAGE}" -o cyclonedx-json > sbom.cdx.json
# Sign the image (set COSIGN_PASSWORD in CI variables so the key can be used non-interactively)
cosign sign \
--key ${COSIGN_KEY} \
--annotations "commit=${CI_COMMIT_SHA}" \
--annotations "pipeline=${CI_PIPELINE_ID}" \
"${IMAGE}"
# Attach SBOM
cosign attach sbom \
--sbom sbom.spdx.json \
"${IMAGE}"
# Provenance attestation (SLSA)
cosign attest \
--key ${COSIGN_KEY} \
--predicate provenance.json \
--type slsaprovenance \
"${IMAGE}"
artifacts:
paths:
- sbom.spdx.json
- sbom.cdx.json
expire_in: 1 year
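One gap worth calling out: the provenance.json predicate referenced in the attest step has to be generated earlier in the same job. A minimal sketch built from GitLab's predefined variables; this is a trimmed-down SLSA-style predicate, not the full schema:
cat > provenance.json << EOF
{
  "builder": { "id": "${CI_SERVER_URL}/${CI_PROJECT_PATH}" },
  "buildType": "gitlab-ci",
  "invocation": {
    "configSource": {
      "uri": "${CI_PROJECT_URL}",
      "digest": { "sha1": "${CI_COMMIT_SHA}" },
      "entryPoint": ".gitlab-ci.yml"
    }
  },
  "metadata": {
    "buildInvocationId": "${CI_PIPELINE_ID}",
    "buildStartedOn": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
  }
}
EOF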
SBOMs aren't optional in federal procurement. Executive Order 14028 (May 2021) requires them for all software sold to the federal government. No SBOM, no purchase order. Procurement officers have been known to Ctrl+F for "SBOM" in technical proposals — if it's not there, the proposal doesn't advance.
Stage 5: GitOps Deployment with Admission Control
ArgoCD handles deployment, but the cluster shouldn't blindly accept whatever it receives. OPA Gatekeeper enforces policies at admission time. Note that the signature-verification constraint below is not part of the standard Gatekeeper library; it assumes a custom ConstraintTemplate or a dedicated admission controller such as Sigstore's policy-controller, while the registry, non-root, and resource-limit constraints map to stock library templates:
# Only allow images from approved registries
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
name: approved-registries
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Pod"]
parameters:
repos:
- "registry1.dso.mil/ironbank/"
- "registry.internal.mil/"
---
# Require image signatures
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sImageSignatureVerification
metadata:
name: require-cosign-signature
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Pod"]
parameters:
cosignPublicKey: |
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE...
-----END PUBLIC KEY-----
---
# No running as root (STIG V-222443)
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPAllowedUsers
metadata:
name: no-root-containers
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Pod"]
parameters:
runAsUser:
rule: MustRunAsNonRoot
---
# Require resource limits
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sContainerLimits
metadata:
name: require-resource-limits
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Pod"]
parameters:
cpu: "2000m"
memory: "4Gi"
Don't start these in dry-run mode and forget about them. Enforce from day one. Every day of unenforced policies is a day of non-compliant deployments that need to be explained in your next assessment.
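Gatekeeper's enforcementAction field is what controls this. deny is already the default, but stating it explicitly on each constraint (shown here on the registry allow-list from above) makes the intent reviewable in Git and makes any temporary dryrun exception stand out in a diff:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: approved-registries
spec:
  enforcementAction: deny   # the default; "dryrun" and "warn" report without blocking
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "registry1.dso.mil/ironbank/"
      - "registry.internal.mil/"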
The cATO Pipeline: Continuous Authority to Operate
The holy grail of DoD DevSecOps is cATO — Continuous Authority to Operate. Instead of a traditional ATO (a point-in-time assessment valid for 3 years), cATO means the system is continuously assessed and continuously authorized.
The requirement: your pipeline must produce continuous evidence that the system remains in compliance. Not "we checked once." More like "we check every deployment, every scan, every config change, and here's the evidence."
compliance-evidence:
stage: evidence
script:
- |
EVIDENCE_DIR="evidence/${CI_PIPELINE_ID}"
mkdir -p "${EVIDENCE_DIR}"
# Aggregate all scan reports
cp grype-report.json "${EVIDENCE_DIR}/"
cp semgrep-report.json "${EVIDENCE_DIR}/"
cp checkov-report.json "${EVIDENCE_DIR}/"
cp stig-report.json "${EVIDENCE_DIR}/"
cp sbom.spdx.json "${EVIDENCE_DIR}/"
# Generate compliance summary
cat > "${EVIDENCE_DIR}/compliance-summary.json" << EOF
{
"pipelineId": "${CI_PIPELINE_ID}",
"commitSha": "${CI_COMMIT_SHA}",
"timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"image": "${REGISTRY}/${CI_PROJECT_NAME}:${CI_COMMIT_SHA}",
"imageSigned": true,
"sbomGenerated": true,
"scans": {
"containerVulnerabilities": { "tool": "grype", "passed": true },
"sast": { "tool": "semgrep", "passed": true },
"iacCompliance": { "tool": "checkov", "passed": true },
"stigCompliance": { "tool": "inspec", "passed": true }
}
}
EOF
# Sign the evidence bundle
cosign sign-blob \
--key ${COSIGN_KEY} \
--output-signature "${EVIDENCE_DIR}/signature.sig" \
"${EVIDENCE_DIR}/compliance-summary.json"
# Upload to evidence store
mc cp --recursive "${EVIDENCE_DIR}/" \
"evidence-store/${CI_PROJECT_NAME}/${CI_PIPELINE_ID}/"
rules:
- if: '$CI_COMMIT_BRANCH == "main"'
When these evidence bundles feed into a real-time compliance dashboard, every subsequent audit becomes "here's the dashboard, what questions do you have?" instead of "give us six weeks to compile our evidence package."
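The signature on each bundle is what lets an assessor trust what the dashboard shows. Verifying a bundle pulled back out of the store is one command; cosign.pub is the public half of the pipeline signing key, and the bucket path follows the evidence job above:
# Retrieve a bundle and verify its signature
mc cp --recursive evidence-store/myapp/184233/ ./bundle/
cosign verify-blob \
  --key cosign.pub \
  --signature ./bundle/signature.sig \
  ./bundle/compliance-summary.json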
Air-Gapped Pipeline Considerations
Everything above assumes network connectivity. For air-gapped environments, you need additional machinery for keeping vulnerability databases, base images, and tooling current:
Connected Side (UNCLASS) │ Air-Gapped Side (CUI/SECRET)
│
┌──────────────────────┐ │ ┌──────────────────────┐
│ Mirror Registry │ │ │ Internal Registry │
│ (Iron Bank sync) │──────────│──│ (Harbor) │
│ │ USB / │ │ │
│ Vuln DB Updates │ data │ │ Vuln DB Mirror │
│ (Grype, Trivy feeds) │──────────│──│ (Grype, Trivy) │
│ │ diode │ │ │
│ Tool Updates │ │ │ Pipeline Tools │
│ (Kaniko, cosign) │──────────│──│ (air-gapped versions)│
└──────────────────────┘ │ └──────────────────────┘
│
Weekly transfer cadence │ Pipeline runs identically —
Checksums + signatures │ same stages, same policies,
on every artifact │ just different registries
Critical detail: the transfer bundle must include a fresh vulnerability database. A pipeline running with a stale database will report false negatives — fewer vulnerabilities than actually exist. The bundling script should refuse to package artifacts if the vulnerability feeds are older than a defined threshold (7 days is a reasonable default).
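A sketch of that guard, assuming the bundling script stages the vulnerability database files under ./db before packaging (paths and the threshold are illustrative):
#!/usr/bin/env bash
set -euo pipefail

MAX_AGE_DAYS=7

# Any DB file older than the threshold blocks the transfer bundle
STALE=$(find ./db -type f -mtime +"${MAX_AGE_DAYS}" -print)
if [ -n "${STALE}" ]; then
  echo "ERROR: vulnerability feeds are older than ${MAX_AGE_DAYS} days; refresh before bundling:"
  echo "${STALE}"
  exit 1
fi

tar -czf transfer-bundle.tar.gz ./db ./images ./tools
sha256sum transfer-bundle.tar.gz > transfer-bundle.tar.gz.sha256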
Practitioner Notes
A few things that are easy to overlook:
1. Automate the evidence, not just the pipeline. The pipeline is maybe 40% of the effort. Evidence collection, storage, and retrieval is the other 60%. Auditors don't care how fast your pipeline is. They care about proof.
2. Start with admission control on day one. Enforce image signature verification and registry restrictions from the first deployment. Retroactively explaining unsigned images in your cluster is not a conversation anyone enjoys.
3. STIG compliance is not a one-time activity. STIGs change. Compliance checks belong in CI, running on every build, not in a quarterly manual review. Use InSpec profiles mapped to STIG IDs, and update the profiles when STIGs are revised.
4. The pipeline itself needs an ATO. The pipeline infrastructure — GitLab, ArgoCD, Harbor, the CI runners — processes CUI. It needs its own hardening, access controls, audit logging, and security assessment.
5. Bring the ISSO in early. The Information System Security Officer knows what auditors will ask. They know which controls are interpreted strictly vs. flexibly. Co-designing the pipeline with the ISSO from sprint one saves months of rework.
6. Track open-source licensing. Every open-source component should be in your SBOM, assessed for vulnerabilities, and checked for license compliance. Certain licenses (GPL, AGPL) in government systems can create procurement complications.
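A quick way to surface license questions from the SBOM the pipeline already produces: query the SPDX document for declared licenses that need review. The pattern below is intentionally broad (it also flags LGPL) and is an example, not legal guidance:
# Flag copyleft-family licenses in the SPDX SBOM from the publish stage
jq -r '.packages[]
       | select((.licenseDeclared // "") | test("GPL"))
       | "\(.name) \(.versionInfo) \(.licenseDeclared)"' sbom.spdx.json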
The Payoff
After the signed commits, Iron Bank images, multiple scanners, evidence store, admission policies, and air-gap transfers — what you get is a pipeline that deploys secure, auditable, mission-ready software to any DoD environment in under an hour. A pipeline where the compliance posture is continuously demonstrated, not periodically reconstructed from memory and screenshots.
That's the difference between modern DevSecOps and the traditional approach of building something, throwing it to the security team, and waiting months for an assessment.
The former scales. The latter doesn't.