Enterprise Distribution

How to Sell Your SaaS to Air-Gapped Customers (Without Losing Your Mind)

We spent six months learning how to ship software to networks with no internet. Here's every mistake we made and the playbook that finally worked.

Oikonex Team · Jan 5, 2026 · 18 min read

"Wait, You Have No Internet?"

The first time a customer told us they had no internet, we thought they were joking.

We were on a call with a defense contractor — the kind of company whose LinkedIn page says "a leading provider of mission-critical solutions" and whose actual product is classified at a level where even the acronym is redacted. They loved our platform. They wanted to buy. And then their infrastructure lead said the words that would haunt our engineering team for the next six months:

"Just to confirm — this can be deployed to a network with no external connectivity, right?"

We muted ourselves. Someone typed "can we?" in Slack. Fourteen people responded with some variation of "lol no." Our application made 47 external API calls during startup alone. Our license server lived in us-east-1. Our container images were pulled from Docker Hub at deploy time. We had npm packages that downloaded binaries during postinstall.

We told the customer we'd "look into it" — the engineering equivalent of "let me check with my manager" — and spent the next two weeks in denial. Then they told us the contract was worth $4.2 million annually.

Denial turned into motivation real fast.

What "Air-Gapped" Actually Means

Before we get into the how, let's talk about the what, because "air-gapped" is one of those terms that sounds simple until you try to build for it.

An air-gapped network is a network with zero connectivity to the internet. Not "restricted." Not "firewalled." Not "we use a proxy." Zero. The network is physically isolated. The machines in the network have no path to any external host. DNS doesn't resolve public domains. There are no NAT gateways. The air gap is literal — there is a gap of air between this network and the rest of the world.

In government and defense contexts, these environments live in SCIFs — Sensitive Compartmented Information Facilities. Think windowless rooms with Faraday cages embedded in the walls, combination locks on the doors, and rules about what you can bring in (spoiler: not your phone, not your smartwatch, not your laptop, and definitely not your AirPods).

Some facilities have guards who inspect physical media before it enters. Some require that all software deliveries go through a formal review process that takes weeks. Some have "data diodes" — hardware devices that allow data to flow in one direction only, like a check valve for bits.

The air gap is the ultimate firewall. It's also the ultimate "works on my machine" destroyer.

The Four Things That Break

After our first attempt at an air-gapped deployment (which failed spectacularly — more on that later), we realized that every SaaS application has exactly four categories of things that break in an air-gapped environment. Without exception.

1. Installation

Your container images? They're on Docker Hub. Or ECR. Or GCR. All of which require internet access.

Your Helm chart references? They point to chart repositories that are... on the internet.

Your operators? They pull their own images from... you guessed it.

Even if you've containerized everything, the act of pulling containers requires connectivity. In an air-gapped environment, your fancy Kubernetes deployment is just a YAML file full of broken promises.

2. Updates

In the SaaS world, updates are continuous. You push to main, CI/CD does its thing, customers get the new version. Beautiful.

In an air-gapped world, updates are a ceremony. Someone has to package the update. Someone has to transfer it to physical media. Someone has to carry that media to the facility. Someone has to verify the media. Someone has to load it into the network. Someone has to run the update. And if it fails, someone has to do the whole thing in reverse to get logs out.

We call this Sneakernet CI/CD, and it's exactly as glamorous as it sounds.

3. Licensing

If your license validation calls home to a server, congratulations — your software is now a very expensive coaster in an air-gapped environment.

We learned this one the hard way. Our licensing system was a "simple" API call on startup. We thought we'd just cache the license. But caches expire. And when your cache expires in an environment where you can't phone home, your software stops working, and the customer calls your CEO directly. At 6 AM. On a Saturday.

4. Telemetry and Observability

Your Datadog agent? Can't phone home. Your Sentry error tracking? Nope. Your analytics pipeline? Dead on arrival. Your "anonymous" usage telemetry that "helps us improve the product"? The customer's security team already found it in your code and they are Not Happy.

In an air-gapped environment, you don't just lose observability — you lose the ability to know that you've lost observability. Your application could be on fire and you'd have no idea until someone walks out of the SCIF and calls you.

The Packaging Problem (And How We Solved It)

The first thing you need to solve is getting your software into the air-gapped environment. This means creating a self-contained package that includes absolutely everything: container images, Helm charts, operators, CRDs, configuration, and any tools needed for installation.

We call this the bundle, and building it reliably was harder than it sounds. Here's the script we eventually landed on after about seven iterations:

#!/bin/bash
set -euo pipefail

# airgap-bundle.sh — Package everything needed for an air-gapped deployment
# Usage: ./airgap-bundle.sh v2.4.1

VERSION="${1:?Usage: $0 <version>}"
BUNDLE_DIR="oikonex-bundle-${VERSION}"
IMAGES_FILE="${BUNDLE_DIR}/images/images.tar.gz"  # matches the path airgap-install.sh loads from
MANIFEST_FILE="${BUNDLE_DIR}/manifest.sha256"

echo "=== Building air-gap bundle for ${VERSION} ==="

mkdir -p "${BUNDLE_DIR}"/{charts,images,tools,scripts}

# Step 1: Collect all container images referenced in our Helm charts
echo "[1/6] Discovering container images..."
IMAGES=$(helm template oikonex ./charts/oikonex \
  --version "${VERSION}" \
  --set global.airgap=true \
  --set global.imageRegistry=registry.internal \
  | grep -oP 'image:\s*"?\K[^"\s]+' \
  | sort -u)

echo "Found $(echo "${IMAGES}" | wc -l) unique images"

# Step 2: Pull all images and save to a tarball
echo "[2/6] Pulling and packaging container images..."
for img in ${IMAGES}; do
  echo "  Pulling ${img}..."
  docker pull "${img}" 2>/dev/null || {
    echo "  WARN: Failed to pull ${img}, trying with platform flag..."
    docker pull --platform linux/amd64 "${img}"
  }
done

echo "  Saving images to tarball (this takes a while)..."
# shellcheck disable=SC2086
docker save ${IMAGES} | gzip > "${IMAGES_FILE}"
echo "  Image bundle size: $(du -h "${IMAGES_FILE}" | cut -f1)"

# Step 3: Package Helm charts
echo "[3/6] Packaging Helm charts..."
helm package ./charts/oikonex \
  --version "${VERSION}" \
  --destination "${BUNDLE_DIR}/charts/"

# Include all sub-charts (operators, CRDs, etc.)
for chart in cloudnativepg-operator nats vault minio-operator; do
  helm pull "oci://registry.internal/charts/${chart}" \
    --destination "${BUNDLE_DIR}/charts/" 2>/dev/null || \
  helm pull "${chart}/${chart}" \
    --destination "${BUNDLE_DIR}/charts/"
done

# Step 4: Include installation tools
echo "[4/6] Packaging installation tools..."
# Include specific versions of tools the installer needs
TOOLS=(kubectl helm k9s)
for tool in "${TOOLS[@]}"; do
  cp "$(which "${tool}")" "${BUNDLE_DIR}/tools/" 2>/dev/null || \
    echo "  WARN: ${tool} not found locally, skipping"
done

# Step 5: Include installation and upgrade scripts
echo "[5/6] Copying installation scripts..."
cp scripts/airgap-install.sh "${BUNDLE_DIR}/scripts/"
cp scripts/airgap-upgrade.sh "${BUNDLE_DIR}/scripts/"
cp scripts/load-images.sh "${BUNDLE_DIR}/scripts/"
cp scripts/validate-bundle.sh "${BUNDLE_DIR}/scripts/"

# Step 6: Generate checksums for everything
echo "[6/6] Generating integrity checksums..."
cd "${BUNDLE_DIR}"
find . -type f -not -name "manifest.sha256" \
  -exec sha256sum {} \; > manifest.sha256
cd ..

# Create final archive
echo "=== Creating final bundle ==="
tar -cf "${BUNDLE_DIR}.tar" "${BUNDLE_DIR}/"
sha256sum "${BUNDLE_DIR}.tar" > "${BUNDLE_DIR}.tar.sha256"

FINAL_SIZE=$(du -h "${BUNDLE_DIR}.tar" | cut -f1)
echo ""
echo "=== Bundle complete ==="
echo "  Archive: ${BUNDLE_DIR}.tar (${FINAL_SIZE})"
echo "  Checksum: ${BUNDLE_DIR}.tar.sha256"
echo "  Transfer this to the target environment via approved media"
echo ""
echo "  On the target machine, run:"
echo "    tar xf ${BUNDLE_DIR}.tar"
echo "    cd ${BUNDLE_DIR}"
echo "    ./scripts/validate-bundle.sh"
echo "    ./scripts/airgap-install.sh"

The checksums matter more than you think. When you're transferring gigabytes of data via USB drive to a facility where you can't re-download anything, you really want to know if a file got corrupted. We learned this after a customer's installation failed because a 4GB image tarball had a single bit flip. Finding that without checksums would have been like finding a typo in the Library of Congress.
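If the bit-flip story sounds exotic, the failure mode is easy to reproduce in a sandbox. A tiny, hypothetical demo (GNU coreutils assumed; the file is a stand-in, not a real image bundle):

```shell
# Hypothetical demo: even one corrupted byte fails the manifest check.
workdir=$(mktemp -d) && cd "${workdir}"

# Stand-in for a multi-gigabyte image tarball (all zero bytes)
head -c 65536 /dev/zero > images.tar.gz
sha256sum images.tar.gz > manifest.sha256
sha256sum -c manifest.sha256          # prints: images.tar.gz: OK

# Simulate transfer corruption: overwrite a single mid-file byte
printf '\001' | dd of=images.tar.gz bs=1 seek=32768 conv=notrunc 2>/dev/null

# The check now fails with a nonzero exit code
sha256sum -c manifest.sha256 || echo "corruption detected, re-transfer the bundle"
```

The same `sha256sum -c` call is what `validate-bundle.sh` runs on the target side, which is why the manifest travels inside the bundle.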

The Installation Script

Once the bundle is inside the air-gapped network, someone needs to install it. That someone is usually not you — remember, you can't SSH in. So your installation script needs to be bulletproof, well-documented, and capable of handling errors gracefully.

Here's our air-gapped Helm installation script:

#!/bin/bash
set -euo pipefail

# airgap-install.sh — Install Oikonex in an air-gapped Kubernetes cluster
# Prerequisites: kubectl access to target cluster, local container runtime

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
BUNDLE_DIR="$(dirname "${SCRIPT_DIR}")"
REGISTRY="${INTERNAL_REGISTRY:-registry.local:5000}"
NAMESPACE="${TARGET_NAMESPACE:-oikonex}"

log() { echo "[$(date '+%H:%M:%S')] $*"; }
err() { echo "[$(date '+%H:%M:%S')] ERROR: $*" >&2; }

# Pre-flight checks
log "Running pre-flight checks..."

if ! kubectl cluster-info &>/dev/null; then
  err "Cannot connect to Kubernetes cluster. Check your kubeconfig."
  exit 1
fi

# Internal registries often use self-signed certs (-k) or plain HTTP
if ! curl -skf "https://${REGISTRY}/v2/" &>/dev/null && \
   ! curl -sf "http://${REGISTRY}/v2/" &>/dev/null; then
  err "Cannot reach internal registry at ${REGISTRY} over https or http"
  err "Ensure your internal container registry is running and accessible"
  exit 1
fi

# Validate bundle integrity
log "Validating bundle integrity..."
cd "${BUNDLE_DIR}"
if ! sha256sum -c manifest.sha256 --quiet 2>/dev/null; then
  err "Bundle integrity check failed!"
  err "One or more files may be corrupted. Re-transfer the bundle."
  exit 1
fi
log "Bundle integrity verified."

# Step 1: Load images into the internal registry
log "Loading container images into ${REGISTRY}..."
log "(This may take 10-20 minutes depending on hardware)"

# Load the bundle and capture exactly what came out of it, so we only
# push images we shipped (not unrelated images already on this machine)
LOADED_IMAGES=$(docker load -i "${BUNDLE_DIR}/images/images.tar.gz" \
  | grep -oP 'Loaded image: \K.+')

# Re-tag for the internal registry: strip the source registry host (the
# first path component, only when it looks like a host) so repository
# paths like quay.io/org/app survive as org/app
for img in ${LOADED_IMAGES}; do
  case "${img%%/*}" in
    *.*|*:*|localhost) repo_path="${img#*/}" ;;
    *)                 repo_path="${img}" ;;
  esac
  new_tag="${REGISTRY}/${repo_path}"
  log "  Pushing ${new_tag}..."
  docker tag "${img}" "${new_tag}"
  docker push "${new_tag}"
done

# Step 2: Install CRDs and operators
log "Installing CRDs and operators..."

for chart in "${BUNDLE_DIR}"/charts/*-operator-*.tgz; do
  [ -f "${chart}" ] || continue
  chart_name=$(basename "${chart}" | sed 's/-[0-9].*//')
  log "  Installing ${chart_name}..."
  helm upgrade --install "${chart_name}" "${chart}" \
    --namespace "${chart_name}-system" \
    --create-namespace \
    --set image.registry="${REGISTRY}" \
    --wait --timeout 5m
done

# Step 3: Install the application
log "Installing Oikonex platform..."
helm upgrade --install oikonex \
  "${BUNDLE_DIR}/charts/oikonex-"*.tgz \
  --namespace "${NAMESPACE}" \
  --create-namespace \
  --values "${BUNDLE_DIR}/charts/values-airgap.yaml" \
  --set global.imageRegistry="${REGISTRY}" \
  --set global.airgap=true \
  --set licensing.mode=offline \
  --set telemetry.enabled=false \
  --wait --timeout 15m

# Step 4: Verify deployment
log "Verifying deployment..."
kubectl -n "${NAMESPACE}" wait --for=condition=ready pod -l app=oikonex \
  --timeout=300s

PODS_READY=$(kubectl -n "${NAMESPACE}" get pods \
  --field-selector=status.phase=Running --no-headers | wc -l)
PODS_TOTAL=$(kubectl -n "${NAMESPACE}" get pods --no-headers | wc -l)

log ""
log "=== Installation Complete ==="
log "  Pods running: ${PODS_READY}/${PODS_TOTAL}"
log "  Namespace: ${NAMESPACE}"
log "  Access: kubectl -n ${NAMESPACE} port-forward svc/oikonex-web 8443:443"
log ""
log "  Next steps:"
log "    1. Apply your offline license: kubectl apply -f license.yaml"
log "    2. Access the dashboard at https://localhost:8443"
log "    3. Save this log for your records (you can't send it to us!)"

Notice the last line. That's not a joke — it's a real operational concern. In an air-gapped environment, the customer can't email you logs. They can't open a support ticket with a screenshot. If something goes wrong, your debugging information is whatever the operator wrote down on a notepad (or, in one memorable case, photographed with a disposable film camera because digital cameras weren't allowed in the facility).

The Licensing Nightmare (And Our Solution)

Licensing in an air-gapped environment is where dreams go to die. Every SaaS licensing model you've ever used assumes internet connectivity at some point. Stripe? Needs internet. License servers? Need internet. "Just check in once a month"? Needs internet.

After our licensing system caused a production outage at a customer site (the cached license expired on a Friday, nobody was on-site until Monday, and the application refused to start for three days), we redesigned the entire thing.

The solution: cryptographically signed offline licenses.

# license.yaml — Applied via kubectl, validated entirely offline
apiVersion: v1
kind: Secret
metadata:
  name: oikonex-license
  namespace: oikonex
type: Opaque
stringData:
  license.key: |
    eyJhbGciOiJFZDI1NTE5IiwidHlwIjoiSldUIn0.eyJjdXN0b21lciI6IkFj
    bWVEZWZlbnNlIiwic2VhdHMiOjUwMCwibm9kZXMiOjI1LCJleHBpcmVzIjoi
    MjAyNy0wMS0wMSIsImZlYXR1cmVzIjpbImFpcmdhcCIsImhhIiwiZmVkcmFt
    cCJdLCJzaWduZWRfYnkiOiJvaWtvbmV4LWxpY2Vuc2luZy12MiJ9.kV9x7mR
    3bNqF2sKz...

The license is a JWT signed with Ed25519. The public key is embedded in the application binary at build time. Validation is entirely offline — no network calls, no license server, no phone-home. The license encodes:

  • Customer name and contract ID
  • Seat and node limits
  • Expiration date (with a 30-day grace period, because we learned)
  • Feature flags
  • A digital signature that can't be forged without our private key
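If you're wondering what "entirely offline" means mechanically, here's a rough sketch of the verification step using nothing but openssl (1.1.1+ for raw Ed25519 support). The function names are ours for illustration, not a real CLI; in the product, the check is compiled into the binary along with the public key:

```shell
# Rough sketch: offline verification of an Ed25519-signed JWT license using
# only openssl. Illustrative helper names, not our actual tooling.

# base64url -> standard base64 (restore padding), then decode to stdout
b64url_decode() {
  _s=$(printf '%s' "$1" | tr '_-' '/+')
  case $(( ${#_s} % 4 )) in
    2) _s="${_s}==" ;;
    3) _s="${_s}=" ;;
  esac
  printf '%s' "${_s}" | base64 -d
}

# verify_license <jwt> <public-key.pem>: returns 0 iff the signature is valid
verify_license() {
  jwt="$1"; pubkey="$2"
  tmp=$(mktemp -d)
  printf '%s' "${jwt%.*}" > "${tmp}/input"   # "header.payload" is what gets signed
  b64url_decode "${jwt##*.}" > "${tmp}/sig"
  openssl pkeyutl -verify -pubin -inkey "${pubkey}" -rawin \
    -in "${tmp}/input" -sigfile "${tmp}/sig" >/dev/null 2>&1
  rc=$?
  rm -rf "${tmp}"
  return ${rc}
}
```

No network, no license server: the only runtime inputs are the license Secret and a public key that shipped inside the application at build time.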

When the license is within 60 days of expiration, the application starts showing warnings in the admin dashboard. When it's within 30 days, it sends alerts. When it expires, there's a 30-day grace period where everything still works but the warnings get more insistent. After the grace period? The application still works — it just goes into read-only mode. We never, ever brick a customer's production environment because of a licensing issue. That lesson cost us one relationship and approximately ten years off our CTO's life.
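That escalation ladder is simple enough to sketch in a few lines of shell. Illustrative only: GNU `date` assumed, thresholds hardcoded to match the policy above, and the real logic lives inside the application, not a script:

```shell
# Sketch of the license escalation ladder: warn at 60 days out, alert at 30,
# a 30-day grace window after expiry, then read-only. Never a hard stop.
license_state() {
  expires="$1"                 # e.g. 2027-01-01
  today="${2:-$(date +%F)}"    # injectable for testing
  days=$(( ($(date -d "${expires}" +%s) - $(date -d "${today}" +%s)) / 86400 ))
  if   [ "${days}" -gt 60 ];  then echo "ok"
  elif [ "${days}" -gt 30 ];  then echo "warn"       # dashboard warnings
  elif [ "${days}" -gt 0 ];   then echo "alert"      # insistent in-app alerts
  elif [ "${days}" -gt -30 ]; then echo "grace"      # expired, fully functional
  else                             echo "read-only"  # degraded, never dead
  fi
}
```

The important property is the last branch: every state is still a working application.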

The Support Nightmare: "We Can't SSH In"

Let's talk about the elephant in the SCIF: support.

In the SaaS world, when a customer reports a bug, your first instinct is to look at their logs. Check their Datadog dashboard. Maybe SSH in and poke around. In an air-gapped environment, you can't do any of that. Here's what support actually looks like:

Customer: "The dashboard is showing an error."
Us: "Can you send us a screenshot?"
Customer: "No, we can't take data out of the environment."
Us: "Can you describe the error?"
Customer: "It says 'An unexpected error has occurred.' There's a reference ID."
Us: "Can you run kubectl logs and read us the relevant lines?"
Customer: "We'd need to get approval for that. It'll take about a week."
Us: (muffled screaming)

We solved this — or at least made it survivable — with a diagnostic bundle system. The application can generate an encrypted diagnostic package that contains sanitized logs, system state, and configuration (with all secrets redacted). The customer can review the contents before exporting it, and it's encrypted with our public key so only we can read it.

# Generate a diagnostic bundle (runs inside the air-gapped cluster)
kubectl exec -n oikonex deploy/oikonex-web -- \
  oikonex diagnostics export \
    --redact-secrets \
    --redact-pii \
    --encrypt-for-vendor \
    --last 48h \
    --output /tmp/diag-bundle.tar.gz.enc

# Customer reviews the manifest of what's included
kubectl exec -n oikonex deploy/oikonex-web -- \
  oikonex diagnostics list-contents /tmp/diag-bundle.tar.gz.enc
# Output:
# - pod-status.json (3.2KB) — Pod names, statuses, restart counts
# - app-logs.txt (847KB) — Application logs, secrets REDACTED
# - helm-values.yaml (2.1KB) — Deployed config, secrets REDACTED
# - cluster-info.json (1.4KB) — K8s version, node count, resources
# - events.json (12KB) — Kubernetes events from last 48h

# Customer exports the bundle through their approved data transfer process
# We decrypt and analyze on our side

This workflow respects the customer's security requirements while still giving us enough information to actually debug problems. It's not perfect — sometimes we need information that isn't in the bundle, and then we're back to the "describe the error over the phone" dance. But it handles about 80% of support cases.

Sneakernet CI/CD: The Update Pipeline

Software updates in an air-gapped environment follow what we affectionately call the Sneakernet CI/CD Pipeline. It looks like this:

┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│   Our CI/CD  │────▶│  Bundle      │────▶│  Encrypted   │
│   Pipeline   │     │  Generation  │     │  USB Drive   │
└──────────────┘     └──────────────┘     └──────┬───────┘
                                                  │
                                          ┌───────▼───────┐
                                          │   Physical    │
                                          │   Transport   │
                                          │   (sneakers)  │
                                          └───────┬───────┘
                                                  │
┌──────────────┐     ┌──────────────┐     ┌──────▼───────┐
│   Monitoring │◀────│  Upgrade     │◀────│  Security    │
│   & Verify   │     │  Script      │     │  Review      │
└──────────────┘     └──────────────┘     └──────────────┘

The "physical transport" step is not a joke. Someone literally carries a USB drive (or, in some facilities, a specially formatted CD-R — because USBs are banned) from the outside world into the air-gapped environment. Some facilities have "data transfer stations" where media is scanned for malware before being allowed in. Some require the media to be purchased new, used once, and destroyed after.

We've had update bundles delivered via:

  • Encrypted USB drives
  • Write-once optical media
  • Secure file transfer appliances (data diodes)
  • Printed QR codes (for very small config changes — we're not kidding)
  • And once, on a hard drive that was physically escorted by an armed courier

Our CI pipeline generates the air-gap bundle automatically on every release. The bundle includes a detailed changelog, upgrade instructions, rollback procedures, and validation scripts. Everything the on-site operator needs to perform the upgrade without calling us.

Because here's the thing about air-gapped deployments: you're not the one deploying your software. A person you've probably never met is deploying your software, in an environment you can't see, using instructions you wrote months ago. If those instructions are wrong, or incomplete, or assume knowledge the operator doesn't have, the upgrade fails and you might not find out for days.

Write your upgrade docs like you're writing them for someone defusing a bomb. Step by step. No ambiguity. "Run this exact command. You should see this exact output. If you see anything else, stop and call this number."
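One pattern that helped: make the upgrade script itself enforce the "exact output or stop" rule, so the operator never has to eyeball a comparison. A hypothetical helper (`check_step` is our name for illustration, not a standard tool):

```shell
# Hypothetical helper for upgrade scripts: compare observed output against
# the documented expected output, and stop loudly on any mismatch.
check_step() {
  desc="$1"; expected="$2"; actual="$3"
  if [ "${actual}" = "${expected}" ]; then
    echo "OK: ${desc}"
  else
    echo "STOP: ${desc}" >&2
    echo "  expected: '${expected}'" >&2
    echo "  observed: '${actual}'" >&2
    echo "  Do not continue. Follow the rollback procedure and call support." >&2
    return 1
  fi
}

# Usage inside an upgrade script (kubectl call shown for illustration):
# check_step "web image tag" "v2.4.1" \
#   "$(kubectl -n oikonex get deploy oikonex-web \
#       -o jsonpath='{.spec.template.spec.containers[0].image}' \
#       | awk -F: '{print $NF}')"
```

A mismatch halts the script with both values printed, which is exactly what you want the operator writing down on that notepad.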

The Business Case: Why This Pain Is Worth It

After six months of engineering work, a complete licensing redesign, a new support model, and more than a few late nights questioning our life choices, we shipped our first air-gapped deployment. The contract was worth $4.2M annually.

But that was just the beginning. Here's what we've seen across our client base after enabling air-gapped deployment:

Metric                       | Before Air-Gap            | After Air-Gap             | Impact
Average enterprise deal size | $180K ARR                 | $620K ARR                 | +244%
Win rate on gov/defense RFPs | 0% (couldn't bid)         | 34%                       | New market
Enterprise pipeline coverage | 65% of addressable market | 92% of addressable market | +27 pts
Pricing premium for on-prem  | N/A                       | 2-3x SaaS pricing         | Margin boost
Sales cycle length           | 4 months                  | 6 months (but worth it)   | Longer but larger

The math is simple: air-gapped customers pay more, buy longer contracts (typically 3-5 years vs. annual), and churn at near-zero rates. Once your software is deployed inside a SCIF, the switching cost for the customer is enormous. They're not going to rip it out to save 10% on licensing.

The defense and intelligence community alone represents a $15B+ software market where most SaaS vendors simply cannot compete. If your software can be deployed air-gapped, you're selling into a market where your competition just got cut by 90%.

Lessons Learned: The Hard-Won Playbook

After doing this a dozen times, here's our condensed playbook for SaaS companies entering the air-gapped market:

  1. Audit every external call your application makes. Every one. DNS lookups, NTP servers, CDN fetches, Google Fonts, analytics pixels — all of it. We found external calls in places we never expected, including a CSS file that imported a font from googleapis.com.

  2. Your "offline mode" isn't offline enough. If any part of your application degrades, shows an error, or behaves differently without internet, it's not air-gap ready. The application should have no concept of "online" vs. "offline." It should just work.

  3. Invest in your installation script like it's a product. Because it is. The person running it is your user. The script is your UX. Make it beautiful.

  4. Design your licensing for the worst case. What happens if the license expires and nobody notices for 60 days? If your answer is "the application stops working," go back to the drawing board.

  5. Build the diagnostic bundle system early. You'll need it on day one, and building it after a support crisis is much harder than building it proactively.

  6. Hire (or partner with) someone who has clearance. Some facilities require that the people who support the software have security clearances. If you're serious about this market, you need people who can walk into a SCIF.

  7. Charge accordingly. Air-gapped deployment is harder. It requires more engineering, more testing, more support infrastructure, and a different operational model. The pricing should reflect that. Our clients typically charge 2-3x their SaaS pricing for air-gapped deployments, and customers expect it.
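For item 1, a blunt first pass that catches a surprising amount (including stray font imports like the one we found) is just grepping the source tree and rendered manifests for public URLs. A sketch, with an obvious limitation noted:

```shell
# Sketch: list unique public-looking hosts referenced as URLs anywhere in a
# tree. Only catches explicit http(s) URLs; bare hostnames and container
# image references need a separate pass.
find_external_hosts() {
  grep -rhoE 'https?://[A-Za-z0-9.-]+' "$1" 2>/dev/null \
    | sed -E 's#^https?://##' \
    | sort -u
}

# Usage: find_external_hosts ./src
#        find_external_hosts ./rendered-manifests
```

Run it against the output of `helm template` too, not just your source: charts you depend on are just as capable of smuggling in a CDN fetch.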

The Ultimate Firewall

There's a certain irony in the air gap. We spend our entire careers building connected systems — APIs, webhooks, real-time sync, cloud-native everything. And then someone asks us to make all of it work inside a box with no wires coming out.

But that box is where some of the most important software in the world runs. National defense. Intelligence analysis. Critical infrastructure. Power grids. Air traffic control. These systems protect and sustain millions of lives, and they need good software too.

If your SaaS can cross the air gap, you're not just accessing a lucrative market segment. You're building software that's genuinely more resilient, more self-contained, and more thoughtfully architected than software that assumes the internet will always be there.

The air gap doesn't just keep threats out. It forces you to build better software. And honestly? That's worth a few USB drives.

Building a SaaS product and wondering if air-gapped deployment is right for you? Let's talk. We've made every mistake so you don't have to.

Enterprise Distribution · Air-Gapped · Government
