
Your SaaS Can't Get Past the Lobby: The ISV Enterprise Distribution Playbook

92% of ISVs report on-prem sales growth. GitLab does $459M in self-managed revenue. Here's how to package your SaaS for any Kubernetes cluster — VPC, on-prem, or air-gapped.

Oikonex Team · Feb 8, 2026 · 13 min read

The Demo Goes Great. Then the CISO Talks.

You know the meeting. Forty-five minutes in, the technical champion is sold. The product manager is nodding. The VP of Engineering is already asking about timeline. You're mentally planning the celebration dinner.

Then the CISO leans forward and says nine words that will rearrange your entire quarter:

"This needs to deploy inside our VPC. Can it?"

If the honest answer is no — if your SaaS is a tangle of managed services, hardcoded AWS SDK calls, and container images that pull from Docker Hub at boot — you just lost the deal. Not because your product is bad. Because your packaging can't get past the lobby.

This keeps happening. It happens at banks, at defense contractors, at healthcare systems, at telcos. It happens to seed-stage startups and it happens to Series C companies with 200 engineers. The product is great. The deployment model is a dealbreaker.

The thing is, this isn't a niche problem anymore. It's where the money is going.

The Money: On-Prem Is Not Dead. It's Growing.

Let's kill the myth that "everything is moving to the cloud" with some numbers.

A Replicated / Dimensional Research survey of 405 ISVs found that 92% saw on-prem demand grow over the past five years. Not flat. Not declining. Growing. 50% reported strong growth. Only 3% saw a decrease. And 54% of those ISVs derive the majority of their revenue from on-prem deployments. The same survey found that 86% use Kubernetes and 90% use containers in production as the delivery mechanism.

Look at the public companies. GitLab reported $458.88M in self-managed revenue in FY2025 — that's 60% of their $759.25M total. Six out of ten dollars GitLab earns come from software running on someone else's infrastructure.

Confluent told a similar story. Their S-1 filing at IPO showed roughly 80% of subscription revenue came from the self-managed Confluent Platform. Not Confluent Cloud. The thing they installed on your Kubernetes cluster.

Gartner projects that 50% of critical enterprise applications will reside outside centralized public cloud locations through 2027, and estimates that 70% of workloads still haven't migrated. Half of the enterprise software market isn't in AWS. It's behind firewalls, in private data centers, on classified networks.

And the contracts are enormous. Palantir pulled in $2.87B in FY2024 revenue, 55% of it from government customers, including a $10B Army contract. Rocket.Chat won the NATO Cooperative Cyber Defence Centre of Excellence as a customer — an organization spanning 40+ nations — by offering self-hosted deployment. They run on NIPRNet, SIPRNet, and JWICS with a DoD ATO up to Impact Level 6. They didn't win that by sending a sign-up link.

Even at the startup stage, this shows up fast. The Work-Bench BYOC analysis documented how ParadeDB sees roughly 50% of adoption through self-hosted deployment, and how Earthly had to build BYOC (Bring Your Own Cloud) capability at the seed stage because banking customers flatly refused managed cloud. Not "preferred" their own infrastructure. Refused.

The pattern is clear: if you're selling to enterprises, a SaaS-only delivery model is leaving revenue on the table. Sometimes most of the revenue.

The Architecture: What a Portable Kubernetes App Looks Like

The good news is there's a well-established architecture for this. The bad news is it requires you to eliminate every hard dependency on managed cloud services and make every component of your stack swappable.

A portable enterprise application looks like this:

  • Stateless application containers that read their configuration from environment variables, ConfigMaps, and Secrets — not from IMDS or cloud-specific metadata endpoints
  • Helm charts as the installation and configuration API — the customer runs helm install, not a 47-step wiki page
  • Swappable backing services: bundled PostgreSQL/Redis for quick starts, external database support for production. The customer's DBA has opinions. Respect them.
  • No outbound calls required: licensing works offline, telemetry is optional, no phone-home on startup. If the app behaves differently without internet, it's not portable.
  • OCI artifacts for everything: container images, Helm charts, and config bundles all stored as OCI artifacts in a single registry (see the push example after this list)
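
Shipping charts as OCI artifacts is natively supported in Helm 3.8+. A minimal sketch of the publisher-side push, reusing the example chart and registry names that appear later in this post:

# Package the chart and push it to the same OCI registry as your images
helm package acme-platform/ --version 1.4.0
helm push acme-platform-1.4.0.tgz oci://registry.example.com/charts

# Customers with OCI-capable tooling can then install directly from the registry
helm install acme-platform oci://registry.example.com/charts/acme-platform --version 1.4.0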

The critical shift is mental. In a SaaS architecture, you control the environment. In enterprise distribution, the customer controls the environment. You don't pick the Kubernetes version, the CNI plugin, the ingress controller, the storage class, or the Linux distro. Your software has to work on all of them, or at least fail gracefully with a clear error message.
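
That fail-gracefully requirement is testable at install time. Here's a minimal sketch, assuming a hypothetical templates/validate.yaml and a minimum supported version of 1.27; Helm's built-in fail function aborts the install with a readable message instead of letting pods crash-loop. (Chart mechanics are covered in the next section.)

{{- /* templates/validate.yaml (hypothetical): renders nothing, exists only to fail fast */ -}}
{{- /* the "-0" suffix lets the constraint match vendor-suffixed versions like v1.26.5-gke.1200 */ -}}
{{- if semverCompare "<1.27.0-0" .Capabilities.KubeVersion.Version -}}
{{- fail (printf "acme-platform requires Kubernetes >= 1.27, but this cluster reports %s" .Capabilities.KubeVersion.Version) -}}
{{- end -}}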

The Helm Chart: Your Installation API

Helm (github.com/helm/helm, 29k+ stars, CNCF Graduated) is the de facto standard for packaging Kubernetes applications. A Helm chart is your installation API. The values.yaml file is the contract between your software and the customer's infrastructure team.

Here's what the chart structure looks like for a real application:

acme-platform/
├── Chart.yaml                  # Chart metadata and dependencies
├── Chart.lock                  # Dependency lock file
├── values.yaml                 # Default configuration — this is the big one
├── values.schema.json          # JSON Schema validation for values
├── templates/
│   ├── _helpers.tpl            # Template helpers and naming conventions
│   ├── deployment.yaml         # Application deployment
│   ├── service.yaml            # ClusterIP service
│   ├── ingress.yaml            # Ingress resource (customer-configurable)
│   ├── configmap.yaml          # Application configuration
│   ├── secret.yaml             # Credentials (or reference to external secrets)
│   ├── serviceaccount.yaml     # RBAC: least-privilege service account
│   ├── networkpolicy.yaml      # Network segmentation (enterprise teams expect this)
│   ├── hpa.yaml                # Horizontal Pod Autoscaler
│   ├── pdb.yaml                # Pod Disruption Budget for HA
│   └── tests/
│       └── test-connection.yaml  # helm test smoke check
├── charts/                     # Subcharts: postgres, redis, etc.
└── ci/
    ├── test-values.yaml        # CI test configuration
    └── airgap-values.yaml      # Air-gapped override values

The Chart.yaml declares what your chart is and what it depends on:

# Chart.yaml
apiVersion: v2
name: acme-platform
description: ACME Platform — deploy to any Kubernetes cluster
type: application
version: 1.4.0        # Chart version (packaging version)
appVersion: "2.8.1"   # Application version (your software version)

dependencies:
  - name: postgresql
    version: "15.x.x"
    repository: "oci://registry.example.com/charts"
    condition: postgresql.enabled    # Customer can disable and use external DB
  - name: redis
    version: "19.x.x"
    repository: "oci://registry.example.com/charts"
    condition: redis.enabled

And the values.yaml — the part that matters most. Every option the customer might need to configure goes here. Miss one, and you'll get a support ticket. Miss an important one, and you'll lose the deal.

# values.yaml — Customer-facing configuration contract

global:
  # Customers with private registries override this once, all images follow
  imageRegistry: ""
  # Air-gapped deployments set this to true
  airgap: false
  # Enterprise proxy settings — you WILL encounter corporate proxies
  proxy:
    httpProxy: ""
    httpsProxy: ""
    noProxy: ""

image:
  repository: registry.example.com/acme/platform
  tag: ""  # Defaults to Chart.appVersion
  pullPolicy: IfNotPresent

imagePullSecrets: []
#  - name: my-registry-secret

replicaCount: 2

resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "1Gi"
    cpu: "1000m"

ingress:
  enabled: true
  className: ""  # nginx, traefik, haproxy — let the customer decide
  annotations: {}
  hosts:
    - host: acme.internal.example.com
      paths:
        - path: /
          pathType: Prefix
  tls: []
  #  - secretName: acme-tls
  #    hosts:
  #      - acme.internal.example.com

# External database support — the customer's DBA has opinions
postgresql:
  enabled: true        # false = use externalDatabase instead
  auth:
    postgresPassword: ""
    database: acme
  primary:
    persistence:
      size: 50Gi
      storageClass: ""  # Customer specifies their storage class

externalDatabase:
  host: ""
  port: 5432
  database: "acme"
  username: "acme"
  existingSecret: ""   # Secret containing 'password' key

# Authentication — "just use our built-in auth" said no enterprise buyer ever
auth:
  provider: "builtin"  # builtin | oidc | saml | ldap
  oidc:
    issuerUrl: ""
    clientId: ""
    clientSecretRef:
      name: ""
      key: ""

# Licensing — must work fully offline
licensing:
  mode: "online"  # online | offline
  offlineLicenseSecret: ""  # Name of Secret containing license key

# Telemetry — always optional, always off in air-gapped
telemetry:
  enabled: true

Every field is something a real customer will need to configure. Private registries, corporate proxies, external databases, SSO, offline licensing — skip any of these and you'll learn about it the hard way during a deployment call.
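
Inside the templates, that bundled-versus-external choice collapses into one helper. A sketch, assuming a hypothetical helper named acme.databaseHost and the usual <release>-postgresql service name exposed by the subchart; Helm's required function aborts the install with the given message if the customer disables the bundled database without supplying a host:

{{- /* templates/_helpers.tpl (hypothetical): resolve the database host from the values contract */ -}}
{{- define "acme.databaseHost" -}}
{{- if .Values.postgresql.enabled -}}
{{- printf "%s-postgresql" .Release.Name -}}
{{- else -}}
{{- required "postgresql.enabled is false, so externalDatabase.host must be set" .Values.externalDatabase.host -}}
{{- end -}}
{{- end -}}

The deployment template then references {{ include "acme.databaseHost" . }} and never needs to know which path the customer picked.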

Customer-side installation then looks like this:

# Customer adds the chart repository (or uses OCI registry directly)
helm repo add acme https://charts.example.com
helm repo update

# Install with their environment-specific configuration
helm install acme-platform acme/acme-platform \
  --namespace acme \
  --create-namespace \
  --values my-environment-values.yaml \
  --set global.imageRegistry=registry.internal.corp.net \
  --set auth.provider=oidc \
  --set auth.oidc.issuerUrl=https://sso.corp.net/realms/main \
  --set postgresql.enabled=false \
  --set externalDatabase.host=db-prod.corp.net \
  --set externalDatabase.existingSecret=acme-db-credentials \
  --wait --timeout 10m

One command. The customer's infra team runs it, points it at their own registry, their own database, their own SSO provider. That's the experience you're aiming for.
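
Because the chart ships a helm test hook (the test-connection.yaml in the tree above), verification is scriptable too. A short sketch of the post-install checks an infra team might run:

# Confirm the release landed and the workload is healthy
helm status acme-platform --namespace acme
kubectl get pods --namespace acme

# Run the chart's built-in smoke test
helm test acme-platform --namespace acme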

Air-Gapped Delivery: Bundles, Registries, and Offline Install

Air-gapped networks — environments with zero internet connectivity — are where enterprise distribution gets genuinely hard. Defense, intelligence, classified government systems, some financial institutions, critical infrastructure operators. These networks are physically isolated. No NAT gateway. No proxy. No sneaky workaround. The air gap is literal.

And the market is real: 34% of ISVs ship air-gapped, according to that same Replicated survey.

The core problem: your Kubernetes manifests reference container images by tag. Normally, the kubelet pulls those images from a registry over the internet. In an air-gapped cluster, there is no internet. You need to physically deliver every image and every chart artifact, then load them into an internal registry the cluster can reach.

Here's a real air-gapped bundling script:

#!/usr/bin/env bash
set -euo pipefail

# airgap-bundle.sh — Package a Helm chart + all container images for offline install
# Usage: ./airgap-bundle.sh <chart-dir> <version>

CHART_DIR="${1:?Usage: $0 <chart-dir> <version>}"
VERSION="${2:?Usage: $0 <chart-dir> <version>}"
BUNDLE="airgap-bundle-${VERSION}"

echo "=== Building air-gap bundle v${VERSION} ==="
mkdir -p "${BUNDLE}/images"

# Step 1: Package the Helm chart
echo "[1/4] Packaging Helm chart..."
helm package "${CHART_DIR}" --version "${VERSION}" --destination "${BUNDLE}/"

# Step 2: Discover all container images referenced in the rendered manifests
echo "[2/4] Discovering container images..."
IMAGES=$(helm template release-name "${CHART_DIR}" \
  --set global.airgap=true \
  | grep -oP 'image:\s*"?\K[^"\s]+' \
  | sort -u)
echo "    Found $(echo "${IMAGES}" | wc -l) unique images"

# Step 3: Pull and export all images to tarballs
echo "[3/4] Pulling and saving images..."
for img in ${IMAGES}; do
  echo "    Pulling ${img}"
  docker pull --quiet "${img}"
done
# Save all images into a single tarball
# shellcheck disable=SC2086
docker save ${IMAGES} | gzip > "${BUNDLE}/images/all-images.tar.gz"
echo "    Image archive: $(du -h "${BUNDLE}/images/all-images.tar.gz" | cut -f1)"

# Step 4: Generate checksums and create final archive
echo "[4/4] Finalizing bundle..."
cd "${BUNDLE}"
find . -type f ! -name 'SHA256SUMS' -exec sha256sum {} \; > SHA256SUMS
cd ..
tar cf "${BUNDLE}.tar" "${BUNDLE}/"
sha256sum "${BUNDLE}.tar" > "${BUNDLE}.tar.sha256"

echo ""
echo "=== Bundle ready ==="
echo "  Archive:  ${BUNDLE}.tar ($(du -h "${BUNDLE}.tar" | cut -f1))"
echo "  Checksum: ${BUNDLE}.tar.sha256"
echo ""
echo "  On the target (air-gapped) machine:"
echo "    tar xf ${BUNDLE}.tar && cd ${BUNDLE}"
echo "    docker load -i images/all-images.tar.gz"
echo "    # Re-tag images for internal registry, then:"
echo "    helm install acme ./*.tgz --set global.imageRegistry=registry.local:5000"

On the receiving end, inside the air-gapped network, the operator loads the images into whatever internal registry they run — Harbor, Docker Registry, or an OCI-native registry like zot (github.com/project-zot/zot, 1.8k+ stars), which is purpose-built for this. Then they install the Helm chart, pointing global.imageRegistry at their internal registry. Every image reference in your templates resolves internally. No outbound calls. No surprises.
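
The re-tag step from the script's closing instructions deserves its own sketch, because it's where first attempts usually stumble. Assuming the loaded images carry your public registry prefix (registry.example.com/...), something like this rewrites them for the internal registry and pushes:

#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch: load the bundle, re-tag for the internal registry, push
INTERNAL_REGISTRY="registry.local:5000"

docker load -i images/all-images.tar.gz \
  | awk '/^Loaded image:/ {print $3}' \
  | while read -r img; do
      # registry.example.com/acme/platform:2.8.1 -> registry.local:5000/acme/platform:2.8.1
      target="${INTERNAL_REGISTRY}/${img#*/}"
      docker tag "${img}" "${target}"
      docker push "${target}"
    done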

The checksums matter more than you'd think. When you're transferring 4 GB via approved physical media into a facility where you can't re-download anything, a single bit flip during transfer means the entire installation fails and someone has to drive back to the office. SHA256 verification isn't paranoia. It's operational hygiene.
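
In practice that verification is two commands on the receiving side, run before anything is loaded. Assuming the bundle produced by the script above with version 1.4.0:

# Verify the outer archive first, then the per-file checksums inside it
sha256sum -c airgap-bundle-1.4.0.tar.sha256
tar xf airgap-bundle-1.4.0.tar && cd airgap-bundle-1.4.0
sha256sum -c SHA256SUMS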

The Toolchain: What's Available

You don't have to build all of this from scratch. There's a real ecosystem for ISV distribution to enterprise environments:

Helm (github.com/helm/helm) — The packaging standard. 29k+ GitHub stars, CNCF Graduated project. If your app runs on Kubernetes, it ships as a Helm chart. Non-negotiable at this point.

Replicated KOTS (github.com/replicatedhq/kots) — Kubernetes Off-The-Shelf software. An admin console that gives customers a UI for installing, configuring, and updating your application. Handles license management, air-gapped delivery, config UI generation, and preflight checks. 940+ stars. This is particularly useful when your customer's operator isn't a Kubernetes expert — KOTS gives them a web UI instead of kubectl.
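
For a feel of the customer-side flow: KOTS ships as a kubectl plugin, and the install is two commands (the app slug here is hypothetical):

# Install the kots kubectl plugin, then deploy the admin console + application
curl https://kots.io/install | bash
kubectl kots install acme-platform --namespace acme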

Replicated Embedded Cluster (github.com/replicatedhq/embedded-cluster) — Goes a step further: packages your app with a Kubernetes distribution (k0s) so the customer doesn't even need an existing cluster. They run a single installer on bare Linux and get your application plus the cluster it runs on. Useful for customers who say "we have servers, but not Kubernetes."

Distr by Glasskube (github.com/glasskube/distr) — A newer entrant focused on software distribution for on-prem and self-managed deployments. 950+ stars. Provides a distribution platform for managing customer access to artifacts, versions, and licenses. Worth watching if you want an alternative to the Replicated ecosystem.

zot (github.com/project-zot/zot) — An OCI-native image registry designed for air-gapped and edge environments. 1.8k+ stars. Lightweight, single binary, supports OCI artifacts (images, Helm charts, SBOMs all in one registry). Useful both as the registry you recommend customers run internally and as the registry inside your air-gapped bundle.

The maturity of these tools reflects the market demand. This isn't a cottage industry. These are production-grade projects backed by significant investment because ISVs need them to close deals.

The Objections (And Why They're Wrong)

"Our customers are all cloud-native." They are, until you try to sell to a bank, a hospital system, a telco, or any government agency. The Fortune 500 has cloud-native teams and also teams that run on-prem Kubernetes clusters and also teams that run air-gapped VMware environments. One company, three deployment models. Your largest deals will require at least two.

"We'll build it when we need it." You need it when the deal is in the pipeline, not when the PO is signed. Enterprise procurement cycles run 3-9 months. If a customer asks for VPC deployment and you say "we'll have it in six months," they move on. Earthly had to build BYOC at the seed stage because the banking customers they needed weren't going to wait.

"The engineering cost is too high." The engineering cost of Helm-packaging an already-containerized application is measured in weeks, not quarters. The revenue impact of being able to say "yes" to on-prem deployment is measured in multiples of ARR. GitLab earns $459M per year from self-managed deployments. The ROI math is not subtle.

The Bottom Line

Enterprise software distribution is not a feature request. It's a go-to-market strategy. The data is unambiguous: on-prem demand is growing, the majority of enterprise workloads haven't moved to public cloud, and the largest contracts in software — defense, government, financial services, healthcare — require deployment behind the customer's firewall.

The ISVs capturing this revenue aren't doing anything exotic. They're packaging containerized applications as Helm charts, building air-gapped bundles, supporting offline licensing, and testing on the Kubernetes distributions their customers actually run. The toolchain exists. The architecture patterns are well-documented. The revenue is waiting.

Your SaaS demo is great. Now make sure it can get past the lobby.



isv · enterprise-distribution · helm · air-gapped · kubernetes
