
Offline License Validation: 5 Strategies for Air-Gapped Enterprise Software

Your SaaS license server won't work when your customer's network thinks the internet is a myth. Here's how we built licensing that works in a SCIF.

Oikonex Team · Jan 11, 2026 · 21 min read

The Phone Call That Changed Everything

The call came in on a Thursday. A government contractor, very interested in our client's platform. Big deal. Six figures. Multi-year contract. The kind of deal where everyone in the company suddenly knows your name.

"One question," the procurement lead said. "Our deployment environment doesn't have internet access."

We nodded. "Sure, we can work with limited connectivity. We'll set up a VPN or —"

"No. You don't understand. There is no internet. The network has never seen the internet. The network does not believe in the internet."

Long pause.

"...come again?"

That phone call sent us down a six-month rabbit hole of building licensing infrastructure that works when your customer's network topology looks like a submarine. No phone-home. No heartbeat. No "just add an egress rule." The packets do not leave the building.

Here are the five strategies we built, what worked, what didn't, and what we wish someone had told us before we started.


Strategy 1: Cryptographically Signed License Files

JWTs: Not Just for Authentication Anymore

The simplest approach, and honestly the one you should start with. You generate a signed file containing the customer's entitlements. The software validates the signature using an embedded public key. No network required. No server required. Just math.

We use JWTs because every language has a JWT library, the format is well-understood, and you can decode them with base64 and your eyeballs if you need to debug something at 11pm.
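When you do need eyeballs on a license at 11pm, the payload is one base64url decode away. A minimal sketch (no signature check — this is for reading a license, never for trusting one):

```javascript
// Decode a JWT's payload WITHOUT verifying it — debugging only.
// Never make entitlement decisions off an unverified decode.
function decodeLicensePayload(token) {
  const [, payload] = token.split('.');  // header.payload.signature
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}
```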

License generation (your server, not the customer's):

const jwt = require('jsonwebtoken');
const fs = require('fs');

// Your PRIVATE key — lives on your license server, nowhere else.
// (Keep reading for the story about the time we violated this rule.)
const privateKey = fs.readFileSync('/etc/license-server/private-key.pem');

function generateLicense(customer) {
  const payload = {
    iss: 'license.oikonex.com',           // Issuer — that's us
    sub: customer.id,                       // Subject — that's them
    iat: Math.floor(Date.now() / 1000),    // Issued at
    exp: Math.floor(Date.now() / 1000) + (365 * 24 * 60 * 60),  // 1 year

    // The good stuff
    entitlements: {
      product: 'enterprise',
      tier: customer.tier,
      seats: customer.seats,
      features: customer.features,         // ['analytics', 'sso', 'audit-logs']
      nodes: customer.maxNodes,            // Max Kubernetes nodes
    },

    // Metadata for support and auditing
    customer: {
      name: customer.name,
      contract: customer.contractId,
      supportTier: customer.supportTier,
    },
  };

  return jwt.sign(payload, privateKey, {
    algorithm: 'RS256',
    keyid: 'license-key-v2',  // Key ID for rotation. We're on v2.
                               // v1 is... a story.
  });
}

License validation (runs in the customer's air-gapped environment):

const jwt = require('jsonwebtoken');
const fs = require('fs');

// PUBLIC key — embedded in the application. This is safe to distribute.
const publicKey = fs.readFileSync('/etc/myapp/license-public-key.pem');

function validateLicense(licenseFilePath) {
  const licenseToken = fs.readFileSync(licenseFilePath, 'utf8').trim();

  try {
    const decoded = jwt.verify(licenseToken, publicKey, {
      algorithms: ['RS256'],          // Only accept RS256. Don't let the
                                       // JWT library "helpfully" accept
                                       // HS256 with the public key as the
                                       // secret. Yes, that's a real attack.
                                       // Yes, it has a CVE. Several, actually.
      issuer: 'license.oikonex.com',
      clockTolerance: 300,             // 5 minutes of clock skew tolerance.
                                       // Air-gapped servers often have
                                       // drifting clocks because NTP requires
                                       // network access. Funny how that works.
    });

    // Check feature entitlements
    const requiredFeatures = getRequiredFeatures();  // What does this install need?
    const hasAllFeatures = requiredFeatures.every(f =>
      decoded.entitlements.features.includes(f)
    );

    if (!hasAllFeatures) {
      const missing = requiredFeatures.filter(f =>
        !decoded.entitlements.features.includes(f)
      );
      return {
        valid: false,
        reason: `License missing features: ${missing.join(', ')}`,
      };
    }

    return {
      valid: true,
      entitlements: decoded.entitlements,
      expiresAt: new Date(decoded.exp * 1000),
      customer: decoded.customer,
    };
  } catch (err) {
    if (err.name === 'TokenExpiredError') {
      return { valid: false, reason: 'License expired. Contact sales for renewal.' };
    }
    if (err.name === 'JsonWebTokenError') {
      return { valid: false, reason: 'License signature invalid. File may be corrupted.' };
    }
    return { valid: false, reason: `License validation error: ${err.message}` };
  }
}

The Key Rotation Story (Or: The Time We Shipped the Private Key)

We shipped v1 with the private key baked into the Docker image.

Yes, really.

No, we don't want to talk about it.

Fine. We'll talk about it. Our first iteration embedded both keys in the application for "testing convenience." Somehow that testing convenience survived three code reviews, a staging deployment, and a production release. A security auditor found it six weeks later and aged visibly during the call.

The fix was obvious: the private key lives on the license generation server and only the license generation server. The application only gets the public key. But the real fix was adding a CI check:

#!/bin/bash
# pre-push hook: detect private keys in the image
if docker run --rm "$IMAGE" find / -name "*.pem" -exec grep -l "PRIVATE KEY" {} \; | grep -q .; then
  echo "PRIVATE KEY DETECTED IN DOCKER IMAGE. ABORTING."
  echo "You know what you did."
  exit 1
fi

We're on key v2 now. v1 was revoked. We don't talk about v1.


Strategy 2: Hardware-Bound Licenses

Tying Licenses to Kubernetes Clusters

Sometimes a signed file isn't enough. Your customer's security team wants assurance that the license can't be copied from Cluster A to Cluster B. Fair enough. Let's fingerprint the cluster.

const crypto = require('crypto');
const k8s = require('@kubernetes/client-node');

async function generateClusterFingerprint() {
  const kc = new k8s.KubeConfig();
  kc.loadFromCluster();  // Running inside the cluster

  const coreApi = kc.makeApiClient(k8s.CoreV1Api);

  // Get the kube-system namespace UID — this is unique per cluster
  // and doesn't change unless you rebuild the entire cluster
  const ns = await coreApi.readNamespace('kube-system');
  const clusterUID = ns.body.metadata.uid;

  // Get the cluster's service account CA cert hash — another stable identifier
  const configMap = await coreApi.readNamespacedConfigMap(
    'kube-root-ca.crt', 'kube-system'
  );
  const caHash = crypto.createHash('sha256')
    .update(configMap.body.data['ca.crt'])
    .digest('hex');

  // Combine into a composite fingerprint
  const fingerprint = crypto.createHash('sha256')
    .update(`${clusterUID}:${caHash}`)
    .digest('hex');

  return {
    fingerprint,
    components: {
      clusterUID,
      caHash: caHash.substring(0, 16) + '...',  // Truncated for display
    },
  };
}

// During license generation, the customer sends us their fingerprint.
// We embed it in the license JWT:
// {
//   ...entitlements,
//   "hardware": {
//     "fingerprint": "sha256:a1b2c3d4...",
//     "description": "Production cluster - Acme Corp JWICS"
//   }
// }

async function validateHardwareLicense(license) {
  if (!license.hardware?.fingerprint) {
    return { valid: false, reason: 'License has no hardware binding' };
  }

  const { fingerprint } = await generateClusterFingerprint();

  if (license.hardware.fingerprint !== fingerprint) {
    return {
      valid: false,
      reason: 'License is bound to a different cluster. ' +
              'If this is a new cluster, contact support for re-binding.',
      expectedFingerprint: license.hardware.fingerprint,
      actualFingerprint: fingerprint,
    };
  }

  return { valid: true, entitlements: license.entitlements };
}

The DR Problem: "I Don't Know Her"

Here's the nightmare scenario we didn't think about until it happened to a customer:

Their building flooded. Not the server room — the building. They activated their disaster recovery plan, spun up a fresh Kubernetes cluster in their secondary site, loaded the application, and... the license said no.

New cluster means new UID. New CA cert. New fingerprint. The license was bound to hardware that was currently underwater.

Our customer's DR event turned into a licensing emergency. At 6am. On a Saturday.

The fix: Emergency bypass tokens.

// Emergency re-binding token — single-use, time-limited
function generateEmergencyToken(customerId, expiresInHours = 72) {
  return jwt.sign({
    type: 'emergency-rebind',
    sub: customerId,
    exp: Math.floor(Date.now() / 1000) + (expiresInHours * 3600),
    singleUse: true,
    // Include a nonce so the same token can't be used twice
    nonce: crypto.randomBytes(16).toString('hex'),
  }, privateKey, { algorithm: 'RS256' });
}

// In the application: if hardware validation fails and an emergency
// token is present, accept it and re-bind to the new cluster
async function validateWithEmergencyFallback(license, emergencyToken) {
  const hwResult = await validateHardwareLicense(license);
  if (hwResult.valid) return hwResult;

  if (!emergencyToken) {
    return hwResult;  // No emergency token, fail normally
  }

  try {
    const decoded = jwt.verify(emergencyToken, publicKey, {
      algorithms: ['RS256'],
    });

    if (decoded.type !== 'emergency-rebind') {
      return { valid: false, reason: 'Invalid emergency token type' };
    }

    // Check if this token was already used
    if (await isTokenUsed(decoded.nonce)) {
      return { valid: false, reason: 'Emergency token already used' };
    }

    // Mark token as used
    await markTokenUsed(decoded.nonce);

    // Re-bind license to new cluster
    const newFingerprint = await generateClusterFingerprint();
    await updateLicenseBinding(license, newFingerprint);

    console.log(`License re-bound to new cluster: ${newFingerprint.fingerprint}`);
    return { valid: true, entitlements: license.entitlements, rebound: true };
  } catch (err) {
    return { valid: false, reason: `Emergency token invalid: ${err.message}` };
  }
}

Now we generate emergency tokens as part of every enterprise onboarding. The customer stores them in a safe (sometimes literally a physical safe). DR planning should include "how do I re-license my software," right next to "how do I restore my backups."

TPM Attestation, Explained Like You're a Golden Retriever

Okay, imagine you have a favorite toy. (Stay with me.) The toy has a special tag inside it that only the toy factory can read. When someone gives you a toy and says "this is YOUR toy," you can sniff the tag and confirm: yes, this is the toy from the factory, and nobody has swapped it out.

TPM attestation works the same way, but for computers. There's a chip on the motherboard (the tag) that can cryptographically prove: "I am this specific machine, running this specific software, and nobody has tampered with the boot process." The TPM chip has a private key that was burned in during manufacturing and literally cannot be extracted — not by software, not by the OS, not by someone with a soldering iron and bad intentions.

For licensing, this means: the software can prove it's running on authorized hardware without phoning home. The TPM signs an attestation report, the license validator checks the signature against a known-good TPM public key, and you have hardware-backed proof of identity.

Is it overkill for most use cases? Absolutely. But when your customer has a three-letter agency name, they don't call it "overkill." They call it "minimum requirements."


Strategy 3: Offline Activation with Grace Periods

The Goldilocks Zone

Some customers aren't fully air-gapped. They have occasional connectivity — maybe during maintenance windows, maybe through a one-way data diode, maybe through a human being who carries a USB drive between networks (sneakernet: the original cloud sync).

For these customers, we built a hybrid: activate online once, then run offline with a grace period for re-validation.

// License states — this is a state machine, not a boolean.
// We learned this the hard way when "valid" and "expired" weren't enough
// to describe "valid but please reconnect soon."
const LicenseState = {
  ACTIVE: 'active',           // Online validation current. All good.
  GRACE: 'grace',             // Offline, but within grace period. Show a warning.
  EXPIRED_GRACE: 'expired',   // Grace period exceeded. Degrade functionality.
  INVALID: 'invalid',         // License is invalid or tampered with.
};

const REVALIDATION_INTERVAL_DAYS = 90;   // Check in every 90 days
const GRACE_PERIOD_DAYS = 30;             // 30 more days after that

// 30 days is the Goldilocks zone:
// - 7 days makes customers anxious. They start calling support on day 3.
// - 90 days means you'll never see a revalidation. The grace period
//   becomes the new normal, and "please reconnect" becomes "meh."
// - 30 days is enough time to schedule a maintenance window without
//   causing panic.

function evaluateLicenseState(license) {
  const now = Date.now();
  const lastValidation = license.lastOnlineValidation;  // epoch ms
  const daysSinceValidation = (now - lastValidation) / (1000 * 60 * 60 * 24);

  // State machine transitions
  if (daysSinceValidation < REVALIDATION_INTERVAL_DAYS) {
    return {
      state: LicenseState.ACTIVE,
      daysUntilGrace: Math.floor(REVALIDATION_INTERVAL_DAYS - daysSinceValidation),
      message: null,
    };
  }

  const daysIntoGrace = daysSinceValidation - REVALIDATION_INTERVAL_DAYS;

  if (daysIntoGrace < GRACE_PERIOD_DAYS) {
    const daysRemaining = Math.floor(GRACE_PERIOD_DAYS - daysIntoGrace);
    return {
      state: LicenseState.GRACE,
      daysRemaining,
      message: `License revalidation required within ${daysRemaining} day${daysRemaining === 1 ? '' : 's'}. ` +
               `Connect to the license server or apply an updated license file.`,
    };
  }

  return {
    state: LicenseState.EXPIRED_GRACE,
    daysPastExpiry: Math.floor(daysIntoGrace - GRACE_PERIOD_DAYS),
    message: 'License grace period expired. Some features have been disabled. ' +
             'Please contact your administrator to revalidate.',
  };
}

// What to do with each state:
function enforceLicense(licenseState) {
  switch (licenseState.state) {
    case LicenseState.ACTIVE:
      return { allowed: true, features: 'all' };

    case LicenseState.GRACE:
      // Full functionality, but show a banner. Don't block anything.
      // Blocking features during grace is a great way to make customers
      // hate you. They're still paying. They're just offline.
      console.warn(`[LICENSE] ${licenseState.message}`);
      return { allowed: true, features: 'all', warning: licenseState.message };

    case LicenseState.EXPIRED_GRACE:
      // Degrade gracefully: read-only mode, or disable non-essential features.
      // NEVER hard-block. If this is a hospital or military system,
      // your license enforcement should not be the reason something
      // mission-critical stops working.
      return {
        allowed: true,
        features: 'read-only',
        warning: licenseState.message,
        degraded: true,
      };

    case LicenseState.INVALID:
    default:
      // Unknown states fail closed — treat anything unexpected as invalid.
      return { allowed: false, reason: 'Invalid license' };
  }
}

Critical design decision: we never hard-block on license expiry in air-gapped environments. Degraded mode, yes. Warnings, absolutely. But a complete shutdown? No. We're not going to be the reason an air-gapped system can't do its job because a timestamp rolled over. Your enterprise customers are running critical workloads. Act accordingly.


Strategy 4: Usage-Based Offline Metering

The Blockchain-Style Tamper-Resistant Ledger

Yes, we're using blockchain concepts. No, we're not doing crypto. Please don't leave.

The problem: your customer is running your software in an air-gapped environment with usage-based pricing. They say they used 50,000 API calls last month. Did they? You have no way to verify. They could say 5,000 and you'd just have to trust them.

The solution: a local ledger where each entry is cryptographically chained to the previous one — like a blockchain, but without the consensus mechanism, the energy consumption, or the insufferable evangelists.

const crypto = require('crypto');
const fs = require('fs');

class UsageLedger {
  constructor(ledgerPath, signingKey) {
    this.ledgerPath = ledgerPath;
    this.signingKey = signingKey;
    this.entries = this.loadLedger();
  }

  loadLedger() {
    try {
      const data = fs.readFileSync(this.ledgerPath, 'utf8');
      return JSON.parse(data);
    } catch {
      return { entries: [], genesisHash: this.generateGenesisHash() };
    }
  }

  generateGenesisHash() {
    // The genesis hash includes a timestamp and random nonce
    // so each ledger is unique and can't be swapped
    return crypto.createHash('sha256')
      .update(`genesis:${Date.now()}:${crypto.randomBytes(16).toString('hex')}`)
      .digest('hex');
  }

  // Record a usage event. Each entry chains to the previous one.
  recordUsage(metric, value, metadata = {}) {
    const previousHash = this.entries.entries.length > 0
      ? this.entries.entries[this.entries.entries.length - 1].hash
      : this.entries.genesisHash;

    const entry = {
      timestamp: new Date().toISOString(),
      sequence: this.entries.entries.length + 1,
      metric,        // e.g., 'api_calls', 'storage_gb', 'compute_hours'
      value,         // e.g., 1, 250, 0.5
      metadata,      // e.g., { endpoint: '/api/v1/query', userId: '...' }
      previousHash,  // Chain link — this is what makes it tamper-evident
    };

    // Hash includes all fields (metadata too) plus the previous hash.
    // If anyone modifies an entry, every subsequent hash breaks.
    entry.hash = crypto.createHash('sha256')
      .update(JSON.stringify({
        timestamp: entry.timestamp,
        sequence: entry.sequence,
        metric: entry.metric,
        value: entry.value,
        metadata: entry.metadata,
        previousHash: entry.previousHash,
      }))
      .digest('hex');

    this.entries.entries.push(entry);
    this.saveLedger();

    return entry;
  }

  // Verify the entire chain is intact — no entries modified or removed
  verifyIntegrity() {
    let expectedPreviousHash = this.entries.genesisHash;

    for (const entry of this.entries.entries) {
      // Check chain link
      if (entry.previousHash !== expectedPreviousHash) {
        return {
          valid: false,
          brokenAt: entry.sequence,
          reason: 'Chain broken — an entry was modified or removed',
        };
      }

      // Recompute the hash over the same fields used at write time
      const computedHash = crypto.createHash('sha256')
        .update(JSON.stringify({
          timestamp: entry.timestamp,
          sequence: entry.sequence,
          metric: entry.metric,
          value: entry.value,
          metadata: entry.metadata,
          previousHash: entry.previousHash,
        }))
        .digest('hex');

      if (computedHash !== entry.hash) {
        return {
          valid: false,
          brokenAt: entry.sequence,
          reason: `Hash mismatch at entry ${entry.sequence} — data was tampered with`,
        };
      }

      expectedPreviousHash = entry.hash;
    }

    return { valid: true, entries: this.entries.entries.length };
  }

  // Generate a signed usage report for export
  generateReport(periodStart, periodEnd) {
    const periodEntries = this.entries.entries.filter(e => {
      const ts = new Date(e.timestamp).getTime();
      return ts >= periodStart && ts <= periodEnd;
    });

    // Aggregate by metric
    const aggregated = {};
    for (const entry of periodEntries) {
      if (!aggregated[entry.metric]) {
        aggregated[entry.metric] = { total: 0, count: 0 };
      }
      aggregated[entry.metric].total += entry.value;
      aggregated[entry.metric].count += 1;
    }

    const report = {
      generatedAt: new Date().toISOString(),
      periodStart: new Date(periodStart).toISOString(),
      periodEnd: new Date(periodEnd).toISOString(),
      metrics: aggregated,
      entryCount: periodEntries.length,
      chainIntegrity: this.verifyIntegrity(),
      // Include the first and last hash for verification
      firstHash: periodEntries[0]?.hash,
      lastHash: periodEntries[periodEntries.length - 1]?.hash,
    };

    // Sign the report so we can verify it wasn't modified in transit
    report.signature = crypto.createSign('RSA-SHA256')
      .update(JSON.stringify(report))
      .sign(this.signingKey, 'hex');

    return report;
  }

  saveLedger() {
    fs.writeFileSync(this.ledgerPath, JSON.stringify(this.entries, null, 2));
  }
}

// Usage in your application:
const ledger = new UsageLedger('/var/lib/myapp/usage-ledger.json', privateKey);

// Record every billable event
app.use('/api/*', (req, res, next) => {
  res.on('finish', () => {
    if (res.statusCode < 400) {  // Only count successful requests
      ledger.recordUsage('api_calls', 1, {
        endpoint: req.path,
        method: req.method,
      });
    }
  });
  next();
});

The export process is simple: the customer runs a CLI command, copies the signed report to a USB drive (or prints it on paper, we've seen that too), and delivers it through whatever channel works for their security requirements. We verify the chain integrity and the signature, calculate the bill, and send an invoice.

Is it perfect? No. A determined adversary could stop recording events. But that's what audit clauses in your enterprise contract are for. The tamper-evident chain proves that what was recorded hasn't been modified. That's usually enough.


Strategy 5: Trusted Execution with SGX/TPM

The Bank Vault Metaphor

Imagine your license validation code is a vault inspector. In a normal environment, the inspector walks into the bank, checks the vault, and reports back. But what if the bank is... untrustworthy? What if someone could intercept the inspector on the way out and change their report from "vault is empty" to "vault is full of gold"?

Trusted Execution Environments (TEEs) solve this by putting the inspector in a bulletproof, tamper-proof bubble. The inspector can see out (read data), but nobody can see in (read the inspector's secrets) or modify the inspector's report. The bubble is sealed by the CPU hardware itself, and only Intel/AMD/ARM can forge the bubble's seal.

In practice, your license validation logic runs inside an SGX enclave or a Confidential VM. The enclave can:

  1. Store the license validation key in hardware-sealed memory
  2. Perform the validation check where no other software can observe it
  3. Generate an attestation report proving "yes, the real validation code ran, and here's what it found"
 Normal execution:
 ┌─────────────────────────────────────────┐
 │ OS / Hypervisor (can see everything)    │
 │  ┌──────────────────────────────────┐   │
 │  │ Your App                         │   │
 │  │   license_check() → true/false   │   │  Anyone can patch this binary
 │  └──────────────────────────────────┘   │  and make it always return true.
 └─────────────────────────────────────────┘

 SGX execution:
 ┌─────────────────────────────────────────┐
 │ OS / Hypervisor (CANNOT see inside)     │
 │  ┌─ SGX Enclave ─────────────────────┐  │
 │  │ ┌──────────────────────────────┐  │  │
 │  │ │ license_check() → true/false │  │  │  Tampering = enclave refuses
 │  │ │ Sealed key material          │  │  │  to run. Hardware-enforced.
 │  │ └──────────────────────────────┘  │  │
 │  │ Attestation: "I am the real code" │  │
 │  └───────────────────────────────────┘  │
 └─────────────────────────────────────────┘

The Honest Assessment

Cool tech. Brutal implementation. Only do this if your customer literally works for a three-letter agency.

Here's why:

  • Hardware requirements: SGX requires specific Intel CPUs (Xeon E3 v5+ or select Xeon Scalable). AMD SEV is more broadly available but has different trade-offs. Not every server in your customer's data center will support it.
  • Development complexity: Writing enclave code means learning a new SDK, dealing with limited enclave memory (typically 128MB-256MB), and accepting that you can't make syscalls from inside the enclave. No file I/O. No network calls. No console.log. Debugging is... an experience.
  • Supply chain concerns: You're trusting Intel/AMD's attestation infrastructure. For some government customers, that's fine. For others, trusting a CPU vendor is itself a security concern. (Welcome to the fun hall of mirrors that is government InfoSec.)
  • Maintenance burden: Every CPU microcode update can potentially change attestation measurements. You need a process for updating your expected measurement values when hardware gets patched.

We've implemented this exactly once, for a customer whose threat model included "a nation-state adversary with physical access to the servers." For that use case, SGX was the right answer. For everyone else, Strategies 1-4 are more than sufficient.


Choosing Your Strategy: The Decision Matrix

Not every customer needs the same approach. Here's how we decide:

Scenario | Strategy | Why
Standard enterprise, occasional connectivity | Strategy 3 (Grace Periods) | Simple. Customers understand it. Allows revocation.
Fully air-gapped, perpetual license | Strategy 1 (Signed Files) | No connectivity assumptions. Just works.
Air-gapped + anti-sharing requirement | Strategy 1 + 2 (Signed + Hardware) | Cluster binding prevents casual copying.
Usage-based pricing, quarterly settlements | Strategy 4 (Offline Metering) | Tamper-evident tracking. Contract-backed trust.
Defense/intelligence, maximum paranoia | Strategy 5 (Trusted Execution) | Hardware-backed. For when trust is not optional.
Trial/POC in air-gapped environment | Strategy 1 (Signed Files) with 30-day exp | Low friction. Get the POC done. Worry about production licensing later.
Multi-cluster enterprise deployment | Strategy 1 + 2 with emergency tokens | One license per cluster, with DR provisions.

The Combination Play

In practice, most of our enterprise customers end up with Strategy 1 + Strategy 3: a signed license file with a built-in grace period for re-validation. It handles the common case (occasional connectivity for renewals) and the edge case (fully offline for extended periods).

The signed file provides the base entitlements. The grace period provides flexibility. The combination covers about 90% of enterprise scenarios without over-engineering.
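Wired together, the two strategies reduce to a single decision function: the signature check (Strategy 1) gates everything, and the days since the last online validation (Strategy 3) decide how much to degrade. A sketch using the thresholds from earlier — the input shape is our illustration:

```javascript
// Strategy 1 + Strategy 3 combined into one decision.
// `signatureValid` comes from the JWT verification step;
// `lastOnlineValidationMs` is the stored timestamp of the last check-in.
function effectiveLicenseState({ signatureValid, lastOnlineValidationMs }, nowMs = Date.now()) {
  if (!signatureValid) return 'invalid';

  const daysSince = (nowMs - lastOnlineValidationMs) / (1000 * 60 * 60 * 24);
  if (daysSince < 90) return 'active';       // within revalidation interval
  if (daysSince < 90 + 30) return 'grace';   // 30-day grace period
  return 'expired';                          // degrade, never hard-block
}
```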


Implementation Checklist: The Stuff You'll Forget

We forgot at least half of these on our first implementation. Learn from our mistakes:

  • Key management: Where does the private key live? Who has access? How do you rotate it? (Not in the Docker image. NOT IN THE DOCKER IMAGE.)
  • Clock skew: Air-gapped servers often have drifting clocks. Use generous clockTolerance values and consider NTP alternatives like PTP or GPS-synced clocks.
  • Renewal workflow: How does the customer get a new license when the old one expires? Email? USB? Carrier pigeon? Document it.
  • Revocation: How do you invalidate a compromised license? With signed files, you can't — unless you push a new public key. Plan for this.
  • Graceful degradation: What happens when the license expires? Hard stop = angry customer. Read-only mode = mild inconvenience. Choose wisely.
  • Tampering alerts: Log and alert on validation failures. Three failed validations in a row might mean corruption. Thirty might mean someone's poking at your licensing.
  • Customer support tooling: Build an internal dashboard for looking up license status by customer. Your support team will thank you approximately 400 times per quarter.
  • Documentation: Write air-gapped installation instructions that assume the reader has never seen your product. Because they haven't. And they can't Google it.
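The tampering-alerts item deserves one concrete shape: a counter that distinguishes "flaky disk" from "someone is probing the license check". The thresholds are the ones from the checklist; the function shape is our sketch:

```javascript
// Track consecutive license validation failures and escalate.
// 3 in a row smells like corruption; 30 smells like tampering.
function makeFailureMonitor({ corruptionThreshold = 3, tamperThreshold = 30, alert = console.warn } = {}) {
  let consecutiveFailures = 0;
  return {
    recordResult(valid) {
      if (valid) { consecutiveFailures = 0; return 'ok'; }
      consecutiveFailures += 1;
      if (consecutiveFailures >= tamperThreshold) {
        alert(`[LICENSE] ${consecutiveFailures} consecutive validation failures — possible tampering`);
        return 'tamper-suspected';
      }
      if (consecutiveFailures >= corruptionThreshold) {
        alert(`[LICENSE] ${consecutiveFailures} consecutive validation failures — check license file integrity`);
        return 'corruption-suspected';
      }
      return 'failure';
    },
  };
}
```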

A Note on DRM (Or: Your Customers Are Not Pirates)

There's a temptation to treat enterprise licensing like digital rights management — to build elaborate anti-tampering systems that make it hard for customers to bypass your license checks.

Resist this temptation.

Your enterprise customers are businesses. They have legal departments. They signed contracts. They're paying you real money. Most of them will comply with license terms if you make it easy for them to do so.

The goal of offline licensing isn't to build an unbreakable fortress. It's to:

  1. Enable the sale — "yes, our software works in your air-gapped environment"
  2. Track entitlements — "this customer is licensed for 100 seats on 10 nodes"
  3. Support renewals — "your license expires in 30 days, here's how to renew"

If you spend six months building a licensing system that's 99.9% tamper-proof instead of two months building one that's 95% tamper-proof and ships four months sooner, you've lost four months of revenue. And the 0.1% of customers who would tamper with your licensing were never going to pay you anyway.


Start Simple. Close the Deal. Iterate.

Strategy 1 — cryptographically signed license files — handles 80% of enterprise use cases. It's simple to implement, simple to explain, and simple for customers to manage.

Ship that. Close the deal. Get revenue flowing.

Then, when a customer says "we need hardware binding," you add Strategy 2. When another says "we need usage metering," you add Strategy 4. Build licensing infrastructure in response to real customer requirements, not hypothetical ones.

The worst licensing system is the one that delays your first enterprise sale by six months because you were busy implementing TPM attestation for a customer who would have been perfectly happy with a JWT.

We know this because we were that team. Don't be us. Ship the JWT. Close the deal. Sleep well.

Then maybe start thinking about TPM attestation. But only if someone asks.

Enterprise Distribution · Licensing · Air-Gapped
