Deployment procedure

Step-by-step install. The procedure is the same for both Bundled Evaluation and Production modes; the differences are isolated to §2 (pre-create resources) and the contents of the values.yaml file you write in §1.

If you have not yet decided which mode you're installing, read two-modes.md first.


§0 — Prerequisites

On your workstation

  • oc (OpenShift CLI), 4.12 or later: authenticate to the cluster, inspect resources, and run the install commands below.
  • helm, 3.11 or later: render and install the chart.

On macOS:

brew install openshift-cli helm
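
Confirm the tools on your PATH meet the minimums:

oc version --client
helm version --short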

Cluster access

You need:

  • Login to the target OpenShift cluster with permission to create resources in your project / namespace. A namespace-admin role is sufficient for most installs; cluster-admin is not required.
  • An empty (or pre-created) project / namespace dedicated to this deployment. The chart does not create namespaces and does not install cluster-scoped resources.
  • The cluster's StorageClass name if your default isn't suitable or if you want to override it. List with oc get storageclass.
  • The cluster's ingress domain (used for the application's Route hostname when you don't pin a custom one):

    oc get ingresscontroller default -n openshift-ingress-operator \
      -o jsonpath='{.status.domain}'

Credentials and assets from the vendor

The vendor delivers two artifact formats per release. Pick the one that matches your install path:

  • Helm chart (<chart>-<version>.tgz, ~200 KB): for OpenShift / Kubernetes installs (this guide). Contains chart manifests + Bitnami subcharts; no container image.
  • Offline tarball (.tar.gz, ~800 MB): for single-VM Docker installs (separate docker compose deploy guide). You can also extract the image-tarball portion and mirror it for an OpenShift install instead of pulling from the vendor registry.

You'll always need:

  • A license file. Signed JSON or .lic file unique to your organization.
  • The Helm chart artifact for OpenShift installs.
  • The application image. Either a public registry path (ghcr.io/<vendor>/<app>:<version>) the cluster can pull from with a vendor-issued credential, OR — for air-gapped or highest-assurance environments — the image tarball extracted from the offline-tarball delivery and mirrored into your internal registry.
  • A registry pull credential when pulling from a credentialed registry.

Air-gapped or restricted-egress clusters

If your cluster cannot reach the public internet, you'll do the mirroring yourself before the install:

  1. Mirror the application image into an internal registry the cluster can pull from:

    skopeo copy oci-archive:<image-tarball-from-vendor>.tar \
     docker://registry.internal.example.com/<app>:<version>
    

    The vendor provides the source tarball when this delivery model is in use.

  2. Mirror the chart's PostgreSQL and Redis images if you're using Bundled mode. The chart's defaults reference Bitnami images on Docker Hub. Two paths:

(a) Manual mirror — pull from bitnamilegacy/ and push to your internal registry:

   skopeo copy docker://bitnamilegacy/postgresql:<tag> \
     docker://registry.internal.example.com/postgresql:<tag>
   skopeo copy docker://bitnamilegacy/redis:<tag> \
     docker://registry.internal.example.com/redis:<tag>

(b) Vendor-prepared bundle — request the offline-tarball delivery to be built with the "bundle dependencies" option. That tarball includes pre-saved postgres + redis image tarballs alongside the application image, so a single delivery covers all three. Ask your vendor contact whether that build option is available for your release; the resulting tarball is ~175 MB larger but eliminates two mirroring steps.

  3. Override image.repository and the subchart image repositories in your values.yaml to point at the internal mirrors, as sketched below.
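
A minimal values.yaml fragment for that override, reusing the mirror host from the skopeo commands above. The image.repository, postgresql.image.repository, and redis.image.repository keys are the ones this guide already references; check the chart's commented values reference for anything else you mirror:

image:
  repository: registry.internal.example.com/<app>
postgresql:
  image:
    repository: registry.internal.example.com/postgresql
redis:
  image:
    repository: registry.internal.example.com/redis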

In Production mode the chart doesn't pull postgres/redis at all — your operator-managed datastores already exist on-cluster and the chart only sets connection details.


§1 — Prepare values.yaml

Write a values.yaml for your install based on the templates in two-modes.md. Save this file alongside your deployment runbooks; you'll need it again for every upgrade.

A complete reference of every value the chart accepts ships with the chart at deploy/helm/<app>/values.yaml, fully commented in place. The two-modes guide has working values.yaml examples for both modes.

Decisions to make before writing values

  • Image source. Vendor-public registry (with pull secret) vs customer-internal mirror.
  • Mode. Bundled Evaluation vs Production (see two-modes.md).
  • Route hostname. Either let OpenShift assign one from the cluster's ingress domain, or pin a custom hostname (and provide a TLS cert).
  • TLS termination. edge (cluster wildcard cert) is the simplest; reencrypt or passthrough if you need end-to-end TLS to the pod.
  • Storage class. The cluster default is fine for single-replica installs. Multi-replica deployments require a ReadWriteMany storage class (NFS, CephFS, EFS, Trident) or S3-backed Active Storage.
  • Replica count. 1 is safe and the default. 2+ requires RWX storage as above. A skeletal values sketch covering these decisions follows this list.
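
For orientation only, a skeletal values.yaml touching each decision might look like the following. Key paths not already cited elsewhere in this guide (replicaCount, route.host, route.tls.termination, persistence.storageClass) are illustrative guesses; confirm the real names against the commented reference at deploy/helm/<app>/values.yaml or the two-modes.md templates.

image:
  repository: ghcr.io/<vendor>/<app>   # or your internal mirror
  pullSecrets:
    - name: <app>-registry             # created in §2
replicaCount: 1                        # hypothetical key name; 2+ needs RWX storage
route:
  host: ""                             # hypothetical key name; empty = cluster-assigned hostname
  tls:
    termination: edge                  # hypothetical key name; reencrypt/passthrough for end-to-end TLS
persistence:
  storageClass: ""                     # hypothetical key name; empty = cluster default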

Why the chart pins Puma to 8080 (informational, no override needed)

OpenShift's restricted-v2 SCC denies binding to ports below 1024 — even port 80 — so the chart can't run a containerized web server on the conventional ports. The deployment ships with two non-privileged ports wired up via ConfigMap env vars:

  • HTTP_PORT = 3000: Thruster (HTTP/2 reverse proxy) — what the Service points at.
  • TARGET_PORT = 8080: Puma (Rails) — internal only, behind Thruster.

The Deployment overrides Rails's startup command to listen on 8080, so Thruster and Puma are aligned by default. Customers don't need to set anything for this to work; this is documented here so an operator running oc describe pod doesn't have to reverse-engineer why the container exposes 3000 instead of 80/443.
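
If you want to confirm the wiring on a running install, the Service's target port should resolve to 3000 (directly or via a named port). Assuming the Service carries the same instance label used elsewhere in this guide:

oc get svc -n <namespace> -l app.kubernetes.io/instance=<release-name> \
  -o jsonpath='{.items[0].spec.ports[0].targetPort}'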

Subchart image repository note

At time of writing, the chart's default Bundled-mode values.yaml requires an override of the Bitnami subchart image repositories because of upstream hosting changes:

postgresql:
  image:
    repository: bitnamilegacy/postgresql
redis:
  image:
    repository: bitnamilegacy/redis

If you're mirroring images to your own registry (recommended for production and required for air-gapped clusters), you'll point these at the internal mirror instead. The chart ships a values-sandbox.yaml example you can copy from.


§2 — Pre-create resources

The chart consumes resources it does not own. Create these in your target namespace before running helm install.

License Secret (both modes)

oc create secret generic <app>-license \
  --from-file=license.json=<path-to-license-file> \
  -n <namespace>

Reference via licensing.fileSecretName: <app>-license. The chart mounts this secret as a file at /rails/config/license.json and the application reads it on startup.
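
The corresponding values.yaml fragment:

licensing:
  fileSecretName: <app>-license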

Image-pull Secret (when registry is credentialed)

oc create secret docker-registry <app>-registry \
  --docker-server=<registry-host> \
  --docker-username=<service-account> \
  --docker-password=<service-token> \
  -n <namespace>

Reference via:

image:
  pullSecrets:
    - name: <app>-registry

Application secrets — Production mode only

In Production mode, you pre-create the secrets the chart references (Bundled mode generates a non-production-grade secret automatically on first install).

# Rails application secrets
oc create secret generic <app>-rails \
  --from-literal=RAILS_MASTER_KEY=<master-key-from-vendor> \
  --from-literal=SECRET_KEY_BASE=$(openssl rand -hex 64) \
  -n <namespace>

# Database password. The username is a role name, not a credential,
# so it goes in values.yaml under externalDatabase.username — not
# in this Secret. The Secret holds only the password.
oc create secret generic <app>-db \
  --from-literal=password=<db-password> \
  -n <namespace>

# Redis credentials — only if your redis requires auth
oc create secret generic <app>-redis \
  --from-literal=password=<redis-password> \
  -n <namespace>

References go in values.yaml:

secrets:
  existingSecret: <app>-rails
  generate: false
externalDatabase:
  username: <db-user>            # role name, e.g. "app"
  existingSecret: <app>-db       # Secret holding the password
externalRedis:
  existingSecret: <app>-redis    # only if redis is auth-protected

The Rails master key comes from the vendor and is unique per build. Do not generate or rotate it without coordinating with the vendor. SECRET_KEY_BASE is internal to your deployment; generate it once at install time and keep the value stable across upgrades, or sessions will be invalidated on every rollout.
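
If you later need the SECRET_KEY_BASE you generated (for example when re-creating the Secret in another namespace), read it back from the cluster rather than generating a new one:

oc get secret <app>-rails -n <namespace> \
  -o jsonpath='{.data.SECRET_KEY_BASE}' | base64 -d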

TLS certificate (when not using cluster wildcard)

If you're using a custom cert instead of the cluster's wildcard:

oc create secret tls <app>-tls \
  --cert=<path-to-cert.pem> \
  --key=<path-to-key.pem> \
  -n <namespace>

Reference via route.tls.certSecretName: <app>-tls.
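
The corresponding values.yaml fragment:

route:
  tls:
    certSecretName: <app>-tls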


§3 — Install

From a published Helm repository

helm repo add <vendor> https://<vendor>/charts
helm repo update

helm install <release-name> <vendor>/<chart-name> \
  --namespace <namespace> \
  --values values.yaml \
  --version <chart-version>

From a chart tarball

helm install <release-name> <chart-name>-<version>.tgz \
  --namespace <namespace> \
  --values values.yaml

What happens during install

  1. Helm renders the templates against your values.yaml.
  2. The application Deployment, Service, and Route roll out. In Bundled mode the postgres + redis subcharts deploy alongside.
  3. A migration Job runs as a Helm post-install,post-upgrade hook. It carries a busybox:1.36 init container that probes the configured ${PGHOST}:${PGPORT} with nc -z in a loop until the port opens — only then does the main bin/rails db:prepare db:seed container start. This guarantees the migration never races the database, whether postgres comes from the bundled subchart (still rolling) or an external operator-managed instance (already up but maybe behind a still-warming Service endpoint).
  4. While the init container waits and migrations run, the application pod may briefly crash-loop — its initial readiness check fails until the schema exists. This is expected and resolves automatically once the migration Job completes (typically within ~30 seconds end-to-end).
  5. Helm aborts the release with a clear failure if migrations themselves fail (backoffLimit: 2, restartPolicy: Never), so a broken migration won't deploy a half-running stack.

A successful install completes in 2–5 minutes on most clusters. Image pull latency is the largest variable.
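
To watch the rollout while you wait:

oc get pods -n <namespace> -w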


§4 — Verify

Pod health

oc get pods -n <namespace>

Expected state once the install settles: every pod Running, READY columns showing N/N. The migration pod will show 0/1 Completed (it's a one-shot Job).

Migration Job

oc get jobs -n <namespace>
oc logs -n <namespace> -l app.kubernetes.io/component=migration

Expected: Job Complete, log shows successful schema migrations ending with the bootstrap admin email and password.

Ingress

oc get route -n <namespace>
ROUTE_HOST=$(oc get route -n <namespace> \
  -l app.kubernetes.io/instance=<release-name> \
  -o jsonpath='{.items[0].spec.host}')
curl -I "https://$ROUTE_HOST/sign_in"

Expected: HTTP 200 from the sign-in page. (/health returns 302 to /sign_in for unauthenticated visitors; the readiness probe accepts both responses, so a 302 there is also healthy.)
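
To check the health endpoint directly (expect the 302 redirect to /sign_in when unauthenticated):

curl -I "https://$ROUTE_HOST/health"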

Random-UID confirmation (OpenShift)

oc exec deploy/<release-name>-<app> -n <namespace> -- id

Expected: uid=10006… (a high-numbered UID assigned by the cluster's restricted-v2 SCC). If you see uid=1000 or uid=0, the install was applied with a relaxed SCC; coordinate with your cluster admin to use restricted-v2.

helm test

The chart ships a smoke-test pod that probes /health from inside the cluster:

helm test <release-name> -n <namespace>

A pass confirms application + ingress + Service wiring end-to-end.
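
If the test fails, rerun it with the test pod's logs streamed to your terminal:

helm test <release-name> -n <namespace> --logs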


§5 — Initial admin login

The migration Job seeded a bootstrap admin user with a known initial password. The application forces a password change and Terms of Service acceptance on first login.

  1. Browse to https://<route-host> and sign in:
    • Email: admin@example.com
    • Password: Password123!
  2. The application prompts you to:
    • Change your email address (replace the placeholder).
    • Update your name.
    • Set a new password.
    • Accept the Terms of Service.
  3. After completing first-run setup, navigate to /licensing/installations/new and upload your license file (the vendor-provided file you stored as a Secret in §2).
  4. Confirm the license appears as Active at /licensing/installations.

§6 — Day-0 admin setup

Through the running application's admin UI:

  • Site Settings (/admin/site_settings/edit) — set the white-label brand name, upload a logo, configure the use-agreement banner if your environment requires one, choose Commercial vs Security Compliance logging mode, and (optional) enable license checkout mode (see below).

White-label lock: the brand-name field shows a small badge reflecting where the current value came from — default, build-provisioned (carried in from the build's white-label arg), or admin-locked (you saved it from this form). Saving the form switches it to admin-locked, which means future builds will not overwrite your customization. Until then, every release deploy can change the brand if the build was configured with a new white-label name.

License checkout mode: the License Checkout section has a single toggle. When ON, users with no active feature assignments are redirected to a self-service /checkout page on first sign-in, where they can claim available features themselves. Every assignment — admin or self-checkout — is locked to the user for 14 days; an admin override before that window expires requires a logged justification. The toggle defaults to OFF: all feature assignment goes through admins on the Users page (the original behavior). Toggle this based on your org's preferred onboarding model.

  • Users (/admin/users) — create user accounts and assign licensed features. Each assignment table now shows a lock badge ("Locked until <date>") on still-locked rows and a self badge on rows the user self-checked-out. A force-release before the lock expires opens a modal that requires a written reason; the reason lands in the audit trail.

  • Organizations (/admin/organizations) — adjust the default organization or add organizations if your deployment is multi-tenant.

These settings persist in the database and survive chart upgrades.


§7 — What to capture for support

Save these alongside your deployment runbook:

  • The values.yaml you installed with (recoverable from the release itself; see the command after this list).
  • The chart version (helm list -n <namespace>).
  • The image digest the deployment is running:

    oc get deploy/<release-name>-<app> -n <namespace> \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
  • The original license file (the Secret in the cluster is one copy; your own archive is another).
  • The bootstrap admin email and (post-change) password, in your password manager.
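
If the original values file has gone missing, recover the values the release was installed with from the cluster:

helm get values <release-name> -n <namespace> > installed-values.yaml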

When filing a support request later, include all of the above plus the diagnostic output described in troubleshooting.md.