Architecture — what the chart deploys
This document describes the shape of a running deployment: which pods exist, how they talk to each other, what state they hold, and where customer-controlled boundaries sit. Read this before `deploy.md` so the install steps make sense in context.
## Components
A complete deployment consists of four logical pieces:
| Component | Role | Always required |
|---|---|---|
| Application | Rails web app — serves the UI, accepts uploads, runs background work, enforces licensing | Yes |
| PostgreSQL 15+ | Primary datastore — users, content, licenses, audit trail | Yes |
| Redis 7+ | Cache, session store, ActionCable backplane, job queues | Yes |
| Object storage (optional) | Active Storage backend for uploaded files; alternative to a PVC | No (PVC default) |
PostgreSQL and Redis can be bundled (chart deploys them as subcharts) or external (customer points at endpoints they already operate). See `two-modes.md` for the choice and its implications.
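As a rough illustration of how that choice surfaces at install time — the key names below are hypothetical, not the chart's actual schema; the chart's own `values.yaml` is authoritative:

```yaml
# Hypothetical values.yaml excerpt — key names are illustrative only.
postgresql:
  bundled: true     # chart deploys the postgresql subchart
redis:
  bundled: true     # chart deploys the redis subchart
# Production mode instead sets bundled: false and supplies endpoints;
# see the Production topology below.
```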
## Pod topology — Bundled Evaluation mode

```
┌─────────────────────────────────────────────────────────────┐
│  OpenShift project: <customer-namespace>                    │
│                                                             │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐     │
│  │ application  │   │ postgresql   │   │ redis        │     │
│  │ (Rails)      │   │              │   │              │     │
│  │ Deployment   │   │ StatefulSet  │   │ Deployment   │     │
│  │ replicas: 1  │   │ replicas: 1  │   │ replicas: 1  │     │
│  └──────┬───────┘   └──────┬───────┘   └──────┬───────┘     │
│         │                  │                  │             │
│  ┌──────▼───────┐   ┌──────▼───────┐   ┌──────▼───────┐     │
│  │ Service      │   │ Service      │   │ Service      │     │
│  │ port 3000    │   │ port 5432    │   │ port 6379    │     │
│  └──────┬───────┘   └──────────────┘   └──────────────┘     │
│         │                                                   │
│  ┌──────▼───────┐                                           │
│  │ Route        │                                           │
│  │ (TLS edge)   │                                           │
│  └──────┬───────┘                                           │
│         │                                                   │
│  ┌──────▼───────────────────────────────────────┐           │
│  │ PVCs: app-storage, postgres-data, redis-data │           │
│  └──────────────────────────────────────────────┘           │
└─────────┼───────────────────────────────────────────────────┘
          │
          ▼
    end users (HTTPS)
```
## Pod topology — Production mode

```
┌─────────────────────────────────────────────────────────────┐
│  Customer-managed (outside chart)                           │
│                                                             │
│   ┌──────────────────┐          ┌──────────────────┐        │
│   │ Postgres         │          │ Redis            │        │
│   │ (Crunchy /       │          │ (Redis           │        │
│   │  CloudNativePG   │          │  Enterprise /    │        │
│   │  / managed)      │          │  managed)        │        │
│   └────────┬─────────┘          └─────────┬────────┘        │
└────────────┼──────────────────────────────┼─────────────────┘
             │                              │
             │     internal Service DNS     │
             ▼                              ▼
┌─────────────────────────────────────────────────────────────┐
│  OpenShift project: <customer-namespace>                    │
│                                                             │
│  ┌──────────────┐                                           │
│  │ application  │                                           │
│  │ Deployment   │                                           │
│  │ replicas: N  │  (HA possible — see "Replicas" below)     │
│  └──────┬───────┘                                           │
│         │                                                   │
│  ┌──────▼───────┐                                           │
│  │ Service      │                                           │
│  └──────┬───────┘                                           │
│         │                                                   │
│  ┌──────▼───────┐                                           │
│  │ Route        │                                           │
│  └──────┬───────┘                                           │
│         │                                                   │
│  ┌──────▼──────────────────────────────┐                    │
│  │ PVC: app-storage (or S3-compatible) │                    │
│  └─────────────────────────────────────┘                    │
└─────────┼───────────────────────────────────────────────────┘
          │
          ▼
    end users (HTTPS)
```
The chart-installed surface shrinks dramatically — postgres and redis become inputs to the chart instead of outputs.
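A production-mode `values.yaml` might express this shape roughly as follows; every key name here is an assumption for illustration, not the chart's documented schema:

```yaml
# Illustrative only — consult the chart's own values.yaml for real keys.
replicaCount: 3                 # N application pods; see "Replicas" below
postgresql:
  bundled: false                # chart no longer deploys postgres
  host: pg-primary.databases.svc.cluster.local   # placeholder endpoint
  port: 5432
redis:
  bundled: false
  host: redis-ha.cache.svc.cluster.local         # placeholder endpoint
  port: 6379
```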
## Networking
- Inbound — one OpenShift Route per deployment. TLS terminates at the cluster edge by default (cluster wildcard cert) or in the application pod (passthrough/reencrypt) when the customer wants end-to-end TLS. The health endpoint (`/health`) is exposed but unauthenticated; the customer's monitoring can probe it.
- Internal — the application talks to postgres on TCP/5432 and redis on TCP/6379, both via cluster-internal Service DNS. No external egress is required for normal operation.
- License validation — fully offline. The image ships with the public key needed to verify the customer's license file; no call-home, no internet round-trip on startup.
- Outbound (optional) — when an admin sets the Compliance syslog host field on `/admin/site_settings/edit` (or the matching `config.complianceSyslogHost` Helm value / `COMPLIANCE_SYSLOG_HOST` env var), the application emits RFC 5424 syslog frames over plain TCP to that destination — typically a sibling rsyslog/SIEM Service or an external collector reachable from the cluster. If the field is blank, events are written to the local UNIX socket (`/dev/log`) instead — the legacy single-host pattern that expects a host-level rsyslog forwarder. Either way, the application makes no other outbound connections; license validation and assets are fully offline.
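For example, routing audit events to an in-cluster rsyslog Service might look like this — the hostname is a placeholder, and whether the port rides along in the same value is chart-specific:

```yaml
# values.yaml — illustrative audit syslog destination (placeholder host).
config:
  complianceSyslogHost: "rsyslog.logging.svc.cluster.local:514"
# Leave blank/unset to fall back to the local /dev/log socket.
```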
## Storage
| What | Where | Mode | Backed by |
|---|---|---|---|
| Active Storage uploads | `/rails/storage` | RWO PVC (default) or S3-compatible bucket | Cluster's default StorageClass, or customer-provided S3 |
| PostgreSQL data | postgres pod | RWO PVC | Cluster's default StorageClass (Bundled) / customer's storage (Production) |
| Redis persistence | redis pod | RWO PVC (optional) | Same as postgres |
| Application logs | `/rails/log` | emptyDir | Pod-local; ship via stdout for cluster log aggregation |
Multi-replica caveat. A `ReadWriteOnce` PVC for Active Storage is incompatible with `replicas > 1` — the second replica can't mount the volume. Production deployments running multiple replicas must either configure a `ReadWriteMany` storage class (NFS, CephFS, EFS) or point Active Storage at S3-compatible object storage. The chart defaults are safe for single-replica installs; the customer is expected to make the call for HA.
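Two ways the uploads volume could be configured for replicas > 1, sketched with hypothetical key names (the chart's actual values may differ):

```yaml
# Option A: a ReadWriteMany-capable storage class (key names illustrative).
storage:
  accessMode: ReadWriteMany
  storageClass: ocs-storagecluster-cephfs   # example RWX class on OpenShift

# Option B: S3-compatible object storage instead of a PVC.
# activeStorage:
#   service: s3
#   bucket: app-uploads
#   endpoint: https://s3.example.internal
```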
## Configuration model
Configuration arrives in three layers, in priority order:
- Build-time bake-in. A handful of values are fixed at image build time: the application name (white-label brand), the licensing public key, the build expiration date, and the pre-bundled feature modules. These cannot be overridden at deploy time without rebuilding the image.
- Helm values. Cluster-shape settings — replica counts, storage class, route hostname, secret references, postgres/redis endpoints. Set in `values.yaml`; persisted in the cluster as a Helm release.
- Admin UI. Day-2 operational settings — SMTP config, audit destination, session timeouts, white-label logo and brand override, banner text, use-agreement language. Persisted in the `site_settings` table; these settings survive chart upgrades.
The chart doesn't touch admin-UI settings; once a customer admin has made a choice in the application, it's authoritative.
## Secrets
The chart consumes the following secrets. The customer can either pre-create them and reference by name, or let the chart create them from values (Bundled Evaluation only — production-grade secret management means pre-creation).
| Key | Purpose | Required |
|---|---|---|
| `RAILS_MASTER_KEY` | Decrypts encrypted Rails credentials baked into the image | Yes |
| `SECRET_KEY_BASE` | Signs cookies and verifies session tokens | Yes |
| `POSTGRES_PASSWORD` | Database authentication | Yes (both modes) |
| `LICENSE_FILE` | Customer's signed license file (mounted as a file) | Yes (post-install) |
| `IMAGE_PULL_SECRET` | If the image lives in a credentialed registry | Conditional |
The application reads `RAILS_MASTER_KEY` and `SECRET_KEY_BASE` at boot; both must remain stable across pod restarts and replica scaling, or existing sessions are invalidated.
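Pre-creating the secret (the production-grade path) could look like the sketch below; the Secret name and exact key layout are assumptions — the chart defines the real contract:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets          # hypothetical name, referenced from values
type: Opaque
stringData:
  RAILS_MASTER_KEY: "<from the credentials delivery>"
  SECRET_KEY_BASE: "<generate once and keep stable>"
  POSTGRES_PASSWORD: "<database password>"
```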
## Replicas and high availability
- Single-replica (default). Adequate for evaluation, demo, and smaller production deployments. Pod restart causes a ~30s outage window. Suitable when the customer's RTO permits restart time.
- Multi-replica. Requires `ReadWriteMany` storage or S3-backed Active Storage. Sessions and cache live in Redis, so multiple application pods are stateless and Route load-balancing works. Database and redis backing must support concurrent connections (default postgres/redis configurations do).
## Observability
The application emits:
- Stdout / stderr logs — JSON-formatted in production; consumable by OpenShift's cluster logging stack (EFK / Loki) without further configuration.
- Audit events (optional) — a separate stream of compliance-grade audit events. Two destinations are supported simultaneously: on-cluster (file or stdout) and remote syslog/SIEM. Configured via the admin UI; off by default.
- Health endpoint — `/health` returns 200 when the application has started and the database is reachable. Used by the cluster's liveness and readiness probes; safe for external monitoring.
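In probe terms, the `/health` contract above maps onto something like the following (a sketch, not the chart's literal template; port 3000 matches the Service port shown in the topology):

```yaml
# Illustrative liveness/readiness wiring for the application container.
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 60
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 5
```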
There is no built-in metrics endpoint (Prometheus, etc.) at this time. Performance monitoring is done through the customer's existing cluster observability stack via container CPU/memory metrics.
## Backup considerations
The chart does not provide backup automation. Components and their backup ownership:
| Asset | Where it lives | Backup ownership |
|---|---|---|
| PostgreSQL data | postgres PVC (Bundled) / customer postgres (Production) | Customer — pg_dump schedule, WAL archive, etc. |
| Active Storage uploads | app-storage PVC or S3 bucket | Customer — volume snapshot or S3 lifecycle |
| License file | Secret + mounted file | Customer — keep the original file from initial delivery |
| Configuration | Helm release + admin-UI settings table | Customer — `helm get values` for chart, postgres backup for app settings |
Disaster recovery for an entire deployment is: restore postgres, restore the Active Storage volume (or S3 bucket), run `helm install` against the same values used originally, and the application reads its state back from the restored database.