Gists/helm/opengist/README.md
Thomas Miceli 4d29a50e64 v1.12.1
2026-02-03 15:59:29 +07:00


Opengist Helm Chart

Version: 0.6.0 AppVersion: 1.12.1

Opengist Helm chart for Kubernetes. Check CHANGELOG.md for release notes.

Install

helm repo add opengist https://helm.opengist.io
 
helm install opengist opengist/opengist

Configuration

This section explains how to configure the Opengist instance using the Helm chart. The config.yml file used by Opengist is mounted from a Kubernetes Secret with a key config.yml and the values formatted as YAML.

Using values

Using Helm values, you can define the configuration under the config key:

config:
  log-level: "warn"
  log-output: "stdout"

This creates a Kubernetes Secret named opengist, whose YAML content is mounted into the pod as the config file read by Opengist.
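
As a sketch of the rendered object, the values above produce a Secret roughly equivalent to the following (release name opengist assumed):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: opengist   # follows the release name
stringData:
  config.yml: |
    log-level: "warn"
    log-output: "stdout"
```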

Using an existing secret

If you prefer not to store sensitive data in your Helm values, you can create a Kubernetes secret with a key config.yml and the values formatted as YAML. You can then reference this secret in the Helm chart with the configExistingSecret key.

If defined, this existing secret will be used instead of creating a new one.

configExistingSecret: <name of the secret>
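
Such a secret could be created from a manifest like the following sketch (the name opengist-config and the config keys shown are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: opengist-config
stringData:
  config.yml: |
    log-level: "warn"
    db-uri: postgres://user:password@db-host:5432/opengist
```

You would then set configExistingSecret: opengist-config in your Helm values.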

Metrics & Monitoring

Opengist exposes Prometheus metrics on a separate port (default: 6158). The metrics server runs independently from the main HTTP server for security.

Enabling Metrics

To enable metrics, set metrics.enabled: true in your Opengist config:

config:
  metrics.enabled: true

This will:

  1. Start a metrics server on port 6158 inside the container
  2. Create a Kubernetes Service exposing the metrics port

Available Metrics

| Metric Name | Type | Description |
|---|---|---|
| opengist_users_total | Gauge | Total number of registered users |
| opengist_gists_total | Gauge | Total number of gists |
| opengist_ssh_keys_total | Gauge | Total number of SSH keys |
| opengist_request_duration_seconds_* | Histogram | HTTP request duration metrics |

ServiceMonitor for Prometheus Operator

If you're using Prometheus Operator, you can enable automatic service discovery with a ServiceMonitor:

config:
  metrics.enabled: true

service:
  metrics:
    serviceMonitor:
      enabled: true
      labels:
        release: prometheus  # match your Prometheus serviceMonitorSelector

Manual Prometheus Configuration

If you're not using Prometheus Operator, you can configure Prometheus to scrape the metrics endpoint directly:

scrape_configs:
  - job_name: 'opengist'
    static_configs:
      - targets: ['opengist-metrics:6158']
    metrics_path: /metrics

Or use Kubernetes service discovery:

scrape_configs:
  - job_name: 'opengist'
    kubernetes_sd_configs:
      - role: service
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_component]
        regex: metrics
        action: keep
      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
        regex: opengist
        action: keep

Dependencies

Meilisearch Indexer

By default, Opengist uses the bleve indexer. It is NOT available when there are multiple replicas of the Opengist pod (only one pod can open the index at a time).

Instead, for multi-replica setups, you MUST use the meilisearch indexer.

By setting meilisearch.enabled: true, the Meilisearch chart will be deployed as well. You must define the meilisearch.host (the Kubernetes Service) and meilisearch.key (the value generated by Meilisearch) values to connect to the Meilisearch instance in your Opengist config:

index: meilisearch
index.meili.host: http://opengist-meilisearch:7700 # pointing to the K8S Service
index.meili.api-key: MASTER_KEY                    # generated by Meilisearch

If you want to use the bleve indexer, you need to set the replicas to 1.
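
As a sketch, a single-replica values fragment that stays on the default bleve indexer could look like this:

```yaml
replicaCount: 1     # bleve requires a single replica
meilisearch:
  enabled: false    # no Meilisearch subchart deployed
# config.index is left unset: bleve is the default indexer
```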

Passing Meilisearch configuration via nested Helm values

When using the Helm CLI with --set, avoid mixing a scalar config.index value with nested config.index.meili.* keys. Instead, use a nested map with a type field, which the chart flattens automatically. Example:

helm template opengist ./helm/opengist \
  --set statefulSet.enabled=true \
  --set replicaCount=2 \
  --set persistence.enabled=true \
  --set persistence.existingClaim=opengist-shared-rwx \
  --set postgresql.enabled=false \
  --set config.db-uri="postgres://user:pass@db-host:5432/opengist" \
  --set meilisearch.enabled=true \
  --set config.index.type=meilisearch \
  --set config.index.meili.host="http://opengist-meilisearch:7700" \
  --set config.index.meili.api-key="MASTER_KEY"

Rendered config.yml fragment:

index: meilisearch
index.meili.host: http://opengist-meilisearch:7700
index.meili.api-key: MASTER_KEY

How it works:

  • You provide a map under config.index with keys type and meili.
  • The template detects config.index.type and rewrites index: <type>.
  • Nested config.index.meili.host / api-key are lifted to flat keys index.meili.host and index.meili.api-key required by Opengist.

If you set --set config.index=meilisearch directly and also try to set --set config.index.meili.host=..., Helm will first create the nested structure then overwrite it with the scalar, losing the host. Always prefer the config.index.type pattern for CLI usage.
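
The same nested structure can also be written in a values file passed with -f, which sidesteps the --set ordering pitfall entirely (the file name is illustrative):

```yaml
# values-meili.yaml
meilisearch:
  enabled: true
config:
  index:
    type: meilisearch
    meili:
      host: http://opengist-meilisearch:7700
      api-key: MASTER_KEY
```

Applied with helm template opengist ./helm/opengist -f values-meili.yaml, this renders the same config.yml fragment shown above.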

PostgreSQL Database

By default, Opengist uses an SQLite database. If needed, this chart can also deploy a PostgreSQL instance.

By setting postgresql.enabled: true, the Bitnami PostgreSQL chart will be deployed as well. You must define the postgresql.host, postgresql.port, postgresql.database, postgresql.username and postgresql.password values to connect to the PostgreSQL instance.

Then define the connection string in your Opengist config:

db-uri: postgres://user:password@opengist-postgresql:5432/opengist

Note: opengist-postgresql is the name of the K8S Service deployed by this chart.
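
Putting the pieces together, a values fragment for the bundled PostgreSQL might look like this sketch (credentials are placeholders; if config.db-uri is omitted, the chart auto-generates it from the auth values, as described under Database Configuration below):

```yaml
postgresql:
  enabled: true
  global:
    postgresql:
      auth:
        username: opengist
        password: changeme   # placeholder, change for production
        database: opengist
# config.db-uri omitted: the chart auto-generates
# postgres://opengist:changeme@<release-name>-postgresql:5432/opengist
```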

Database Configuration

You can supply an externally managed database connection explicitly via config.db-uri (PostgreSQL/MySQL) or enable the bundled PostgreSQL subchart.

Behavior:

  • If postgresql.enabled: true and config.db-uri is omitted, the chart auto-generates: postgres://<username>:<password>@<release-name>-postgresql:<port>/<database> using values under postgresql.global.postgresql.auth.*.
  • If any of username/password/database are missing, templating fails fast with an error message.
  • If you prefer an external database or a different Postgres distribution, set postgresql.enabled: false and provide config.db-uri yourself.

Licensing note: Bitnami's PostgreSQL distribution may have licensing constraints. For strictly open alternatives use an external managed PostgreSQL/MySQL service and disable the subchart.

Multi-Replica Requirements

Running more than one Opengist replica (Deployment or StatefulSet) requires:

  1. Non-SQLite database (config.db-uri must start with postgres:// or mysql://).
  2. Shared RWX storage if using StatefulSet with replicaCount > 1 (provide persistence.existingClaim). The chart now fails fast if you attempt replicaCount > 1 without an explicit shared claim to prevent silent data divergence across per-pod PVCs.

The chart will fail fast during templating if these conditions are not met when scaling above 1 replica.

Examples:

  • External PostgreSQL:
postgresql:
  enabled: false
config:
  db-uri: postgres://user:pass@db-host:5432/opengist
  index: meilisearch
statefulSet:
  enabled: true
replicaCount: 2
persistence:
  existingClaim: opengist-shared-rwx

Bundled PostgreSQL (auto db-uri):

postgresql:
  enabled: true
config:
  index: meilisearch
statefulSet:
  enabled: true
replicaCount: 2
persistence:
  existingClaim: opengist-shared-rwx

Recovering from an initial misconfiguration

If you previously scaled a StatefulSet above 1 replica without an existingClaim, each pod received its own PVC and only one held the authoritative /opengist data. To consolidate:

  1. Scale down to 1 replica (keep the pod with the desired data):
kubectl scale sts/opengist --replicas=1
  2. (Optional) Inspect the other PVCs and manually copy any missing files by temporarily attaching them to a debug pod.
  3. Create or provision a ReadWriteMany (NFS / CephFS / Longhorn RWX / etc.) PersistentVolumeClaim named (for example) opengist-shared-rwx.
  4. Update values with persistence.existingClaim: opengist-shared-rwx and redeploy.
  5. Scale back up:
kubectl scale sts/opengist --replicas=2

Going forward, all replicas mount the same shared volume and data remains consistent.
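
For the optional inspection step, a throwaway debug pod mounting one of the orphaned PVCs can be sketched as follows (the pod name and claim name are assumptions; check kubectl get pvc for your actual PVC names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-debug
spec:
  containers:
    - name: shell
      image: busybox:1.36
      command: ["sleep", "3600"]   # keep the pod alive for inspection
      volumeMounts:
        - name: data
          mountPath: /old-data     # orphaned data appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: opengist-opengist-1   # assumed per-pod PVC name
```

You can then kubectl exec -it pvc-debug -- sh and copy any missing files out before deleting the old PVC.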

Quick Start Examples

Common deployment scenarios with copy-paste configurations:

Scenario 1: Single replica with SQLite (default)

Minimal local development setup with ephemeral or persistent storage:

# Ephemeral (emptyDir)
statefulSet:
  enabled: true
replicaCount: 1
persistence:
  enabled: false

# OR with persistent RWO storage
statefulSet:
  enabled: true
replicaCount: 1
persistence:
  enabled: true
  mode: perReplica  # default

Scenario 2: Multi-replica with external PostgreSQL + existing RWX PVC

Production HA setup with your own database and storage:

statefulSet:
  enabled: true
replicaCount: 2
postgresql:
  enabled: false
config:
  db-uri: "postgres://user:pass@db-host:5432/opengist"
  index: meilisearch  # required for multi-replica
persistence:
  enabled: true
  mode: shared
  existingClaim: "opengist-shared-rwx"  # pre-created RWX PVC
meilisearch:
  enabled: true
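
The pre-created RWX claim referenced in this scenario could be provisioned with a manifest such as the following (the storage class name is an assumption about your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: opengist-shared-rwx
spec:
  accessModes:
    - ReadWriteMany              # required for multi-replica sharing
  storageClassName: nfs-client   # assumed RWX-capable class
  resources:
    requests:
      storage: 10Gi
```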

Scenario 3: Multi-replica with bundled PostgreSQL + auto-created RWX PVC

Chart manages both database and storage:

statefulSet:
  enabled: true
replicaCount: 2
postgresql:
  enabled: true
  global:
    postgresql:
      auth:
        username: opengist
        password: changeme
        database: opengist
config:
  index: meilisearch
persistence:
  enabled: true
  mode: shared
  existingClaim: ""  # empty to trigger auto-creation
  create:
    enabled: true
    accessModes: [ReadWriteMany]
    storageClass: "nfs-client"  # your RWX-capable storage class
    size: 20Gi
meilisearch:
  enabled: true

Persistence Modes

The chart supports two persistence strategies controlled by persistence.mode:

| Mode | Behavior | Scaling | Storage Objects | Recommended Use |
|---|---|---|---|---|
| perReplica (default) | One PVC per pod via StatefulSet volumeClaimTemplates (RWO) when no existingClaim | Safe only at replicaCount=1 unless you supply existingClaim | One PVC per replica | Local dev, quick single-node trials |
| shared | Single RWX PVC (existing or auto-created) mounted by all pods | Horizontally scalable | One shared PVC | Production / HA |

Configuration examples:

Per-replica (single node):

statefulSet:
  enabled: true
persistence:
  mode: perReplica
  enabled: true
  accessModes:
    - ReadWriteOnce

Shared (scale ready) with an existing RWX claim:

statefulSet:
  enabled: true
replicaCount: 2
persistence:
  mode: shared
  existingClaim: opengist-shared-rwx

Shared with chart-created RWX PVC:

statefulSet:
  enabled: true
replicaCount: 2
persistence:
  mode: shared
  existingClaim: ""    # leave empty
  create:
    enabled: true
    accessModes: [ReadWriteMany]
    size: 10Gi

When mode=shared and existingClaim is empty, the chart creates a single PVC named <release>-shared (suffix configurable via persistence.create.nameSuffix).

Fail-fast conditions:

  • replicaCount>1 & missing external DB (still enforced).
  • replicaCount>1 & persistence disabled.
  • replicaCount>1 & neither existingClaim nor mode=shared.
  • mode=shared & create.enabled=true but accessModes lacks ReadWriteMany.

Migration (perReplica → shared): scale down to 1, create RWX claim (or rely on create.enabled), copy data, switch mode to shared, scale up.

Troubleshooting

Common Errors and Solutions

Error: "replicaCount=2 requires PostgreSQL/MySQL config.db-uri; scheme 'sqlite' unsupported"

  • Cause: Multi-replica with the SQLite database
  • Solution: Either scale down to replicaCount: 1 or configure an external database:

config:
  db-uri: "postgres://user:pass@host:5432/opengist"

Error: "replicaCount=2 requires either persistence.existingClaim OR persistence.mode=shared"

  • Cause: Multi-replica without shared storage
  • Solution: Choose one approach:

# Option A: Use an existing PVC
persistence:
  existingClaim: "my-rwx-pvc"

# Option B: Let the chart create the PVC
persistence:
  mode: shared
  create:
    enabled: true
    accessModes: [ReadWriteMany]

Error: "persistence.mode=shared create.accessModes must include ReadWriteMany for multi-replica"

  • Cause: The chart-created PVC lacks the RWX access mode
  • Solution: Ensure RWX is specified:

persistence:
  create:
    accessModes:
      - ReadWriteMany

Pods mount different data (data divergence)

  • Cause: Previously scaled with perReplica mode and replicaCount > 1
  • Solution: Follow the recovery steps in the "Recovering from an initial misconfiguration" section above

PVC creation fails: "no storage class available with ReadWriteMany"

  • Cause: The cluster lacks an RWX-capable storage provisioner
  • Solution: Install a storage provider (NFS, CephFS, Longhorn) or use externally managed storage and provide existingClaim