added couchdb

This commit is contained in:
kaverkiev 2025-10-13 08:56:17 +03:00
parent 36b10be80f
commit b1eba7ca00
23 changed files with 1837 additions and 0 deletions

charts/couchdb/Chart.yaml

@@ -0,0 +1,20 @@
apiVersion: v1
appVersion: 3.5.0
description: A database featuring seamless multi-master sync, that scales from big
data to mobile, with an intuitive HTTP/JSON API and designed for reliability.
home: https://couchdb.apache.org/
icon: http://couchdb.apache.org/CouchDB-visual-identity/logo/CouchDB-couch-symbol.svg
keywords:
- couchdb
- database
- nosql
maintainers:
- email: kocolosk@apache.org
name: kocolosk
- email: willholley@apache.org
name: willholley
name: couchdb
sources:
- https://github.com/apache/couchdb-helm
- https://github.com/apache/couchdb-docker
version: 4.6.2

charts/couchdb/NEWS.md

@@ -0,0 +1,72 @@
# NEWS
## 4.6.2
- Added options to specify `erlangCookie` and `cookieAuthSecret` within the extra secret
## 4.6.1
- Update default CouchDB version to 3.5.0
## 4.5.7
- Add support for extra secrets not created by the chart, such as Hashicorp Vault or OpenBao.
## 4.5.6
- Add `extraPorts` to the network policy when the network policy is enabled.
## 4.5.5
- Give the default port on the CouchDB `Service` a name so that `service.extraPorts` can be used properly.
## 4.5.4
- Expose `extraPorts` and `service.extraPorts` to allow specifying arbitrary ports to be exposed from the CouchDB pods
## 4.5.3
- Fix ability to define pull secrets using `imagePullSecrets`.
## 4.5.2
- Allow specifying a `persistentVolumeClaimRetentionPolicy` in both the primary and secondary StatefulSet.
## 4.5.1
- Update default CouchDB version to 3.3.3.
## 4.5.0
- Add capability to set pod and container level securityContext settings.
## 4.4.1
- Add possibility to customize `service.targetPort` from values. Set default to 5984.
## 4.3.0
- Use Ingress `className` instead of `kubernetes.io/ingress.class` annotation which has been deprecated since Kubernetes 1.18+ ([#69](https://github.com/apache/couchdb-helm/issues/69))
## 4.1.0
- Added the `autoSetup` option to automatically finalize the cluster after installation
## 4.0.0
- Simplified the `adminHash` in the secret
## 3.6.4
- Add `service.labels` value to pass along labels to the client-facing service
- Update `ingress` to use the service created by `service.enabled=true`,
instead of the headless service
([#94](https://github.com/apache/couchdb-helm/issues/94))
- This allows setting `service.annotations`, `service.labels`, etc. in a way that will be picked up by the ingress
## 3.6.3
- Add PersistentVolume annotations
## 3.6.2
- Change the `erlangCookie` to be auto-generated in a stateful fashion (i.e. we auto-generate it once, then leave that
value alone). ([#78](https://github.com/apache/couchdb-helm/issues/78))

charts/couchdb/README.md

@@ -0,0 +1,290 @@
# CouchDB
![Version: 4.6.2](https://img.shields.io/badge/Version-4.6.2-informational?style=flat-square) ![AppVersion: 3.5.0](https://img.shields.io/badge/AppVersion-3.5.0-informational?style=flat-square)
Apache CouchDB is a database featuring seamless multi-master sync that scales
from big data to mobile, with an intuitive HTTP/JSON API, and is designed for
reliability.
This chart deploys a CouchDB cluster as a StatefulSet. It creates a ClusterIP
Service in front of the StatefulSet for load balancing by default, but can also
be configured to deploy other Service types or an Ingress. The default
persistence mechanism is the ephemeral local filesystem, but production
deployments should set `persistentVolume.enabled` to `true` to attach storage
volumes to each Pod in the StatefulSet.
## TL;DR
```bash
$ helm repo add couchdb https://apache.github.io/couchdb-helm
$ helm install couchdb/couchdb \
--version=4.6.2 \
--set allowAdminParty=true \
--set couchdbConfig.couchdb.uuid=$(curl https://www.uuidgenerator.net/api/version4 2>/dev/null | tr -d -)
```
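The TL;DR above fetches a UUID from uuidgenerator.net. If you would rather not depend on an external service, any stable 32-character hex string works for `couchdbConfig.couchdb.uuid`; a local sketch using Python's `uuid` module:

```bash
# Generate the CouchDB instance UUID locally; any stable
# 32-character lowercase hex string is acceptable.
UUID=$(python3 -c 'import uuid; print(uuid.uuid4().hex)')
echo "$UUID"
```

It can then be passed as `--set couchdbConfig.couchdb.uuid=$UUID`.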
## Prerequisites
- Kubernetes 1.9+ with Beta APIs enabled
- Ingress requires Kubernetes 1.19+
## Installing the Chart
To install the chart with the release name `my-release`:
Add the CouchDB Helm repository:
```bash
$ helm repo add couchdb https://apache.github.io/couchdb-helm
```
Afterwards install the chart replacing the UUID
`decafbaddecafbaddecafbaddecafbad` with a custom one:
```bash
$ helm install \
--name my-release \
--version=4.6.2 \
--set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad \
couchdb/couchdb
```
This will create a Secret containing the admin credentials for the cluster.
Those credentials can be retrieved as follows:
```bash
$ kubectl get secret my-release-couchdb -o go-template='{{ .data.adminPassword }}' | base64 --decode
```
If you prefer to configure the admin credentials directly you can create a
Secret containing `adminUsername`, `adminPassword` and `cookieAuthSecret` keys:
```bash
$ kubectl create secret generic my-release-couchdb --from-literal=adminUsername=foo --from-literal=adminPassword=bar --from-literal=cookieAuthSecret=baz
```
If you want to set the `adminHash` directly to achieve consistent salts between
different nodes you need to add it to the secret:
```bash
$ kubectl create secret generic my-release-couchdb \
--from-literal=adminUsername=foo \
--from-literal=cookieAuthSecret=baz \
--from-literal=adminHash=-pbkdf2-d4b887da....
```
and then install the chart while overriding the `createAdminSecret` setting:
```bash
$ helm install \
--name my-release \
--version=4.6.2 \
--set createAdminSecret=false \
--set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad \
couchdb/couchdb
```
This Helm chart deploys CouchDB on the Kubernetes cluster in a default
configuration. The [configuration](#configuration) section lists
the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
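Repeated `--set` flags can get unwieldy. The same parameters can be collected in a values file and passed with `-f`; a minimal sketch (the file name is arbitrary, and the keys shown are documented in the configuration table below):

```yaml
# my-values.yaml (illustrative)
clusterSize: 3
persistentVolume:
  enabled: true
  size: 10Gi
couchdbConfig:
  couchdb:
    uuid: decafbaddecafbaddecafbaddecafbad
```

and then install with `helm install my-release -f my-values.yaml couchdb/couchdb`.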
## Uninstalling the Chart
To uninstall/delete the `my-release` release:
```bash
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and
deletes the release.
## Upgrading an existing Release to a new major version
A major chart version change (like v0.2.3 -> v1.0.0) indicates an
incompatible, breaking change that requires manual action.
### Upgrade to 3.0.0
Since version 3.0.0 setting the CouchDB server instance UUID is mandatory.
Therefore, you need to generate a UUID and supply it as a value during the
upgrade as follows:
```bash
$ helm upgrade <release-name> \
--version=3.6.4 \
--reuse-values \
--set couchdbConfig.couchdb.uuid=<UUID> \
couchdb/couchdb
```
### Upgrade to 4.0.0
The breaking change between v3 and v4 is that the secret no longer uses
`password.ini`; it stores the `adminHash` directly instead. Make sure to
update it if you use your own secret.
## Migrating from stable/couchdb
This chart replaces the `stable/couchdb` chart previously hosted by Helm and continues the
version semantics. You can upgrade directly from `stable/couchdb` to this chart using:
```bash
$ helm repo add couchdb https://apache.github.io/couchdb-helm
$ helm upgrade my-release --version=4.6.2 couchdb/couchdb
```
## Configuration
The following table lists the most commonly configured parameters of the
CouchDB chart and their default values:
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| allowAdminParty | bool | `false` | If allowAdminParty is enabled the cluster will start up without any database administrator account; i.e., all users will be granted administrative access. Otherwise, the system will look for a Secret called <ReleaseName>-couchdb containing `adminUsername`, `adminPassword` and `cookieAuthSecret` keys. See the `createAdminSecret` flag. ref: https://kubernetes.io/docs/concepts/configuration/secret/ |
| clusterSize | int | `3` | the initial number of nodes in the CouchDB cluster. |
| couchdbConfig | object | `{"chttpd":{"bind_address":"any","require_valid_user":false}}` | couchdbConfig will override default CouchDB configuration settings. The contents of this map are reformatted into a .ini file laid down by a ConfigMap object. ref: http://docs.couchdb.org/en/latest/config/index.html |
| createAdminSecret | bool | `true` | If createAdminSecret is enabled a Secret called <ReleaseName>-couchdb will be created containing auto-generated credentials. Users who prefer to set these values themselves have a couple of options: 1) The `adminUsername`, `adminPassword`, `adminHash`, and `cookieAuthSecret` can be defined directly in the chart's values. Note that all of a chart's values are currently stored in plaintext in a ConfigMap in the tiller namespace. 2) This flag can be disabled and a Secret with the required keys can be created ahead of time. |
| enableSearch | bool | `false` | Flip this flag to include the Search container in each Pod |
| erlangFlags | object | `{"name":"couchdb"}` | erlangFlags is a map that is passed to the Erlang VM as flags using the ERL_FLAGS env. The `name` flag is required to establish connectivity between cluster nodes. ref: http://erlang.org/doc/man/erl.html#init_flags |
| persistentVolume | object | `{"accessModes":["ReadWriteOnce"],"enabled":false,"size":"10Gi"}` | The storage volume used by each Pod in the StatefulSet. If a persistentVolume is not enabled, the Pods will use `emptyDir` ephemeral local storage. Setting the storageClass attribute to "-" disables dynamic provisioning of Persistent Volumes; leaving it unset will invoke the default provisioner. |
You can set the values of the `couchdbConfig` map according to the
[official configuration][4]. The following shows the map's default values and
required options to set:
| Parameter | Description | Default |
|---------------------------------|--------------------------------------------------------------------|----------------------------------------|
| `couchdb.uuid` | UUID for this CouchDB server instance ([Required in a cluster][5]) | |
| `chttpd.bind_address` | listens on all interfaces when set to any | any |
| `chttpd.require_valid_user` | disables anonymous requests to port 5984 when true | false |
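As an illustration of how this map becomes configuration, the chart's ConfigMap template flattens each top-level key into an ini `[section]`. With the defaults above plus a `uuid`, the rendered file looks roughly like this:

```yaml
couchdbConfig:
  chttpd:
    bind_address: any
    require_valid_user: false
  couchdb:
    uuid: decafbaddecafbaddecafbaddecafbad
```

which renders to:

```ini
[chttpd]
bind_address = any
require_valid_user = false

[couchdb]
uuid = decafbaddecafbaddecafbaddecafbad
```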
A variety of other parameters are also configurable. See the comments in the
`values.yaml` file for further details:
| Parameter | Default |
|--------------------------------------| ------------------------------------------------ |
| `adminUsername` | admin |
| `adminPassword` | auto-generated |
| `adminHash` | |
| `extraSecretName` | "" (the name of a secret resource to provide e.g. admin credentials from an ExternalSecret/vault/etc.) |
| `adminUsernameKey` | "" (the string/key to access the admin username secret from an extra secret, if different from "adminUsername") |
| `adminPasswordKey` | "" (the string/key to access the admin password secret from an extra secret, if different from "adminPassword") |
| `cookieAuthSecretKey` | "" (the string/key to access the cookie auth secret from an extra secret, if different from "cookieAuthSecret") |
| `erlangCookieKey` | "" (the string/key to access the erlang cookie secret from an extra secret, if different from "erlangCookie") |
| `cookieAuthSecret` | auto-generated |
| `extraPorts` | [] (a list of ContainerPort objects) |
| `image.repository` | couchdb |
| `image.tag` | 3.5.0 |
| `image.pullPolicy` | IfNotPresent |
| `searchImage.repository` | kocolosk/couchdb-search |
| `searchImage.tag` | 0.1.0 |
| `searchImage.pullPolicy` | IfNotPresent |
| `initImage.repository` | busybox |
| `initImage.tag` | latest |
| `initImage.pullPolicy` | Always |
| `ingress.enabled` | false |
| `ingress.className` | |
| `ingress.hosts` | chart-example.local |
| `ingress.annotations` | |
| `ingress.path` | / |
| `ingress.tls` | |
| `persistentVolume.accessModes` | ReadWriteOnce |
| `persistentVolume.storageClass` | Default for the Kube cluster |
| `persistentVolume.annotations` | {} |
| `persistentVolume.existingClaims` | [] (a list of existing PV/PVC volume value objects with `volumeName`, `claimName`, `persistentVolumeName` and `volumeSource` defined) |
| `persistentVolume.volumeName` | |
| `persistentVolume.claimName` | |
| `persistentVolume.volumeSource` | |
| `persistentVolumeClaimRetentionPolicy.enabled` | Field controls if and how PVCs are deleted during the lifecycle |
| `persistentVolumeClaimRetentionPolicy.whenScaled` | Configures the volume retention behavior that applies when the replica count of the StatefulSet is reduced |
| `persistentVolumeClaimRetentionPolicy.whenDeleted` | Configures the volume retention behavior that applies when the StatefulSet is deleted |
| `podDisruptionBudget.enabled` | false |
| `podDisruptionBudget.minAvailable` | nil |
| `podDisruptionBudget.maxUnavailable` | 1 |
| `podManagementPolicy` | Parallel |
| `affinity` | |
| `topologySpreadConstraints` | |
| `labels` | |
| `annotations` | |
| `tolerations` | |
| `resources` | |
| `initResources` | |
| `autoSetup.enabled` | false (if set to true, must have `service.enabled` set to true and a correct `adminPassword` - deploy it with the `--wait` flag to avoid first jobs failure) |
| `autoSetup.image.repository` | curlimages/curl |
| `autoSetup.image.tag` | latest |
| `autoSetup.image.pullPolicy` | Always |
| `autoSetup.defaultDatabases` | [`_global_changes`] |
| `service.annotations` | |
| `service.enabled` | true |
| `service.type` | ClusterIP |
| `service.externalPort` | 5984 |
| `service.targetPort` | 5984 |
| `service.extraPorts` | [] (a list of ServicePort objects) |
| `dns.clusterDomainSuffix` | cluster.local |
| `networkPolicy.enabled` | true |
| `serviceAccount.enabled` | true |
| `serviceAccount.create` | true |
| `imagePullSecrets` | |
| `sidecars` | {} |
| `livenessProbe.enabled` | true |
| `livenessProbe.failureThreshold` | 3 |
| `livenessProbe.initialDelaySeconds` | 0 |
| `livenessProbe.periodSeconds` | 10 |
| `livenessProbe.successThreshold` | 1 |
| `livenessProbe.timeoutSeconds` | 1 |
| `readinessProbe.enabled` | true |
| `readinessProbe.failureThreshold` | 3 |
| `readinessProbe.initialDelaySeconds` | 0 |
| `readinessProbe.periodSeconds` | 10 |
| `readinessProbe.successThreshold` | 1 |
| `readinessProbe.timeoutSeconds` | 1 |
| `prometheusPort.enabled` | false |
| `prometheusPort.port` | 17896 |
| `prometheusPort.bind_address` | 0.0.0.0 |
| `lifecycle` | {} |
| `lifecycleTemplate` | false (set `true` and add a named `lifecycleTemplate` if using couchdb as a subchart) |
| `extraEnv` | [] |
| `extraEnvTemplate` | false (set `true` and add a named `extraEnvTemplate` if using couchdb as a subchart) |
| `placementConfig.enabled` | false |
| `placementConfig.image.repository` | caligrafix/couchdb-autoscaler-placement-manager |
| `placementConfig.image.tag` | 0.1.0 |
| `podSecurityContext` | |
| `containerSecurityContext` | |
## Feedback, Issues, Contributing
General feedback is welcome at our [user][1] or [developer][2] mailing lists.
Apache CouchDB has a [CONTRIBUTING][3] file with details on how to get started
with issue reporting or contributing to the upkeep of this project. In short,
use GitHub Issues, do not report anything on Docker's website.
## Non-Apache CouchDB Development Team Contributors
- [@natarajaya](https://github.com/natarajaya)
- [@satchpx](https://github.com/satchpx)
- [@spanato](https://github.com/spanato)
- [@jpds](https://github.com/jpds)
- [@sebastien-prudhomme](https://github.com/sebastien-prudhomme)
- [@stepanstipl](https://github.com/stepanstipl)
- [@amatas](https://github.com/amatas)
- [@Chimney42](https://github.com/Chimney42)
- [@mattjmcnaughton](https://github.com/mattjmcnaughton)
- [@mainephd](https://github.com/mainephd)
- [@AdamDang](https://github.com/AdamDang)
- [@mrtyler](https://github.com/mrtyler)
- [@kevinwlau](https://github.com/kevinwlau)
- [@jeyenzo](https://github.com/jeyenzo)
- [@Pinpin31](https://github.com/Pinpin31)
- [@yekibud](https://github.com/yekibud)
[1]: http://mail-archives.apache.org/mod_mbox/couchdb-user/
[2]: http://mail-archives.apache.org/mod_mbox/couchdb-dev/
[3]: https://github.com/apache/couchdb/blob/master/CONTRIBUTING.md
[4]: https://docs.couchdb.org/en/stable/config/index.html
[5]: https://docs.couchdb.org/en/latest/setup/cluster.html#preparing-couchdb-nodes-to-be-joined-into-a-cluster

@@ -0,0 +1,259 @@
# CouchDB
{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}
Apache CouchDB is a database featuring seamless multi-master sync that scales
from big data to mobile, with an intuitive HTTP/JSON API, and is designed for
reliability.
This chart deploys a CouchDB cluster as a StatefulSet. It creates a ClusterIP
Service in front of the StatefulSet for load balancing by default, but can also
be configured to deploy other Service types or an Ingress. The default
persistence mechanism is the ephemeral local filesystem, but production
deployments should set `persistentVolume.enabled` to `true` to attach storage
volumes to each Pod in the StatefulSet.
## TL;DR
```bash
$ helm repo add couchdb https://apache.github.io/couchdb-helm
$ helm install couchdb/couchdb \
--version={{ template "chart.version" . }} \
--set allowAdminParty=true \
--set couchdbConfig.couchdb.uuid=$(curl https://www.uuidgenerator.net/api/version4 2>/dev/null | tr -d -)
```
## Prerequisites
- Kubernetes 1.9+ with Beta APIs enabled
- Ingress requires Kubernetes 1.19+
## Installing the Chart
To install the chart with the release name `my-release`:
Add the CouchDB Helm repository:
```bash
$ helm repo add couchdb https://apache.github.io/couchdb-helm
```
Afterwards install the chart replacing the UUID
`decafbaddecafbaddecafbaddecafbad` with a custom one:
```bash
$ helm install \
--name my-release \
--version={{ template "chart.version" . }} \
--set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad \
couchdb/couchdb
```
This will create a Secret containing the admin credentials for the cluster.
Those credentials can be retrieved as follows:
```bash
$ kubectl get secret my-release-couchdb -o go-template='{{ print "{{ .data.adminPassword }}" }}' | base64 --decode
```
If you prefer to configure the admin credentials directly you can create a
Secret containing `adminUsername`, `adminPassword` and `cookieAuthSecret` keys:
```bash
$ kubectl create secret generic my-release-couchdb --from-literal=adminUsername=foo --from-literal=adminPassword=bar --from-literal=cookieAuthSecret=baz
```
If you want to set the `adminHash` directly to achieve consistent salts between
different nodes you need to add it to the secret:
```bash
$ kubectl create secret generic my-release-couchdb \
--from-literal=adminUsername=foo \
--from-literal=cookieAuthSecret=baz \
--from-literal=adminHash=-pbkdf2-d4b887da....
```
and then install the chart while overriding the `createAdminSecret` setting:
```bash
$ helm install \
--name my-release \
--version={{ template "chart.version" . }} \
--set createAdminSecret=false \
--set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad \
couchdb/couchdb
```
This Helm chart deploys CouchDB on the Kubernetes cluster in a default
configuration. The [configuration](#configuration) section lists
the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` release:
```bash
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and
deletes the release.
## Upgrading an existing Release to a new major version
A major chart version change (like v0.2.3 -> v1.0.0) indicates an
incompatible, breaking change that requires manual action.
### Upgrade to 3.0.0
Since version 3.0.0 setting the CouchDB server instance UUID is mandatory.
Therefore, you need to generate a UUID and supply it as a value during the
upgrade as follows:
```bash
$ helm upgrade <release-name> \
--version=3.6.4 \
--reuse-values \
--set couchdbConfig.couchdb.uuid=<UUID> \
couchdb/couchdb
```
### Upgrade to 4.0.0
The breaking change between v3 and v4 is that the secret no longer uses
`password.ini`; it stores the `adminHash` directly instead. Make sure to
update it if you use your own secret.
## Migrating from stable/couchdb
This chart replaces the `stable/couchdb` chart previously hosted by Helm and continues the
version semantics. You can upgrade directly from `stable/couchdb` to this chart using:
```bash
$ helm repo add couchdb https://apache.github.io/couchdb-helm
$ helm upgrade my-release --version={{ template "chart.version" . }} couchdb/couchdb
```
## Configuration
The following table lists the most commonly configured parameters of the
CouchDB chart and their default values:
{{ template "couchdb.valuesTable" . }}
You can set the values of the `couchdbConfig` map according to the
[official configuration][4]. The following shows the map's default values and
required options to set:
| Parameter | Description | Default |
|---------------------------------|--------------------------------------------------------------------|----------------------------------------|
| `couchdb.uuid` | UUID for this CouchDB server instance ([Required in a cluster][5]) | |
| `chttpd.bind_address` | listens on all interfaces when set to any | any |
| `chttpd.require_valid_user` | disables anonymous requests to port 5984 when true | false |
A variety of other parameters are also configurable. See the comments in the
`values.yaml` file for further details:
| Parameter | Default |
|--------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `adminUsername` | admin |
| `adminPassword` | auto-generated |
| `adminHash` | |
| `cookieAuthSecret` | auto-generated |
| `image.repository` | couchdb |
| `image.tag` | 3.5.0 |
| `image.pullPolicy` | IfNotPresent |
| `searchImage.repository` | kocolosk/couchdb-search |
| `searchImage.tag` | 0.1.0 |
| `searchImage.pullPolicy` | IfNotPresent |
| `initImage.repository` | busybox |
| `initImage.tag` | latest |
| `initImage.pullPolicy` | Always |
| `ingress.enabled` | false |
| `ingress.hosts` | chart-example.local |
| `ingress.annotations` | |
| `ingress.path` | / |
| `ingress.tls` | |
| `persistentVolume.accessModes` | ReadWriteOnce |
| `persistentVolume.storageClass` | Default for the Kube cluster |
| `persistentVolume.annotations` | {} |
| `podDisruptionBudget.enabled` | false |
| `podDisruptionBudget.minAvailable` | nil |
| `podDisruptionBudget.maxUnavailable` | 1 |
| `podManagementPolicy` | Parallel |
| `affinity` | |
| `topologySpreadConstraints` | |
| `labels` | |
| `annotations` | |
| `tolerations` | |
| `resources` | |
| `autoSetup.enabled` | false (if set to true, must have `service.enabled` set to true and a correct `adminPassword` - deploy it with the `--wait` flag to avoid first jobs failure) |
| `autoSetup.image.repository` | alpine/curl |
| `autoSetup.image.tag` | latest |
| `autoSetup.image.pullPolicy` | Always |
| `autoSetup.defaultDatabases` | [`_global_changes`] |
| `service.annotations` | |
| `service.enabled` | true |
| `service.type` | ClusterIP |
| `service.externalPort` | 5984 |
| `service.targetPort` | 5984 |
| `dns.clusterDomainSuffix` | cluster.local |
| `networkPolicy.enabled` | true |
| `serviceAccount.enabled` | true |
| `serviceAccount.create` | true |
| `serviceAccount.imagePullSecrets` | |
| `sidecars` | {} |
| `livenessProbe.enabled` | true |
| `livenessProbe.failureThreshold` | 3 |
| `livenessProbe.initialDelaySeconds` | 0 |
| `livenessProbe.periodSeconds` | 10 |
| `livenessProbe.successThreshold` | 1 |
| `livenessProbe.timeoutSeconds` | 1 |
| `readinessProbe.enabled` | true |
| `readinessProbe.failureThreshold` | 3 |
| `readinessProbe.initialDelaySeconds` | 0 |
| `readinessProbe.periodSeconds` | 10 |
| `readinessProbe.successThreshold` | 1 |
| `readinessProbe.timeoutSeconds` | 1 |
| `prometheusPort.enabled` | false |
| `prometheusPort.port` | 17896 |
| `prometheusPort.bind_address` | 0.0.0.0 |
| `placementConfig.enabled` | false |
| `placementConfig.image.repository` | caligrafix/couchdb-autoscaler-placement-manager |
| `placementConfig.image.tag` | 0.1.0 |
| `podSecurityContext` | |
| `containerSecurityContext` | |
## Feedback, Issues, Contributing
General feedback is welcome at our [user][1] or [developer][2] mailing lists.
Apache CouchDB has a [CONTRIBUTING][3] file with details on how to get started
with issue reporting or contributing to the upkeep of this project. In short,
use GitHub Issues, do not report anything on Docker's website.
## Non-Apache CouchDB Development Team Contributors
- [@natarajaya](https://github.com/natarajaya)
- [@satchpx](https://github.com/satchpx)
- [@spanato](https://github.com/spanato)
- [@jpds](https://github.com/jpds)
- [@sebastien-prudhomme](https://github.com/sebastien-prudhomme)
- [@stepanstipl](https://github.com/stepanstipl)
- [@amatas](https://github.com/amatas)
- [@Chimney42](https://github.com/Chimney42)
- [@mattjmcnaughton](https://github.com/mattjmcnaughton)
- [@mainephd](https://github.com/mainephd)
- [@AdamDang](https://github.com/AdamDang)
- [@mrtyler](https://github.com/mrtyler)
- [@kevinwlau](https://github.com/kevinwlau)
- [@jeyenzo](https://github.com/jeyenzo)
- [@Pinpin31](https://github.com/Pinpin31)
[1]: http://mail-archives.apache.org/mod_mbox/couchdb-user/
[2]: http://mail-archives.apache.org/mod_mbox/couchdb-dev/
[3]: https://github.com/apache/couchdb/blob/master/CONTRIBUTING.md
[4]: https://docs.couchdb.org/en/stable/config/index.html
[5]: https://docs.couchdb.org/en/latest/setup/cluster.html#preparing-couchdb-nodes-to-be-joined-into-a-cluster

@@ -0,0 +1,5 @@
couchdbConfig:
couchdb:
uuid: "decafbaddecafbaddecafbaddecafbad"
annotations:
foo: bar

@@ -0,0 +1,9 @@
sidecars:
- name: foo
image: "busybox"
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: "0.1"
memory: 10Mi
command: ['sh', '-c', 'while true; do echo "foo"; sleep 5; done']

@@ -0,0 +1,32 @@
Apache CouchDB is starting. Check the status of the Pods using:
kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "couchdb.name" . }},release={{ .Release.Name }}"
Skip this step if autoSetup is enabled. Once all of the Pods are fully Ready, execute the following command to create
the required system databases:
kubectl exec --namespace {{ .Release.Namespace }} {{ if not .Values.allowAdminParty }}-it {{ end }}{{ template "couchdb.fullname" . }}-0 -c couchdb -- \
curl -s \
http://127.0.0.1:5984/_cluster_setup \
-X POST \
-H "Content-Type: application/json" \
{{- if .Values.allowAdminParty }}
-d '{"action": "finish_cluster"}'
{{- else }}
-d '{"action": "finish_cluster"}' \
-u <adminUsername>
{{- end }}
Then it's time to relax.
{{- $erlangCookie := .Values.erlangFlags.setcookie }}
{{- if (empty $erlangCookie) }}
NOTE: You are using an auto-generated value for the Erlang Cookie
- We recommend making this value persistent by setting it in: `erlangFlags.setcookie`
- Changing this value can cause problems for the CouchDB installation (particularly upgrades / config changes)
- You can get the current value with:
```
kubectl -n {{ $.Release.Namespace }} get secret {{ include "couchdb.fullname" . }} --template='{{print "{{" }}index .data "erlangCookie" | base64decode{{ print "}}" }}'
```
{{- end }}

@@ -0,0 +1,145 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "couchdb.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "couchdb.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- printf "%s-%s" .Values.fullnameOverride .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
In the event that we create both a headless service and a traditional one,
ensure that the latter gets a unique name.
*/}}
{{- define "couchdb.svcname" -}}
{{- if .Values.fullnameOverride -}}
{{- printf "%s-svc-%s" .Values.fullnameOverride .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-svc-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Create a random string if the supplied key does not exist
*/}}
{{- define "couchdb.defaultsecret" -}}
{{- if . -}}
{{- . | b64enc | quote -}}
{{- else -}}
{{- randAlphaNum 20 | b64enc | quote -}}
{{- end -}}
{{- end -}}
{{- /*
Create a random string if the supplied "secret" key does not exist. Otherwise create the key in a persistent fashion
using `lookup` and `get`. The "key", "ns", and "secretName" keys need to be provided for this to work
*/ -}}
{{- define "couchdb.defaultsecret-stateful" -}}
{{- if .secret -}}
{{- .secret | b64enc | quote -}}
{{- else -}}
{{- /* generate secret, which will be overwritten if already exists */ -}}
{{- $autoSecret := randAlphaNum 20 | b64enc -}}
{{- if and (not (empty .key)) (not (empty .secretName)) }}
{{- $currentSecret := lookup "v1" "Secret" .ns .secretName }}
{{- if $currentSecret }}
{{- /* already exists, looking up */ -}}
{{- $autoSecret = get $currentSecret.data .key -}}
{{- end }}
{{- end }}
{{- print $autoSecret | quote -}}
{{- end -}}
{{- end -}}
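The `lookup`-based helper above gives generate-once semantics: on upgrade the existing value is reused rather than regenerated. The same behaviour can be sketched outside Helm (the `kubectl` call, secret name, and key are illustrative; if no such Secret exists the sketch simply generates a fresh value):

```bash
# Reuse the erlangCookie from an existing Secret if present,
# otherwise generate a fresh 20-character value (like randAlphaNum 20).
existing=$(kubectl get secret my-release-couchdb \
  -o go-template='{{ index .data "erlangCookie" }}' 2>/dev/null || true)
if [ -n "$existing" ]; then
  cookie=$(printf '%s' "$existing" | base64 --decode)
else
  cookie=$(python3 -c 'import secrets, string
print("".join(secrets.choice(string.ascii_letters + string.digits) for _ in range(20)))')
fi
echo "$cookie"
```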
{{/*
Labels used to define Pods in the CouchDB statefulset
*/}}
{{- define "couchdb.ss.selector" -}}
app: {{ template "couchdb.name" . }}
release: {{ .Release.Name }}
{{- end -}}
{{/*
Generates a comma delimited list of nodes in the cluster
*/}}
{{- define "couchdb.seedlist" -}}
{{- $nodeCount := min 5 .Values.clusterSize | int }}
{{- range $index0 := until $nodeCount -}}
{{- $index1 := $index0 | add1 -}}
{{ $.Values.erlangFlags.name }}@{{ template "couchdb.fullname" $ }}-{{ $index0 }}.{{ template "couchdb.fullname" $ }}.{{ $.Release.Namespace }}.svc.{{ $.Values.dns.clusterDomainSuffix }}{{ if ne $index1 $nodeCount }},{{ end }}
{{- end -}}
{{- end -}}
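For a release named `my-release` in the `default` namespace with the default `erlangFlags.name` and `dns.clusterDomainSuffix`, the helper above produces node names like the following (a sketch reproducing the pattern; note the template caps the seedlist at the first five nodes):

```bash
# Reproduce the seedlist for a 3-node cluster (names are illustrative).
fullname=my-release-couchdb
seedlist=""
for i in 0 1 2; do
  node="couchdb@${fullname}-${i}.${fullname}.default.svc.cluster.local"
  seedlist="${seedlist:+$seedlist,}$node"
done
echo "$seedlist"
```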
{{/*
If serviceAccount.name is specified, use that, else use the couchdb instance name
*/}}
{{- define "couchdb.serviceAccount" -}}
{{- if .Values.serviceAccount.name -}}
{{- .Values.serviceAccount.name }}
{{- else -}}
{{- template "couchdb.fullname" . -}}
{{- end -}}
{{- end -}}
{{/*
Fail if couchdbConfig.couchdb.uuid is undefined
*/}}
{{- define "couchdb.uuid" -}}
{{- required "A value for couchdbConfig.couchdb.uuid must be set" (.Values.couchdbConfig.couchdb | default dict).uuid -}}
{{- end -}}
{{/*
Repurpose volume claim metadata whether using the new volume claim template
or existing volume claims.
*/}}
{{- define "persistentVolume.metadata" -}}
{{- $context := index . "context" -}}
{{- $claim := index . "claim" -}}
name: {{ $claim.claimName | default "database-storage" }}
labels:
app: {{ template "couchdb.name" $context }}
release: {{ $context.Release.Name }}
{{- with $context.Values.persistentVolume.annotations }}
annotations:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- end -}}
{{/*
Repurpose volume claim spec whether using the new volume claim template
or an existing volume claim.
*/}}
{{- define "persistentVolume.spec" -}}
{{- $context := index . "context" -}}
{{- $claim := index . "claim" -}}
accessModes:
{{- range $context.Values.persistentVolume.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ $context.Values.persistentVolume.size | quote }}
{{- if $context.Values.persistentVolume.storageClass }}
{{- if (eq "-" $context.Values.persistentVolume.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ $context.Values.persistentVolume.storageClass }}"
{{- end }}
{{- end }}
{{- if $claim.persistentVolumeName }}
volumeName: {{ $claim.persistentVolumeName }}
{{- end }}
{{- end -}}
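As a rough illustration of what the `couchdb.seedlist` helper above expands to, here is a minimal Python sketch. The release name, namespace, and node count below are illustrative values, not chart defaults:

```python
# Sketch of the couchdb.seedlist helper: one Erlang node name per
# StatefulSet pod, built from each pod's stable headless-Service FQDN,
# joined with commas.
def seedlist(node_name, fullname, namespace, suffix, node_count):
    return ",".join(
        f"{node_name}@{fullname}-{i}.{fullname}.{namespace}.svc.{suffix}"
        for i in range(node_count)
    )

print(seedlist("couchdb", "my-couchdb", "default", "cluster.local", 3))
```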


@ -0,0 +1,34 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "couchdb.fullname" . }}
labels:
app: {{ template "couchdb.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
data:
inifile: |
{{ $couchdbConfig := dict "couchdb" (dict "uuid" (include "couchdb.uuid" .)) -}}
{{- $couchdbConfig := merge $couchdbConfig .Values.couchdbConfig -}}
{{- range $section, $settings := $couchdbConfig -}}
{{ printf "[%s]" $section }}
{{ range $key, $value := $settings -}}
{{- if kindIs "float64" $value }}
{{ $value = (int $value) }}
{{ end -}}
{{ printf "%s = %v" $key $value }}
{{ end }}
{{ end }}
seedlistinifile: |
[cluster]
seedlist = {{ template "couchdb.seedlist" . }}
{{- if .Values.prometheusPort.enabled }}
prometheusinifile: |
[prometheus]
additional_port = {{ .Values.prometheusPort.enabled }}
bind_address = {{ .Values.prometheusPort.bind_address }}
port = {{ .Values.prometheusPort.port }}
{{- end }}
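To make the `inifile` template above concrete, here is a minimal Python sketch of the rendering it performs: the `couchdbConfig` map (merged with the required uuid) is flattened into `.ini` sections, and floats are coerced to ints, mirroring the `kindIs "float64"` branch (YAML numbers arrive in Helm as floats). The config values below are illustrative:

```python
# Sketch of the ConfigMap's ini rendering: one [section] header per
# top-level key, then "key = value" lines, with float values coerced
# to ints so e.g. q: 8 does not render as "q = 8.0".
def render_ini(config):
    lines = []
    for section, settings in config.items():
        lines.append(f"[{section}]")
        for key, value in settings.items():
            if isinstance(value, float):
                value = int(value)
            lines.append(f"{key} = {value}")
    return "\n".join(lines)

print(render_ini({"couchdb": {"uuid": "decafbad"}, "cluster": {"q": 8.0}}))
```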


@ -0,0 +1,21 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "couchdb.fullname" . }}
labels:
app: {{ template "couchdb.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
clusterIP: None
publishNotReadyAddresses: true
ports:
- name: couchdb
port: 5984
{{- if .Values.prometheusPort.enabled }}
- name: metrics
port: {{ .Values.prometheusPort.port }}
{{- end }}
selector:
{{ include "couchdb.ss.selector" . | indent 4 }}


@ -0,0 +1,39 @@
{{- if .Values.ingress.enabled -}}
{{- $serviceName := include "couchdb.svcname" . -}}
{{- $servicePort := .Values.service.externalPort -}}
{{- $path := .Values.ingress.path | quote -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ template "couchdb.fullname" . }}
labels:
app: {{ template "couchdb.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
{{- if .Values.ingress.className }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
rules:
{{- range $host := .Values.ingress.hosts }}
- host: {{ $host }}
http:
paths:
- path: {{ $path }}
pathType: Prefix
backend:
service:
name: {{ $serviceName }}
port:
number: {{ $servicePort }}
{{- end -}}
{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}


@ -0,0 +1,56 @@
{{- if .Values.autoSetup -}}
{{- if and .Values.autoSetup.enabled .Values.service.enabled -}}
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ .Release.Name }}-post-install"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
annotations:
"helm.sh/hook": post-install
spec:
template:
metadata:
name: "{{ .Release.Name }}-post-install"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
spec:
restartPolicy: OnFailure
{{- if .Values.podSecurityContext }}
securityContext: {{ .Values.podSecurityContext | toYaml | nindent 8 }}
{{- end }}
containers:
- name: cluster-setup
image: {{ .Values.autoSetup.image.repository }}:{{ .Values.autoSetup.image.tag }}
imagePullPolicy: {{ .Values.autoSetup.image.pullPolicy }}
command:
- 'sh'
- '-c'
- 'curl -s http://$COUCHDB_ADDRESS/_cluster_setup -X POST -H "Content-Type: application/json" -d "{\"action\": \"finish_cluster\"}" -u $COUCHDB_ADMIN:$COUCHDB_PASS && export IFS=","; for db_name in $DEFAULT_DBS; do curl -X PUT http://$COUCHDB_ADDRESS/$db_name -u $COUCHDB_ADMIN:$COUCHDB_PASS; done'
env:
- name: DEFAULT_DBS
value: {{ join "," .Values.autoSetup.defaultDatabases }}
- name: COUCHDB_ADDRESS
value: "{{ template "couchdb.svcname" . }}.{{ .Release.Namespace }}.svc.{{ default "cluster.local" .Values.dns.clusterDomainSuffix }}:{{ .Values.service.externalPort }}"
- name: COUCHDB_ADMIN
valueFrom:
secretKeyRef:
name: {{ .Values.extraSecretName | default (include "couchdb.fullname" .) }}
key: {{ .Values.adminUsernameKey | default "adminUsername" }}
- name: COUCHDB_PASS
valueFrom:
secretKeyRef:
name: {{ .Values.extraSecretName | default (include "couchdb.fullname" .) }}
key: {{ .Values.adminPasswordKey | default "adminPassword" }}
{{- if .Values.containerSecurityContext }}
securityContext: {{ .Values.containerSecurityContext | toYaml | nindent 12 }}
{{- end }}
backoffLimit: 2
ttlSecondsAfterFinished: 600
{{- end -}}
{{- end -}}
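The shell one-liner in the job above issues one `_cluster_setup` call and then one `PUT` per database in `DEFAULT_DBS`. A minimal Python sketch of that request sequence (the service address is a made-up example):

```python
# Sketch of the post-install job's request sequence: finish the
# cluster, then create each database from the comma-separated
# DEFAULT_DBS list, in order.
def setup_requests(address, default_dbs):
    calls = [("POST", f"http://{address}/_cluster_setup",
              '{"action": "finish_cluster"}')]
    for db in default_dbs.split(","):
        calls.append(("PUT", f"http://{address}/{db}", None))
    return calls

for call in setup_requests("my-couchdb.default.svc:5984", "_global_changes"):
    print(call)
```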


@ -0,0 +1,39 @@
{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: {{ template "couchdb.fullname" . }}
labels:
app: {{ template "couchdb.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
podSelector:
matchLabels:
{{ include "couchdb.ss.selector" . | indent 6 }}
ingress:
- ports:
- protocol: TCP
port: 5984
{{- if .Values.prometheusPort.enabled }}
- protocol: TCP
port: {{ .Values.prometheusPort.port }}
{{- end }}
{{ range .Values.extraPorts }}
- protocol: TCP
port: {{ .containerPort }}
{{ end }}
- ports:
- protocol: TCP
port: 9100
- protocol: TCP
port: 4369
from:
- podSelector:
matchLabels:
{{ include "couchdb.ss.selector" . | indent 14 }}
policyTypes:
- Ingress
{{- end }}


@ -0,0 +1,21 @@
{{- if and .Values.podDisruptionBudget .Values.podDisruptionBudget.enabled }}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: "{{ .Release.Name }}-pdb"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
spec:
{{- if .Values.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
{{ include "couchdb.ss.selector" . | indent 6 }}
{{- end }}


@ -0,0 +1,24 @@
{{- if and .Values.persistentVolume.enabled .Values.persistentVolume.existingClaims -}}
{{- range $claim := .Values.persistentVolume.existingClaims }}
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ $claim.persistentVolumeName }}
spec:
{{- if $.Values.persistentVolume.storageClass }}
{{- if (eq "-" $.Values.persistentVolume.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ $.Values.persistentVolume.storageClass }}"
{{- end }}
{{- end }}
accessModes:
{{- range $.Values.persistentVolume.accessModes }}
- {{ . | quote }}
{{- end }}
capacity:
storage: {{ $.Values.persistentVolume.size }}
{{ toYaml $claim.volumeSource | indent 2 }}
---
{{- end }}
{{- end }}


@ -0,0 +1,12 @@
{{- if and .Values.persistentVolume.enabled .Values.persistentVolume.existingClaims -}}
{{- $context := . }}
{{- range $claim := .Values.persistentVolume.existingClaims }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
{{- include "persistentVolume.metadata" (dict "context" $context "claim" $claim) | nindent 2 }}
spec:
{{- include "persistentVolume.spec" (dict "context" $context "claim" $claim) | nindent 2 }}
---
{{- end }}
{{- end }}


@ -0,0 +1,50 @@
{{- if .Values.placementConfig.enabled -}}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "couchdb.fullname" . }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": post-install
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation
spec:
template:
metadata:
name: "{{ .Release.Name }}"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
spec:
restartPolicy: OnFailure
{{- if .Values.podSecurityContext }}
securityContext: {{ .Values.podSecurityContext | toYaml | nindent 8 }}
{{- end }}
containers:
- name: placement-tagging-job
image: {{ .Values.placementConfig.image.repository }}:{{ .Values.placementConfig.image.tag }}
imagePullPolicy: Always
args: ["--placement-manager"]
envFrom:
- secretRef:
name: couchdb-couchdb
- configMapRef:
name: {{ template "couchdb.fullname" . }}
env:
- name: NAMESPACE
value: {{ .Release.Namespace }}
- name: COUCHDB_SVC
value: {{ template "couchdb.svcname" . }}
- name: COUCHDB_PORT
value: {{ .Values.service.externalPort | quote }}
{{- if .Values.containerSecurityContext }}
securityContext: {{ .Values.containerSecurityContext | toYaml | nindent 10 }}
{{- end }}
{{- end -}}


@ -0,0 +1,21 @@
{{- if .Values.createAdminSecret -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "couchdb.fullname" . }}
labels:
app: {{ template "couchdb.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
type: Opaque
data:
adminUsername: {{ template "couchdb.defaultsecret" .Values.adminUsername }}
adminPassword: {{ template "couchdb.defaultsecret" .Values.adminPassword }}
{{- $erlangCookieArgs := dict "key" "erlangCookie" "ns" $.Release.Namespace "secretName" (include "couchdb.fullname" .) "secret" .Values.erlangFlags.setcookie }}
erlangCookie: {{ template "couchdb.defaultsecret-stateful" $erlangCookieArgs }}
cookieAuthSecret: {{ template "couchdb.defaultsecret" .Values.cookieAuthSecret }}
{{- if .Values.adminHash }}
adminHash: {{ .Values.adminHash | b64enc | quote }}
{{- end -}}
{{- end -}}


@ -0,0 +1,30 @@
{{- if .Values.service.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: {{ template "couchdb.svcname" . }}
labels:
app: {{ template "couchdb.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.service.labels }}
{{- . | toYaml | nindent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{- . | toYaml | nindent 4 }}
{{- end }}
spec:
ports:
- port: {{ .Values.service.externalPort }}
name: couchdb
protocol: TCP
targetPort: {{ .Values.service.targetPort }}
{{ with .Values.service.extraPorts }}
{{- toYaml . | nindent 4 }}
{{- end }}
type: {{ .Values.service.type }}
selector:
{{ include "couchdb.ss.selector" . | indent 4 }}
{{- end -}}


@ -0,0 +1,15 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "couchdb.serviceAccount" . }}
labels:
app: {{ template "couchdb.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.serviceAccount.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.serviceAccount.imagePullSecrets }}
{{- end }}
{{- end }}


@ -0,0 +1,269 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "couchdb.fullname" . }}
labels:
app: {{ template "couchdb.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.clusterSize }}
serviceName: {{ template "couchdb.fullname" . }}
podManagementPolicy: {{ .Values.podManagementPolicy }}
selector:
matchLabels:
{{ include "couchdb.ss.selector" . | indent 6 }}
template:
metadata:
labels:
{{ include "couchdb.ss.selector" . | indent 8 }}
{{- if .Values.labels }}
{{ toYaml .Values.labels | indent 8 }}
{{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
{{- if .Values.annotations }}
{{ toYaml .Values.annotations | indent 8 }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
{{- if .Values.podSecurityContext }}
securityContext: {{ .Values.podSecurityContext | toYaml | nindent 8 }}
{{- end }}
{{- if .Values.serviceAccount.enabled }}
serviceAccountName: {{ template "couchdb.serviceAccount" . }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets: {{ .Values.imagePullSecrets | toYaml | nindent 8 }}
{{- end }}
initContainers:
- name: init-copy
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: {{ .Values.initImage.pullPolicy }}
command:
- 'sh'
- '-c'
{{- if .Values.prometheusPort.enabled }}
- 'cp /tmp/chart.ini /default.d; cp /tmp/seedlist.ini /default.d; cp /tmp/prometheus.ini /default.d; ls -lrt /default.d;'
{{- else }}
- 'cp /tmp/chart.ini /default.d; cp /tmp/seedlist.ini /default.d; ls -lrt /default.d;'
{{- end }}
volumeMounts:
- name: config
mountPath: /tmp/
- name: config-storage
mountPath: /default.d
{{- if .Values.containerSecurityContext }}
securityContext: {{ .Values.containerSecurityContext | toYaml | nindent 12 }}
{{- end }}
resources:
{{ toYaml .Values.initResources | indent 12 }}
{{- if .Values.adminHash }}
- name: admin-hash-copy
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: {{ .Values.initImage.pullPolicy }}
env:
- name: "ADMINUSERNAME"
valueFrom:
secretKeyRef:
name: {{ template "couchdb.fullname" . }}
key: adminUsername
- name: "ADMINHASH"
valueFrom:
secretKeyRef:
name: {{ template "couchdb.fullname" . }}
key: adminHash
command: ['sh','-c','echo -e "[admins]\n$ADMINUSERNAME = $ADMINHASH" > /local.d/password.ini ;']
volumeMounts:
- name: local-config-storage
mountPath: /local.d
{{- if .Values.containerSecurityContext }}
securityContext: {{ .Values.containerSecurityContext | toYaml | nindent 12 }}
{{- end }}
resources:
{{ toYaml .Values.initResources | indent 12 }}
{{- end }}
containers:
- name: couchdb
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: couchdb
containerPort: 5984
- name: epmd
containerPort: 4369
- containerPort: 9100
{{- if .Values.prometheusPort.enabled }}
- name: metrics
containerPort: {{ .Values.prometheusPort.port }}
{{- end }}
{{ with .Values.extraPorts }}
{{ toYaml . | indent 12 }}
{{ end }}
{{- if .Values.lifecycle }}
lifecycle: {{ toYaml .Values.lifecycle | nindent 12 }}
{{- else if .Values.lifecycleTemplate }}
lifecycle:
{{- include "couchdb.lifecycleTemplate" . | nindent 12 }}
{{- end }}
env:
{{- if not .Values.allowAdminParty }}
- name: COUCHDB_USER
valueFrom:
secretKeyRef:
name: {{ .Values.extraSecretName | default (include "couchdb.fullname" .) }}
key: {{ .Values.adminUsernameKey | default "adminUsername" }}
- name: COUCHDB_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.extraSecretName | default (include "couchdb.fullname" .) }}
key: {{ .Values.adminPasswordKey | default "adminPassword" }}
- name: COUCHDB_SECRET
valueFrom:
secretKeyRef:
name: {{ .Values.extraSecretName | default (include "couchdb.fullname" .) }}
key: {{ .Values.cookieAuthSecretKey | default "cookieAuthSecret" }}
{{- end }}
- name: COUCHDB_ERLANG_COOKIE
valueFrom:
secretKeyRef:
name: {{ .Values.extraSecretName | default (include "couchdb.fullname" .) }}
key: {{ .Values.erlangCookieKey | default "erlangCookie" }}
- name: ERL_FLAGS
value: "{{ range $k, $v := .Values.erlangFlags }} -{{ $k }} {{ $v }} {{ end }}"
{{- if .Values.extraEnv }}
{{ toYaml .Values.extraEnv | indent 12 }}
{{- else if .Values.extraEnvTemplate }}
{{- include "couchdb.extraEnvTemplate" . | indent 12 }}
{{- end }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
{{- if .Values.couchdbConfig.chttpd.require_valid_user }}
exec:
command:
- sh
- -c
- curl -G --silent --fail -u ${COUCHDB_USER}:${COUCHDB_PASSWORD} http://localhost:5984/_up
{{- else }}
httpGet:
path: /_up
port: 5984
{{- end }}
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe:
{{- if .Values.couchdbConfig.chttpd.require_valid_user }}
exec:
command:
- sh
- -c
- curl -G --silent --fail -u ${COUCHDB_USER}:${COUCHDB_PASSWORD} http://localhost:5984/_up
{{- else }}
httpGet:
path: /_up
port: 5984
{{- end }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
volumeMounts:
- name: config-storage
mountPath: /opt/couchdb/etc/default.d
{{- if .Values.adminHash }}
- name: local-config-storage
mountPath: /opt/couchdb/etc/local.d
{{- end }}
- name: database-storage
mountPath: /opt/couchdb/data
{{- if .Values.containerSecurityContext }}
securityContext: {{ .Values.containerSecurityContext | toYaml | nindent 12 }}
{{- end }}
{{- if .Values.enableSearch }}
- name: clouseau
image: "{{ .Values.searchImage.repository }}:{{ .Values.searchImage.tag }}"
imagePullPolicy: {{ .Values.searchImage.pullPolicy }}
volumeMounts:
- name: database-storage
mountPath: /opt/couchdb-search/data
{{- if .Values.containerSecurityContext }}
securityContext: {{ .Values.containerSecurityContext | toYaml | nindent 12 }}
{{- end }}
{{- end }}
{{- if .Values.sidecars }}
{{ toYaml .Values.sidecars | indent 8}}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.topologySpreadConstraints }}
topologySpreadConstraints:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.priorityClassName }}
priorityClassName: {{ . | quote }}
{{- end }}
volumes:
- name: config-storage
emptyDir: {}
- name: config
configMap:
name: {{ template "couchdb.fullname" . }}
items:
- key: inifile
path: chart.ini
- key: seedlistinifile
path: seedlist.ini
{{- if .Values.prometheusPort.enabled }}
- key: prometheusinifile
path: prometheus.ini
{{- end }}
{{- if .Values.adminHash }}
- name: local-config-storage
emptyDir: {}
{{- end -}}
{{- if not .Values.persistentVolume.enabled }}
- name: database-storage
emptyDir: {}
{{- else if and .Values.persistentVolume.enabled .Values.persistentVolume.existingClaims }}
{{- range $claim := .Values.persistentVolume.existingClaims }}
- name: {{ $claim.volumeName }}
persistentVolumeClaim:
claimName: {{ $claim.claimName }}
{{- end }}
{{- else }}
{{- if .Values.persistentVolumeClaimRetentionPolicy.enabled }}
persistentVolumeClaimRetentionPolicy:
whenDeleted: {{ .Values.persistentVolumeClaimRetentionPolicy.whenDeleted }}
whenScaled: {{ .Values.persistentVolumeClaimRetentionPolicy.whenScaled }}
{{- end }}
volumeClaimTemplates:
- metadata:
{{- include "persistentVolume.metadata" (dict "context" .) | nindent 8 }}
spec:
{{- include "persistentVolume.spec" (dict "context" .) | nindent 8 }}
{{- end }}

charts/couchdb/values.yaml Normal file

@ -0,0 +1,316 @@
# -- the initial number of nodes in the CouchDB cluster.
clusterSize: 3
# -- If allowAdminParty is enabled the cluster will start up without any database
# administrator account; i.e., all users will be granted administrative
# access. Otherwise, the system will look for a Secret called
# <ReleaseName>-couchdb containing `adminUsername`, `adminPassword` and
# `cookieAuthSecret` keys. See the `createAdminSecret` flag.
# ref: https://kubernetes.io/docs/concepts/configuration/secret/
allowAdminParty: false
# Set this to true to automatically finalize the cluster after installation.
# It creates a post-install job that sends the {"action": "finish_cluster"}
# message to CouchDB and then creates the databases listed in defaultDatabases.
# Note that this job requires service.enabled to be true and, if you use adminHash,
# a valid adminPassword in the secret. Also pass the --wait flag on install so the
# job does not fail on its first attempt (helm install --wait ...)
autoSetup:
enabled: false
image:
repository: curlimages/curl
tag: latest
pullPolicy: Always
defaultDatabases:
- _global_changes
# -- If createAdminSecret is enabled a Secret called <ReleaseName>-couchdb will
# be created containing auto-generated credentials. Users who prefer to set
# these values themselves have a couple of options:
#
# 1) The `adminUsername`, `adminPassword`, `adminHash`, and `cookieAuthSecret`
# can be defined directly in the chart's values. Note that all of a chart's
# values are currently stored in plaintext in the release data (in Helm 3,
# a Secret in the release namespace).
#
# 2) This flag can be disabled and a Secret with the required keys can be
# created ahead of time.
createAdminSecret: true
# defaults to the chart fullname
extraSecretName: ""
adminUsernameKey: ""
adminPasswordKey: ""
cookieAuthSecretKey: ""
erlangCookieKey: ""
adminUsername: admin
# adminPassword: this_is_not_secure
# adminHash: -pbkdf2-this_is_not_necessarily_secure_either
# cookieAuthSecret: neither_is_this
## When enabled, will deploy a networkpolicy that allows CouchDB pods to
## communicate with each other for clustering and ingress on port 5984
networkPolicy:
enabled: true
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
# Use a service account
serviceAccount:
enabled: true
create: true
# name:
# imagePullSecrets:
# - name: myimagepullsecret
# -- The storage volume used by each Pod in the StatefulSet. If a
# persistentVolume is not enabled, the Pods will use `emptyDir` ephemeral
# local storage. Setting the storageClass attribute to "-" disables dynamic
# provisioning of Persistent Volumes; leaving it unset will invoke the default
# provisioner.
persistentVolume:
enabled: false
# NOTE: the number of existing claims must match the cluster size
existingClaims: []
annotations: {}
accessModes:
- ReadWriteOnce
size: 10Gi
# storageClass: "-"
# Experimental - FEATURE STATE: Kubernetes v1.27 [beta]
# Field controls if and how PVCs are deleted during the lifecycle
# of a StatefulSet
# ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention
persistentVolumeClaimRetentionPolicy:
enabled: false
whenScaled: Retain
whenDeleted: Retain
## The CouchDB image
image:
repository: couchdb
tag: 3.5.0
pullPolicy: IfNotPresent
## Experimental integration with Lucene-powered fulltext search
searchImage:
repository: kocolosk/couchdb-search
tag: 0.2.0
pullPolicy: IfNotPresent
# -- Flip this flag to include the Search container in each Pod
enableSearch: false
initImage:
repository: busybox
tag: latest
pullPolicy: Always
## CouchDB is happy to spin up cluster nodes in parallel, but if you encounter
## problems you can try setting podManagementPolicy to the StatefulSet default
## `OrderedReady`
podManagementPolicy: Parallel
## To better tolerate Node failures, we can prevent Kubernetes scheduler from
## assigning more than one Pod of CouchDB StatefulSet per Node using podAntiAffinity.
affinity: {}
# podAntiAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - labelSelector:
# matchExpressions:
# - key: "app"
# operator: In
# values:
# - couchdb
# topologyKey: "kubernetes.io/hostname"
## To control how Pods are spread across your cluster among failure-domains such as regions,
## zones, nodes, and other user-defined topology domains use topologySpreadConstraints.
topologySpreadConstraints: {}
# topologySpreadConstraints:
# - maxSkew: 1
# topologyKey: "topology.kubernetes.io/zone"
# whenUnsatisfiable: ScheduleAnyway
# labelSelector:
# matchLabels:
# app: couchdb
## Optional pod labels
labels: {}
## Optional pod annotations
annotations: {}
## Optional tolerations
tolerations: []
## A StatefulSet requires a headless Service to establish the stable network
## identities of the Pods, and that Service is created automatically by this
## chart without any additional configuration. The Service block below refers
## to a second Service that governs how clients connect to the CouchDB cluster.
service:
annotations: {}
enabled: true
type: ClusterIP
externalPort: 5984
targetPort: 5984
labels: {}
extraPorts: []
# - name: sqs
# port: 4984
# targetPort: 4984
# protocol: TCP
## If you need to expose any additional ports on the CouchDB container, for example
## if you're running CouchDB container with additional processes that need to
## be accessible outside of the pod, you can define them here.
extraPorts: []
# - name: sqs
# containerPort: 4984
## An Ingress resource can provide name-based virtual hosting and TLS
## termination among other things for CouchDB deployments which are accessed
## from outside the Kubernetes cluster.
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
ingress:
enabled: false
# className: nginx
hosts:
- chart-example.local
path: /
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
## Optional resource requests and limits for the CouchDB container
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 56
# memory: 256Gi
## Optional resource requests and limits for the CouchDB init container
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
initResources: {}
# requests:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 500m
# memory: 128Mi
# -- erlangFlags is a map that is passed to the Erlang VM as flags using the
# ERL_FLAGS env. The `name` flag is required to establish connectivity
# between cluster nodes.
# ref: http://erlang.org/doc/man/erl.html#init_flags
erlangFlags:
name: couchdb
# Older versions of the official CouchDB image (anything prior to 3.2.1)
# do not act on the COUCHDB_ERLANG_COOKIE environment variable, so if you
# want to cluster these deployments it's necessary to pass in a cookie here
# setcookie: make-something-up
# -- couchdbConfig will override default CouchDB configuration settings.
# The contents of this map are reformatted into a .ini file laid down
# by a ConfigMap object.
# ref: http://docs.couchdb.org/en/latest/config/index.html
couchdbConfig:
# couchdb:
# uuid: decafbaddecafbaddecafbaddecafbad # Unique identifier for this CouchDB server instance
# cluster:
# q: 8 # Create 8 shards for each database
chttpd:
bind_address: any
# When set to true, chttpd.require_valid_user rejects all anonymous
# requests on port 5984.
require_valid_user: false
# required to use Fauxton if chttpd.require_valid_user is set to true
# httpd:
# WWW-Authenticate: "Basic realm=\"administrator\""
# Kubernetes local cluster domain.
# This is used to generate FQDNs for peers when joining the CouchDB cluster.
dns:
clusterDomainSuffix: cluster.local
## Configure liveness and readiness probe values
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
livenessProbe:
enabled: true
failureThreshold: 3
initialDelaySeconds: 0
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
enabled: true
failureThreshold: 3
initialDelaySeconds: 0
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
# Control an optional pod disruption budget
podDisruptionBudget:
# toggle creation of pod disruption budget, disabled by default
enabled: false
# minAvailable: 1
maxUnavailable: 1
# CouchDB 3.2.0 adds a metrics endpoint on the path `/_node/_local/_prometheus`.
# Optionally, a standalone, unauthenticated port can be exposed for these metrics.
prometheusPort:
enabled: false
bind_address: "0.0.0.0"
port: 17986
# Optional lifecycle hooks defined by a lifecycle value map or parent template
# (e.g. for more complex use cases requiring variable interpolation by parent
# charts)
lifecycle: {}
lifecycleTemplate: false
# Optional environment variables defined by a values array or parent template
# (e.g. passing secrets as environment variables or for use in the statefulset)
extraEnv: []
extraEnvTemplate: false
# Configure arbitrary sidecar containers for CouchDB pods created by the
# StatefulSet
sidecars: {}
# - name: foo
# image: "busybox"
# imagePullPolicy: IfNotPresent
# resources:
# requests:
# cpu: "0.1"
# memory: 10Mi
# command: ['echo "foo";']
# volumeMounts:
# - name: database-storage
# mountPath: /opt/couchdb/data/
# Placement manager that annotates each document in the nodes DB with a "zone"
# attribute recording the zone where the node has been scheduled
# Ref: https://docs.couchdb.org/en/stable/cluster/sharding.html#specifying-database-placement
placementConfig:
enabled: false
image:
repository: caligrafix/couchdb-autoscaler-placement-manager
tag: 0.1.0
# Optional priority class to be used for CouchDB pods
priorityClassName: ""
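To illustrate how the `erlangFlags` map above reaches the containers, here is a minimal sketch of the `ERL_FLAGS` rendering done by the StatefulSet template (Go templates iterate maps in sorted key order; the cookie value is made up):

```python
# Sketch of the StatefulSet's ERL_FLAGS env rendering: each map entry
# becomes " -key value ", concatenated in sorted key order, matching
# {{ range $k, $v := .Values.erlangFlags }} -{{ $k }} {{ $v }} {{ end }}.
def erl_flags(flags):
    return "".join(f" -{k} {v} " for k, v in sorted(flags.items()))

print(erl_flags({"name": "couchdb", "setcookie": "monster"}))
```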


@ -0,0 +1,58 @@
#ENC[AES256_GCM,data:HWytc+3TCQFZA96ZyDnuv/Jmdk5sw29fll4M7x24rgEYRXwGBMWvm++uMA==,iv:9fSBQN3T/nJgzJaJtyjbvvp5HwwVIPoJbz/VEZxob6w=,tag:iPkuCwXs0mNTPMenTSCE1w==,type:comment]
clusterSize: ENC[AES256_GCM,data:HQ==,iv:x2++05uaCEAidTgGy3jJmQKe5oW4O5EuU+OuFJM5/TY=,tag:bV/Kwb2rREjyRvdXnjuZVw==,type:int]
#ENC[AES256_GCM,data:fL1uE2MbvNpgy0i86PlLWFSFZg+SmPG3ppPp5EnsY6IWPX+DTNeAsnfF+CVEcQ==,iv:NY/FORFaMCYfULtGfOe1v/AJCEiaJwJIKR7n4haN4g8=,tag:6FeQKCLF0IeNerxWk/t/Gg==,type:comment]
adminUsername: ENC[AES256_GCM,data:byVcSgU=,iv:qSIFNUJKxxcBV6fKhj7Wx+VMh/dQNyvTV64F2PebtcE=,tag:ukJQ0naytk4CNXlcx15TBA==,type:str]
adminPassword: ENC[AES256_GCM,data:Y9Hy+eLnQ/3B00WMzbZ3kHuW1A==,iv:M0C7T0v/cexOHny1PM/OAvK03P98KsHkULBmOSaY4ic=,tag:3+7QL4y4DFDg4imFRUwPMg==,type:str]
#ENC[AES256_GCM,data:vMd1Rg5Z29aDCaGvUj5ASPC3nuRLuXP1oAW0yL7tdzYfuexE1h5n2JPxKGfAUlY=,iv:pN3DuJd0yiMASDazJI9/gd1zcinyX7XaEhzaN6QBrtw=,tag:7SHEvsHXKOTxosyfmZBVgw==,type:comment]
persistentVolume:
enabled: ENC[AES256_GCM,data:B6qFEA==,iv:+kJ8PTJBylO22NiqbEodRpgbfMEIXro4ghJPZ9h+Nvw=,tag:yo7Qf4yiX8nU8z0yAYLacA==,type:bool]
accessModes:
- ENC[AES256_GCM,data:CEdO2uXROe0qE/9yFg==,iv:vQFcEbFLgOctTcmlwjZQsMUlyB1kx7VOIwIHRU8Ei2Q=,tag:nro0ZeExx9B0XASknllspA==,type:str]
size: ENC[AES256_GCM,data:LFNo0A==,iv:EutGBhFZyC8TMHr3dXA5ppH6tyV8WQpQz/V0oBpFRRw=,tag:pIhCzrYmzQqTypHDnHOxCg==,type:str]
#ENC[AES256_GCM,data:HsMbvVZSg+jtDx9z4lOnyOi54NuRgz/vo/quYXuTkg==,iv:4FeolSj66ylKV7/XqdVT1KmEDdv9A7NfqrPe7H4gGFg=,tag:TXyqiZJX+Nai8p9TVix3zw==,type:comment]
ingress:
enabled: ENC[AES256_GCM,data:qgfhjg==,iv:VFzXKleINe7mfcdYNR1iLYmhwveLhY33gDOrtogWIsk=,tag:UbpEtD3pMXsSCW+xaJ+OPQ==,type:bool]
hosts:
- ENC[AES256_GCM,data:PzvqPmG7czzN7TVj13CwwI9Fcg==,iv:mOxyvMH9fhZz34XVitCI6BnqjMMf+HAFXGv9ORckdD0=,tag:f0xRW/Oc+kHIvPWZ3X180Q==,type:str]
path: ENC[AES256_GCM,data:oQ==,iv:Epw1K6uKcdT4Tl9G36B5GItgh/bYY8awTOZTHHrglSA=,tag:vmhBn8JA2iJbmiySrLNFlg==,type:str]
annotations:
cert-manager.io/cluster-issuer: ENC[AES256_GCM,data:FeTb+PocsM4Gml6h8iyKS21qaLOl+MULpw==,iv:AYYgf99Fkpfxe9knbWosBk+RbG2hfHOoNeOQ3bDmWkE=,tag:hhk/w3j1XHNdC7jcIPId+g==,type:str]
kubernetes.io/ingress.allow-http: ENC[AES256_GCM,data:5uOVcQ==,iv:HX98XqHwR9hkrqwbHD8DSwWU7h9llGaXbigptEGcimk=,tag:Gpg6/RxqabTEqnju+pCINw==,type:str]
tls:
- hosts:
- ENC[AES256_GCM,data:8o0/FeLWmoKze28DShNzSwMrNA==,iv:6VMZpDJwigC4P+4YGGP9OBE6sqv/gj128L9Ihe8b+rI=,tag:PfbmNfmCU5N0ppk5UWQXuw==,type:str]
secretName: ENC[AES256_GCM,data:r/byvb0zs+8WIuk=,iv:fnt4chR+P9qHu6O1nS+7eF9jBwKZPUkRgLAnP3CBKao=,tag:5ACLoFBsMuoMPHZikaKmqg==,type:str]
#ENC[AES256_GCM,data:EzQF0ckNgQtrZ6oH7Rqde3MpLFHCR1Ns8pqkYG3ichXvMHa28k8OyW/fTNUgDdxH,iv:QxnUAI7WtLHAbyBNZ/sZ9SGW/oSt9opAe+8hZx9enLU=,tag:NmbauuhdgSETV5IJO/yw7Q==,type:comment]
autoSetup:
enabled: ENC[AES256_GCM,data:JNsaBA==,iv:Ksl2T7oqWkj3z23NQhj1awbNlbcwx8NgdnlBzM8OSok=,tag:1NW+h/21Tweag79hDUHHzA==,type:bool]
defaultDatabases:
- ENC[AES256_GCM,data:lzJ0XfJJzRCV1X3Nocw0,iv:wb9K5q9VxbFcmWl5XiajDq6gfbHUPD5zQfN+Lz6h7+o=,tag:Pav3ToyRN973u0k1UvVrsA==,type:str]
#ENC[AES256_GCM,data:h+GBpgOGVSk6hNpaHN5EpKfhfOecNQ==,iv:/3aCDxCT0GEZzJIVanaMci5Ffplz3B+FcECS7HAfaFo=,tag:qjmRICzB0AalJqFRaOwi7A==,type:comment]
couchdbConfig:
chttpd:
bind_address: ENC[AES256_GCM,data:dYZ5,iv:ro/OID3wEV2kr2TkSZ5Ra55Z3KvdClXvFP9I3MtgPME=,tag:pn0DXNIj2o7LdgfVfwZO8w==,type:str]
require_valid_user: ENC[AES256_GCM,data:4n2zkA==,iv:25GveBKnbk/pvjiFV3M/91lvMHEb900VO9Uz6TmAvcE=,tag:KYvcCScRKe1h2lPXdWnIVA==,type:bool]
#ENC[AES256_GCM,data:Km3FG3gnt166w1qlh88jeCJ/f7PhiuG6zEo=,iv:odseRB19TX3U3oeFcTIRI4euZPyANHF0AZmpvxyNHR0=,tag:F4FEAxP/sAD+I4B9I9Op+g==,type:comment]
prometheusPort:
enabled: ENC[AES256_GCM,data:xL7ciw==,iv:kzwtlwmMky+kbIkqCMlWjkS5Y3M4xDavrup30TN9T54=,tag:KE7QybrwvIz9qz2IULPpPw==,type:bool]
bind_address: ENC[AES256_GCM,data:AB8399107Q==,iv:ywAUjHkG8yma8S0fNTblJAaWvUzFDd6U0tY7T0c8JoA=,tag:/6QhjBFtnU4OBNSMyAhQGQ==,type:str]
port: ENC[AES256_GCM,data:yKOGmFM=,iv:6PvgzEyFTzlzVzlbIAO+22ZDNB65dZU4WcnyTDtmfcg=,tag:ly1Que6CFFGj1l3V+h5sKQ==,type:int]
#ENC[AES256_GCM,data:kx5pbQG10rFPm4Z7CIfalg==,iv:qQOGJZy93F25GynYi2xQ/pst1q8TdblzfBGGwWF2fPk=,tag:XVa3RnS5pYzhVll4smCxWQ==,type:comment]
serviceAccount:
enabled: ENC[AES256_GCM,data:7nKzmw==,iv:qjoE5YQ0VVGCaUYSRxMIkNmd0VNWARvLGuoJKHM6nUs=,tag:RB4dLUTTsEH9Z514ag8Y1w==,type:bool]
create: ENC[AES256_GCM,data:AOVrQw==,iv:2zH771nyulbC/BdKuRNcuzutaM0yQn4bJ0K4HAPhyAI=,tag:kJysGUGjW9IKBt+eaxFZ9w==,type:bool]
sops:
age:
- recipient: age1s476478zx2klmkst79paaucw9vec9gkfgjtmzhqzdffmpkkmmf4s5x0nu0
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSAyMFNzWVdqdXY2Nm9jTjdS
L0FFS3k4LzlZMFVicUtaVkpCcmlXMHE4OW5vCktLdlRGVnR3UnRXeFUyV2FscDAv
TUE1Vmp4ZWV4Mk9pTklobkYyME5uamcKLS0tIEE4TnpvR25ldUdscTBLeitpa08z
eUw1Z0xUNkVrREYyY0ZTek1MUDd4WlUKQzqmhi2xugOkFJ+svN/JC5xZSeMuWPbG
N+CQ5dROi6Ap7KwqJICpWCwHIxSNStUvjP29w9KoZ5wRMSSZMZmCNQ==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2025-10-13T05:53:21Z"
mac: ENC[AES256_GCM,data:fIk7OaEFqvZXxvXqJi7njz1ksLAW5lzQoF9R6zt0VoFpN6M48nZLx4nSq3fmNJNcAJReoFhA6YkjZCc/7Cl+sah72Jq4/MUxe44tndXkMxvkmjTH5RqWAVP895xkHOtyFrt821JcJxKCOFmx3SEf7WBDnlt6NnSwBvcaRsd0jeo=,iv:EaO1NbMp9Vkfyt0sjwS05x29Hvzyhn9BTjY23Ihjs54=,tag:mwHBWkmZNqpI9LM+TJuYQg==,type:str]
unencrypted_suffix: _unencrypted
version: 3.10.2