Public Information

Commit 0bd2038c86, 2025-01-24 16:18:47 +01:00
449 changed files with 108655 additions and 0 deletions

charts/rms/Chart.yaml (new file, 11 lines)

@@ -0,0 +1,11 @@
apiVersion: v2
name: nplus-component-rms
description: nplus Remote Management Server incl. RMS and Access Proxy
icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz4KPHN2ZyB2ZXJzaW9uPSIxLjEiIGlkPSJFYmVuZV8xIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB4PSIwcHgiIHk9IjBweCIKCSB2aWV3Qm94PSIwIDAgNTEuMDI0IDUxLjAyNCIgZW5hYmxlLWJhY2tncm91bmQ9Im5ldyAwIDAgNTEuMDI0IDUxLjAyNCIgeG1sOnNwYWNlPSJwcmVzZXJ2ZSI+CjxnPgoJPHBvbHlnb24gZmlsbD0iI0E0QkZFNCIgcG9pbnRzPSIzMi4zMjIsMTkuNzQ0IDIwLjY0OSwxOS43NDQgMTguNTkxLDMxLjQxNyAzMC4yNjQsMzEuNDE3IAkiLz4KCTxwb2x5Z29uIGZpbGw9IiNBNEJGRTQiIHBvaW50cz0iNDcuMTg1LDE5Ljc0NCAzNS41MTIsMTkuNzQ0IDMzLjQ1NCwzMS40MTcgNDUuMTI2LDMxLjQxNyAJIi8+Cgk8cG9seWdvbiBmaWxsPSIjQTRCRkU0IiBwb2ludHM9IjI5Ljc2NiwzNC41NTEgMTguMDk0LDM0LjU1MSAxNi4wMzUsNDYuMjI0IDI3LjcwOCw0Ni4yMjQgCSIvPgoJPHBvbHlnb24gZmlsbD0iI0E0QkZFNCIgcG9pbnRzPSIxNy41NywxOS43NDQgNS44OTcsMTkuNzQ0IDMuODM5LDMxLjQxNyAxNS41MTIsMzEuNDE3IAkiLz4KCTxwb2x5Z29uIGZpbGw9IiNBNEJGRTQiIHBvaW50cz0iMzUuMTUsNC43OTkgMjMuNDc3LDQuNzk5IDIxLjQxOSwxNi40NzIgMzMuMDkyLDE2LjQ3MiAJIi8+Cjwvc3ZnPgo=
type: application
dependencies:
  - name: nplus-globals
    alias: globals
    version: "*-0"
    repository: "file://../globals"
version: 1.0.0

charts/rms/README.md (new file, 228 lines)

@@ -0,0 +1,228 @@
# nplus-component-rms
nplus Remote Management Server incl. RMS and Access Proxy
## General Information
The RMS chart simulates a server on which all nscale components are installed. That server can then be accessed
via the nscale Administrator. The ingress for it is admin.<domain>.
This only works with RMS 2 and a new Administrator client from version 9.2 onwards.
## Bash into
```sh
kubectl exec --stdin --tty rms-rms-0 -- bash
```
## Logs
```sh
kubectl logs rms-rms-0 -c proxy
kubectl logs rms-rms-0 -c rms
```
## nplus-component-rms Chart Configuration
You can customize / configure nplus-component-rms by setting configuration values on the command line or in values files that you pass to helm. Please see the samples directory for details.
If a value is not set, the key is omitted from the manifest, and the component falls back to the values in its own config files.
### Template Functions
You can use template functions in the values files. If you do so, make sure you quote correctly (single quotes if the template contains double quotes,
or escaped quotes).
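For example, a value whose template itself contains double quotes can be wrapped in single quotes. The key and domain below are made up for illustration:

```yaml
administrator:
  ingress:
    # single quotes around the template, double quotes inside it
    domain: '{{ printf "%s.example.com" .instance.name }}'
```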
### Global Values
All values can be set per component, per instance or globally per environment.
Example: `global.ingress.domain` sets the domain at the instance level. You can still set a different domain for a single component, such as the administrator.
In that case, simply set `ingress.domain` for the administrator chart and that setting takes priority:
- Prio 1 - Component Level: `ingress.domain`
- Prio 2 - Instance Level: `global.ingress.domain`
- Prio 3 - Environment Level: `global.environment.ingress.domain`
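As a sketch, the three levels could look like this in a values file (all domains are made-up examples):

```yaml
global:
  environment:
    ingress:
      domain: dev.example.com     # Prio 3 - environment level
  ingress:
    domain: team-a.example.com    # Prio 2 - instance level

administrator:
  ingress:
    domain: admin.example.com     # Prio 1 - component level, wins for this chart
```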
### Using Values in Templates
As it would be a lot of typing to write `.Values.ingress.domain | default .Values.global.ingress.domain | default .Values.global.environment.ingress.domain` in your
template code, nplus does this automatically. You can simply type `.this.ingress.domain` and you will get a condensed and defaulted version
of your Values.
So an example in your `values.yaml` would be:
```yaml
administrator:
waitFor:
- '-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:\{{ .this.nappl.port }} -timeout 600'
```
This example shows `.this.nappl.port`, which might come from a component, instance or global setting; you do not need to care which.
The `.Release.Namespace` is set by helm. You have access to all Release and Chart Metadata, just like in your chart code.
The `.component.prefix` is calculated by nplus and gives you some handy shortcuts to internal variables:
- `.component.chartName`
The name of the chart as in `.Chart.Name`, but with override by `.Values.nameOverride`
- `.component.shortChartName`
A shorter version of the name, e.g. `nappl` instead of `nplus-component-nappl`
- `.component.prefix`
The instance prefix used to name the resources, including the trailing `-`. This prefix is dropped if
`.Release.Name` equals `.Release.Namespace`, for those of you who only
run one nplus instance per namespace
- `.component.name`
The name of the component, including `.Values.nameOverride` and some logic
- `.component.fullName`
The fullName including `.Values.fullnameOverride` and some logic
- `.component.chart`
Mainly the `Chart.Name` and `Chart.Version`
- `.component.storagePath`
The path where the component config is stored in the conf PVC
- `.component.handler`
The handler (either helm, argoCD or manual)
- `.instance.name`
The name of the instance, but with override by `.Values.instanceOverride`
- `.instance.group`
The group, this instance belongs to. Override by `.Values.groupOverride`
- `.instance.version`
The *nscale* version (mostly taken from the Application Layer) that this instance is deploying.
- `.environment.name`
The name of the environment, but with override by `.Values.environmentNameOverride`
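As a sketch, these shortcuts can be mixed into any templated value. The `env` key is documented in the table below; the variable names here are made up for illustration:

```yaml
administrator:
  env:
    # made-up variable names; the templates resolve to nplus-internal values
    - name: NSCALE_INSTANCE
      value: '{{ .instance.name }}'
    - name: NSCALE_COMPONENT
      value: '{{ .component.shortChartName }}'
    - name: NSCALE_STAGE
      value: '{{ .environment.name }}'
```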
### Keys
You can set any of the following values for this component:
| Key | Description | Default |
|-----|-------------|---------|
**comps**&#8203;.cmis&#8203;.displayName | The display name of the component as it appears in the RMS Server Properties <br>do not change | **info only**, do not change<br> `"CMIS Connector"` |
**comps**&#8203;.cmis&#8203;.enabled | Toggles whether this component should be available through RMS | `false` |
**comps**&#8203;.cmis&#8203;.host | The host where this component runs | `"{{ .component.prefix }}cmis.{{ .Release.Namespace }}.svc.cluster.local"` |
**comps**&#8203;.cmis&#8203;.name | The internal name of the component <br>do not change | **info only**, do not change<br> `"cmis"` |
**comps**&#8203;.cmis&#8203;.ports&#8203;.http | proxied port <br>do not change | **info only**, do not change<br> `8096` |
**comps**&#8203;.cmis&#8203;.ports&#8203;.https | proxied port <br>do not change | **info only**, do not change<br> `8196` |
**comps**&#8203;.cmis&#8203;.replicaSetType | The type of the replicaSet - important for the kubectl command <br>do not change | **info only**, do not change<br> `"Deployment"` |
**comps**&#8203;.cmis&#8203;.restartReplicas | The number of replicas to set when starting through the *nscale Administrator* client | `1` |
**comps**&#8203;.ilm&#8203;.displayName | The display name of the component as it appears in the RMS Server Properties <br>do not change | **info only**, do not change<br> `"SAP ILM Connector"` |
**comps**&#8203;.ilm&#8203;.enabled | Toggles whether this component should be available through RMS | `false` |
**comps**&#8203;.ilm&#8203;.host | The host where this component runs | `"{{ .component.prefix }}ilm.{{ .Release.Namespace }}.svc.cluster.local"` |
**comps**&#8203;.ilm&#8203;.name | The internal name of the component <br>do not change | **info only**, do not change<br> `"ilm"` |
**comps**&#8203;.ilm&#8203;.ports&#8203;.http | proxied port <br>do not change | **info only**, do not change<br> `8297` |
**comps**&#8203;.ilm&#8203;.ports&#8203;.https | proxied port <br>do not change | **info only**, do not change<br> `8397` |
**comps**&#8203;.ilm&#8203;.replicaSetType | The type of the replicaSet - important for the kubectl command <br>do not change | **info only**, do not change<br> `"Deployment"` |
**comps**&#8203;.ilm&#8203;.restartReplicas | The number of replicas to set when starting through the *nscale Administrator* client | `1` |
**comps**&#8203;.mon&#8203;.displayName | The display name of the component as it appears in the RMS Server Properties <br>do not change | **info only**, do not change<br> `"Monitoring Console"` |
**comps**&#8203;.mon&#8203;.enabled | Toggles whether this component should be available through RMS | `false` |
**comps**&#8203;.mon&#8203;.host | The host where this component runs | `"{{ .component.prefix }}mon.{{ .Release.Namespace }}.svc.cluster.local"` |
**comps**&#8203;.mon&#8203;.name | The internal name of the component <br>do not change | **info only**, do not change<br> `"mon"` |
**comps**&#8203;.mon&#8203;.ports&#8203;.http | proxied port <br>do not change | **info only**, do not change<br> `8387` |
**comps**&#8203;.mon&#8203;.ports&#8203;.https | proxied port <br>do not change | **info only**, do not change<br> `8388` |
**comps**&#8203;.mon&#8203;.ports&#8203;.tcp | proxied port <br>do not change | **info only**, do not change<br> `8389` |
**comps**&#8203;.mon&#8203;.replicaSetType | The type of the replicaSet - important for the kubectl command <br>do not change | **info only**, do not change<br> `"StatefulSet"` |
**comps**&#8203;.mon&#8203;.restartReplicas | The number of replicas to set when starting through the *nscale Administrator* client | `1` |
**comps**&#8203;.nappl&#8203;.displayName | The display name of the component as it appears in the RMS Server Properties <br>do not change | **info only**, do not change<br> `"Application Layer"` |
**comps**&#8203;.nappl&#8203;.enabled | Toggles whether this component should be available through RMS | `false` |
**comps**&#8203;.nappl&#8203;.host | The host where this component runs | `"{{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local"` |
**comps**&#8203;.nappl&#8203;.name | The internal name of the component <br>do not change | **info only**, do not change<br> `"nappl"` |
**comps**&#8203;.nappl&#8203;.ports&#8203;.http | proxied port <br>do not change | **info only**, do not change<br> `8080` |
**comps**&#8203;.nappl&#8203;.ports&#8203;.https | proxied port <br>do not change | **info only**, do not change<br> `8443` |
**comps**&#8203;.nappl&#8203;.replicaSetType | The type of the replicaSet - important for the kubectl command <br>do not change | **info only**, do not change<br> `"StatefulSet"` |
**comps**&#8203;.nappl&#8203;.restartReplicas | The number of replicas to set when starting through the *nscale Administrator* client | `1` |
**comps**&#8203;.nstl&#8203;.displayName | The display name of the component as it appears in the RMS Server Properties <br>do not change | **info only**, do not change<br> `"Storage Layer"` |
**comps**&#8203;.nstl&#8203;.enabled | Toggles whether this component should be available through RMS | `false` |
**comps**&#8203;.nstl&#8203;.host | The host where this component runs | `"{{ .component.prefix }}nstl.{{ .Release.Namespace }}.svc.cluster.local"` |
**comps**&#8203;.nstl&#8203;.name | The internal name of the component <br>do not change | **info only**, do not change<br> `"nstl"` |
**comps**&#8203;.nstl&#8203;.ports&#8203;.tcp | proxied port <br>do not change | **info only**, do not change<br> `3005` |
**comps**&#8203;.nstl&#8203;.ports&#8203;.tcps | proxied port <br>do not change | **info only**, do not change<br> `3006` |
**comps**&#8203;.nstl&#8203;.replicaSetType | The type of the replicaSet - important for the kubectl command <br>do not change | **info only**, do not change<br> `"StatefulSet"` |
**comps**&#8203;.nstl&#8203;.restartReplicas | The number of replicas to set when starting through the *nscale Administrator* client | `1` |
**comps**&#8203;.pipeliner&#8203;.displayName | The display name of the component as it appears in the RMS Server Properties <br>do not change | **info only**, do not change<br> `"Pipeliner"` |
**comps**&#8203;.pipeliner&#8203;.enabled | Toggles whether this component should be available through RMS | `false` |
**comps**&#8203;.pipeliner&#8203;.host | The host where this component runs | `"{{ .component.prefix }}pipeliner.{{ .Release.Namespace }}.svc.cluster.local"` |
**comps**&#8203;.pipeliner&#8203;.name | The internal name of the component <br>do not change | **info only**, do not change<br> `"pipeliner"` |
**comps**&#8203;.pipeliner&#8203;.ports&#8203;.tcp | proxied port <br>do not change | **info only**, do not change<br> `4173` |
**comps**&#8203;.pipeliner&#8203;.replicaSetType | The type of the replicaSet - important for the kubectl command <br>do not change | **info only**, do not change<br> `"StatefulSet"` |
**comps**&#8203;.pipeliner&#8203;.restartReplicas | The number of replicas to set when starting through the *nscale Administrator* client | `1` |
**comps**&#8203;.rs&#8203;.displayName | The display name of the component as it appears in the RMS Server Properties <br>do not change | **info only**, do not change<br> `"Rendition Server"` |
**comps**&#8203;.rs&#8203;.enabled | Toggles whether this component should be available through RMS | `false` |
**comps**&#8203;.rs&#8203;.host | The host where this component runs | `"{{ .component.prefix }}rs.{{ .Release.Namespace }}.svc.cluster.local"` |
**comps**&#8203;.rs&#8203;.name | The internal name of the component <br>do not change | **info only**, do not change<br> `"rs"` |
**comps**&#8203;.rs&#8203;.ports&#8203;.http | proxied port <br>do not change | **info only**, do not change<br> `8192` |
**comps**&#8203;.rs&#8203;.ports&#8203;.https | proxied port <br>do not change | **info only**, do not change<br> `8193` |
**comps**&#8203;.rs&#8203;.replicaSetType | The type of the replicaSet - important for the kubectl command <br>do not change | **info only**, do not change<br> `"Deployment"` |
**comps**&#8203;.rs&#8203;.restartReplicas | The number of replicas to set when starting through the *nscale Administrator* client | `1` |
**comps**&#8203;.web&#8203;.displayName | The display name of the component as it appears in the RMS Server Properties <br>do not change | **info only**, do not change<br> `"Application Layer Web"` |
**comps**&#8203;.web&#8203;.enabled | Toggles whether this component should be available through RMS | `false` |
**comps**&#8203;.web&#8203;.host | The host where this component runs | `"{{ .component.prefix }}web.{{ .Release.Namespace }}.svc.cluster.local"` |
**comps**&#8203;.web&#8203;.name | The internal name of the component <br>do not change | **info only**, do not change<br> `"web"` |
**comps**&#8203;.web&#8203;.ports&#8203;.http | proxied port <br>do not change | **info only**, do not change<br> `8090` |
**comps**&#8203;.web&#8203;.ports&#8203;.https | proxied port <br>do not change | **info only**, do not change<br> `8453` |
**comps**&#8203;.web&#8203;.replicaSetType | The type of the replicaSet - important for the kubectl command <br>do not change | **info only**, do not change<br> `"Deployment"` |
**comps**&#8203;.web&#8203;.restartReplicas | The number of replicas to set when starting through the *nscale Administrator* client | `1` |
env | Sets additional environment variables for the configuration. | |
envMap | Sets the name of a configMap, which holds additional environment variables for the configuration. It is added as envFrom configMap to the container. | |
envSecret | Sets the name of a secret, which holds additional environment variables for the configuration. It is added as envFrom secretRef to the container. | |
fullnameOverride | This overrides the output of the internal fullname function | |
**image**&#8203;.name | the name of the image to use | `"admin-server"` |
**image**&#8203;.pullSecrets | you can provide your own pullSecrets, in case you use a private repo. | `["nscale-cr", "nplus-cr"]` |
**image**&#8203;.repo | if you use a private repo, feel free to set it here | `"git.nplus.cloud/subscription"` |
**image**&#8203;.tag | the tag of the image to use | `"latest"` |
**meta**&#8203;.language | Sets the language of the main service (in the *service* container). This is used for instance if you turn OpenTelemetry on, to know which Agent to inject into the container. | |
**meta**&#8203;.provider | sets provider (partner, reseller) information to be able to invoice per use in a cloud environment | |
**meta**&#8203;.serviceContainer | The container name of the main service for this component. This is used to define where to inject the telemetry agents, if any | |
**meta**&#8203;.stage | An optional parameter to indicate the stage (DEV, QA, PROD, ...) this component, instance or environment runs in. This can be used in template functions to add the stage to, for instance, the service name of telemetry services like Open Telemetry. (see telemetry example) | |
**meta**&#8203;.tenant | sets tenant information to be able to invoice per use in a cloud environment | |
**meta**&#8203;.type | the type of the component. You should not change this value, except if you use a pipeliner in core mode. In core mode, it should be *core*, else *pipeliner*. This type is used to create cluster communication for nappl and nstl and potentially group multiple replicaSets into one service. | `"rms"` |
**meta**&#8203;.wave | Sets the wave in which this component should be deployed within an ArgoCD deployment. If unset, it uses the default wave; all components are then installed in one wave, relying on correct wait settings just like in a helm installation | |
minReplicaCountType | if you set minReplicaCountType, a podDisruptionBudget will be created with this value as minAvailable, using the component type as selector. This is useful for components that are spread across multiple replicaSets, like sharepoint or storage layer | |
**mounts**&#8203;.caCerts&#8203;.configMap | Alternative 2: the name of the configMap to use. The Key has to be the File Name used in the path setting | |
**mounts**&#8203;.caCerts&#8203;.secret | Alternative 1: the name of the secret to use. The Key has to be the File Name used in the path setting | |
**mounts**&#8203;.componentCerts&#8203;.configMap | Alternative 2: the name of the configMap to use. The Key has to be the File Name used in the path setting | |
**mounts**&#8203;.componentCerts&#8203;.secret | Alternative 1: the name of the secret to use. The Key has to be the File Name used in the path setting | |
**mounts**&#8203;.data&#8203;.class | Sets the class of the data disk | |
**mounts**&#8203;.data&#8203;.size | Sets the size of the data disk | |
**mounts**&#8203;.data&#8203;.volumeName | If you do not want to have a Volume created by the provisioner, you can set the name of your volume here to attach to this pre-existing one | |
**mounts**&#8203;.disk&#8203;.class | Sets the class of the disk | |
**mounts**&#8203;.disk&#8203;.enabled | enables the use of the second data disk. If enabled, all paths defined will end up on this disk. If disabled (the default), the paths will be added to the primary data disk. | `false` |
**mounts**&#8203;.disk&#8203;.migration | Enables the migration init container. This will copy the data in paths from the primary data disk to the newly enabled secondary disk. This is done only once and only if there is legacy data at all. No files are overwritten! | `false` |
**mounts**&#8203;.disk&#8203;.size | Sets the size of the disk | |
**mounts**&#8203;.disk&#8203;.volumeName | If you do not want to have a Volume created by the provisioner, you can set the name of your volume here to attach to this pre-existing one | |
**mounts**&#8203;.file&#8203;.class | Sets the class of the shared disk | |
**mounts**&#8203;.file&#8203;.size | Sets the size of the shared disk | |
**mounts**&#8203;.file&#8203;.volumeName | If you do not want to have a Volume created by the provisioner, you can set the name of your volume here to attach to this pre-existing one | |
**mounts**&#8203;.generic | Allows you to define generic mounts of pre-provisioned PVs into any container. This can be used e.g. to mount migration NFS or CIFS / Samba shares into a pipeliner container. | |
**mounts**&#8203;.logs&#8203;.medium | the medium for the emptyDir volume. If you unset it, it is dropped from the manifest | |
**mounts**&#8203;.logs&#8203;.path | Sets the path to the log files <br>do not change this value | **info only**, do not change<br> `"/opt/ceyoniq/nscale-rms/log"` |
**mounts**&#8203;.logs&#8203;.size | Sets the size of the log disk (all paths) | `"1Gi"` |
**mounts**&#8203;.temp&#8203;.paths | Sets a list of paths to the temporary files <br>do not change this value | **info only**, do not change<br> `["/opt/ceyoniq/nscale-rms/tmp"]` |
**mounts**&#8203;.temp&#8203;.size | Sets the size of the temporary disk (all paths) | `"100Mi"` |
nameOverride | This overrides the output of the internal name function | |
nodeSelector | select specific nodes for this component | |
**security**&#8203;.cni&#8203;.adminIpRange | defines the IP Range of out-of-cluster Administrator Workplaces that are allowed to access the RMS Server. | |
**security**&#8203;.containerSecurityContext&#8203;.allowPrivilegeEscalation | Some functionality may need the possibility to allow privilege escalation. This should be very restrictive <br>you should not change this | **info only**, do not change<br> `false` |
**security**&#8203;.containerSecurityContext&#8203;.readOnlyRootFilesystem | sets the container root file system to read only. This should be the case in production environment <br>you should not change this | **info only**, do not change<br> `true` |
**security**&#8203;.podSecurityContext&#8203;.fsGroup | The file system group as which new files are created <br>there is normally no need to change this | **info only**, do not change<br> `1001` |
**security**&#8203;.podSecurityContext&#8203;.fsGroupChangePolicy | Under which condition should the fsGroup be changed <br>there is normally no need to change this | **info only**, do not change<br> `"OnRootMismatch"` |
**security**&#8203;.podSecurityContext&#8203;.runAsUser | The user under which the container is run. Avoid 0 / root. The container should run in a non-root context for security <br>there is normally no need to change this | **info only**, do not change<br> `1001` |
**security**&#8203;.zeroTrust | turns on *Zero Trust* Mode, disabling *all* http communication, even the internal http probes | `false` |
**service**&#8203;.annotations | adds extra Annotations to the service | |
**service**&#8203;.enabled | enables the service to be consumed by group components and a potential ingress. Disabling the service also disables the ingress. | `true` |
**service**&#8203;.selector | The selector can be `component` or `type`. *component* selects only pods that are in the replicaSet; *type* selects any pod that has the given type | `"component"` |
**telemetry**&#8203;.openTelemetry | turns Open Telemetry on | |
**telemetry**&#8203;.serviceName | Sets the service name for the telemetry service to more conveniently identify the displayed component. Example: "{{ .this.meta.type }}-{{ .instance.name }}" | |
terminationGracePeriodSeconds | Sets the terminationGracePeriodSeconds for the component. If not set, it uses the Kubernetes defaults | |
timezone | set the time zone for this component to make sure log output has a specific timestamp, internal dates and times are correct (like the creationDate in nappl) etc. | `Europe/Berlin` |
tolerations | Set tolerations for this component | |
**utils**&#8203;.debug | Turning debugging *on* will give you a stack trace etc. Please check out the Chart Developer Guide | `false` |
**utils**&#8203;.disableWait | in case you use the argoCD Wave feature, you might think about switching off the waitFor mechanism that makes sure pods are only started after pre-requisites are fulfilled. You can disable the standard wait mechanism, but at your own risk, as this might start components even if they are not intended to run yet. | `false` |
**utils**&#8203;.disableWave | If you use argoCD, you most likely want to use the argo Wave Feature as well, making sure the components of an instance are deployed in order. However, in DEV you might want to disable this to allow live changing components while previous waves are not finished yet. | `false` |
**utils**&#8203;.includeNamespace | By default, the namespace is rendered into the manifest. However, if you want to use `helm template` and store manifests for later applying them to multiple namespaces, you might want to turn this `false` to be able to use `kubectl apply -n <namespace> -f template.yaml` later | `true` |
**utils**&#8203;.maintenance | in Maintenance Mode, all *waitFor* actions are skipped, the *Health Checks* are ignored and the pods start idle, not starting the service at all. This allows you to gain access to the container to perform recovery and maintenance tasks while having the real container up. | `false` |
**utils**&#8203;.renderComments | You can turn comment rendering *on* to get descriptive information inside the manifests. It will also fail on deprecated functions and keys, so it is recommended to only switch it off in PROD | `true` |
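Putting it together, a minimal values-file sketch using keys from the table above might look like this:

```yaml
# enable two components to be reachable through the RMS access proxy
comps:
  nappl:
    enabled: true
  web:
    enabled: true

image:
  tag: "latest"

timezone: Europe/Berlin
```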


@@ -0,0 +1,2 @@
{{- include "nplus.init" $ -}}
{{- include "nplus.component" . -}}


@@ -0,0 +1,18 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .component.fullName }}-haproxy
{{- if .this.utils.includeNamespace }}
  namespace: {{ .Release.Namespace }}
{{- end }}
  labels:
    {{- include "nplus.instanceLabels" . | nindent 4 }}
  annotations:
    {{- include "nplus.argoWave" . | nindent 4 }}
    {{- include "nplus.annotations" . | nindent 4 }}
    {{- include "nplus.securityAnnotations" . | nindent 4 }}
data:
{{- range $path, $bytes := .Files.Glob "haproxy/*" }}
{{- base $path | nindent 2 }}: |
{{- tpl ($.Files.Get $path) $ | nindent 4 }}
{{- end }}


@@ -0,0 +1,18 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .component.fullName }}-repository
{{- if .this.utils.includeNamespace }}
  namespace: {{ .Release.Namespace }}
{{- end }}
  labels:
    {{- include "nplus.instanceLabels" . | nindent 4 }}
  annotations:
    {{- include "nplus.argoWave" . | nindent 4 }}
    {{- include "nplus.annotations" . | nindent 4 }}
    {{- include "nplus.securityAnnotations" . | nindent 4 }}
data:
{{- range $path, $bytes := .Files.Glob "repository/*" }}
{{- base $path | nindent 2 }}: |
{{- tpl ($.Files.Get $path) $ | nindent 4 }}
{{- end }}


@@ -0,0 +1,42 @@
{{- include "nplus.init" $ -}}
{{- if ((.this.security).cni).createNetworkPolicy }}
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ .component.fullName }}
{{- if .this.utils.includeNamespace }}
  namespace: {{ .Release.Namespace }}
{{- end }}
  labels:
    {{- include "nplus.instanceLabels" . | nindent 4 }}
  annotations:
    {{- include "nplus.argoWave" . | nindent 4 }}
    {{- include "nplus.annotations" . | nindent 4 }}
    {{- include "nplus.securityAnnotations" . | nindent 4 }}
spec:
  podSelector:
    matchLabels:
      {{- include "nplus.selectorLabels" . | nindent 6 }}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        # Access from out of Cluster (Admin Desktop)
        - ipBlock:
            cidr: {{ ((.this.security).cni).adminIpRange | quote }}
  egress:
    - to:
        # All Pods in Instance
        - podSelector:
            matchLabels:
              nplus/group: {{ .instance.group }}
    # Allow API Access
    - ports:
        - protocol: TCP
          port: 16443
        - protocol: TCP
          port: 443
{{- end }}
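This NetworkPolicy is only rendered when `security.cni.createNetworkPolicy` is set. A minimal values sketch (the IP range is a made-up example):

```yaml
security:
  cni:
    createNetworkPolicy: true
    # out-of-cluster Administrator workstations allowed to reach the RMS server
    adminIpRange: 10.20.0.0/24
```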


@@ -0,0 +1,2 @@
{{- include "nplus.init" $ -}}
{{- include "nplus.podDisruptionBudget" . -}}


@@ -0,0 +1,53 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .component.fullName }}-svc-account
{{- if .this.utils.includeNamespace }}
  namespace: {{ .Release.Namespace }}
{{- end }}
  labels:
    {{- include "nplus.instanceLabels" . | nindent 4 }}
  annotations:
    {{- include "nplus.argoSharedResource" . | nindent 4 }}
    {{- include "nplus.annotations" . | nindent 4 }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ .component.fullName }}-role
{{- if .this.utils.includeNamespace }}
  namespace: {{ .Release.Namespace }}
{{- end }}
  labels:
    {{- include "nplus.instanceLabels" . | nindent 4 }}
  annotations:
    {{- include "nplus.argoSharedResource" . | nindent 4 }}
    {{- include "nplus.annotations" . | nindent 4 }}
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "deployments/scale", "statefulsets", "statefulsets/scale", "replicasets"]
    verbs: ["get", "patch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ .component.fullName }}-role-binding
{{- if .this.utils.includeNamespace }}
  namespace: {{ .Release.Namespace }}
{{- end }}
  labels:
    {{- include "nplus.instanceLabels" . | nindent 4 }}
  annotations:
    {{- include "nplus.argoSharedResource" . | nindent 4 }}
    {{- include "nplus.annotations" . | nindent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ .component.fullName }}-role
subjects:
  - kind: ServiceAccount
    name: {{ .component.fullName }}-svc-account


@@ -0,0 +1,43 @@
apiVersion: v1
kind: Service
metadata:
{{- if .this.utils.includeNamespace }}
  namespace: {{ .Release.Namespace }}
{{- end }}
  name: {{ .component.fullName }}-admin
  labels:
    {{- include "nplus.instanceLabels" . | nindent 4 }}
  annotations:
    {{- include "nplus.argoWave" . | nindent 4 }}
    {{- include "nplus.annotations" . | nindent 4 }}
    {{- include "nplus.securityAnnotations" . | nindent 4 }}
    {{- include "nplus.serviceAnnotations" . | nindent 4 }}
spec:
  selector:
    {{- if eq .this.service.selector "component" }}
    {{- include "nplus.selectorLabels" . | nindent 4 }}
    {{- else if eq .this.service.selector "type" }}
    {{- include "nplus.selectorLabelsNc" . | nindent 4 }}
    {{- else }}
    {{- fail (printf "Unknown Service Selector Type: %s - must be component or type" .this.service.selector) }}
    {{- end }}
  type: LoadBalancer
  {{- if .Values.externalIp }}
  loadBalancerIP: {{ .Values.externalIp }}
  {{- end }}
  ports:
    - protocol: TCP
      port: 3120
      targetPort: 3120
      name: rms
    {{- range $ckey, $component := .Values.comps }}
    {{- if $component.enabled }}
    {{- range $pkey, $port := .ports }}
    - protocol: TCP
      port: {{ $port }}
      targetPort: {{ $port }}
      name: {{ $ckey }}-{{ $pkey }}
    {{- end }}
    {{- end }}
    {{- end }}


@@ -0,0 +1,128 @@
{{- include "nplus.init" $ -}}
{{- range $key, $component := .Values.comps }}
{{- range $port := .ports }}
# {{ $key }}/{{ $port }}
{{- end }}
{{- end }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .component.fullName }}
{{- if .this.utils.includeNamespace }}
  namespace: {{ .Release.Namespace }}
{{- end }}
  labels:
    {{- include "nplus.instanceLabels" . | nindent 4 }}
  annotations:
    {{- include "nplus.argoWave" . | nindent 4 }}
    {{- include "nplus.annotations" . | nindent 4 }}
    {{- include "nplus.securityAnnotations" . | nindent 4 }}
spec:
  serviceName: {{ .component.fullName }}
  selector:
    matchLabels:
      {{- include "nplus.selectorLabels" . | nindent 6 }}
  replicas: {{ .Values.replicaCount }}
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: {{ .Values.updateStrategy | default "OnDelete" }}
  minReadySeconds: 5
  template:
    metadata:
      labels:
        {{- include "nplus.templateLabels" . | nindent 8 }}
      annotations:
        {{- include "nplus.templateAnnotations" . | nindent 8 }}
        {{- include "nplus.securityAnnotations" . | nindent 8 }}
    spec:
      serviceAccountName: {{ .component.fullName }}-svc-account
      {{- include "nplus.imagePullSecrets" . | nindent 6 }}
      {{- include "nplus.podSecurityContext" . | nindent 6 }}
      {{- include "nplus.securityIllumioReadinessGates" . | nindent 6 }}
      {{- include "nplus.terminationGracePeriodSeconds" . | nindent 6 }}
      initContainers:
      {{- include "nplus.waitFor" . | nindent 6 }}
      containers:
      - name: rms
        image: {{ include "nplus.image" (dict "global" .Values.global "image" .Values.image) }}
        imagePullPolicy: {{ include "nplus.imagePullPolicy" .Values.image }}
        {{- include "nplus.containerSecurityContext" . | nindent 8 }}
        command: ["/opt/ceyoniq/nscale-rms/bin/rms.bin"]
        ports:
        - containerPort: 3120
          name: rms
        {{- include "nplus.resources" . | nindent 8 }}
        volumeMounts:
        {{- include "nplus.defaultMounts" . | nindent 8 }}
        - name: conf
          subPath: {{ .this.instance.name | quote }}
          mountPath: /conf
        {{- if ($.this.ingress).domain }}
        - name: cert
          subPath: tls.crt
          mountPath: "/opt/ceyoniq/nscale-rms/bin/tls.cer"
          readOnly: true
        - name: cert
          subPath: tls.key
          mountPath: "/opt/ceyoniq/nscale-rms/bin/tls.key"
          readOnly: true
        {{- end }}
        - name: repository
          mountPath: /etc/ceyoniq/nscale-rms/repository
      - name: proxy
        image: {{ include "nplus.image" (dict "global" .Values.global "image" .Values.image) }}
        imagePullPolicy: {{ include "nplus.imagePullPolicy" .Values.image }}
        {{- include "nplus.containerSecurityContext" . | nindent 8 }}
        command: ["haproxy", "-f", "/etc/haproxy/haproxy.cfg", "-d"]
        ports:
        {{- range $ckey, $component := .Values.comps }}
        {{- if $component.enabled }}
        {{- range $pkey, $port := .ports }}
        - containerPort: {{ $port }}
          name: {{ $ckey }}-{{ $pkey }}
          protocol: TCP
        {{- end }}
        {{- end }}
        {{- end }}
        {{- include "nplus.resources" . | nindent 8 }}
        volumeMounts:
        {{- include "nplus.defaultMounts" . | nindent 8 }}
        - name: haproxy
          subPath: haproxy.cfg
          mountPath: /etc/haproxy/haproxy.cfg
      volumes:
      {{- include "nplus.defaultVolumes" . | nindent 6 }}
      - name: conf
        persistentVolumeClaim:
          claimName: conf
      {{- if ($.this.ingress).domain }}
      - name: cert
        secret:
          secretName: {{ ($.this.ingress).secret }}
      {{- end }}
      - name: repository
        configMap:
          name: {{ .component.fullName }}-repository
          defaultMode: 0777
      - name: haproxy
        configMap:
          name: {{ .component.fullName }}-haproxy
          defaultMode: 0777

File diff suppressed because it is too large

charts/rms/values.yaml
# yaml-language-server: $schema=values.schema.json
comps:
# -- Values for the nappl component
nappl:
# -- The internal name of the component
# @internal -- do not change
name: nappl
# -- The display name of the component as it appears in the RMS Server Properties
# @internal -- do not change
displayName: "Application Layer"
# -- The number of replicas to set when starting through the *nscale Administrator* client
restartReplicas: 1
# -- The type of the replicaSet - important for the kubectl command
# @internal -- do not change
replicaSetType: StatefulSet
# -- Toggles if this component should be available through RMS
enabled: false
# -- The ports exposed by the L4 Load Balancer / Reverse Proxy
# @internal -- do not change
ports:
# -- proxied port
# @internal -- do not change
http: 8080
# -- proxied port
# @internal -- do not change
https: 8443
# -- The host where this component runs
host: "{{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local"
nstl:
# -- The internal name of the component
# @internal -- do not change
name: nstl
# -- The display name of the component as it appears in the RMS Server Properties
# @internal -- do not change
displayName: "Storage Layer"
# -- The number of replicas to set when starting through the *nscale Administrator* client
restartReplicas: 1
# -- The type of the replicaSet - important for the kubectl command
# @internal -- do not change
replicaSetType: StatefulSet
# -- Toggles if this component should be available through RMS
enabled: false
# -- The ports exposed by the L4 Load Balancer / Reverse Proxy
# @internal -- do not change
ports:
# -- proxied port
# @internal -- do not change
tcp: 3005
# -- proxied port
# @internal -- do not change
tcps: 3006
# -- The host where this component runs
host: "{{ .component.prefix }}nstl.{{ .Release.Namespace }}.svc.cluster.local"
rs:
# -- The internal name of the component
# @internal -- do not change
name: rs
# -- The display name of the component as it appears in the RMS Server Properties
# @internal -- do not change
displayName: "Rendition Server"
# -- The number of replicas to set when starting through the *nscale Administrator* client
restartReplicas: 1
# -- The type of the replicaSet - important for the kubectl command
# @internal -- do not change
replicaSetType: Deployment
# -- Toggles if this component should be available through RMS
enabled: false
# -- The ports exposed by the L4 Load Balancer / Reverse Proxy
# @internal -- do not change
ports:
# -- proxied port
# @internal -- do not change
http: 8192
# -- proxied port
# @internal -- do not change
https: 8193
# -- The host where this component runs
host: "{{ .component.prefix }}rs.{{ .Release.Namespace }}.svc.cluster.local"
mon:
# -- The internal name of the component
# @internal -- do not change
name: mon
# -- The display name of the component as it appears in the RMS Server Properties
# @internal -- do not change
displayName: "Monitoring Console"
# -- The number of replicas to set when starting through the *nscale Administrator* client
restartReplicas: 1
# -- The type of the replicaSet - important for the kubectl command
# @internal -- do not change
replicaSetType: StatefulSet
# -- Toggles if this component should be available through RMS
enabled: false
# -- The ports exposed by the L4 Load Balancer / Reverse Proxy
# @internal -- do not change
ports:
# -- proxied port
# @internal -- do not change
http: 8387
# -- proxied port
# @internal -- do not change
https: 8388
# -- proxied port
# @internal -- do not change
tcp: 8389 # rmi
# -- The host where this component runs
host: "{{ .component.prefix }}mon.{{ .Release.Namespace }}.svc.cluster.local"
ilm:
# -- The internal name of the component
# @internal -- do not change
name: ilm
# -- The display name of the component as it appears in the RMS Server Properties
# @internal -- do not change
displayName: "SAP ILM Connector"
# -- The number of replicas to set when starting through the *nscale Administrator* client
restartReplicas: 1
# -- The type of the replicaSet - important for the kubectl command
# @internal -- do not change
replicaSetType: Deployment
# -- Toggles if this component should be available through RMS
enabled: false
# -- The ports exposed by the L4 Load Balancer / Reverse Proxy
# @internal -- do not change
ports:
# -- proxied port
# @internal -- do not change
http: 8297
# -- proxied port
# @internal -- do not change
https: 8397
# -- The host where this component runs
host: "{{ .component.prefix }}ilm.{{ .Release.Namespace }}.svc.cluster.local"
cmis:
# -- The internal name of the component
# @internal -- do not change
name: cmis
# -- The display name of the component as it appears in the RMS Server Properties
# @internal -- do not change
displayName: "CMIS Connector"
# -- The number of replicas to set when starting through the *nscale Administrator* client
restartReplicas: 1
# -- The type of the replicaSet - important for the kubectl command
# @internal -- do not change
replicaSetType: Deployment
# -- Toggles if this component should be available through RMS
enabled: false
# -- The ports exposed by the L4 Load Balancer / Reverse Proxy
# @internal -- do not change
ports:
# -- proxied port
# @internal -- do not change
http: 8096
# -- proxied port
# @internal -- do not change
https: 8196
# -- The host where this component runs
host: "{{ .component.prefix }}cmis.{{ .Release.Namespace }}.svc.cluster.local"
web:
# -- The internal name of the component
# @internal -- do not change
name: web
# -- The display name of the component as it appears in the RMS Server Properties
# @internal -- do not change
displayName: "Application Layer Web"
# -- The number of replicas to set when starting through the *nscale Administrator* client
restartReplicas: 1
# -- The type of the replicaSet - important for the kubectl command
# @internal -- do not change
replicaSetType: Deployment
# -- Toggles if this component should be available through RMS
enabled: false
# -- The ports exposed by the L4 Load Balancer / Reverse Proxy
# @internal -- do not change
ports:
# -- proxied port
# @internal -- do not change
http: 8090
# -- proxied port
# @internal -- do not change
https: 8453
# -- The host where this component runs
host: "{{ .component.prefix }}web.{{ .Release.Namespace }}.svc.cluster.local"
pipeliner:
# -- The internal name of the component
# @internal -- do not change
name: pipeliner
# -- The display name of the component as it appears in the RMS Server Properties
# @internal -- do not change
displayName: "Pipeliner"
# -- The number of replicas to set when starting through the *nscale Administrator* client
restartReplicas: 1
# -- The type of the replicaSet - important for the kubectl command
# @internal -- do not change
replicaSetType: StatefulSet
# -- Toggles if this component should be available through RMS
enabled: false
# -- The ports exposed by the L4 Load Balancer / Reverse Proxy
# @internal -- do not change
ports:
# -- proxied port
# @internal -- do not change
tcp: 4173 # for admin and mon
# -- The host where this component runs
host: "{{ .component.prefix }}pipeliner.{{ .Release.Namespace }}.svc.cluster.local"
meta:
# -- the type of the component. You should not change this value, except if
# you use a pipeliner in core mode: in core mode it should be *core*, otherwise *pipeliner*.
# This type is used to create cluster communication for nappl and nstl and potentially
# to group multiple replicaSets into one service.
type: rms
# -- lists the ports this component exposes. This is important for zero trust mode, among others.
ports:
# -- The http port this component uses (if any). In zero trust mode, this will be disabled.
# @internal -- this is a constant value of the component and should not be changed.
http:
# -- The tls / https port, this component uses (if any)
# @internal -- this is a constant value of the component and should not be changed.
https:
# -- A potential tcp port, this component uses (if any)
# @internal -- this is a constant value of the component and should not be changed.
tcp:
# -- A potential tls / tcps port, this component uses (if any)
# @internal -- this is a constant value of the component and should not be changed.
tcps:
# -- A potential rmi port, this component uses (if any)
# @internal -- this is a constant value of the component and should not be changed.
rmi:
# -- sets tenant information to be able to invoice per use in a cloud environment
tenant:
# -- sets provider (partner, reseller) information to be able to invoice per use in a cloud environment
provider:
# -- Sets the wave in which this component should be deployed within an ArgoCD deployment.
# If unset, it uses the default wave, so all components are installed in one wave,
# relying on correct wait settings just like in a helm installation
wave:
# -- Sets the language of the main service (in the *service* container). This is used for instance
# if you turn OpenTelemetry on, to know which Agent to inject into the container.
language:
# -- The container name of the main service for this component. This is used to define where to
# inject the telemetry agents, if any
serviceContainer:
# -- An optional parameter to indicate the stage (DEV, QA, PROD, ...) this component, instance or environment
# runs in. This can be used in template functions to add the stage to for instance the service name of
# telemetry services like open telemetry. (see telemetry example)
stage:
# -- This is the version of the component, used for display
# @internal -- set by devOps pipeline, so do not modify
componentVersion:
# -- the replicaCount for the RMS Server. Scaling it does not make sense, so
# leave this at 1 at all times, unless you know exactly what you are doing.
# @ignore
replicaCount: 1
mounts:
# -- The log volume is used to capture any left-over logging in the container.
# The container should log to stdout, but if any component still tries to log to disk,
# this disk needs to be writable
logs:
# -- Sets the size of the log disk (all paths)
size: "1Gi"
# -- the medium for the emptyDir volume;
# if you unset it, it is dropped from the manifest
medium:
# -- Sets the path to the log files
# @internal -- do not change this value
path: "/opt/ceyoniq/nscale-rms/log"
# -- Sets a list of paths to the log files
# @internal -- do not change this value
paths:
# -- The temp volume is used to hold any superfluous and temporary data.
# It is deleted when the pod terminates. However, it is extremely important,
# as all pod filesystems are read-only
temp:
# -- Sets a list of paths to the temporary files
# @internal -- do not change this value
paths:
- "/opt/ceyoniq/nscale-rms/tmp"
# -- Sets the size of the temporary disk (all paths)
size: "100Mi"
# -- Sets the path to the temporary files
# @internal -- do not change this value
path:
# -- The conf volume is a RWX volume mounted by the environment, that holds
# all configurations of all instances and components in this environment
conf:
# -- Sets the path to the conf files
# @internal -- do not change this value
path:
# -- Sets a list of paths to the conf files
# @internal -- do not change this value
paths:
# -- some nscale components require a license file, and this
# defines its location
license:
# -- Sets the path to the license files
# @internal -- do not change this value
path:
# -- Use this if you want additional
# fonts like the msttcorefonts (Microsoft Core Fonts). It mounts the
# fonts directory from the environment pool
fonts:
# -- Sets the path to the fonts folder.
# @internal -- do not change this value
path:
# -- You can add a file with trusted root certificates (e.g. Azure) to be able to
# connect to external services via https. If you have a self-signed root certificate,
# you can also add it here.
caCerts:
# -- Sets the path to the certs folder.
# @internal -- do not change this value
paths:
# -- Alternative 1: the name of the secret to use. The Key has to be the File Name used in the path setting
secret:
# -- Alternative 2: the name of the configMap to use. The Key has to be the File Name used in the path setting
configMap:
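# A hypothetical sketch of how this could be filled (all names are
# illustrative, not defaults), assuming a secret "my-root-ca" whose key
# "ca-bundle.crt" matches the file name used in the path setting:
# caCerts:
#   paths:
#     - /etc/ssl/certs/ca-bundle.crt
#   secret: my-root-ca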
# -- the java based nscale components have their own certificates, that you might want to upload.
# You can normally do so via the environment configuration, but should you want to use a secret,
# you can set it here
componentCerts:
# -- Sets the path to the component certs.
# @internal -- do not change this value
paths:
# -- Alternative 1: the name of the secret to use. The Key has to be the File Name used in the path setting
secret:
# -- Alternative 2: the name of the configMap to use. The Key has to be the File Name used in the path setting
configMap:
data:
# -- Sets the size of the data disk
size:
# -- Sets the class of the data disk
class:
# -- Sets the path to the data files
# @internal -- do not change this value
path:
# -- Sets a list of paths to the data files
# @internal -- do not change this value
paths:
# -- If you do not want to have a Volume created by the provisioner,
# you can set the name of your volume here to attach to this pre-existing one
volumeName:
file:
# -- Sets the size of the shared disk
size:
# -- Sets the class of the shared disk
class:
# -- Sets the path to the shared files
# @internal -- do not change this value
path:
# -- Sets a list of paths to the shared files
# @internal -- do not change this value
paths:
# -- If you do not want to have a Volume created by the provisioner,
# you can set the name of your volume here to attach to this pre-existing one
volumeName:
pool:
# -- Sets the path to a directory where the `pool` folder from the `conf` volume should be mounted.
# This is used to store scripts, apps and assets that are required to deploy an application / solution
# @internal -- do not change this value
path:
# -- The temp volume is used to hold any superfluous and temporary data.
# It is deleted when the pod terminates. However, it is extremely important,
# as all pod filesystems are read-only
ptemp:
# -- Sets the path for temporary files that are persisted
# @internal -- do not change this value
path:
# -- Sets a list of paths for temporary files that are persisted
# @internal -- do not change this value
paths:
# -- Allows you to define generic mounts of pre-provisioned PVs into any container.
# This can be used e.g. to mount migration NFS or CIFS / Samba shares into a pipeliner container.
generic:
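# A hypothetical sketch (names are illustrative), assuming a pre-provisioned
# PV "migration-nfs" that should appear at /mnt/migration inside the container:
# generic:
#   migration:
#     path: /mnt/migration
#     volumeName: migration-nfs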
disk:
# -- Sets the size of the disk
size:
# -- Sets the class of the disk
class:
# -- Sets the path to the disk files
# @internal -- do not change this value
path:
# -- Sets a list of paths to the data files
# @internal -- do not change this value
paths:
# -- If you do not want to have a Volume created by the provisioner,
# you can set the name of your volume here to attach to this pre-existing one
volumeName:
# -- enables the use of the second data disk. If enabled, all paths defined will end up on this disk.
# If disabled (the default), the paths will be added to the primary data disk.
enabled: false
# -- Enables the migration init container. This will copy the data in paths from the primary data disk to the newly enabled secondary disk.
# This is done only once and only if there is legacy data at all. No files are overwritten!
migration: false
# -- provide the image to be used for this component
image:
# -- you can provide your own pullSecrets, in case you use
# a private repo.
pullSecrets:
- nscale-cr
- nplus-cr
# -- the name of the image to use
name: admin-server
# -- the tag of the image to use
tag: latest
# -- if you use a private repo, feel free to set it here
repo: git.nplus.cloud/subscription
pullPolicy: IfNotPresent
# -- Security Section defining default runtime environment for your container
security:
cni:
# -- defines the IP Range of out-of-cluster Administrator Workplaces that are
# allowed to access the RMS Server.
adminIpRange:
podSecurityContext:
# -- The user under which the container is run. Avoid 0 / root; the container should run in a non-root context
# for security
# @internal -- there is normally no need to change this
runAsUser: 1001
# -- The file system group as which new files are created
# @internal -- there is normally no need to change this
fsGroup: 1001
# -- The condition under which the fsGroup should be changed
# @internal -- there is normally no need to change this
fsGroupChangePolicy: OnRootMismatch
containerSecurityContext:
# -- sets the container root file system to read only. This should be the case in production environment
# @internal -- you should not change this
readOnlyRootFilesystem: true
# -- Some functionality may need the possibility to allow privilege escalation. This should be very restrictive
# @internal -- you should not change this
allowPrivilegeEscalation: false
# -- Capabilities this container should have. Only allow the necessity, and drop as many as possible
# @internal -- you should not change this
capabilities:
drop:
- ALL
# -- turns on *Zero Trust* Mode, disabling *all* http communication, even the internal http probes
# @default -- `false`
zeroTrust:
# # <id>:
# # path: <the path in the container, where you want to mount this>
# # volumeName: <the name of the PV to be mounted>
# # subPath: <an (optional) subpath to be used inside the PV>
# -- set the time zone for this component to make sure log output has the expected timestamp
# and internal dates and times (like the creationDate in nappl) are correct
# @default -- `Europe/Berlin`
timezone:
# -- Set tolerations for this component
tolerations:
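# A minimal sketch with hypothetical values, using the standard Kubernetes
# toleration fields:
# tolerations:
#   - key: "dedicated"
#     operator: "Equal"
#     value: "nscale"
#     effect: "NoSchedule"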
# -- select specific nodes for this component
nodeSelector:
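# A minimal sketch using a well-known Kubernetes node label:
# nodeSelector:
#   kubernetes.io/os: linux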
# -- Sets the name of a secret, which holds additional environment variables for
# the configuration. It is added as envFrom secretRef to the container.
envSecret:
# -- Sets the name of a configMap, which holds additional environment variables for
# the configuration. It is added as envFrom configMap to the container.
envMap:
# -- Sets additional environment variables for
# the configuration.
env:
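# A minimal sketch with a hypothetical variable, assuming the usual
# Kubernetes name/value list form:
# env:
#   - name: JAVA_OPTS
#     value: "-Xmx1g"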
# -- This overrides the output of the internal name function
nameOverride:
# -- This overrides the output of the internal fullname function
fullnameOverride:
utils:
# -- Turn debugging *on* will give you stack trace etc.
# Please check out the Chart Developer Guide
# @default -- `false`
debug:
# -- You can turn comment rendering *on* to get descriptive information inside the manifests. It
# will also fail on deprecated functions and keys, so it is recommended to only switch it off in PROD
# @default -- `true`
renderComments:
# -- By default, the namespace is rendered into the manifest. However, if you want to use
# `helm template` and store manifests for later applying them to multiple namespaces, you might
# want to turn this `false` to be able to use `kubectl apply -n <namespace> -f template.yaml` later
# @default -- `true`
includeNamespace:
# -- in Maintenance Mode, all *waitFor* actions are skipped, the *Health Checks* are ignored, and the
# pods start idle, without starting the service at all. This allows you to access the container
# to perform recovery and maintenance tasks while the real container is up.
# @default -- `false`
maintenance:
# -- If you use argoCD, you most likely want to use the argo Wave feature as well, making sure the components
# of an instance are deployed in order. However, in DEV you might want to disable this to allow changing components live
# while previous waves have not finished yet.
# @default -- `false`
disableWave:
# -- in case you use the argoCD Wave feature, you might consider switching off the waitFor mechanism, which makes sure pods are
# only started after prerequisites are fulfilled. You can disable the standard wait mechanism, but at your own risk, as this might
# start components even if they are not intended to run yet.
# @default -- `false`
disableWait:
service:
# -- enables the service to be consumed by group components and a potential ingress
# Disabling the service also disables the ingress.
enabled: true
# -- The selector can be `component` or `type`:
# *component* selects only pods that are in the replicaSet;
# *type* selects any pod that has the given type
selector: "component"
# -- adds extra Annotations to the service
annotations:
# -- if you set minReplicaCountType, a podDisruptionBudget will be created with this value as
# minAvailable, using the component type as selector. This is useful for components that are spread
# across multiple replicaSets, like sharepoint or storage layer
minReplicaCountType:
# -- Settings for telemetry tools
telemetry:
# -- turns Open Telemetry on
openTelemetry:
# -- Sets the service name for the telemetry service to more conveniently
# identify the displayed component
# Example: "{{ .this.meta.type }}-{{ .instance.name }}"
serviceName:
# -- Sets the terminationGracePeriodSeconds for the component
# If not set, it uses the Kubernetes defaults
terminationGracePeriodSeconds: