nplus/charts/instance/README.md
2025-01-24 16:18:47 +01:00


# nplus-instance

nplus Instance, an umbrella chart that orchestrates the components of an nplus instance.

## Single Instance Mode

If you want to separate tenants on your system not only by instance but also by environment / namespace, you can run nplus in single instance mode.

SIM (Single Instance Mode) lets you deploy your instance, including all components of the environment, in one single chart.

Steps to turn on Single Instance Mode:

- Create your namespace
- Upload the secrets you need to access the repos and registries, as well as the nscale license file
- Turn on the `sim` components in your instance values file
- Deploy your instance (under the same name as your namespace) to the new namespace

In this case, no separate deployment of the environment is necessary, and the environment components will show up as part of the instance.
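The `sim` switches from step three live under `components.sim` in the instance values file. A minimal sketch (key names as listed in the Keys section below; enable only what you need):

```yaml
# Single Instance Mode: deploy the environment components together with the instance
components:
  sim:
    backend: true    # common storages / PVCs for conf and ptemp
    dav: true        # WebDAV access to the conf and ptemp volumes
    operator: true   # allows e.g. `kubectl get nscale`
    toolbox: true    # git client for pulling assets into the pool
```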

Please also see the Single-Instance-Mode example for a detailed how-to.

## nplus-instance Chart Configuration

You can customize / configure nplus-instance by setting configuration values on the command line or in values files that you pass to helm. Please see the samples directory for details.

If no value is set for a key, the key will not be used in the manifest, and the value is taken from the config files of the component instead.

### Template Functions

You can use template functions in the values files. If you do so, make sure you quote correctly: use single quotes if the template contains double quotes, or escape the inner quotes.
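For example, a template that itself contains double quotes needs the surrounding YAML string in single quotes (the keys here are hypothetical, just to illustrate the quoting):

```yaml
# single quotes outside, double quotes inside the template
example: '{{ printf "%s-suffix" .instance.name }}'
# or escape the inner quotes within a double-quoted string
exampleEscaped: "{{ printf \"%s-suffix\" .instance.name }}"
```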

### Global Values

All values can be set per component, per instance or globally per environment.

Example: `global.ingress.domain` sets the domain at instance level. You can still set a different domain on a component, such as administrator. In that case, simply set `ingress.domain` for the administrator chart, and that setting will have priority:

- Prio 1 - Component Level: `ingress.domain`
- Prio 2 - Instance Level: `global.ingress.domain`
- Prio 3 - Environment Level: `global.environment.ingress.domain`
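Expressed in a values file, the three levels look like this (the domains are placeholders); with all three set, the administrator component resolves to `admin.example.com`:

```yaml
global:
  environment:
    ingress:
      domain: env.example.com        # Prio 3: environment level
  ingress:
    domain: instance.example.com     # Prio 2: instance level
administrator:
  ingress:
    domain: admin.example.com        # Prio 1: component level wins
```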

### Using Values in Templates

As it would be a lot of typing to write `.Values.ingress.domain | default .Values.global.ingress.domain | default .Values.global.environment.ingress.domain` in your template code, nplus does this for you automatically. You can simply type `.this.ingress.domain` and you will get a condensed and defaulted version of your values.
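The lookup chain behind `.this` amounts to a coalesce over the three levels. A minimal sketch in Python, with plain dicts standing in for the values tree (illustrative only, not the actual nplus template code):

```python
def lookup(values: dict, path: str):
    """Walk a dotted path through nested dicts; None if any hop is missing."""
    node = values
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

def this(values: dict, path: str):
    """Resolve a key the way `.this` does: component, then instance, then environment."""
    for prefix in ("", "global.", "global.environment."):
        found = lookup(values, prefix + path)
        if found is not None:
            return found
    return None

values = {
    "global": {
        "ingress": {"domain": "instance.example.com"},
        "environment": {"ingress": {"domain": "env.example.com"}},
    },
}
# no component-level value is set, so the instance-level one wins
print(this(values, "ingress.domain"))
```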

An example in your values.yaml would be:

```yaml
administrator:
  waitFor:
    - '-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 600'
```

This example shows `.this.nappl.port`, which might come from a component, instance or global setting; you do not need to care which. `.Release.Namespace` is set by helm. You have access to all Release and Chart metadata, just like in your chart code.

The `.component.prefix` is calculated by nplus, which also gives you some other handy shortcuts to internal variables:

- `.component.chartName`: the name of the chart as in `.Chart.Name`, but with override by `.Values.nameOverride`
- `.component.shortChartName`: a shorter version of the name (`nappl` instead of `nplus-component-nappl`)
- `.component.prefix`: the instance prefix used to name the resources, including the trailing `-`. This prefix is dropped if `.Release.Name` equals `.Release.Namespace`, for those of you who run only one nplus instance per namespace
- `.component.name`: the name of the component, including `.Values.nameOverride` and some logic
- `.component.fullName`: the full name, including `.Values.fullnameOverride` and some logic
- `.component.chart`: mainly the `.Chart.Name` and `.Chart.Version`
- `.component.storagePath`: the path where the component config is stored in the conf PVC
- `.component.handler`: the handler (either helm, argoCD or manual)
- `.instance.name`: the name of the instance, with override by `.Values.instanceOverride`
- `.instance.group`: the group this instance belongs to, with override by `.Values.groupOverride`
- `.instance.version`: the nscale version this instance is deploying (mostly taken from the Application Layer)
- `.environment.name`: the name of the environment, with override by `.Values.environmentNameOverride`
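The prefix-dropping rule for `.component.prefix` can be sketched in a few lines; a hedged approximation of the behavior described above, not the chart's actual template code:

```python
def component_prefix(release_name: str, namespace: str) -> str:
    """Instance prefix for resource names; dropped when the release is the
    only nplus instance in its namespace (release name == namespace name)."""
    if release_name == namespace:
        return ""
    return release_name + "-"

# two instances in one namespace keep their resources apart by prefix
print(component_prefix("finance", "prod"))   # "finance-"
# single-instance namespace: no prefix
print(component_prefix("prod", "prod"))      # ""
```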

### Keys

You can set any of the following values for this component:

| Key | Description | Default |
| --- | --- | --- |
| `administrator.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "administrator" |
| `administrator.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `administrator.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1201" |
| `administrator.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1201" |
| `administrator.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 9 |
| `administrator.waitFor` | defines which condition must be met before this component starts | `["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]` |
| `application.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "application-layer" |
| `application.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `application.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1300.2024121814" |
| `application.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1300" |
| `application.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 11 |
| `application.nstl.host` | sets the DNS name of the nscale Server Storage Layer that should be configured | `"{{ .component.prefix }}nstl.{{ .Release.Namespace }}"` |
| `application.rs.host` | sets the DNS name of the nscale Rendition Server that should be configured | `"{{ .component.prefix }}rs.{{ .Release.Namespace }}"` |
| `application.waitFor` | defines which condition must be met before this component starts | `["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]` |
| `backend.meta.componentVersion` |  | "1.2.1400-124" |
| `backend.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 1 |
| `cmis.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "cmis-connector" |
| `cmis.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `cmis.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1200.2024112508" |
| `cmis.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1200" |
| `cmis.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 8 |
| `cmis.waitFor` | defines which condition must be met before this component starts | `["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]` |
| `components.administrator` | enable an nscale Administrator Web component in this instance | false |
| `components.application` | deploy any solution using GBA, Standard Apps or shell copy with this generic deployment chart | false |
| `components.cmis` | enable an nscale CMIS Connector component in this instance | false |
| `components.database` | enable an internal Postgres database in this instance | true |
| `components.dmsapi` |  | false |
| `components.erpcmis` | enable an nscale ERP CMIS Connector component in this instance | false |
| `components.erpproxy` | enable an nscale ERP Proxy Connector component in this instance | false |
| `components.ilm` | enable an nscale ILM Connector component in this instance | false |
| `components.mon` | enable an nscale Monitoring Console component in this instance | false |
| `components.nappl` | enable a consumer nscale Application Layer component in this instance | true |
| `components.nappljobs` | enable a dedicated jobs nscale Application Layer component in this instance (please also make sure to set the jobs setting) | false |
| `components.nstl` | enable an nscale Server Storage Layer component in this instance (if you are in a High Availability scenario, disable this) | true |
| `components.nstla` | enable an additional nscale Server Storage Layer node in this instance within a High Availability scenario | false |
| `components.nstlb` | enable an additional nscale Server Storage Layer node in this instance within a High Availability scenario | false |
| `components.nstlc` | enable an additional nscale Server Storage Layer node in this instance within a High Availability scenario | false |
| `components.nstld` | enable an additional nscale Server Storage Layer node in this instance within a High Availability scenario | false |
| `components.pam` | enable an nscale Process Automation Modeler component in this instance | false |
| `components.pipeliner` | enable an nscale Pipeliner component in this instance | false |
| `components.prepper` | download, deploy and run any git asset or script prior to installation of the components | false |
| `components.rms` | enable an nplus Remote Management Server component in this instance (if you are in a High Availability scenario, disable this) | false |
| `components.rmsa` | enable an additional nplus Remote Management Server in this instance within a High Availability scenario | false |
| `components.rmsb` | enable an additional nplus Remote Management Server in this instance within a High Availability scenario | false |
| `components.rs` | enable an nscale Rendition Server component in this instance | true |
| `components.sharepoint` | enable an nscale Sharepoint Connector component in this instance | false |
| `components.sharepointa` | enable an additional nscale Sharepoint Connector component in this instance for another set of configuration parameters | false |
| `components.sharepointb` | enable an additional nscale Sharepoint Connector component in this instance for another set of configuration parameters | false |
| `components.sharepointc` | enable an additional nscale Sharepoint Connector component in this instance for another set of configuration parameters | false |
| `components.sharepointd` | enable an additional nscale Sharepoint Connector component in this instance for another set of configuration parameters | false |
| `components.sim.backend` | Single-Instance-Mode only; read the docs before enabling. The backend component holds the common storages / PVCs for conf and ptemp, among other common environmental resources | false |
| `components.sim.dav` | Single-Instance-Mode only; read the docs before enabling. DAV gives you WebDAV access to your conf and ptemp volumes | false |
| `components.sim.operator` | Single-Instance-Mode only; read the docs before enabling. The Operator lets you query the Custom Resources for nscale, e.g. `kubectl get nscale` | false |
| `components.sim.toolbox` | Single-Instance-Mode only; read the docs before enabling. The toolbox has a git client installed and is suitable for pulling, pushing and copying assets into the pool: fonts, certificates, snippets and configuration files | false |
| `components.web` | enable an nscale Web component in this instance | true |
| `components.webdav` | enable an nscale WebDAV Connector component in this instance | false |
| `database.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "bitnami/postgresql" |
| `database.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons |  |
| `database.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "16" |
| `database.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "16" |
| `database.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 3 |
| `dmsapi.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 8 |
| `dmsapi.waitFor` | defines which condition must be met before this component starts | `["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]` |
| `erpcmis.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "erp-cmis-connector" |
| `erpcmis.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `erpcmis.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.2.1000.2024032720" |
| `erpcmis.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.2.1000" |
| `erpcmis.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 8 |
| `erpcmis.waitFor` | defines which condition must be met before this component starts | `["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]` |
| `erpproxy.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "sap-proxy-connector" |
| `erpproxy.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/pre-release/nscale" |
| `erpproxy.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1000.2024092409" |
| `erpproxy.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1000" |
| `erpproxy.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 8 |
| `erpproxy.waitFor` | defines which condition must be met before this component starts | `["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]` |
| `global.database.account` | DB account (if not using a secret) | "nscale" |
| `global.database.dialect` | nscale DB server dialect | "PostgreSQL" |
| `global.database.driverclass` | nscale DB server driverclass | "org.postgresql.Driver" |
| `global.database.name` | name of the nscale DB | "nscale" |
| `global.database.password` | DB password (if not using a secret) | "nscale" |
| `global.database.passwordEncoded` | whether the password is stored encrypted | "false" |
| `global.database.schema` | DB schema name | "public" |
| `global.database.secret` | DB credential secret (account, password) |  |
| `global.database.url` | the URL of the database | `"jdbc:postgresql://{{ .component.prefix }}database:5432/{{ .this.database.name }}"` |
| `global.ingress.appRoot` | sets the root for this instance, to which incoming root traffic is redirected | "/nscale_web" |
| `global.ingress.class` | sets the global ingress class for all components to use if they do not define a specific one, for example if there are separate controllers for internal and external traffic | "public" |
| `global.ingress.createSelfSignedCertificate` | if you do not define an issuer to generate the tls secret for you, you can still have a self-signed certificate generated by setting this to true. The default is true, so whether you have an issuer or not, you always end up with a certificate. Set an empty issuer and createSelfSignedCertificate to false to have no certificate generated and use an external or existing secret; then make sure the secret matches | true |
| `global.ingress.domain` | sets the global domain within the instance, used if the component does not define a domain. If this remains empty, no ingress is generated | Example: `{{ .instance.group }}.lab.nplus.cloud` |
| `global.ingress.issuer` | sets the name of the issuer that creates the tls secret. Very commonly it is created by cert-manager; please see the documentation on how to create a cert-manager cluster issuer, for example. If no issuer is set, no certificate request is generated |  |
| `global.ingress.namespace` | specifies the namespace in which the ingress controller runs. This sets the firewall rule / networkPolicy to allow traffic from this namespace to our pods. May be a comma-separated list | ingress, kube-system, ingress-nginx |
| `global.ingress.secret` | sets the name of the tls secret used for this ingress, containing the private and public key. This secret is either generated by cert-manager, self-signed by helm, or not created | `{{ .this.ingress.domain }}-tls` |
| `global.ingress.whitelist` | optionally sets a whitelist of IP ranges (CIDR format, comma-separated) from which ingress is allowed. This is an annotation for nginx, so it won't work with other ingress controllers |  |
| `global.instance.group` | the group of the instance, used for the networkPolicies. Only pods within one group are allowed to communicate if you enable the nplus network policies. By default, this is the same as the instance name |  |
| `global.instance.name` | the name of the instance. Should this name be identical to the namespace name, the prefix is dropped. By default, this is the `.Release.Name` | `"{{ .Release.Name }}"` |
| `global.license` | globally sets the license secret name | "nscale-license" |
| `global.logForwarderImage.name` | defines the name of the log forwarder image | "fluent-bit" |
| `global.logForwarderImage.pullPolicy` | defines the pull policy for the log forwarder image | "IfNotPresent" |
| `global.logForwarderImage.repo` | defines the repo of the log forwarder image | "cr.fluentbit.io/fluent" |
| `global.logForwarderImage.tag` | defines the tag for the log forwarder (FluentBit) (set by the devOps pipeline; info only, do not change) | "2.0" |
| `global.meta.nscaleVersion` | sets the nscale version of this deployment / instance; used by the operator to display the correct version, e.g. in the Web UI (set by the devOps pipeline; info only, do not change) | "9.3.1300" |
| `global.nappl.account` | the technical account to log in with | "admin" |
| `global.nappl.domain` | the domain of the technical account | "nscale" |
| `global.nappl.host` | sets the nscale Server Application Layer host to be used. As this is a global option, it can be overridden at component level | `"{{ .component.prefix }}nappl.{{ .Release.Namespace }}"` |
| `global.nappl.instance` | the instance of nscale Server Application Layer to be used. As this is deprecated for nscale 10, you should never modify it (info only, do not change) | "nscalealinst1" |
| `global.nappl.password` | the password of the technical account (if not set by secret) | "admin" |
| `global.nappl.port` | sets the nscale Server Application Layer port to be used. As this is a global option, it can be overridden at component level. If you switch to zero trust mode or change the nappl backend to https, you will want to change this port to 8443 | 8080 |
| `global.nappl.secret` | an optional secret that holds the credentials (the keys must be account and password) |  |
| `global.nappl.ssl` | whether to use ssl for the advanced connector | false |
| `global.security.cni.administratorInstance` | sets the instance from which administration is allowed | `"{{ .this.instance.name }}"` |
| `global.security.cni.administratorNamespace` | sets the namespace from which administration is allowed | `"{{ .Release.Namespace }}"` |
| `global.security.cni.createNetworkPolicy` | creates NetworkPolicies for each component |  |
| `global.security.cni.defaultEgressPolicy` | if defined, creates a default NetworkPolicy to handle egress traffic from the instance. Possible values: deny, allow, none |  |
| `global.security.cni.defaultIngressPolicy` | if defined, creates a default NetworkPolicy to handle ingress traffic to the instance. Possible values: deny, allow, none |  |
| `global.security.cni.monitoringInstance` | sets the instance from which monitoring is allowed | `"{{ .this.instance.name }}"` |
| `global.security.cni.monitoringNamespace` | sets the namespace from which monitoring is allowed | `"{{ .Release.Namespace }}"` |
| `global.security.cni.pamInstance` | sets the instance from which Process Automation Modeling is allowed | `"{{ .this.instance.name }}"` |
| `global.security.cni.pamNamespace` | sets the namespace from which Process Automation Modeling is allowed | `"{{ .Release.Namespace }}"` |
| `global.security.zeroTrust` | enables zero trust on the instance. When enabled, no unencrypted http connection is allowed; this removes all http ports from pods, services, network policies and ingress rules |  |
| `global.telemetry.openTelemetry` | if you use OpenTelemetry as a telemetry collector, you can enable it here. This adds annotations to some known pods so the injector can place agents inside the pods for telemetry collection. This often goes along with the language setting in the meta section, which tells the telemetry collector which agent to inject |  |
| `global.waitImage.name` | defines the nplus toolbox name to be used for the wait feature | "toolbox2" |
| `global.waitImage.pullPolicy` | defines the nplus toolbox pull policy to be used for the wait feature | "IfNotPresent" |
| `global.waitImage.repo` | defines the nplus toolbox repo to be used for the wait feature | "cr.nplus.cloud/subscription" |
| `global.waitImage.tag` | defines the nplus toolbox tag to be used for the wait feature (set by the devOps pipeline; info only, do not change) | "1.2.1300" |
| `ilm.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "ilm-connector" |
| `ilm.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `ilm.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1000.2024091702" |
| `ilm.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1000" |
| `ilm.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 8 |
| `ilm.waitFor` | defines which condition must be met before this component starts | `["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]` |
| `meta.provider` | sets provider (partner, reseller) information to be able to invoice per use in a cloud environment |  |
| `meta.tenant` | sets tenant information to be able to invoice per use in a cloud environment |  |
| `mon.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "monitoring-console" |
| `mon.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `mon.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1000.2024092618" |
| `mon.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1000" |
| `mon.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 8 |
| `nappl.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "application-layer" |
| `nappl.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `nappl.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1300.2024121814" |
| `nappl.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1300" |
| `nappl.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | `"{{ if .this.jobs }}4{{ else }}6{{ end }}"` |
| `nappl.waitFor` | defines which condition must be met before this component starts | `["-service {{ .component.prefix }}database.{{ .Release.Namespace }}.svc.cluster.local:5432 -timeout 600"]` |
| `nappljobs.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "application-layer" |
| `nappljobs.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `nappljobs.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1300.2024121814" |
| `nappljobs.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1300" |
| `nappljobs.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 4 |
| `nstl.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "storage-layer" |
| `nstl.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `nstl.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1201.2024112518" |
| `nstl.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1201" |
| `nstl.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 3 |
| `nstla.clusterService.enabled` | when using multiple nstl instances with different configurations, you still might want a cluster service for HA access; this generates one for you | true |
| `nstla.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "storage-layer" |
| `nstla.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `nstla.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1201.2024112518" |
| `nstla.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1201" |
| `nstla.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 3 |
| `nstlb.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "storage-layer" |
| `nstlb.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `nstlb.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1201.2024112518" |
| `nstlb.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1201" |
| `nstlb.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 3 |
| `nstlc.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "storage-layer" |
| `nstlc.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `nstlc.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1201.2024112518" |
| `nstlc.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1201" |
| `nstlc.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 3 |
| `nstld.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "storage-layer" |
| `nstld.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `nstld.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1201.2024112518" |
| `nstld.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1201" |
| `nstld.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 3 |
| `pam.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "process-automation-modeler" |
| `pam.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `pam.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1200.63696" |
| `pam.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1200" |
| `pam.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 9 |
| `pam.waitFor` | defines which condition must be met before this component starts | `["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]` |
| `pipeliner.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "pipeliner" |
| `pipeliner.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "ceyoniq.azurecr.io/release/nscale" |
| `pipeliner.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "ubi.9.3.1300.2024121815" |
| `pipeliner.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "9.3.1300" |
| `pipeliner.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 8 |
| `pipeliner.waitFor` | defines which condition must be met before this component starts | `["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]` |
| `prepper.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "toolbox2" |
| `prepper.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "cr.nplus.cloud/subscription" |
| `prepper.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "1.2.1300" |
| `prepper.meta.componentVersion` | the version of the component, used for display (set by the devOps pipeline; info only, do not change) | "1.2.1300" |
| `prepper.meta.wave` | defines the ArgoCD wave in which this component is installed; only applies where ArgoCD is used as handler | 2 |
| `rms.image.name` | sets the name of the image to use for this component (set by the devOps pipeline; info only, do not change) | "admin-server" |
| `rms.image.repo` | sets the repo from which to load the image; can be overridden on environment or instance level in case you have your own repo for caching and security reasons | "cr.nplus.cloud/subscription" |
| `rms.image.tag` | defines the tag for this component (set by the devOps pipeline; info only, do not change) | "1.2.1200" |
| `rms.meta.componentVersion` | the version of the component, used for display |  |
set by devOps pipeline, so do not modify
info only, do not change
"1.2.1200"
rms.meta.wave Defines the ArgoCD wave in which this component should be installed. This setting only applies to scenarios, where ArgoCD is used as handler 10
rmsa.image.name sets the name of the image to use for this component
set by devOps pipeline, so do not modify
info only, do not change
"admin-server"
rmsa.image.repo sets the repo from where to load the image. This can be overridden on environment or instance level in case you have your own repo for caching and security reasons "cr.nplus.cloud/subscription"
rmsa.image.tag defines the tag for this component
set by devOps pipeline, so do not modify
info only, do not change
"1.2.1200"
rmsa.meta.componentVersion This is the version of the component, used for display
set by devOps pipeline, so do not modify
info only, do not change
"1.2.1200"
rmsa.meta.wave Defines the ArgoCD wave in which this component should be installed. This setting only applies to scenarios, where ArgoCD is used as handler 10
rmsb.image.name sets the name of the image to use for this component
set by devOps pipeline, so do not modify
info only, do not change
"admin-server"
rmsb.image.repo sets the repo from where to load the image. This can be overridden on environment or instance level in case you have your own repo for caching and security reasons "cr.nplus.cloud/subscription"
rmsb.image.tag defines the tag for this component
set by devOps pipeline, so do not modify
info only, do not change
"1.2.1200"
rmsb.meta.componentVersion This is the version of the component, used for display
set by devOps pipeline, so do not modify
info only, do not change
"1.2.1200"
rmsb.meta.wave Defines the ArgoCD wave in which this component should be installed. This setting only applies to scenarios, where ArgoCD is used as handler 10
rs.image.name sets the name of the image to use for this component
set by devOps pipeline, so do not modify
info only, do not change
"rendition-server"
rs.image.repo sets the repo from where to load the image. This can be overridden on environment or instance level in case you have your own repo for caching and security reasons "ceyoniq.azurecr.io/release/nscale"
rs.image.tag defines the tag for this component
set by devOps pipeline, so do not modify
info only, do not change
"ubi.9.3.1301.2024121910"
rs.meta.componentVersion This is the version of the component, used for display
set by devOps pipeline, so do not modify
info only, do not change
"9.3.1301"
rs.meta.wave Defines the ArgoCD wave in which this component should be installed. This setting only applies to scenarios, where ArgoCD is used as handler 4
sharepoint.image.name sets the name of the image to use for this component
set by devOps pipeline, so do not modify
info only, do not change
"sharepoint-connector"
sharepoint.image.repo sets the repo from where to load the image. This can be overridden on environment or instance level in case you have your own repo for caching and security reasons "ceyoniq.azurecr.io/release/nscale"
sharepoint.image.tag defines the tag for this component
set by devOps pipeline, so do not modify
info only, do not change
"ubi.9.2.1400.2024073012"
sharepoint.meta.componentVersion This is the version of the component, used for display
set by devOps pipeline, so do not modify
info only, do not change
"9.2.1400"
sharepoint.meta.wave Defines the ArgoCD wave in which this component should be installed. This setting only applies to scenarios, where ArgoCD is used as handler 8
sharepoint.waitFor Defines what condition needs to be met before this components starts ["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]
sharepointa.clusterService.contextPath Set the context Path for the cluster Ingress. Make sure also the members are listening to this path "/nscale_spc"
sharepointa.clusterService.enabled When using multiple SharePoint Connectors with different configurations, you still might want to use a retrieval cluster for HA so you can enable the clusterService and define the context path. false
sharepointa.image.name sets the name of the image to use for this component
set by devOps pipeline, so do not modify
info only, do not change
"sharepoint-connector"
sharepointa.image.repo sets the repo from where to load the image. This can be overridden on environment or instance level in case you have your own repo for caching and security reasons "ceyoniq.azurecr.io/release/nscale"
sharepointa.image.tag defines the tag for this component
set by devOps pipeline, so do not modify
info only, do not change
"ubi.9.2.1400.2024073012"
sharepointa.ingress.contextPath Defines the context path of this sharepoint instance, in case you might have multiple instances. We do not want them to consume the same ingress path, because it would block the ingress from being created. "/nscale_spca"
sharepointa.meta.componentVersion This is the version of the component, used for display
set by devOps pipeline, so do not modify
info only, do not change
"9.2.1400"
sharepointa.meta.wave Defines the ArgoCD wave in which this component should be installed. This setting only applies to scenarios, where ArgoCD is used as handler 8
sharepointa.waitFor Defines what condition needs to be met before this components starts ["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]
sharepointb.image.name sets the name of the image to use for this component
set by devOps pipeline, so do not modify
info only, do not change
"sharepoint-connector"
sharepointb.image.repo sets the repo from where to load the image. This can be overridden on environment or instance level in case you have your own repo for caching and security reasons "ceyoniq.azurecr.io/release/nscale"
sharepointb.image.tag defines the tag for this component
set by devOps pipeline, so do not modify
info only, do not change
"ubi.9.2.1400.2024073012"
sharepointb.ingress.contextPath Defines the context path of this sharepoint instance, in case you might have multiple instances. We do not want them to consume the same ingress path, because it would block the ingress from being created. "/nscale_spcb"
sharepointb.meta.componentVersion This is the version of the component, used for display
set by devOps pipeline, so do not modify
info only, do not change
"9.2.1400"
sharepointb.meta.wave Defines the ArgoCD wave in which this component should be installed. This setting only applies to scenarios, where ArgoCD is used as handler 8
sharepointb.waitFor Defines what condition needs to be met before this components starts ["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]
sharepointc.image.name sets the name of the image to use for this component
set by devOps pipeline, so do not modify
info only, do not change
"sharepoint-connector"
sharepointc.image.repo sets the repo from where to load the image. This can be overridden on environment or instance level in case you have your own repo for caching and security reasons "ceyoniq.azurecr.io/release/nscale"
sharepointc.image.tag defines the tag for this component
set by devOps pipeline, so do not modify
info only, do not change
"ubi.9.2.1400.2024073012"
sharepointc.ingress.contextPath Defines the context path of this sharepoint instance, in case you might have multiple instances. We do not want them to consume the same ingress path, because it would block the ingress from being created. "/nscale_spcc"
sharepointc.meta.componentVersion This is the version of the component, used for display
set by devOps pipeline, so do not modify
info only, do not change
"9.2.1400"
sharepointc.meta.wave Defines the ArgoCD wave in which this component should be installed. This setting only applies to scenarios, where ArgoCD is used as handler 8
sharepointc.waitFor Defines what condition needs to be met before this components starts ["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]
sharepointd.image.name sets the name of the image to use for this component
set by devOps pipeline, so do not modify
info only, do not change
"sharepoint-connector"
sharepointd.image.repo sets the repo from where to load the image. This can be overridden on environment or instance level in case you have your own repo for caching and security reasons "ceyoniq.azurecr.io/release/nscale"
sharepointd.image.tag defines the tag for this component
set by devOps pipeline, so do not modify
info only, do not change
"ubi.9.2.1400.2024073012"
sharepointd.ingress.contextPath Defines the context path of this sharepoint instance, in case you might have multiple instances. We do not want them to consume the same ingress path, because it would block the ingress from being created. "/nscale_spcd"
sharepointd.meta.componentVersion This is the version of the component, used for display
set by devOps pipeline, so do not modify
info only, do not change
"9.2.1400"
sharepointd.meta.wave Defines the ArgoCD wave in which this component should be installed. This setting only applies to scenarios, where ArgoCD is used as handler 8
sharepointd.waitFor Defines what condition needs to be met before this components starts ["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]
web.image.name sets the name of the image to use for this component
set by devOps pipeline, so do not modify
info only, do not change
"application-layer-web"
web.image.repo sets the repo from where to load the image. This can be overridden on environment or instance level in case you have your own repo for caching and security reasons "ceyoniq.azurecr.io/release/nscale"
web.image.tag defines the tag for this component
set by devOps pipeline, so do not modify
info only, do not change
"ubi.9.3.1300.2024121620"
web.meta.componentVersion This is the version of the component, used for display
set by devOps pipeline, so do not modify
info only, do not change
"9.3.1300"
web.meta.wave Defines the ArgoCD wave in which this component should be installed. This setting only applies to scenarios, where ArgoCD is used as handler 7
web.waitFor Defines what condition needs to be met before this components starts ["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 900"]
webdav.image.name sets the name of the image to use for this component
set by devOps pipeline, so do not modify
info only, do not change
"webdav-connector"
webdav.image.repo sets the repo from where to load the image. This can be overridden on environment or instance level in case you have your own repo for caching and security reasons "ceyoniq.azurecr.io/release/nscale"
webdav.image.tag defines the tag for this component
set by devOps pipeline, so do not modify
info only, do not change
"ubi.9.3.1000.2024091609"
webdav.meta.componentVersion This is the version of the component, used for display
set by devOps pipeline, so do not modify
info only, do not change
"9.3.1000"
webdav.meta.wave Defines the ArgoCD wave in which this component should be installed. This setting only applies to scenarios, where ArgoCD is used as handler 8
webdav.waitFor Defines what condition needs to be met before this components starts ["-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"]
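As a sketch, the dotted keys above map to nested YAML in an instance values file. The example below shows a few overrides (the file name and the mirror registry host are hypothetical; all keys and default-style values are taken from the reference above):

```yaml
# my-instance.values.yaml -- hypothetical instance values file, passed to helm with -f

# Override the image repo for one component, e.g. to pull through
# your own caching registry (registry host is an example):
pipeliner:
  image:
    repo: "registry.example.com/mirror/nscale"

# Run two SharePoint Connectors on distinct ingress paths so their
# ingresses do not collide, and front them with one retrieval
# cluster for HA:
sharepointa:
  ingress:
    contextPath: "/nscale_spca"
  clusterService:
    enabled: true
    contextPath: "/nscale_spc"
sharepointb:
  ingress:
    contextPath: "/nscale_spcb"

# waitFor values may contain template functions; note the single
# quotes around the entry, since the template uses double braces:
web:
  waitFor:
    - '-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 900'
```

Keys not set in the file keep their defaults, and anything set here can still be narrowed further per component or widened via `global.*`, per the precedence rules described above.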