nplus-application

nplus Application, used to install Apps and Customizations into the nscale Application Layer.

AppInstaller

In order to install Apps, you will need a matching AppInstaller. This can be downloaded from the Ceyoniq Service Portal. Once you have it, copy it to the pool folder (or any other place the application chart has access to):

kubectl cp app-installer-9.0.1202.jar nplus-toolbox-0:/conf/pool

Ceyoniq Smart Business Apps (SBS)

The SBS Apps are automatically downloaded from the official Ceyoniq nstore by a job in the nplus environment, if you enabled it during the environment installation:

nstoreDownloader.enabled: true

If enabled, the Downloader job will run regularly in the background and download the latest SBS Apps into the pool folder. You can always enable it in the environment chart later on if desired:

helm upgrade \
    --set toolbox.enabled=true \
    --set nstoreDownloader.enabled=true \
    dev nplus/nplus-environment

SBS Example

You can install SBS by adding the necessary apps to the deployment:

components:
  application: true
application:
  appInstaller: "/pool/app-installer-9.0.1202.jar"
  docAreas:
    - id: "SBS"
      name: "DocArea with SBS"
      description: "This is a sample DocArea with the SBS Apps installed"
      apps:
      - "/pool/nstore/bl-app-9.0.1202.zip"
      - "/pool/nstore/gdpr-app-9.0.1302.zip"
      - "/pool/nstore/sbs-base-9.0.1302.zip"
      - "/pool/nstore/sbs-app-9.0.1302.zip"
      - "/pool/nstore/tmpl-app-9.0.1302.zip"
      - "/pool/nstore/cm-base-9.0.1302.zip"
      - "/pool/nstore/cm-app-9.0.1302.zip"
      - "/pool/nstore/hr-base-9.0.1302.zip"
      - "/pool/nstore/hr-app-9.0.1302.zip"
      - "/pool/nstore/pm-base-9.0.1302.zip"
      - "/pool/nstore/pm-app-9.0.1302.zip"
      - "/pool/nstore/sd-base-9.0.1302.zip"
      - "/pool/nstore/sd-app-9.0.1302.zip"
      - "/pool/nstore/kon-app-9.0.1302.zip"
      - "/pool/nstore/kal-app-9.0.1302.zip"
      - "/pool/nstore/dok-app-9.0.1302.zip"
      - "/pool/nstore/ts-base-9.0.1302.zip"
      - "/pool/nstore/ts-app-9.0.1302.zip"
      - "/pool/nstore/ocr-base-9.0.1302.zip"

This will install the SBS Apps into the DocArea "SBS". The DocArea is created if it does not exist.

Install custom Generic Base Apps (GBA)

If you wish to deploy your custom GBAs, simply copy them to the pool (e.g. into the apps folder):

kubectl cp my-gba-1.0.1000.zip nplus-toolbox-0:/conf/pool/apps

Then, use the GBA file name and version in the DocArea:

application:
  docAreas:
    - id: "MyGBA"
      name: "DocArea with my GBA"
      description: "This is a sample DocArea with a custom GBA installed"
      apps:
      - "/pool/apps/my-gba-1.0.1000.zip"

Downloading assets from the web, like git

If your assets are in git, you can simply download them prior to installing. That way, you do not have to upload them manually:

application:
  download:
    - "https://git.nplus.cloud/public/nplus/raw/branch/master/apps/my-gba-1.0.1000.zip"
  docAreas:
    - id: "MyGBA"
      name: "DocArea with my GBA"
      description: "This is a sample DocArea with a custom GBA installed"
      apps:
      - "/pool/downloads/my-gba-1.0.1000.zip"

You can also use the prepper for downloading assets, which is useful, for example, to download snippets into the web client before it starts.

Deploying additional parts

You might want to deploy additional parts, like web snippets, to your instance. This can be done by custom scripts.

Custom scripts can be run either in global or in document area context:

application:
  preRun:
  - "/pool/scripts/global-init.sh"
  docAreas:
    - id: "MyGBA"
      run:
      - "/pool/scripts/da-deployment.sh"
  run:
  - "/pool/scripts/global-deployment.sh"

In document area (DA) context, the script will get the NAPPL information passed to it. In global context, the script does not get any application-specific context.

Example (for a global script):

#!/bin/sh
cp /pool/snippets/test.jar /instance/web/snippets

This script copies the file test.jar to the web snippets folder, so the web containers have access to it.

Place this script in the pool folder of your environment, like this:

kubectl cp global-deployment.sh nplus-toolbox-0:/conf/pool/scripts

Then you can run it during the initialization Job as in the example above. Of course, you also need to copy your snippet to the pool first:

kubectl cp test.jar nplus-toolbox-0:/conf/pool/snippets

Scripts can run pre and post DocArea and App installation:

  • The global preRun scripts are run before any document area initialization.
  • The DA preRun scripts are run before all apps are installed.
  • The DA run scripts are run after all apps are installed.
  • The global run scripts are run after all document area initializations.

Debugging

The Application Chart uses a job that runs a pod once the Application Layer is available. This pod then creates document areas (if not present) and installs apps into them.

While the job is running, you can check its log using

kubectl logs -l nplus/instance=<instance>,nplus/component=application

Please substitute <instance> with your instance name.

The job/pod is automatically removed shortly after it finishes, so the kubectl logs command might not find the resource any more if you try this a few minutes later. Of course, you will still find these logs in Splunk, Prometheus, Kibana or whatever log stack you use.

Alternatively, you can check the log at /conf/<instance>/application/10init.log from inside the environment toolbox.

kubectl exec --stdin --tty nplus-toolbox-0 -- cat /conf/<instance>/application/10init.log

Wait-One-Minute

If you have an update scenario (and are not using argoCD with its waves) and your application is inside your instance, you might run into a race condition:

Your Application Layer is still up when the job is created. The job waits for the Application Layer, which - since it is still there - takes only a split second, and then the job executes. Kubernetes might then update the Application Layer, which terminates and leaves the job crashing. As the application job only tries to install once, the installation is left incomplete.

We use an init container wait-one-minute, which waits a minute before the job executes, giving Kubernetes and the Application Layer enough time to terminate for the update.

This is the default when not using argoCD and waves.
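For illustration, the general shape of such an init container in a pod spec looks like this (a sketch of the pattern only, not the chart's actual manifest):

spec:
  initContainers:
    # delays the main job container by one minute
    - name: wait-one-minute
      image: busybox
      command: ["sh", "-c", "sleep 60"]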

nplus-application Chart Configuration

You can customize / configure nplus-application by setting configuration values on the command line or in values files that you pass to helm. Please see the samples directory for details.
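For example, a minimal sketch (my-values.yaml and the release name dev are placeholders; this assumes the chart is installed from the same nplus repo as the environment chart shown above):

helm upgrade --install \
    -f my-values.yaml \
    --set image.tag=latest \
    dev nplus/nplus-application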

In case a value is not set, the key will not be used in the manifest, and the values from the config files of the component apply.

Template Functions

You can use template functions in the values files. If you do so, make sure you quote correctly (single quotes if you have double quotes in the template, or escaped quotes).
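A small sketch using the telemetry.serviceName key from the table below: the outer single quotes keep the double quotes inside the template intact (the use of the default function and the meta.stage key here is purely illustrative):

telemetry:
  serviceName: '{{ .this.meta.stage | default "dev" }}-{{ .instance.name }}'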

Global Values

All values can be set per component, per instance or globally per environment.

Example: global.ingress.domain sets the domain at instance level. You can still set a different domain on a component, such as administrator. In that case, simply set ingress.domain for the administrator chart and that setting will have priority (see the sketch after this list):

  • Prio 1 - Component Level: ingress.domain
  • Prio 2 - Instance Level: global.ingress.domain
  • Prio 3 - Environment Level: global.environment.ingress.domain
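A sketch of this layering in a single values file; ingress.domain is used purely as an illustration, and the administrator block stands for the component-level values of the administrator chart mentioned above:

global:
  environment:
    ingress:
      domain: "corp.example.com"   # Prio 3: environment level
  ingress:
    domain: "dev.example.com"      # Prio 2: instance level
administrator:
  ingress:
    domain: "admin.example.com"    # Prio 1: component level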

Using Values in Templates

As it would be a lot of typing to write .Values.ingress.domain | default .Values.global.ingress.domain | default .Values.global.environment.ingress.domain in your template code, this is automatically done by nplus. You can simply type .this.ingress.domain and you will get a condensed and defaulted version of your Values.

So an example in your values.yaml would be:

administrator:
  waitFor:
    - '-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 600'

This example shows .this.nappl.port, which might come from a component, instance or global setting; you do not need to care which. The .Release.Namespace is set by helm. You have access to all Release and Chart Metadata, just like in your chart code.

The .component.prefix is calculated by nplus; this and the following values give you some handy shortcuts to internal variables:

  • .component.chartName The name of the chart as in .Chart.Name, but with override by .Values.nameOverride

  • .component.shortChartName A shorter Version of the name - nappl instead of nplus-component-nappl

  • .component.prefix The instance prefix used to name the resources, including the -. This prefix is dropped if .Release.Name equals .Release.Namespace, for those of you that only run one nplus instance per namespace

  • .component.name The name of the component, including .Values.nameOverride and some logic

  • .component.fullName The fullName including .Values.fullnameOverride and some logic

  • .component.chart Mainly the Chart.Name and Chart.Version

  • .component.storagePath The path where the component config is stored in the conf PVC

  • .component.handler The handler (either helm, argoCD or manual)

  • .instance.name The name of the instance, but with override by .Values.instanceOverride

  • .instance.group The group this instance belongs to. Override by .Values.groupOverride

  • .instance.version The nscale version (mostly taken from the Application Layer) that this instance is deploying.

  • .environment.name The name of the environment, but with override by .Values.environmentNameOverride

Keys

You can set any of the following values for this component:

Key Description Default
docAreas Provide a list of docareas to create. Please also see the example files
download A list of URLs (links) to assets to download before anything else. If the download is a .tar.gz, it is automatically untarred to /pool/downloads
env Sets additional environment variables for the configuration.
envMap Sets the name of a configMap, which holds additional environment variables for the configuration. It is added as envFrom configMap to the container.
envSecret Sets the name of a secret, which holds additional environment variables for the configuration. It is added as envFrom secretRef to the container.
fullnameOverride This overrides the output of the internal fullname function
image.name the name of the image to use "application-layer"
image.pullSecrets you can provide your own pullSecrets, in case you use a private repo. ["nscale-cr", "nplus-cr"]
image.repo if you use a private repo, feel free to set it here "ceyoniq.azurecr.io/release/nscale"
image.tag the tag of the image to use "latest"
meta.language Sets the language of the main service (in the service container). This is used for instance if you turn OpenTelemetry on, to know which Agent to inject into the container.
meta.provider sets provider (partner, reseller) information to be able to invoice per use in a cloud environment
meta.serviceContainer The container name of the main service for this component. This is used to define where to inject the telemetry agents, if any
meta.stage An optional parameter to indicate the stage (DEV, QA, PROD, ...) this component, instance or environment runs in. This can be used in template functions to add the stage to, for instance, the service name of telemetry services like Open Telemetry. (see telemetry example)
meta.tenant sets tenant information to be able to invoice per use in a cloud environment
meta.type the type of the component. You should not change this value, except if you use a pipeliner in core mode. In core mode, it should be core, else pipeliner. This type is used to create cluster communication for nappl and nstl and potentially group multiple replicaSets into one service. "application"
meta.wave Sets the wave in which this component should be deployed within an ArgoCD deployment. If unset, it uses the default wave; thus all components are installed in one wave, relying on correct wait settings just like in a helm installation
minReplicaCountType if you set minReplicaCountType, a podDisruptionBudget will be created with this value as minAvailable, using the component type as selector. This is useful for components that are spread across multiple replicaSets, like sharepoint or storage layer
mounts.caCerts.configMap Alternative 2: the name of the configMap to use. The Key has to be the File Name used in the path setting
mounts.caCerts.secret Alternative 1: the name of the secret to use. The Key has to be the File Name used in the path setting
mounts.componentCerts.configMap Alternative 2: the name of the configMap to use. The Key has to be the File Name used in the path setting
mounts.componentCerts.secret Alternative 1: the name of the secret to use. The Key has to be the File Name used in the path setting
mounts.conf.path Sets the path to the conf files (info only, do not change this value) "/application"
mounts.data.class Sets the class of the data disk
mounts.data.size Sets the size of the data disk
mounts.data.volumeName If you do not want to have a Volume created by the provisioner, you can set the name of your volume here to attach to this pre-existing one
mounts.disk.class Sets the class of the disk
mounts.disk.enabled enables the use of the second data disk. If enabled, all paths defined will end up on this disk. In the (default) disabled case, the paths will be added to the primary data disk. false
mounts.disk.migration Enables the migration init container. This will copy the data in paths from the primary data disk to the newly enabled secondary disk. This is done only once and only if there is legacy data at all. No files are overwritten! false
mounts.disk.size Sets the size of the disk
mounts.disk.volumeName If you do not want to have a Volume created by the provisioner, you can set the name of your volume here to attach to this pre-existing one
mounts.file.class Sets the class of the shared disk
mounts.file.size Sets the size of the shared disk
mounts.file.volumeName If you do not want to have a Volume created by the provisioner, you can set the name of your volume here to attach to this pre-existing one
mounts.generic Allows defining generic mounts of pre-provisioned PVs into any container. This can be used e.g. to mount migration nfs or cifs / samba shares into a pipeliner container.
mounts.logs.size Sets the size of the log disk (all paths)
mounts.pool.path Sets the path to a directory where the pool folder from the conf volume should be mounted. This is used to store scripts, apps and assets that are required to deploy an application / solution (info only, do not change this value) "/pool"
mounts.temp.path Sets the path to the temporary files (info only, do not change this value) "/tmp"
mounts.temp.size Sets the size of the temporary disk (all paths)
nameOverride This overrides the output of the internal name function
nappl.account The technical account to login with
nappl.domain The domain of the technical account
nappl.host nappl host name
nappl.instance instance of the Application Layer, likely instance1
nappl.password The password of the technical account (if not set by secret)
nappl.port nappl port (http 8080 or https 8443)
nappl.secret An optional secret that holds the credentials (the keys must be account and password)
nappl.ssl sets the Advanced Connect to tls
nodeSelector select specific nodes for this component
nstl.host The dns of the nscale Server Storage Layer. This is used to add it to the nappl configuration
prerun A list of scripts to run before the deployment of Apps
resources.limits.cpu The maximum allowed CPU for the container
resources.limits.memory The maximum allowed RAM for the container
resources.requests.cpu Set the share of guaranteed CPU to the container.
resources.requests.memory Set the share of guaranteed RAM to the container
rs.host The dns of the nscale rendition Server. This is used to add it to the nappl configuration
run A list of scripts to run after the deployment of Apps
security.containerSecurityContext.allowPrivilegeEscalation Some functionality may need the possibility to allow privilege escalation. This should be very restrictive (info only, you should not change this) false
security.containerSecurityContext.readOnlyRootFilesystem Sets the container root file system to read only. This should be the case in production environments (info only, you should not change this) true
security.podSecurityContext.fsGroup The file system group as which new files are created (info only, there is normally no need to change this) 1001
security.podSecurityContext.fsGroupChangePolicy Under which condition the fsGroup should be changed (info only, there is normally no need to change this) "OnRootMismatch"
security.podSecurityContext.runAsUser The user under which the container is run. Avoid 0 / root; the container should run in a non-root context for security (info only, there is normally no need to change this) 1001
security.zeroTrust turns on Zero Trust Mode, disabling all http communication, even the internal http probes false
telemetry.openTelemetry turns Open Telemetry on
telemetry.serviceName Sets the service name for the telemetry service to more conveniently identify the displayed component. Example: "{{ .this.meta.type }}-{{ .instance.name }}"
terminationGracePeriodSeconds Sets the terminationGracePeriodSeconds for the component. If not set, it uses the Kubernetes default
timezone set the time zone for this component to make sure log output has a specific timestamp, internal dates and times are correct (like the creationDate in nappl) etc. Europe/Berlin
tolerations Set tolerations for this component
utils.debug Turning debugging on will give you stack traces etc. Please check out the Chart Developer Guide false
utils.disableWait in case you use the argoCD Wave feature, you might think about switching off the waitFor mechanism that makes sure PODs are only started after pre-requisites are fulfilled. You can disable the standard wait mechanism, but at your own risk, as this might start components even if they are not intended to run yet. false
utils.disableWave If you use argoCD, you most likely want to use the argo Wave Feature as well, making sure the components of an instance are deployed ordered. However, in DEV you might want to disable this to allow live changing components while previous waves are not finished yet. false
utils.includeNamespace By default, the namespace is rendered into the manifest. However, if you want to use helm template and store manifests for later applying them to multiple namespaces, you might want to set this to false to be able to use kubectl apply -n <namespace> -f template.yaml later true
utils.maintenance in Maintenance Mode, all waitFor actions will be skipped, the Health Checks are ignored and the pods will start in idle, not starting the service at all. This will allow you to gain access to the container to perform recovery and maintenance tasks while having the real container up. false
utils.renderComments You can turn comment rendering on to get descriptive information inside the manifests. It will also fail on deprecated functions and keys, so it is recommended to only switch it off in PROD true
waitFor Defines a list of conditions that need to be met before this component starts. The condition must be a network port that opens when the master component is ready. Mostly, this will be a service, since a component is only added to a service if the probes succeed.