commit 0bd2038c86
2025-01-24 16:18:47 +01:00
449 changed files with 108655 additions and 0 deletions

samples/README.md Normal file

@@ -0,0 +1,152 @@
# The Samples
- [Remote Management Server](administrator)
Shows how you can set up an ***nplus Remote Management Server*** and expose a complete Instance on a single **virtual IP Address**, so that you can connect with the *nscale Administrator* from your desktop and even
perform any **offline configuration** tasks, such as **configuring the *nscale Pipeliner*** with its cold.xml.
- [Applications](application)
- shows how to deploy an ***nplus Application***, including **creating a Document Area** in the *nscale Application Layer* and
**installing *nscale Business Apps*** or **custom *Generic Base Apps*** into multiple Document Areas.
- It also demonstrates how to use the *prepper* component to **download assets from git** or any other artifacts site.
- [Blobstores](blobstore)
- shows how to connect the *nscale Storage Layer* to an **Amazon S3** Object Store, an **Azure Blobstore** or any other compatible Object Store, like **CephGW** or **min.io**
- it also demonstrates the use of **envMaps** and **envSecrets** to set environment variables outside the values.yaml file
- [Certificates](certificates)
- shows how to **disable certificate generation** and use a **prepared static certificate** for the ingress
- it also talks about the pitfalls of `.this`
- [Cluster](cluster)
shows how to render a cluster chart and prepare the cluster for *nplus*
- [Defaults](default)
Renders a **minimalistic** Instance manifest without any customization
- [Detached Applications](detached)
Shows how to separate the application from the instance but still use them in tandem. This technique allows you to update the instance and the application separately.
- [Environments](environment)
holds values for **different environments**, such as a lab environment (which is used for our internal test cases) or a production environment
- [Instance Groups](group)
- Large Instances can be split easily and re-grouped again with the `.instance.group` tag. This example shows how.
- It also shows how to switch off any certificate or networkPolicy creation at Instance Level
- [High Availability](ha)
- showcases a full **High Availability** scenario, including a dedicated **nappljobs** and redundant components to reduce the risk of outages
- [Highest Doc ID / HID](hid)
- shows how to enable highest ID checking in *nstl*.
- [No Waves](nowaves)
Shows how to set up a simple argoCD Application without using any *waves* but just relying on *waitFor*
- [Version Pinning](pinning)
There are several ways to pin the **version of *nscale* components**. This example shows how to stay flexible in terms of **nplus versions** and still pin *nscale* to a
specific version.
- [Resources](resources)
- demonstrates how, for different environments or for different tenants (depending on usage, such as the number of concurrent users, or similar scenarios),
you might want to use different sets of **resource definitions for RAM, CPU** or HD Space.
- [Security](security)
IT Security is an important aspect when running an nplus Environment. This example shows
- how to configure **https for all inter-Pod communications**, and
- how to force **all connections** to be **encrypted**. (**Zero Trust Mode**)
- [Shared Services](shared)
You might want to have a central **nplus** Instance that can be used by multiple tenants. This example shows how to do that.
- [SharePoint Retrieval HA](sharepoint)
Normally, you either have HA with multiple replicas *or* you use multiple instances if you want different configurations per instance. This example shows how you can combine both approaches: having multiple instances with different configurations (for archiving) and a global ingress to all instances (for HA retrieval).
- [Single Instance Mode](sim)
Some *nplus* subscribers use **maximum tenant separation**, not only **firewalling each component**, but also running each **Instance in a dedicated Namespace**.
This shows how to enable ***Single Instance Mode***, which merges the environment and instance into a **single deployment**.
- [Static Volumes](static)
Shows how to disable dynamic volume provisioning (depending on the storage class) and use **pre-created static volumes** instead.
- [Chart](chart)
This is an example of a **custom Umbrella Chart**, used to create a custom base chart with your **own environment defaults** to build your deployments upon. This is the next step after applying value files.
# Configuration Options
There are several possible ways to configure and customize the *nplus* helm charts:
1. By using `--set` parameter at the command line:
```
helm install \
--set components.rs=false \
--set components.mon=false \
--set components.nstl=false \
--set global.ingress.domain="demo.nplus.cloud" \
--set global.ingress.issuer="nplus-issuer" \
demo1 nplus/nplus-instance
```
`kubectl get instances` will show this Instance being handled by `helm`.
2. By adding one or more values files:
```
helm install \
--values empty.yaml \
--values s3-env.yaml \
--values lab.yaml \
demo2 nplus/nplus-instance
```
3. By using `helm template` and piping the output to `kubectl`
```
helm template \
--values empty.yaml \
--values s3-env.yaml \
--values lab.yaml \
demo2 nplus/nplus-instance | kubectl apply -f -
```
`kubectl get instances` will show this Instance being handled manually (`manual`).
4. By building *Umbrella Charts* that contain default values for your environment
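A minimal umbrella `Chart.yaml` for option 4 could look like the `sample-tenant` chart shown further down in this commit; the repository URL and chart name here are assumptions and would point at your own chart location:
```yaml
apiVersion: v2
name: my-tenant                     # assumption: your own umbrella chart name
description: Umbrella chart carrying our environment defaults
type: application
version: 1.0.0
dependencies:
  - name: nplus-instance
    alias: instance
    version: "*-0"
    repository: "https://charts.example.com/nplus"   # assumption: your chart repository
```
Your environment defaults then go into the umbrella chart's own `values.yaml`, as the [Chart](chart) sample demonstrates.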
# Using ArgoCD
Deploying through ArgoCD is identical to deploying through Helm. Just use the `instance-argo` chart instead of `instance`.
You can use the same value files for all deployment methods. The `instance-argo` chart will render all values into the *argoCD Application*, taking them fully into account.
If you deploy Instances through ArgoCD, `kubectl get instances` will show these Instances being handled by `argoCD`.
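For example, the stacked deployment from option 2 above can be handed over to ArgoCD like this (the release name is illustrative):
```bash
helm install \
  --values empty.yaml \
  --values s3-env.yaml \
  --values lab.yaml \
  demo2-argo nplus/nplus-instance-argo
```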
# Stacking Values
The sample value files provided here work for standard Instances as well as for argoCD Versions.
The value files are stacked:
- `environment/demo.yaml` contains the default values for the environment
- `empty/values.yaml` creates a sample document area in the *nscale Application Layer*
- `s3/env.yaml` adds S3 storage to the *nscale Storage Layer*, in the form of simple environment variables
This stack can be installed by using the `helm install` command:
```
helm install \
--values environment/demo.yaml \
--values empty/values.yaml \
--values s3/env.yaml \
empty-sample-s3 nplus/nplus-instance
```
The advantage of stacking is to separate and reuse parts of your configuration for different purposes.
- Reuse values for different environments like stages or labs, where only the environment is different but the components and applications are (and have to be) the same
- Use the same Storage Configuration for multiple Instances
- Have one configuration for your Application / Solution and use that on many tenants to keep them all in sync


@@ -0,0 +1,93 @@
# Installing Document Areas
## Creating an empty document area while deploying an Instance
This is the simplest sample, just the core services with an empty document area:
```
helm install \
--values samples/application/empty.yaml \
--values samples/environment/demo.yaml \
empty nplus/nplus-instance
```
The empty Document Area is created with
```yaml
components:
application: true
prepper: true
application:
docAreas:
- id: "Sample"
run:
- "/pool/downloads/sample.sh"
prepper:
download:
- "https://git.nplus.cloud/public/nplus/raw/branch/master/assets/sample.tar.gz"
```
This turns on the *prepper* component, used to download a sample tarball from git. It will also extract the tarball into the `downloads` folder that is created on the *pool* automatically.
Then, after the Application Layer is running, a document area `Sample` is created. The content of the sample script will be executed.
If you use **argoCD** as your deployment tool, you would instead run
```
helm install \
--values samples/application/empty.yaml \
--values samples/environment/demo.yaml \
empty-argo nplus/nplus-instance-argo
```
## Deploying the *SBS* Apps to a new document area
In the SBS scenario, some Apps are installed into the document area:
```bash
helm install \
--values samples/applications/sbs.yaml \
--values samples/environment/demo.yaml \
sbs nplus/nplus-instance
```
The values look like this:
```yaml
components:
application: true
application:
nameOverride: SBS
docAreas:
- id: "SBS"
name: "DocArea with SBS"
description: "This is a sample DocArea with the SBS Apps installed"
apps:
- "/pool/nstore/bl-app-9.0.1202.zip"
- "/pool/nstore/gdpr-app-9.0.1302.zip"
...
- "/pool/nstore/ts-app-9.0.1302.zip"
- "/pool/nstore/ocr-base-9.0.1302.zip"
```
This will create a document area `SBS` and install the SBS Apps into it.
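Once deployed, you can follow the status through the custom resources described in the cluster sample, for example:
```bash
kubectl get application -n <namespace>
```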
# Accounting in nstl
To collect Accounting Data in *nscale Server Storage Layer*, you can enable the nstl accounting feature by setting `accounting: true`.
This will create the accounting csv files in *ptemp* under `<instance>/<component>/accounting`.
Additionally, you can enable a log forwarder printing it to stdout.
```
nstl:
accounting: true
logForwarder:
- name: Accounting
path: "/opt/ceyoniq/nscale-server/storage-layer/accounting/*.csv"
```

samples/application/build.sh Executable file

@@ -0,0 +1,80 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
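# SAMPLES: The path to the samples folder containing the value files referenced below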
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
# Set the Variables
SAMPLE="empty"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/hid/values.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
# creating the Argo manifest
mkdir -p $DEST/instance-argo
helm template --debug \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/hid/values.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME-argo $CHARTS/instance-argo > $DEST/instance-argo/$SAMPLE-argo.yaml
# Set the Variables
SAMPLE="sbs"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/application/sbs.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
# creating the Argo manifest
mkdir -p $DEST/instance-argo
helm template --debug \
--values $SAMPLES/application/sbs.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME-argo $CHARTS/instance-argo > $DEST/instance-argo/$SAMPLE-argo.yaml


@@ -0,0 +1,20 @@
components:
application: true
prepper: true
application:
docAreas:
- id: "Sample"
run:
- "/pool/downloads/sample.sh"
prepper:
download:
- "https://git.nplus.cloud/public/nplus/raw/branch/master/assets/sample.tar.gz"
nstl:
accounting: true
logForwarder:
- name: Accounting
path: "/opt/ceyoniq/nscale-server/storage-layer/accounting/*.csv"
db: "/opt/ceyoniq/nscale-server/storage-layer/logsdb/logs.db"


@@ -0,0 +1,28 @@
components:
application: true
application:
nameOverride: SBS
docAreas:
- id: "SBS"
name: "DocArea with SBS"
description: "This is a sample DocArea with the SBS Apps installed"
apps:
- "/pool/nstore/bl-app-9.0.1202.zip"
- "/pool/nstore/gdpr-app-9.0.1302.zip"
- "/pool/nstore/sbs-base-9.0.1302.zip"
- "/pool/nstore/sbs-app-9.0.1302.zip"
- "/pool/nstore/tmpl-app-9.0.1302.zip"
- "/pool/nstore/cm-base-9.0.1302.zip"
- "/pool/nstore/cm-app-9.0.1302.zip"
- "/pool/nstore/hr-base-9.0.1302.zip"
- "/pool/nstore/hr-app-9.0.1302.zip"
- "/pool/nstore/pm-base-9.0.1302.zip"
- "/pool/nstore/pm-app-9.0.1302.zip"
- "/pool/nstore/sd-base-9.0.1302.zip"
- "/pool/nstore/sd-app-9.0.1302.zip"
- "/pool/nstore/kon-app-9.0.1302.zip"
- "/pool/nstore/kal-app-9.0.1302.zip"
- "/pool/nstore/dok-app-9.0.1302.zip"
- "/pool/nstore/ts-base-9.0.1302.zip"
- "/pool/nstore/ts-app-9.0.1302.zip"
- "/pool/nstore/ocr-base-9.0.1302.zip"


@@ -0,0 +1,74 @@
# Using Object Stores
Blobstores, aka object stores, have a REST interface that you upload your payload to, receiving an ID for it in return. They are normally structured into *Buckets* or *Containers* to provide
some way of pooling payload within the store.
The *nscale Server Storage Layer* supports multiple brands of objectstores, the most popular being Amazon S3 and Microsoft Azure Blobstore.
In order to use them, you need to
- get an account for the store
- configure the *nstl* with the url, credentials etc.
- add firewall rules to access the store
Have a look at the sample files
- s3-env.yaml
for Amazon S3 compatible storage, and
- azureblob.yaml
for Azure Blobstore
For S3 compatible storage, there are multiple S3 flavours available.
# Custom Environment Variables
There are multiple ways to set custom environment variables in addition to the named values you set in helm:
## Using `env`
Please have a look at `s3-env.yaml` to see how custom environment variables can be injected into a component:
```
nstl:
env:
# Archivtyp
NSTL_ARCHIVETYPE_900_NAME: "S3"
NSTL_ARCHIVETYPE_900_ID: "900"
NSTL_ARCHIVETYPE_900_LOCALMIGRATION: "0"
NSTL_ARCHIVETYPE_900_LOCALMIGRATIONTYPE: "NONE"
NSTL_ARCHIVETYPE_900_S3MIGRATION: "1"
```
This will set the environment variables in the storage layer to add an archive type with id 900.
## Using `envMap` and `envSecret`
As an alternative to the standard `env` setting, you can also use configmaps and secrets for additional environment variables.
The file `s3-envres.yaml` creates a configmap and a secret with the same variables as used in the `s3-env.yaml` sample. `s3-envfrom.yaml` shows how to import them.
Please be aware that data in secrets needs to be base64 encoded (note that plain `echo` encodes a trailing newline as well; use `echo -n` if that is not wanted):
```
echo "xxx" | base64
```
So in order to use the envFrom mechanism,
- prepare the resources (as in `s3-envres.yaml`)
- upload the resources to your cluster
```
kubectl apply -f s3-envres.yaml
```
- add it to your configuration
```
nstl:
# These resources are set in the s3-envres.yaml sample file
# you can set single values (envMap or envSecret) or lists (envMaps or envSecrets)
envMaps:
- env-sample-archivetype
- env-sample-device
envSecret: env-sample-device-secret
```


@@ -0,0 +1,20 @@
nstl:
env:
# global
NSTL_RETRIEVALORDER: "AZURE, HARDDISK_ADAPTER, REMOTE_EXPLICIT, REMOTE_DA"
# Archive Type
NSTL_ARCHIVETYPE_901_NAME: "AZUREBLOB"
NSTL_ARCHIVETYPE_901_ID: "901"
NSTL_ARCHIVETYPE_901_LOCALMIGRATION: "0"
NSTL_ARCHIVETYPE_901_LOCALMIGRATIONTYPE: "NONE"
NSTL_ARCHIVETYPE_901_AZUREMIGRATION: "1"
# Device
NSTL_AZURE_0_CONFIGURED: "1"
NSTL_AZURE_0_ARCHIVETYPES: "AZUREBLOB"
NSTL_AZURE_0_INDEX: "0"
NSTL_AZURE_0_NAME: "AZUREBLOB"
NSTL_AZURE_0_INITIALLYACTIVE: "1"
NSTL_AZURE_0_PERMANENTMIGRATION: "1"
NSTL_AZURE_0_CONTAINERNAME: "demostore"
NSTL_AZURE_0_ACCOUNTNAME: "xxx"
NSTL_AZURE_0_ACCOUNTKEY: "xxx"


@@ -0,0 +1,23 @@
nstl:
env:
# Archivtyp
NSTL_ARCHIVETYPE_900_NAME: "S3"
NSTL_ARCHIVETYPE_900_ID: "900"
NSTL_ARCHIVETYPE_900_LOCALMIGRATION: "0"
NSTL_ARCHIVETYPE_900_LOCALMIGRATIONTYPE: "NONE"
NSTL_ARCHIVETYPE_900_S3MIGRATION: "1"
# Device
NSTL_S3_0_CONFIGURED: "1"
NSTL_S3_0_ARCHIVETYPES: "S3"
NSTL_S3_0_INDEX: "0"
NSTL_S3_0_TYPE: "S3_COMPATIBLE"
NSTL_S3_0_NAME: "S3"
NSTL_S3_0_INITIALLYACTIVE: "1"
NSTL_S3_0_USESSL: "1"
NSTL_S3_0_VERIFYSSL: "0"
NSTL_S3_0_ACCESSID: "xxx"
NSTL_S3_0_SECRETKEY: "xxx"
NSTL_S3_0_ENDPOINT: "s3.nplus.cloud"
NSTL_S3_0_BUCKETNAME: "nstl"
NSTL_S3_0_USEVIRTUALADDRESSING: "0"
NSTL_S3_0_PERMANENTMIGRATION: "1"


@@ -0,0 +1,7 @@
nstl:
# These resources are set in the s3-envres.yaml sample file
# you can set single values (envMap or envSecret) or lists (envMaps or envSecrets)
envMaps:
- env-sample-archivetype
- env-sample-device
envSecret: env-sample-device-secret


@@ -0,0 +1,40 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: env-sample-archivetype
namespace: lab
data:
NSTL_ARCHIVETYPE_900_NAME: "S3"
NSTL_ARCHIVETYPE_900_ID: "900"
NSTL_ARCHIVETYPE_900_LOCALMIGRATION: "0"
NSTL_ARCHIVETYPE_900_LOCALMIGRATIONTYPE: "NONE"
NSTL_ARCHIVETYPE_900_S3MIGRATION: "1"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: env-sample-device
namespace: lab
data:
NSTL_S3_0_CONFIGURED: "1"
NSTL_S3_0_ARCHIVETYPES: "S3"
NSTL_S3_0_INDEX: "0"
NSTL_S3_0_TYPE: "S3_COMPATIBLE"
NSTL_S3_0_NAME: "S3"
NSTL_S3_0_INITIALLYACTIVE: "1"
NSTL_S3_0_USESSL: "1"
NSTL_S3_0_VERIFYSSL: "0"
NSTL_S3_0_ENDPOINT: "s3.nplus.cloud"
NSTL_S3_0_BUCKETNAME: "nstl"
NSTL_S3_0_USEVIRTUALADDRESSING: "0"
NSTL_S3_0_PERMANENTMIGRATION: "1"
---
apiVersion: v1
kind: Secret
metadata:
name: env-sample-device-secret
namespace: lab
type: Opaque
data:
NSTL_S3_0_ACCESSID: eHh4Cg==
NSTL_S3_0_SECRETKEY: eHh4Cg==


@@ -0,0 +1,35 @@
# (auto-) certificates and the pitfalls of *.this*
*nplus* will automatically generate certificates for your ingress. It either uses an issuer like *cert-manager* or generates a *self-signed-certificate*.
In your production environment, though, you might want to take more control over the certificate generation process and not leave it to *nplus* to take care of it automatically.
In that case, you want to switch the automation *off*.
To do so, you need to understand what is happening internally:
- if `.this.ingress.issuer` is set, the chart requests this issuer to generate a tls secret with the name `.this.ingress.secret`
by creating a certificate resource with the name of the domain `.this.ingress.domain`
- otherwise, if no issuer is set, the chart checks whether the flag `.this.ingress.createSelfSignedCertificate` is set to `true` and
generates a tls secret with the name `.this.ingress.secret`
- otherwise, if neither issuer nor createSelfSignedCertificate is set, the chart will not generate anything
`.this` works by gathering the key from `.Values.global.environment`, `.Values.global` and then `.Values` and merging them, flattened, into `.this`, so that you can set your values
on different levels.
However, the *merge* function only overwrites non-existing values, and a boolean `true` also overwrites a boolean `false`, not just nil values. So to make sure we can still cancel functionality
by setting `null` or `false`, there is a fourth merge that forcefully overwrites existing keys: `override`, which can also be set at the *environment*, *global* or *component* level.
So the correct way to cancel the generation process is to force the issuer to null (which cancels the *cert-manager* generation) and also force `createSelfSignedCertificate` to false (which cancels the *self-signed certificate* generation):
```yaml
global:
override:
ingress:
enabled: true
secret: myCertificate
issuer: null
createSelfSignedCertificate: false
```
This makes sure you get an ingress that uses the TLS certificate in the secret `myCertificate` for encryption, and that nothing is generated automatically.
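The prepared secret itself can be created ahead of time with `kubectl`, where the name must match `.this.ingress.secret` and the certificate paths are placeholders for your own files:
```bash
kubectl create secret tls <secret-name> \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
```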


@@ -0,0 +1,6 @@
global:
ingress:
enabled: true
secret: mySecret
issuer: null
createSelfSignedCertificate: false

samples/chart/build.sh Executable file

@@ -0,0 +1,51 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
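# SAMPLES: The path to the samples folder containing the value files referenced below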
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
# Set the Variables
SAMPLE="chart"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/chart/resources.yaml \
$NAME $SAMPLES/chart/tenant > $DEST/instance/$SAMPLE.yaml
# Create the manifest - argo version
mkdir -p $DEST/instance-argo
helm template --debug \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/chart/resources.yaml \
$NAME-argo $SAMPLES/chart/tenant-argo > $DEST/instance-argo/$SAMPLE-argo.yaml


@@ -0,0 +1,170 @@
instance:
web:
resources:
requests:
cpu: "10m"
memory: "1.5Gi"
limits:
cpu: "4000m"
memory: "4Gi"
prepper:
resources:
requests:
cpu: "10m"
memory: "128Mi"
limits:
cpu: "1000m"
memory: "128Mi"
application:
resources:
requests:
cpu: "10m"
memory: "1.5Gi"
limits:
cpu: "4000m"
memory: "4Gi"
nappl:
resources:
requests:
cpu: "10m"
memory: "1.5Gi"
limits:
cpu: "4000m"
memory: "4Gi"
nappljobs:
resources:
requests:
cpu: "10m"
memory: "2Gi"
limits:
cpu: "4000m"
memory: "4Gi"
administrator:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
cmis:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
database:
resources:
requests:
cpu: "2m"
memory: "256Mi"
limits:
cpu: "4000m"
memory: "8Gi"
ilm:
resources:
requests:
cpu: "2m"
memory: "256Mi"
limits:
cpu: "2000m"
memory: "2Gi"
mon:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
nstl:
resources:
requests:
cpu: "5m"
memory: "128Mi"
limits:
cpu: "2000m"
memory: "1Gi"
nstla:
resources:
requests:
cpu: "5m"
memory: "128Mi"
limits:
cpu: "2000m"
memory: "1Gi"
nstlb:
resources:
requests:
cpu: "5m"
memory: "128Mi"
limits:
cpu: "2000m"
memory: "1Gi"
nstlc:
resources:
requests:
cpu: "5m"
memory: "128Mi"
limits:
cpu: "2000m"
memory: "1Gi"
nstld:
resources:
requests:
cpu: "5m"
memory: "128Mi"
limits:
cpu: "2000m"
memory: "1Gi"
pam:
resources:
requests:
cpu: "5m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "1Gi"
rs:
resources:
requests:
cpu: "2m"
memory: "1Gi"
limits:
cpu: "4000m"
memory: "8Gi"
webdav:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
rms:
resources:
requests:
cpu: "2m"
memory: "128Mi"
limits:
cpu: "1000m"
memory: "512Mi"
rmsa:
resources:
requests:
cpu: "2m"
memory: "128Mi"
limits:
cpu: "1000m"
memory: "512Mi"
rmsb:
resources:
requests:
cpu: "2m"
memory: "128Mi"
limits:
cpu: "1000m"
memory: "512Mi"


@@ -0,0 +1,12 @@
apiVersion: v2
name: sample-tenant-argo
description: |
ArgoCD Version of the sample tenant chart. It demonstrates the use of umbrella
charts to customize default values for your environment or tenant templates.
type: application
dependencies:
- name: nplus-instance-argo
alias: instance-argo
version: "*-0"
repository: "file://../../../charts/instance-argo"
version: 1.0.0


@@ -0,0 +1,93 @@
# sample-tenant-argo
ArgoCD Version of the sample tenant chart. It demonstrates the use of umbrella
charts to customize default values for your environment or tenant templates.
## sample-tenant-argo Chart Configuration
You can customize / configure sample-tenant-argo by setting configuration values on the command line or in values files,
that you can pass to helm. Please see the samples directory for details.
In case there is no value set, the key will not be used in the manifest, resulting in values taken from the config files of the component.
### Template Functions
You can use template functions in the values files. If you do so, make sure you quote correctly (single quotes, if you have double quotes in the template,
or escaped quotes).
### Global Values
All values can be set per component, per instance or globally per environment.
Example: `global.ingress.domain` sets the domain on instance level. You can still set a different domain on a component, such as administrator.
In that case, simply set `ingress.domain` for the administrator chart and that setting will have priority:
- Prio 1 - Component Level: `ingress.domain`
- Prio 2 - Instance Level: `global.ingress.domain`
- Prio 3 - Environment Level: `global.environment.ingress.domain`
### Using Values in Templates
As it would be a lot of typing to write `.Values.ingress.domain | default .Values.global.ingress.domain | default .Values.global.environment.ingress.domain` in your
template code, this is automatically done by nplus. You can simply type `.this.ingress.domain` and you will get a condensed and defaulted version
of your Values.
So an example in your `values.yaml` would be:
```
administrator:
waitFor:
- '-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:\{{ .this.nappl.port }} -timeout 600'
```
This example shows `.this.nappl.port` which might come from a component, instance or global setting. You do not need to care.
The `.Release.Namespace` is set by helm. You have access to all Release and Chart Metadata, just like in your chart code.
The `.component.prefix` is calculated by nplus and gives you some handy shortcuts to internal variables:
- `.component.chartName`
The name of the chart as in `.Chart.Name`, but with override by `.Values.nameOverride`
- `.component.shortChartName`
A shorter Version of the name - `nappl` instead of `nplus-component-nappl`
- `.component.prefix`
The instance Prefix used to name the resources including `-`. This prefix is dropped, if the
`.Release.Name` equals `.Release.Namespace` for those of you that only
run one nplus Instance per namespace
- `.component.name`
The name of the component, including `.Values.nameOverride` and some logic
- `.component.fullName`
The fullName including `.Values.fullnameOverride` and some logic
- `.component.chart`
Mainly the `Chart.Name` and `Chart.Version`
- `.component.storagePath`
The path where the component config is stored in the conf PVC
- `.component.handler`
The handler (either helm, argoCD or manual)
- `.instance.name`
The name of the instance, but with override by `.Values.instanceOverride`
- `.instance.group`
The group, this instance belongs to. Override by `.Values.groupOverride`
- `.instance.version`
The *nscale* version (mostly taken from Application Layer), this instance is deploying.
- `.environment.name`
The name of the environment, but with override by `.Values.environmentNameOverride`
### Keys
You can set any of the following values for this component:
| Key | Description | Default |
|-----|-------------|---------|
**global**&#8203;.meta&#8203;.isArgo | | `true` |
**instance-argo**&#8203;.argocd&#8203;.chart | | `"sample-tenant"` |
**instance-argo**&#8203;.argocd&#8203;.destinationServer | | `"https://kubernetes.default.svc"` |
**instance-argo**&#8203;.argocd&#8203;.namespace | | `"argocd"` |
**instance-argo**&#8203;.argocd&#8203;.project | | `"default"` |
**instance-argo**&#8203;.argocd&#8203;.prune | | `true` |
**instance-argo**&#8203;.argocd&#8203;.repo | | `"https://git.nplus.cloud"` |
**instance-argo**&#8203;.argocd&#8203;.selfHeal | | `true` |


@@ -0,0 +1,112 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"additionalProperties": false,
"properties": {
"global": {
"additionalProperties": false,
"properties": {
"meta": {
"additionalProperties": false,
"properties": {
"isArgo": {
"default": true,
"title": "isArgo",
"type": "boolean"
}
},
"title": "meta",
"type": "object"
}
},
"title": "global",
"type": "object"
},
"instance-argo": {
"description": "nplus Instance ArgoCD Edition, supporting the deployment of npus Instances through ArgoCD",
"properties": {
"argocd": {
"additionalProperties": false,
"description": "yaml-language-server: $schema=values.schema.json",
"properties": {
"chart": {
"default": "nplus-instance",
"description": "The name of the chart to use for the instance",
"title": "chart"
},
"destinationNamespace": {
"default": "{{ .Release.Namespace }}",
"description": "ArgoCD can deploy to any Namespace on the destination Server. You have to specify it. Default is the release namespace",
"title": "destinationNamespace"
},
"destinationServer": {
"default": "https://kubernetes.default.svc",
"description": "ArgoCD can also remote deploy Applications to alien clusters. The server specifies the API Endpoint of the Cluster, where the Application should be deployed",
"title": "destinationServer"
},
"namespace": {
"default": "argocd",
"description": "The ArgoCD Namespace within the cluster. The ArgoCD Application will be deployed to this namespace You will need write privileges for this namespace",
"title": "namespace"
},
"project": {
"default": "default",
"description": "ArgoCD organizes Applications in Projects. This is the name of the project, the application should be deployed to",
"title": "project"
},
"prune": {
"default": "true",
"description": "Toggle pruning for this Application",
"title": "prune"
},
"repo": {
"default": "https://git.nplus.cloud",
"description": "Specifiy the helm repo, from which ArgoCD should load the chart. Please make sure ArgoCD gets access rights to this repo",
"title": "repo"
},
"selfHeal": {
"default": "true",
"description": "Toggle self healing feature for this Application",
"title": "selfHeal"
}
},
"title": "argocd",
"type": "object"
},
"global": {
"additionalProperties": false,
"properties": {
"meta": {
"additionalProperties": false,
"properties": {
"isArgo": {
"default": "true",
"description": "specifies that this is an Argo Installation. Used to determine the correct handler in the chart @internal -- Do not change",
"title": "isArgo"
}
},
"title": "meta",
"type": "object"
}
},
"title": "global",
"type": "object"
},
"globals": {
"description": "nplus Global Functions Library Chart",
"properties": {
"global": {
"description": "Global values are values that can be accessed from any chart or subchart by exactly the same name.",
"title": "global",
"type": "object"
}
},
"title": "nplus-globals",
"type": "object"
}
},
"title": "nplus-instance-argo",
"type": "object"
}
},
"type": "object"
}


@@ -0,0 +1,13 @@
# yaml-language-server: $schema=values.schema.json
instance-argo:
argocd:
chart: sample-tenant
namespace: argocd
project: default
destinationServer: "https://kubernetes.default.svc"
selfHeal: true
prune: true
repo: "https://git.nplus.cloud"
global:
meta:
isArgo: true


@@ -0,0 +1,12 @@
apiVersion: v2
name: sample-tenant
description: |
The sample tenant chart demonstrates how to use umbrella charts with default values,
e.g. to define tenant templates
type: application
dependencies:
- name: nplus-instance
alias: instance
version: "*-0"
repository: "file://../../../charts/instance"
version: 1.0.0


@@ -0,0 +1,88 @@
# sample-tenant
The sample tenant chart demonstrates how to use umbrella charts with default values,
e.g. to define tenant templates
## sample-tenant Chart Configuration
You can customize / configure sample-tenant by setting configuration values on the command line or in values files,
that you can pass to helm. Please see the samples directory for details.
In case there is no value set, the key will not be used in the manifest, resulting in values taken from the config files of the component.
### Template Functions
You can use template functions in the values files. If you do so, make sure you quote correctly (single quotes, if you have double quotes in the template,
or escaped quotes).
### Global Values
All values can be set per component, per instance or globally per environment.
Example: `global.ingress.domain` sets the domain on instance level. You can still set a different domain on a component, such as administrator.
In that case, simply set `ingress.domain` for the administrator chart and that setting will have priority:
- Prio 1 - Component Level: `ingress.domain`
- Prio 2 - Instance Level: `global.ingress.domain`
- Prio 3 - Environment Level: `global.environment.ingress.domain`
### Using Values in Templates
As it would be a lot of typing to write `.Values.ingress.domain | default .Values.global.ingress.domain | default .Values.global.environment.ingress.domain` in your
template code, this is automatically done by nplus. You can simply type `.this.ingress.domain` and you will get a condensed and defaulted version
of your Values.
So an example in your `values.yaml` would be:
```
administrator:
waitFor:
- '-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:\{{ .this.nappl.port }} -timeout 600'
```
This example shows `.this.nappl.port` which might come from a component, instance or global setting. You do not need to care.
The `.Release.Namespace` is set by helm. You have access to all Release and Chart Metadata, just like in your chart code.
The `.component.prefix` is calculated by nplus and gives you some handy shortcuts to internal variables:
- `.component.chartName`
The name of the chart as in `.Chart.Name`, but with override by `.Values.nameOverride`
- `.component.shortChartName`
A shorter Version of the name - `nappl` instead of `nplus-component-nappl`
- `.component.prefix`
The instance Prefix used to name the resources including `-`. This prefix is dropped, if the
`.Release.Name` equals `.Release.Namespace` for those of you that only
run one nplus Instance per namespace
- `.component.name`
The name of the component, including `.Values.nameOverride` and some logic
- `.component.fullName`
The fullName including `.Values.fullnameOverride` and some logic
- `.component.chart`
Mainly the `Chart.Name` and `Chart.Version`
- `.component.storagePath`
The path where the component config is stored in the conf PVC
- `.component.handler`
The handler (either helm, argoCD or manual)
- `.instance.name`
The name of the instance, but with override by `.Values.instanceOverride`
- `.instance.group`
The group, this instance belongs to. Override by `.Values.groupOverride`
- `.instance.version`
The *nscale* version (mostly taken from Application Layer), this instance is deploying.
- `.environment.name`
The name of the environment, but with override by `.Values.environmentNameOverride`
### Keys
You can set any of the following values for this component:
| Key | Description | Default |
|-----|-------------|---------|
**global**&#8203;.ingress&#8203;.domain | | `"{{ .instance.group | default .Release.Name }}.sample.nplus.cloud"` |
**instance**&#8203;.application&#8203;.docAreas[0]&#8203;.id | | `"Sample"` |
**instance**&#8203;.components&#8203;.application | | `true` |

File diff suppressed because it is too large


@@ -0,0 +1,10 @@
# yaml-language-server: $schema=values.schema.json
instance:
components:
application: true
application:
docAreas:
- id: "Sample"
global:
ingress:
domain: "{{ .instance.group | default .Release.Name }}.sample.nplus.cloud"

samples/cluster/README.md Normal file

@@ -0,0 +1,57 @@
# Preparing the K8s Cluster
*nplus* Charts bring some custom resources: *Application*, *Instance* and *Component*. They are created during deployment of a chart and then updated by the environment operator every time the status changes.
To make this work, you will need to have the *Custom Resource Definitions* applied to your cluster prior to deploying any environment or component. This deployment is handled by the *Cluster Chart*.
```bash
helm install nplus/nplus-cluster
```
The *CRDs* are grouped into *nscale* and *nplus* (the two are synonymous), so that you can either query for
```bash
kubectl get instance
kubectl get component
kubectl get application
```
or simply all at once with
```bash
kubectl get nscale -A
```
The output looks like this (shortened output, showing the installed samples):
```bash
$ kubectl get nscale -A
NAMESPACE NAME INSTANCE COMPONENT TYPE VERSION STATUS
empty-sim component.nplus.cloud/database empty-sim database database 16 healthy
empty-sim component.nplus.cloud/nappl empty-sim nappl core 9.2.1302 healthy
lab component.nplus.cloud/demo-centralservices-s3-nstl demo-centralservices-s3 nstl nstl 9.2.1302 healthy
lab component.nplus.cloud/demo-ha-web demo-ha web web 9.2.1300 redundant
lab component.nplus.cloud/demo-ha-webdav demo-ha webdav webdav 9.2.1000 redundant
lab component.nplus.cloud/demo-ha-zerotrust-administrator demo-ha-zerotrust administrator administrator 9.2.1300 healthy
lab component.nplus.cloud/no-provisioner-nstl no-provisioner nstl nstl 9.2.1302 healthy
lab component.nplus.cloud/no-provisioner-rs no-provisioner rs rs 9.2.1201 starting
lab component.nplus.cloud/no-provisioner-web no-provisioner web web 9.2.1300 healthy
lab component.nplus.cloud/sbs-nappl sbs nappl core 9.2.1302 healthy
NAMESPACE NAME INSTANCE APPLICATION VERSION STATUS
empty-sim application.nplus.cloud/application empty-sim application 9.2.1303-123 healthy
empty-sim application.nplus.cloud/prepper empty-sim prepper 1.2.1300 healthy
lab application.nplus.cloud/demo-ha-zerotrust-application demo-ha-zerotrust application 9.2.1303-123 healthy
lab application.nplus.cloud/demo-shared-application demo-shared application 9.2.1303-123 healthy
lab application.nplus.cloud/sbs-sbs sbs SBS 9.2.1303-123 healthy
lab application.nplus.cloud/tenant-application tenant application 9.2.1303-123 healthy
NAMESPACE NAME HANDLER VERSION TENANT STATUS
empty-sim instance.nplus.cloud/empty-sim manual 9.2.1302 healthy
lab instance.nplus.cloud/default manual 9.2.1302 healthy
lab instance.nplus.cloud/demo-centralservices manual 9.2.1302 healthy
lab instance.nplus.cloud/rms manual 9.2.1302 healthy
lab instance.nplus.cloud/sbs manual 9.2.1302 healthy
lab instance.nplus.cloud/tenant manual 9.2.1302 healthy
```

samples/cluster/build.sh Executable file

@@ -0,0 +1,36 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The name of the sample
SAMPLE=cluster
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/$SAMPLE" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/$SAMPLE not found. Are you running this script as a subscriber?"
exit 1
fi
# Output what is happening
echo "Building $SAMPLE"
# Create the manifest
mkdir -p $DEST/cluster
helm template --debug \
nplus $CHARTS/cluster > $DEST/cluster/nplus.yaml


@@ -0,0 +1 @@
This is the simplest example: it renders a default instance without any values at all.

samples/default/build.sh Executable file

@@ -0,0 +1,38 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
SAMPLE="default"
NAME="sample-$SAMPLE"
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
$NAME $CHARTS/instance > $DEST/instance/default.yaml


@@ -0,0 +1,55 @@
# Using detached applications
All the other samples use an application that is deployed **inside of an instance**. However, you can also deploy an application **detached** from the instance as a solo chart.
The reasons for this are that you
- can update the instance without running the application update
- update the application without touching the instance
- have multiple applications deployed within one instance
There are two major things you need to do:
1. make sure the application chart sets the name of the instance it should connect to
2. make sure the default values of the application match the ones it would get from an instance deployment
This is a sample (find the complete one in [application.yaml](application.yaml)):
```yaml
nameOverride: SBS
docAreas:
- id: "SBS"
name: "DocArea with SBS"
description: "This is a sample DocArea with the SBS Apps installed"
apps:
...
instance:
# this is the name of the instance, it should belong to
name: "sample-detached"
# make sure it can wait for the nappl of the instance to be ready, before it deploys.
waitImage:
repo: cr.nplus.cloud/subscription
name: toolbox2
tag: 1.2.1300
pullPolicy: IfNotPresent
waitFor:
- "-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"
# Now we define where and what to deploy
nappl:
host: "{{ .component.prefix }}nappl.{{ .Release.Namespace }}"
port: 8080
ssl: false
instance: "nscalealinst1"
account: admin
domain: nscale
password: admin
secret:
nstl:
host: "{{ .component.prefix }}nstl.{{ .Release.Namespace }}"
rs:
host: "{{ .component.prefix }}rs.{{ .Release.Namespace }}"
```
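The detached application chart itself is then installed alongside the instance; a sketch, assuming the application chart is published as `nplus-application` in the same repository:
```bash
helm install \
  --values samples/detached/application.yaml \
  sample-detached-app nplus/nplus-application
```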


@@ -0,0 +1,56 @@
nameOverride: SBS
docAreas:
- id: "SBS"
name: "DocArea with SBS"
description: "This is a sample DocArea with the SBS Apps installed"
apps:
- "/pool/nstore/bl-app-9.0.1202.zip"
- "/pool/nstore/gdpr-app-9.0.1302.zip"
- "/pool/nstore/sbs-base-9.0.1302.zip"
- "/pool/nstore/sbs-app-9.0.1302.zip"
- "/pool/nstore/tmpl-app-9.0.1302.zip"
- "/pool/nstore/cm-base-9.0.1302.zip"
- "/pool/nstore/cm-app-9.0.1302.zip"
- "/pool/nstore/hr-base-9.0.1302.zip"
- "/pool/nstore/hr-app-9.0.1302.zip"
- "/pool/nstore/pm-base-9.0.1302.zip"
- "/pool/nstore/pm-app-9.0.1302.zip"
- "/pool/nstore/sd-base-9.0.1302.zip"
- "/pool/nstore/sd-app-9.0.1302.zip"
- "/pool/nstore/kon-app-9.0.1302.zip"
- "/pool/nstore/kal-app-9.0.1302.zip"
- "/pool/nstore/dok-app-9.0.1302.zip"
- "/pool/nstore/ts-base-9.0.1302.zip"
- "/pool/nstore/ts-app-9.0.1302.zip"
- "/pool/nstore/ocr-base-9.0.1302.zip"
resources:
requests:
cpu: "10m"
memory: "512Mi"
limits:
cpu: "4000m"
memory: "2Gi"
instance:
name: "sample-detached"
waitImage:
repo: cr.nplus.cloud/subscription
name: toolbox2
tag: 1.2.1300
pullPolicy: IfNotPresent
nappl:
host: "{{ .component.prefix }}nappl.{{ .Release.Namespace }}"
port: 8080
ssl: false
instance: "nscalealinst1"
account: admin
domain: nscale
password: admin
secret:
waitFor:
- "-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"
nstl:
host: "{{ .component.prefix }}nstl.{{ .Release.Namespace }}"
rs:
host: "{{ .component.prefix }}rs.{{ .Release.Namespace }}"

samples/detached/build.sh Executable file

@@ -0,0 +1,49 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
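# SAMPLES: The path to the samples folder containing the value files referenced below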
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
# Set the Variables
SAMPLE="detached"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
mkdir -p $DEST/application
helm template --debug \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/detached/application.yaml \
$NAME-app $CHARTS/application > $DEST/application/$SAMPLE.yaml


@@ -0,0 +1,69 @@
# K8s namespace aka *nplus environment*
*nplus instances* are deployed into K8s namespaces. Always. Even if you do not specify a namespace, it uses a namespace: `default`.
In order to use this namespace for *nplus instances*, you need to deploy some shared *nplus components* into it, which are used by the instances. This is done by the environment chart:
```
helm install \
--values demo.yaml \
demo nplus/nplus-environment
```
After that, the K8s namespace is a valid *nplus environment* that can house multiple *nplus instances*.
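Once the environment is in place, instances can be installed into that same namespace; a minimal sketch (release name and value files are illustrative):
```bash
helm install \
  --namespace demo \
  --values samples/application/empty.yaml \
  demo1 nplus/nplus-instance
```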
## deploying assets into the environment
Most likely, you will need assets to be used by your instances. Fonts, for example: the *nscale Rendition Server* and the *nscale Server Application Layer* both require the Microsoft fonts, which neither nscale nor nplus is allowed to distribute. So this example shows how to upload some missing pieces into the environment:
```
kubectl cp ./apps/app-installer-9.0.1202.jar nplus-toolbox-0:/conf/pool
kubectl cp ./fonts nplus-toolbox-0:/conf/pool
kubectl cp ./copy-snippet.sh nplus-toolbox-0:/conf/pool/scripts
kubectl cp ./test.md nplus-toolbox-0:/conf/pool/snippets
kubectl cp ./snc nplus-toolbox-0:/conf/pool
```
Alternatively, you can also use a `prepper` component, which you can activate in the environment chart, to download assets from any web site and deploy them into the environment:
```
components:
prepper: true
prepper:
download:
- "https://git.nplus.cloud/public/nplus/raw/branch/master/assets/sample.tar.gz"
```
Please see the prepper [README.md](../../charts/prepper/README.md) for more information.
## Operator Web UI
The environment comes with the operator, responsible for managing / controlling the [custom resources](../cluster/README.md). It has a Web UI that can be enabled in the environment chart.
![screenshot operator](assets/operator.png)
## *namespace*-less manifests
Speaking of namespaces: Sometimes you want to drop the namespace from your manifest. This can be done by
```yaml
utils:
includeNamespace: false
```
when you then call
```bash
helm template myInstance nplus/nplus-instance > myInstance.yaml
```
the manifest in `myInstance.yaml` will **not** have a namespace set, so you can apply it to multiple namespaces later:
```bash
kubectl apply --namespace dev -f myInstance.yaml
kubectl apply --namespace qa -f myInstance.yaml
kubectl apply --namespace prod -f myInstance.yaml
```

Binary file not shown (new image, 337 KiB).

samples/environment/build.sh Executable file

@@ -0,0 +1,39 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The name of the sample
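# SAMPLES: The path to the samples folder containing the value files referenced below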
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
SAMPLE=environment
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/environment" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/environment not found. Are you running this script as a subscriber?"
exit 1
fi
# Output what is happening
echo "Building $SAMPLE for $KUBE_CONTEXT"
# Create the manifest
mkdir -p $DEST/environment
helm template --debug --render-subchart-notes \
--values $SAMPLES/$SAMPLE/$KUBE_CONTEXT.yaml \
$KUBE_CONTEXT $CHARTS/environment > $DEST/environment/$KUBE_CONTEXT.yaml


@@ -0,0 +1,24 @@
toolbox:
enabled: true
dav:
enabled: true
nstoreDownloader:
enabled: true
global:
environment:
utils:
renderComments: true
ingress:
domain: "{{ .instance.group | default .Release.Name }}.demo.nplus.cloud"
class: "public"
issuer: "nplus-issuer"
storage:
conf:
class: "cephfs"
data:
class: "ceph-rbd"
disk:
class: "ceph-rbd"
file:
class: "cephfs"
appInstaller: "/pool/app-installer-9.0.1202.jar"


@@ -0,0 +1,26 @@
toolbox:
enabled: true
dav:
enabled: true
nstoreDownloader:
enabled: true
global:
environment:
ingress:
class: "ingress-internal"
domain: "{{ .instance.group | default .Release.Name }}.dev.nplus.cloud"
issuer: "nplus-issuer"
storage:
conf:
class: "pv-af-auto"
data:
class: "pv-disk-auto"
file:
class: "pv-af-auto"
appInstaller: "/pool/app-installer-9.0.1202.jar"
security:
illumio:
enabled: true
loc: "samples"
supplier: "42i"
platform: "nplus.cloud"


@@ -0,0 +1,48 @@
toolbox:
enabled: true
nstoreDownloader:
enabled: true
dav:
enabled: true
nappl:
ingress:
enabled: true
# -- In the lab / dev environment, we quite often throw away the data disk while keeping the conf folder
# the default for the DA_HID.DAT is the conf folder, so they do not match any more.
# So we switch the check off here.
nstl:
checkHighestDocId: "0"
nstla:
checkHighestDocId: "0"
nstlb:
checkHighestDocId: "0"
global:
environment:
ingress:
domain: "{{ .instance.group | default .Release.Name }}.lab.nplus.cloud"
class: "public"
issuer: "nplus-issuer"
whitelist: "192.168.0.0/16,10.0.0.0/8"
namespace: ingress
# proxyReadTimeout: "360s"
storage:
conf:
class: "cephfs"
ptemp:
class: "cephfs"
data:
class: "ceph-rbd"
disk:
class: "ceph-rbd"
file:
class: "cephfs"
appInstaller: "/pool/app-installer-9.0.1202.jar"
# repoOverride: cr.test.lan
security:
cni:
defaultIngressPolicy: deny
defaultEgressPolicy: deny
createNetworkPolicy: true
excludeUnusedPorts: false


@@ -0,0 +1,12 @@
toolbox:
enabled: true
dav:
enabled: true
nstoreDownloader:
enabled: true
global:
environment:
ingress:
class: "nginx"
domain: "{{ .instance.group | default .Release.Name }}.dev.local"
appInstaller: "/pool/app-installer-9.0.1202.jar"


@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
name: nplus-operator-nodeport-access
spec:
type: NodePort
selector:
nplus/component: operator
ports:
- port: 8080
targetPort: 8080
nodePort: 31976

samples/generic/README.md Normal file

@@ -0,0 +1,98 @@
# Generic Mount Example
This allows you to mount any pre-provisioned PV, secret or configMap into any container.
It can be used, for example, to mount migration NFS or CIFS / Samba shares into a pipeliner container.
Use the following format:
```
mounts:
generic:
- name: <name>
path: <the path in the container, where you want to mount this>
volumeName: <the name of the PV to be mounted>
configMap: <the name of the configMap to be mounted>
secret: <the name of the secret to be mounted>
subPath: [an optional subpath to be used inside the PV]
accessMode: <ReadWriteMany|ReadWriteOnce|ReadOnlyMany|ReadWriteOncePod>
size: <size request>
```
## Mounting generic secrets or configMaps
In this example, we create a secret with two sample files and a configMap with two sample files:
```
apiVersion: v1
kind: Secret
metadata:
name: sample-generic-secret
type: Opaque
stringData:
test1.txt: |
This is a test file
lets see if this works.
test2.txt: |
This is a second test file
lets see if this works.
---
apiVersion: v1
kind: ConfigMap
metadata:
name: sample-generic-configmap
data:
test1.txt: |
This is a test file
lets see if this works.
test2.txt: |
This is a second test file
lets see if this works.
```
Then we use these objects and mount them as directories and as single files:
```
nappl:
mounts:
generic:
# -- This shows how to mount the contents of a secret into
# a directory
- name: "sample-generic-secret"
secret: "sample-generic-secret"
path: "/mnt/secret"
# -- This shows how to mount the contents of a configMap into
# a directory
- name: "sample-generic-configmap"
configMap: "sample-generic-configmap"
path: "/mnt/configmap"
# -- This shows how to mount a file from a secret to a secret file
- name: "sample-generic-secret-file"
secret: "sample-generic-secret"
path: "/mnt/secret-file.txt"
subPath: "test1.txt"
# -- This shows how to mount a file from a configMap to a file
- name: "sample-generic-configmap-file"
configMap: "sample-generic-configmap"
path: "/mnt/configmap-file.txt"
subPath: "test2.txt"
```
## Mounting generic PVs
Here is an example of how to mount any pre-created PV:
```
mounts:
generic:
# -- This shows how to mount a generic Persistent Volume
- name: "migration"
path: "/mnt/migration"
subPath: "{{ .Release.Name }}-mig"
accessModes:
- ReadWriteMany
volumeName: my-migration-data-volume
size: "512Gi"
```

samples/generic/build.sh Executable file

@@ -0,0 +1,45 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
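# SAMPLES: The path to the samples folder containing the value files referenced below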
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
# Set the Variables
SAMPLE="generic"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
--values $SAMPLES/generic/values.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml


@@ -0,0 +1,24 @@
apiVersion: v1
kind: Secret
metadata:
name: sample-generic-secret
type: Opaque
stringData:
test1.txt: |
This is a test file
lets see if this works.
test2.txt: |
This is a second test file
lets see if this works.
---
apiVersion: v1
kind: ConfigMap
metadata:
name: sample-generic-configmap
data:
test1.txt: |
This is a test file
lets see if this works.
test2.txt: |
This is a second test file
lets see if this works.

View File

@@ -0,0 +1,26 @@
nappl:
mounts:
generic:
# -- This shows how to mount the contents of a secret into
# a directory
- name: "sample-generic-secret"
secret: "sample-generic-secret"
path: "/mnt/secret"
# -- This shows how to mount the contents of a configMap into
# a directory
- name: "sample-generic-configmap"
configMap: "sample-generic-configmap"
path: "/mnt/configmap"
# -- This shows how to mount a file from a secret to a secret file
- name: "sample-generic-secret-file"
secret: "sample-generic-secret"
path: "/mnt/secret-file.txt"
subPath: "test1.txt"
# -- This shows how to mount a file from a configMap to a file
- name: "sample-generic-configmap-file"
configMap: "sample-generic-configmap"
path: "/mnt/configmap-file.txt"
subPath: "test2.txt"

56
samples/group/README.md Normal file
View File

@@ -0,0 +1,56 @@
# Grouping Instances
Sometimes Instances become quite large, with many components. If multiple team members work on them, you end up having to synchronize the deployment of the whole Instance.
You can easily split large Instances using the `group` tag, joining multiple Instances into one group while making sure the NetworkPolicies are opened for pods from other Instances within the Instance Group.
```yaml
global:
instance:
# -- despite the instance name, all components within this group will be prefixed
# with the group (unless the group name and the environment name are not identical)
# Also this makes sure the network policies are acting on the group, not on the instance.
group: "sample-group"
```
You can query the instance group in your code with `.instance.group`.
Example: We build multiple Instances in one group:
- sample-group-backend
- Database
- nstl
- rs
- sample-group-middleware
- nappl
- application(s)
- sample-group-frontend
- web
- cmis
Portainer shows the group as if it were a single instance:
![Portainer](assets/portainer.png)
The nplus UI shows the individual instances of the group:
![nplus Web Monitoring](assets/monitor.png)
And the CLI also shows the individual instances:
```
% kubectl get nscale
NAME INSTANCE COMPONENT TYPE VERSION STATUS
component.nplus.cloud/sample-group-cmis sample-group-frontend cmis cmis 9.2.1200 healthy
component.nplus.cloud/sample-group-database sample-group-backend database database 16 healthy
component.nplus.cloud/sample-group-nappl sample-group-middleware nappl core 9.2.1302 healthy
component.nplus.cloud/sample-group-rs sample-group-backend rs rs 9.2.1201 healthy
component.nplus.cloud/sample-group-web sample-group-frontend web web 9.2.1300 healthy
NAME HANDLER VERSION TENANT STATUS
instance.nplus.cloud/sample-group-backend manual 9.2.1302 healthy
instance.nplus.cloud/sample-group-frontend manual 9.2.1302 healthy
instance.nplus.cloud/sample-group-middleware manual 9.2.1302 healthy
```

Binary image files not shown (two screenshot assets, 20 KiB and 120 KiB).

View File

@@ -0,0 +1,42 @@
components:
nappl: false
nappljobs: false
web: false
mon: false
rs: true
ilm: false
cmis: false
database: true
nstl: true
nstla: false
nstlb: false
pipeliner: false
application: false
administrator: false
webdav: false
rms: false
pam: false
global:
instance:
# -- despite the instance name, all components within this group will be prefixed
# with the group (unless the group name and the environment name are not identical)
# Also this makes sure the network policies are acting on the group, not on the instance.
group: "sample-group"
# -- We need to make sure, that only ONE instance is creating the default network policies
# and also the certificate for the group.
# All other group members are using the central one
override:
ingress:
# -- this overrides any issuers and makes sure no certificate request is generated for
# cert-manager
issuer: null
# -- since no issuer is set, the default would be to generate a self signed certificate.
# We need to prevent that
createSelfSignedCertificate: false
security:
cni:
# -- Even if we globally switched the creation of network policies on, we do not want that
# for this instance (and for the instance chart only: subcharts might still create the policies;
# if you want to force that off as well, override in global)
createNetworkPolicy: false

56
samples/group/build.sh Executable file
View File

@@ -0,0 +1,56 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
# Set the Variables
SAMPLE="group"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance-group
helm template --debug \
--values $SAMPLES/group/backend.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME-backend $CHARTS/instance > $DEST/instance-group/$SAMPLE-backend.yaml
helm template --debug \
--values $SAMPLES/group/middleware.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME-middleware $CHARTS/instance > $DEST/instance-group/$SAMPLE-middleware.yaml
helm template --debug \
--values $SAMPLES/group/frontend.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME-frontend $CHARTS/instance > $DEST/instance-group/$SAMPLE-frontend.yaml

View File

@@ -0,0 +1,26 @@
components:
nappl: false
nappljobs: false
web: true
mon: false
rs: false
ilm: false
cmis: true
database: false
nstl: false
nstla: false
nstlb: false
pipeliner: false
application: false
administrator: false
webdav: false
rms: false
pam: false
global:
instance:
# -- despite the instance name, all components within this group will be prefixed
# with the group (unless the group name and the environment name are not identical)
# Also this makes sure the network policies are acting on the group, not on the instance.
group: "sample-group"
# Notice that we do NOT override anything here, as we use this instance as the master for the group.

View File

@@ -0,0 +1,47 @@
components:
nappl: true
nappljobs: false
web: false
mon: false
rs: false
ilm: false
cmis: false
database: false
nstl: false
nstla: false
nstlb: false
pipeliner: false
application: true
administrator: false
webdav: false
rms: false
pam: false
application:
docAreas:
- id: "Sample"
global:
instance:
# -- despite the instance name, all components within this group will be prefixed
# with the group (unless the group name and the environment name are not identical)
# Also this makes sure the network policies are acting on the group, not on the instance.
group: "sample-group"
# -- We need to make sure, that only ONE instance is creating the default network policies
# and also the certificate for the group.
# All other group members are using the central one
override:
ingress:
# -- this overrides any issuers and makes sure no certificate request is generated for
# cert-manager
issuer: null
# -- since no issuer is set, the default would be to generate a self signed certificate.
# We need to prevent that
createSelfSignedCertificate: false
security:
cni:
# -- Even if we globally switched the creation of network policies on, we do not want that
# for this instance (and for the instance chart only: subcharts might still create the policies;
# if you want to force that off as well, override in global)
createNetworkPolicy: false

78
samples/ha/README.md Normal file
View File

@@ -0,0 +1,78 @@
# High Availability
To gain a higher level of availability for your Instance, you can
- create more Kubernetes Cluster Nodes
- create more replicas of the *nscale* and *nplus* components
- distribute those replicas across multiple nodes using anti-affinities
This is how:
```
helm install \
--values samples/ha/values.yaml \
--values samples/environment/demo.yaml \
sample-ha nplus/nplus-instance
```
The essence of the values file is this:
- We use three (3) *nscale Server Application Layer* instances: two dedicated to user access, one dedicated to jobs
- if the jobs node fails, the user nodes take the jobs (handled by priority)
- if one of the user nodes fails, the other one handles the load
- Kubernetes takes care of restarting nodes should that happen
- All components run with two replicas
- Pod anti-affinities handle the distribution
- any administration component only connects to the jobs nappl, leaving the user nodes to the users
- PodDisruptionBudgets are defined for the crucial components. These are set via `minReplicaCount` for the components that can support multiple replicas, and `minReplicaCountType` for the **first** replicaSet of the components that do not support replicas, in this case nstla.
```
web:
replicaCount: 2
minReplicaCount: 1
rs:
replicaCount: 2
minReplicaCount: 1
ilm:
replicaCount: 2
minReplicaCount: 1
cmis:
replicaCount: 2
minReplicaCount: 1
webdav:
replicaCount: 2
minReplicaCount: 1
nstla:
minReplicaCountType: 1
administrator:
nappl:
host: "{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}"
waitFor:
- "-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 600"
pam:
nappl:
host: "{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}"
waitFor:
- "-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 600"
nappl:
replicaCount: 2
minReplicaCount: 1
jobs: false
waitFor:
- "-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 600"
nappljobs:
replicaCount: 1
jobs: true
disableSessionReplication: true
ingress:
enabled: false
snc:
enabled: true
waitFor:
- "-service {{ .component.prefix }}database.{{ .Release.Namespace }}.svc.cluster.local:5432 -timeout 600"
application:
nstl:
host: "{{ .component.prefix }}nstl-cluster.{{ .Release.Namespace }}"
nappl:
host: "{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}"
```
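To check whether the replicas were really spread across the cluster nodes and whether the PodDisruptionBudgets were created from `minReplicaCount` / `minReplicaCountType`, something like this can be used (release name `sample-ha` as in the install command above):
```
# Show on which cluster nodes the replicas ended up
kubectl get pods -o wide | grep sample-ha
# List the PodDisruptionBudgets of the instance
kubectl get pdb | grep sample-ha
```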

58
samples/ha/build.sh Executable file
View File

@@ -0,0 +1,58 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
# Set the Variables
SAMPLE="ha"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/ha/values.yaml \
--values $SAMPLES/hid/values.yaml \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
# creating the Argo manifest
mkdir -p $DEST/instance-argo
helm template --debug \
--values $SAMPLES/ha/values.yaml \
--values $SAMPLES/hid/values.yaml \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME-argo $CHARTS/instance-argo > $DEST/instance-argo/$SAMPLE-argo.yaml

126
samples/ha/values.yaml Normal file
View File

@@ -0,0 +1,126 @@
components:
nappl: true
nappljobs: true
web: true
mon: true
rs: true
ilm: true
erpproxy: true
erpcmis: true
cmis: true
database: true
nstl: false
nstla: true
nstlb: true
pipeliner: false
application: true
administrator: true
webdav: true
rms: false
pam: true
web:
replicaCount: 2
minReplicaCount: 1
rs:
replicaCount: 2
minReplicaCount: 1
ilm:
replicaCount: 2
minReplicaCount: 1
erpproxy:
replicaCount: 2
minReplicaCount: 1
erpcmis:
replicaCount: 2
minReplicaCount: 1
cmis:
replicaCount: 2
minReplicaCount: 1
webdav:
replicaCount: 2
minReplicaCount: 1
administrator:
nappl:
host: "{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}"
waitFor:
- "-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 600"
pam:
nappl:
host: "{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}"
waitFor:
- "-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 600"
nappl:
replicaCount: 2
minReplicaCount: 1
jobs: false
waitFor:
- "-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 600"
nappljobs:
replicaCount: 1
jobs: true
disableSessionReplication: true
ingress:
enabled: false
snc:
enabled: true
waitFor:
- "-service {{ .component.prefix }}database.{{ .Release.Namespace }}.svc.cluster.local:5432 -timeout 600"
application:
nstl:
host: "{{ .component.prefix }}nstl-cluster.{{ .Release.Namespace }}"
nappl:
host: "{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}"
waitFor:
- "-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800"
nstla:
minReplicaCountType: 1
accounting: true
logForwarder:
- name: Accounting
path: "/opt/ceyoniq/nscale-server/storage-layer/accounting/*.csv"
serverID: 4711
env:
NSTL_REMOTESERVER_MAINTAINCONNECTION: 1
NSTL_REMOTESERVER_SERVERID: 4712
NSTL_REMOTESERVER_ADDRESS: "nstlb"
NSTL_REMOTESERVER_NAME: "nstla"
NSTL_REMOTESERVER_USERNAME: "admin"
NSTL_REMOTESERVER_PASSWORD: "admin"
NSTL_REMOTESERVER_MAXCONNECTIONS: 10
NSTL_REMOTESERVER_MAXARCCONNECTIONS: 1
NSTL_REMOTESERVER_FORWARDDELETEJOBS: 0
NSTL_REMOTESERVER_ACCEPTRETRIEVAL: 1
NSTL_REMOTESERVER_ACCEPTDOCS: 1
NSTL_REMOTESERVER_ACCEPTDOCSWITHTHISSERVERID: 1
NSTL_REMOTESERVER_PERMANENTMIGRATION: 1
nstlb:
accounting: true
logForwarder:
- name: Accounting
path: "/opt/ceyoniq/nscale-server/storage-layer/accounting/*.csv"
serverID: 4712
env:
NSTL_REMOTESERVER_MAINTAINCONNECTION: 1
NSTL_REMOTESERVER_SERVERID: 4711
NSTL_REMOTESERVER_ADDRESS: "nstla"
NSTL_REMOTESERVER_NAME: "nstla"
NSTL_REMOTESERVER_USERNAME: "admin"
NSTL_REMOTESERVER_PASSWORD: "admin"
NSTL_REMOTESERVER_MAXCONNECTIONS: 10
NSTL_REMOTESERVER_MAXARCCONNECTIONS: 1
NSTL_REMOTESERVER_FORWARDDELETEJOBS: 0
NSTL_REMOTESERVER_ACCEPTRETRIEVAL: 1
NSTL_REMOTESERVER_ACCEPTDOCS: 1
NSTL_REMOTESERVER_ACCEPTDOCSWITHTHISSERVERID: 1
NSTL_REMOTESERVER_PERMANENTMIGRATION: 1

17
samples/hid/README.md Normal file
View File

@@ -0,0 +1,17 @@
## Highest ID
This example shows how to configure storing the HID file in *nscale Server Storage Layer*:
```yaml
global:
# -- enables checking the highest DocID when starting the server.
# this only makes sense, if you also set a separate volume for the highest ID
# This is a backup / restore feature to avoid data mangling
checkHighestDocId: "1"
# -- sets the path of the highest ID file.
dvCheckPath: "/opt/ceyoniq/nscale-server/storage-layer/hid"
```
We use the global section here to have it activated in all nstl instances defined.
This is used by the empty application sample and the ha sample.
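Since the file only contains a small `global` block, it can be layered on top of any other values file. A sketch based on the empty application sample; the release name `sample-hid` is an assumption:
```
helm install \
--values samples/application/empty.yaml \
--values samples/hid/values.yaml \
--values samples/environment/demo.yaml \
sample-hid nplus/nplus-instance
```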

7
samples/hid/values.yaml Normal file
View File

@@ -0,0 +1,7 @@
global:
# -- enables checking the highest DocID when starting the server.
# this only makes sense, if you also set a separate volume for the highest ID
# This is a backup / restore feature to avoid data mangling
checkHighestDocId: "1"
# -- sets the path of the highest ID file.
dvCheckPath: "/opt/ceyoniq/nscale-server/storage-layer/hid"

36
samples/nowaves/README.md Normal file
View File

@@ -0,0 +1,36 @@
# Deploying with Argo
## The Argo version of the instance
Deploying with Argo CD is straightforward, as there is a ready-to-run instance chart version for Argo that takes **exactly** the same values as the *normal* chart:
```bash
helm install \
--values samples/application/empty.yaml \
--values samples/environment/demo.yaml \
sample-empty-argo nplus/nplus-instance-argo
```
## Using Waves
The instance chart already comes with pre-defined waves. They are good to go with (can be modified though):
```yaml
nappl:
meta:
wave: 15
```
**But**: Argo CD can become annoying when some services do not come up: since Argo CD operates in waves, services of later waves might not be deployed at all if a service of an earlier wave fails.
Especially in DEV, this can become a testing problem.
To turn *off* waves completely for a Stage, Environment or Instance, set:
```
global:
environment:
utils:
disableWave: true
```
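A full install with waves disabled then looks like the other Argo samples, with just this values file added (the release name is an assumption):
```
helm install \
--values samples/application/empty.yaml \
--values samples/environment/demo.yaml \
--values samples/nowaves/values.yaml \
sample-nowaves-argo nplus/nplus-instance-argo
```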

45
samples/nowaves/build.sh Executable file
View File

@@ -0,0 +1,45 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
# Set the Variables
SAMPLE="nowaves"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# creating the Argo manifest
mkdir -p $DEST/instance-argo
helm template --debug \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
--values $SAMPLES/nowaves/values.yaml \
$NAME-argo $CHARTS/instance-argo > $DEST/instance-argo/$SAMPLE-argo.yaml

View File

@@ -0,0 +1,4 @@
global:
environment:
utils:
disableWave: true

View File

@@ -0,0 +1,33 @@
# OpenTelemetry
You can use annotations for telemetry operators such as OpenTelemetry to inject their agents into the Pods.
To do so, you can either add the annotations manually to the components, like this:
```
nappl:
template:
annotations:
instrumentation.opentelemetry.io/inject-java: "true"
instrumentation.opentelemetry.io/java-container-names: "application-layer"
```
Alternatively, you can turn on the built-in functionality for the supported telemetry services.
This is an example for OpenTelemetry:
```
global:
telemetry:
openTelemetry: true
serviceName: "{{ .this.meta.type }}-{{ .instance.name }}-{{ .instance.stage }}"
meta:
stage: "dev"
```
This will automatically set the correct settings as seen above.
Please also see here:
- https://opentelemetry.io/docs/kubernetes/operator/automatic/
- https://github.com/open-telemetry/opentelemetry-operator#opentelemetry-auto-instrumentation-injection
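Assuming the OpenTelemetry Operator and an `Instrumentation` resource are present in the namespace, you can afterwards check whether the annotations were rendered and the agent was injected. This is only a sketch; the pod name is an assumption:
```
# Show the rendered instrumentation annotations on the nappl pod
kubectl get pod sample-otel-nappl-0 -o jsonpath='{.metadata.annotations}'
# The operator adds an init container that provides the Java agent
kubectl get pod sample-otel-nappl-0 -o jsonpath='{.spec.initContainers[*].name}'
```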

47
samples/pinning/README.md Normal file
View File

@@ -0,0 +1,47 @@
# Pinning Versions
## Old Version
If you would like to test rolling updates and updates to new minor versions, check out the *e90* sample:
This sample will install version 9.0.1400 for you to test. Since the Cluster Node Discovery changed due to a new jGroups version in nscale, the chart notices the old version and turns on the legacy discovery mechanism to allow the Pod to find its peers in versions prior to 9.1.1200.
```
helm install \
--values samples/empty.yaml \
--values samples/demo.yaml \
--values versions/9.0.1400.yaml \
sample-e90 nplus/nplus-instance
```
## New Version Sample
Some nscale versions are license-compatible, meaning that, for example, a version 9.1 license file is also able to run nscale version 9.0 software. But that is not always the case.
So you might need to set individual licenses per instance:
```
kubectl create secret generic nscale-license-e10 \
--from-file=license.xml=license10.xml
```
Check if the license has been created:
```
# kubectl get secret | grep license
nscale-license Opaque 1 7d22h
nscale-license-e10 Opaque 1 17s
```
Now, we install the instance:
```
helm upgrade -i \
--values samples/empty.yaml \
--values samples/demo.yaml \
--values versions/10.0.yaml \
--set global.license=nscale-license-e10 \
sample-e10 nplus/nplus-instance
```
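Whether the pinning took effect can be checked afterwards via the VERSION column of the component resources; a quick sketch:
```
# The VERSION column should show the pinned nscale version
kubectl get nscale | grep sample-e10
```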

93
samples/pinning/build.sh Executable file
View File

@@ -0,0 +1,93 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
#
# VERSION 9.0
#
# Set the Variables
SAMPLE="nscale-90"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
--values $WORKSPACE/versions/9.0.1400.yaml \
--set global.license=nscale-license-e92 \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
# creating the Argo manifest
mkdir -p $DEST/instance-argo
helm template --debug \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
--values $WORKSPACE/versions/9.0.1400.yaml \
--set global.license=nscale-license-e92 \
$NAME-argo $CHARTS/instance-argo > $DEST/instance-argo/$SAMPLE-argo.yaml
#
# VERSION 9.1
#
# Set the Variables
SAMPLE="nscale-91"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
--values $WORKSPACE/versions/9.1.1506.yaml \
--set global.license=nscale-license-e92 \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
# creating the Argo manifest
mkdir -p $DEST/instance-argo
helm template --debug \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
--values $WORKSPACE/versions/9.1.1506.yaml \
--set global.license=nscale-license-e92 \
$NAME-argo $CHARTS/instance-argo > $DEST/instance-argo/$SAMPLE-argo.yaml

192
samples/resources/README.md Normal file
View File

@@ -0,0 +1,192 @@
## Assigning CPU and RAM
You **should** assign resources to your components, depending on the load that you expect.
In a dev environment, that might be very little and you may be fine with the defaults.
In a qa or prod environment, this should be controlled deliberately, like this:
```yaml
nappl:
resources:
requests:
cpu: "100m" # Minimum 1/10 CPU
memory: "1024Mi" # Minimum 1 GB
limits:
cpu: "2000m" # Maximum 2 Cores
memory: "4096Mi" # Maximum 4 GB. Java will see this as total.
javaOpts:
javaMinMem: "512m" # tell Java to initialize the heap with 512 MB
javaMaxMem: "2048m" # tell Java to use max 2 GB of heap size
```
There are many ongoing discussions about how much memory you should give to Java processes and how they react. Please see the internet for insight.
#### Our **current** opinion is:
Do not limit RAM. You cannot foresee how much Java really consumes, as the heap is only part of the RAM requirement. Java also needs *metaspace*, *code cache* and *thread stack*. The *GC* needs some memory as well, as do the *symbols*.
Java will crash when out of memory, so even if you set javaMaxMem == 1/2 limits.memory (which many do), that guarantees nothing and might waste a lot.
So what you can consider is:
```yaml
nappl:
resources:
requests:
cpu: "1000m" # 1 Core guaranteed
memory: "4096Mi" # 4GB guaranteed
limits:
cpu: "4000m" # Maximum 4 Cores
# memory: # No Limit but hardware
javaOpts:
javaMinMem: "1024m" # Start with 1 GB
javaMaxMem: "3072m" # Go up to 3GB (which is only part of it) but be able to take more (up to limit) without crash
```
Downside of this approach: If you have a memory leak, it might consume all of your node's memory without being stopped by a hard limit.
#### A possible **Alternative**:
You can set the RAM limit equal to the RAM request and leave the Java memory settings at *automatic*, which basically simulates a server. Java will *see* the limit as the amount of RAM installed in the machine and act accordingly.
```yaml
nappl:
resources:
requests:
cpu: "1000m" # 1 Core guaranteed
memory: "4096Mi" # 4GB guaranteed
limits:
cpu: "4000m" # Maximum 4 Cores
memory: "4096Mi" # No Limit but hardware
# javaOpts:
# javaMinMem: # unset, leaving it to java
# javaMaxMem: # unset, leaving it to java
```
#### In a **DEV** environment,
you might want to do more **overprovisioning**. You could even leave it completely unlimited: in **DEV** you want to see memory and CPU leaks, and a limit might hide them from you.
So this is a possible allocation for **DEV**, defining only the bare minimum requests:
```yaml
nappl:
resources:
requests:
cpu: "1m" # 1/1000 Core guaranteed,
# but can consume all cores of the cluster node if required and available
memory: "512Mi" # 512MB guaranteed,
# but can consume all RAM of the cluster node if required and available
```
In this case, Java will see all node RAM as the limit and use whatever it needs. But as you are in a **dev** environment, there is no load and no users on the machine, so this will not require much.
## Resources you should calculate
The default resources assigned by *nplus* are for demo / testing only, and you should definitely assign more resources to your components.
Here is a very rough estimate of what you need:
| Component | Minimum (Demo and Dev) | Small | Medium | Large | XL | Remark |
| --------------- | ---------------------- | ---------------- | ----------------- | ------------------ | ---- | ----------------------------------------------------------- |
| ADMIN | 1 GB RAM, 1 Core | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | | |
| **Application** | - | - | - | - | | Resources required during deployment only |
| CMIS | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | |
| **Database** | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 8 GB RAM, 6 Core | 16 GB RAM, 8 Core | open | |
| ILM | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | |
| MON | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | quite fix |
| **NAPPL** | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 8 GB RAM, 6 Core | 16 GB RAM, 8 Core | open | CPU depending on Jobs / Hooks, RAM depending on the number of users |
| **NSTL** | 500 MB RAM, 1 Core | 1 GB RAM, 2 Core | 1 GB RAM, 2 Core | 1 GB RAM, 2 Core | | quite fix |
| PAM | | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | | |
| PIPELINER | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 4 GB RAM, 4 Core | 4 GB RAM, 4 Core | open | Depending on Core Mode *or* AC Mode, No Session Replication |
| **RS** | 1 GB RAM, 1 Core | 8 GB RAM, 4 Core | 32 GB RAM, 8 Core | 64 GB RAM, 12 Core | open | CPU depending on format type, RAM depending on file size |
| SHAREPOINT | | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | |
| WEB | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 8 GB RAM, 4 Core | open | |
| WEBDAV | | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | |
**Bold** components are required by an *SBS* setup, so here are some estimates per Application:
| Component | Minimum (Demo and Dev) | Minimum (PROD) | Recommended (PROD) | Remark |
| --------- | ---------------------- | ----------------- | ------------------ | ------------------ |
| SBS | 6 GB RAM, 4 Core | 16 GB RAM, 8 Core | 24 GB RAM, 12 Core | Without WEB Client |
| eGOV | TODO | TODO | TODO | eGOV needs much more CPU than a non-eGOV system |
A word on **eGOV**: The eGOV App brings hooks and jobs that require much more resources than a *normal* nscale system, even with other Apps installed.
## Real Resources in DEV Idle
```
% kubectl top pods
...
sample-ha-administrator-0 2m 480Mi
sample-ha-argo-administrator-0 2m 456Mi
sample-ha-argo-cmis-5ff7d78c47-kgxsn 2m 385Mi
sample-ha-argo-cmis-5ff7d78c47-whx9j 2m 379Mi
sample-ha-argo-database-0 2m 112Mi
sample-ha-argo-ilm-58c65bbd64-pxgdl 2m 178Mi
sample-ha-argo-ilm-58c65bbd64-tpxfz 2m 168Mi
sample-ha-argo-mon-0 2m 308Mi
sample-ha-argo-nappl-0 5m 1454Mi
sample-ha-argo-nappl-1 3m 1452Mi
sample-ha-argo-nappljobs-0 5m 2275Mi
sample-ha-argo-nstla-0 4m 25Mi
sample-ha-argo-nstlb-0 6m 25Mi
sample-ha-argo-pam-0 5m 458Mi
sample-ha-argo-rs-7d6888d9f8-lp65s 2m 1008Mi
sample-ha-argo-rs-7d6888d9f8-tjxh8 2m 1135Mi
sample-ha-argo-web-f646f75b8-htn8x 4m 1224Mi
sample-ha-argo-web-f646f75b8-nvvjf 11m 1239Mi
sample-ha-argo-webdav-d69549bd4-nz4wn 2m 354Mi
sample-ha-argo-webdav-d69549bd4-vrg2n 3m 364Mi
sample-ha-cmis-5fc96b8f89-cwd62 2m 408Mi
sample-ha-cmis-5fc96b8f89-q4nr4 3m 442Mi
sample-ha-database-0 2m 106Mi
sample-ha-ilm-6b599bc694-5ht57 2m 174Mi
sample-ha-ilm-6b599bc694-ljkl4 2m 193Mi
sample-ha-mon-0 3m 355Mi
sample-ha-nappl-0 3m 1278Mi
sample-ha-nappl-1 4m 1295Mi
sample-ha-nappljobs-0 6m 1765Mi
sample-ha-nstla-0 4m 25Mi
sample-ha-nstlb-0 4m 25Mi
sample-ha-pam-0 2m 510Mi
sample-ha-rs-7b5fc586f6-49qhp 2m 951Mi
sample-ha-rs-7b5fc586f6-nkjqb 2m 1205Mi
sample-ha-web-7bd6ffc96b-pwvcv 3m 725Mi
sample-ha-web-7bd6ffc96b-rktrh 9m 776Mi
sample-ha-webdav-9df789f8-2d2wn 2m 365Mi
sample-ha-webdav-9df789f8-psh5q 2m 345Mi
...
```
## Defaults
Check the file `default.yaml`. You can set default memory limits for a container. These defaults are applied if you do not specify any resources in your manifest.
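The `LimitRange` in `default.yaml` is a plain Kubernetes object and is applied per namespace, not via the chart. A minimal sketch, assuming the file lives at `samples/resources/default.yaml` and the target namespace is `my-namespace`:
```
kubectl apply -n my-namespace -f samples/resources/default.yaml
kubectl describe limitrange defaults-resources -n my-namespace
```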
## Setting Resources for sidecar containers and init containers
You can also set resources for sidecar containers and init containers. However, you should only set these if you know exactly what you are doing and what implications they have.
```yaml
nstl:
sidecarResources:
requests:
cpu: "100m" # 0.1 Core guaranteed
memory: "1024Mi" # 1GB guaranteed
limits:
memory: "2048Mi" # Limit to 2 GB
# we do NOT limit the CPU (read [here](https://home.robusta.dev/blog/stop-using-cpu-limits) for details)
```
Init container resources can be set using the `initResources` key.

View File

@@ -0,0 +1,19 @@
apiVersion: v1
kind: LimitRange
metadata:
name: defaults-resources
spec:
limits:
- default: # limits
memory: "2Gi"
cpu: "4"
defaultRequest: # requests
memory: 512Mi
cpu: "5m"
max: # max and min define the limit range
cpu: "4000m"
memory: "4Gi"
min:
cpu: "1m"
memory: "128Mi"
type: Container

225
samples/resources/lab.yaml Normal file
View File

@@ -0,0 +1,225 @@
web:
resources:
requests:
cpu: "10m"
memory: "1.5Gi"
limits:
cpu: "4000m"
memory: "4Gi"
prepper:
resources:
requests:
cpu: "10m"
memory: "128Mi"
limits:
cpu: "1000m"
memory: "128Mi"
application:
resources:
requests:
cpu: "10m"
memory: "512Mi"
limits:
cpu: "4000m"
memory: "2Gi"
nappl:
resources:
requests:
cpu: "10m"
memory: "1.5Gi"
limits:
cpu: "4000m"
memory: "4Gi"
nappljobs:
resources:
requests:
cpu: "10m"
memory: "2Gi"
limits:
cpu: "4000m"
memory: "4Gi"
administrator:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
cmis:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
erpcmis:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
erpproxy:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
database:
resources:
requests:
cpu: "10m"
memory: "256Mi"
limits:
cpu: "4000m"
memory: "8Gi"
ilm:
resources:
requests:
cpu: "2m"
memory: "256Mi"
limits:
cpu: "2000m"
memory: "2Gi"
mon:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
nstl:
resources:
requests:
cpu: "5m"
memory: "128Mi"
limits:
cpu: "2000m"
memory: "1Gi"
nstla:
resources:
requests:
cpu: "5m"
memory: "128Mi"
limits:
cpu: "2000m"
memory: "1Gi"
nstlb:
resources:
requests:
cpu: "5m"
memory: "128Mi"
limits:
cpu: "2000m"
memory: "1Gi"
nstlc:
resources:
requests:
cpu: "5m"
memory: "128Mi"
limits:
cpu: "2000m"
memory: "1Gi"
nstld:
resources:
requests:
cpu: "5m"
memory: "128Mi"
limits:
cpu: "2000m"
memory: "1Gi"
pam:
resources:
requests:
cpu: "5m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "1Gi"
rs:
resources:
requests:
cpu: "2m"
memory: "1Gi"
limits:
cpu: "4000m"
memory: "8Gi"
webdav:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
rms:
resources:
requests:
cpu: "2m"
memory: "128Mi"
limits:
cpu: "1000m"
memory: "512Mi"
rmsa:
resources:
requests:
cpu: "2m"
memory: "128Mi"
limits:
cpu: "1000m"
memory: "512Mi"
rmsb:
resources:
requests:
cpu: "2m"
memory: "128Mi"
limits:
cpu: "1000m"
memory: "512Mi"
sharepoint:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
sharepointa:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
sharepointb:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
sharepointc:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"
sharepointd:
resources:
requests:
cpu: "2m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "2Gi"

59
samples/rms/README.md Normal file
View File

@@ -0,0 +1,59 @@
# (virtual-) Remote Management Server
The *nplus RMS* creates a virtual IP address in your subnet. On this IP, you will find an *nscale Remote Management Service* and a Layer 4 proxy, forwarding the ports of the components to the
corresponding pods.
The result is that, under this VIP, it looks as if there were a real server with a bunch of *nscale* components installed. So you can use the desktop admin client to connect to it and configure it, including offline configuration.
The offline configuration writes settings to the configuration files of the components. These files are injected into the Pods by *nplus*, making the legacy magic work again.
Also, the Shutdown, Startup and Restart buttons in the Admin client will work, as these are translated to Kubernetes commands by *nplus*.
However, there are some restrictions:
- In an HA scenario, you need multiple virtual servers, as nscale does not allow some components (like nstl) to deploy more than one instance per server, and they would also block the default ports. So it is better to have more than one RMS.
- Log files are not written, so the Administrator cannot grab them: there is no log file viewing in the Admin client.
> Please notice that this is a BETA Feature not released for Production use.
This is a sample of RMS in a HA environment with two virtual servers:
```yaml
components:
rmsa: true
rmsb: true
rmsa:
ingress:
domain: "server1.{{ .instance.group | default .Release.Name }}.lab.nplus.cloud"
comps:
nappl:
enabled: true
restartReplicas: 2
nstl:
enabled: true
name: nstla
restartReplicas: 1
host: "{{ .component.prefix }}nstla.{{ .Release.Namespace }}.svc.cluster.local"
rs:
enabled: true
restartReplicas: 2
web:
enabled: true
restartReplicas: 2
rmsb:
ingress:
domain: "server2.{{ .instance.group | default .Release.Name }}.lab.nplus.cloud"
comps:
nappl:
enabled: true
name: nappljobs
restartReplicas: 1
replicaSetType: StatefulSet
host: "{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local"
nstl:
name: nstlb
enabled: true
restartReplicas: 1
host: "{{ .component.prefix }}nstlb.{{ .Release.Namespace }}.svc.cluster.local"
```
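Once deployed, each virtual server shows up as a LoadBalancer service carrying the assigned VIP as its external IP. A quick check could be (the exact service names depend on your instance name):
```
# Show the LoadBalancer services and their external IPs
kubectl get svc | grep -i rms
```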

66
samples/rms/build.sh Executable file
View File

@@ -0,0 +1,66 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
# Set the Variables
SAMPLE="administrator-server"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
--values $SAMPLES/rms/server.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
# Set the Variables
SAMPLE="administrator-server-ha"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/ha/values.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
--values $SAMPLES/rms/server-ha.yaml \
--values $SAMPLES/application/empty.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml

View File

@@ -0,0 +1,39 @@
components:
rmsa: true
rmsb: true
# You could set an IP here, or take one from the pool.
# If you set an IP, it has to come from a pool.
rmsa:
ingress:
domain: "server1.{{ .instance.group | default .Release.Name }}.lab.nplus.cloud"
comps:
nappl:
enabled: true
restartReplicas: 2
nstl:
enabled: true
name: nstla
restartReplicas: 1
host: "{{ .component.prefix }}nstla.{{ .Release.Namespace }}.svc.cluster.local"
rs:
enabled: true
restartReplicas: 2
web:
enabled: true
restartReplicas: 2
rmsb:
ingress:
domain: "server2.{{ .instance.group | default .Release.Name }}.lab.nplus.cloud"
comps:
nappl:
enabled: true
name: nappljobs
restartReplicas: 1
replicaSetType: StatefulSet
host: "{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local"
nstl:
name: nstlb
enabled: true
restartReplicas: 1
host: "{{ .component.prefix }}nstlb.{{ .Release.Namespace }}.svc.cluster.local"

19
samples/rms/server.yaml Normal file
View File

@@ -0,0 +1,19 @@
components:
# -- Enable the nplus Remote Management Server / rms
rms: true
rms:
ingress:
domain: "admin.{{ .instance.group | default .Release.Name }}.lab.nplus.cloud"
# -- This sets the external IP. It has to come from the Layer 3 Load Balancer pool, otherwise your
# L3 Load Balancer will not be able to assign it.
# If you leave this empty, a VIP will be assigned from the pool.
externalIp: 10.17.1.49
comps:
nappl:
enabled: true
nstl:
enabled: true
rs:
enabled: true
web:
enabled: true

View File

@@ -0,0 +1,67 @@
# Security
## All the standards
There are several features that will enhance the security of your system:
- all components are running rootless by default
- all components drop all privileges
- all components deny escalation
- all components have read only file systems
- Access is restricted by NetworkPolicies
## Additional: The backend Protocol
Additionally, you can increase security by encrypting communication in the backend. Depending on your network driver, this might already be done automatically between the Kubernetes Nodes. But you can add that even within a single node by switching the backend protocol to https:
```yaml
global:
nappl:
port: 8443
ssl: true
# Web and PAM do not speak https by default yet, CRs have been filed.
nappl:
ingress:
backendProtocol: https
cmis:
ingress:
backendProtocol: https
ilm:
ingress:
backendProtocol: https
webdav:
ingress:
backendProtocol: https
rs:
ingress:
backendProtocol: https
mon:
ingress:
backendProtocol: https
administrator:
ingress:
backendProtocol: https
```
This will switch every communication to https, **but** leave the unencrypted ports (http) **open** for inter-pod communication.
## Zero Trust Mode
This will basically do the same as above, **but** also turn **off** any unencrypted port (like http) and additionally implement NetworkPolicies that drop unencrypted packets.
This also affects the way *probes* check the pods' health: *nplus* switches them to https as well, so even the internal health check infrastructure is encrypted in *zero trust mode*:
```yaml
components:
pam: false #TODO: ITSMSD-8771: PAM does not yet support https backend.
global:
security:
zeroTrust: true
nappl:
port: 8443
ssl: true
```
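Installing the zero trust variant combines these values with the HA sample, just like the lab build script below does; the release name and environment file are assumptions:
```
helm install \
--values samples/ha/values.yaml \
--values samples/security/zerotrust.yaml \
--values samples/application/empty.yaml \
--values samples/environment/demo.yaml \
sample-zerotrust nplus/nplus-instance
```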

85
samples/security/build.sh Executable file
View File

@@ -0,0 +1,85 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
# Set the Variables
SAMPLE="encrypt"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/security/encrypt.yaml \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
# creating the Argo manifest
mkdir -p $DEST/instance-argo
helm template --debug \
--values $SAMPLES/security/encrypt.yaml \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME-argo $CHARTS/instance-argo > $DEST/instance-argo/$SAMPLE-argo.yaml
# Set the Variables
SAMPLE="zerotrust"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/ha/values.yaml \
--values $SAMPLES/security/zerotrust.yaml \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
# creating the Argo manifest
mkdir -p $DEST/instance-argo
helm template --debug \
--values $SAMPLES/ha/values.yaml \
--values $SAMPLES/security/zerotrust.yaml \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME-argo $CHARTS/instance-argo > $DEST/instance-argo/$SAMPLE-argo.yaml

View File

@@ -0,0 +1,28 @@
global:
nappl:
port: 8443
ssl: true
# Web and PAM do not speak https by default yet, CRs have been filed.
nappl:
ingress:
backendProtocol: https
cmis:
ingress:
backendProtocol: https
ilm:
ingress:
backendProtocol: https
webdav:
ingress:
backendProtocol: https
rs:
ingress:
backendProtocol: https
mon:
ingress:
backendProtocol: https
administrator:
ingress:
backendProtocol: https

View File

@@ -0,0 +1,8 @@
components:
pam: false # TODO: ITSMSD-8771: PAM does not yet support https backend.
global:
security:
zeroTrust: true
nappl:
port: 8443
ssl: true

34
samples/shared/README.md Normal file
View File

@@ -0,0 +1,34 @@
# Sharing Instances
Some organisations have multiple tenants that share common services, like *nscale Rendition Server*, or
have a common IT department and thus use only a single *nscale Monitoring Console* across all tenants.
This is the Central Services Part:
```
helm install \
--values samples/shared/centralservices.yaml \
--values samples/environment/demo.yaml \
sample-shared-cs nplus/nplus-instance
```
And this is the tenant using the Central Services:
```
helm install \
--values samples/shared/shared.yaml \
--values samples/environment/demo.yaml \
sample-shared nplus/nplus-instance
```
If you enable security based on *Network Policies*, you need to add additional Policies to allow access. Please see `shared-networkpolicy.yaml` and `centralservices-networkpolicy.yaml` as an example.
You also want to set the *monitoringInstance* in the `global` section of the values file to enable the Network Policy for incoming monitoring traffic.
```yaml
global:
security:
cni:
monitoringInstance: sample-shared-cs
```
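The extra policies are plain manifests; the lab build script simply appends them to the rendered output, but applying them by hand works just as well:
```
kubectl apply -f samples/shared/shared-networkpolicy.yaml
kubectl apply -f samples/shared/centralservices-networkpolicy.yaml
```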

71
samples/shared/build.sh Executable file
View File

@@ -0,0 +1,71 @@
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
exit 1
fi
if [ ! -d "$DEST" ]; then
echo "ERROR Building $SAMPLE example: DEST folder not found."
exit 1
fi
if [ ! -d "$CHARTS/instance" ]; then
echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
exit 1
fi
# Set the Variables
SAMPLE="shared"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/shared/shared.yaml \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
# Adding the extra network policy
echo -e "\n---\n" >> $DEST/instance/$SAMPLE.yaml
cat $SAMPLES/shared/shared-networkpolicy.yaml >> $DEST/instance/$SAMPLE.yaml
# Set the Variables
SAMPLE="shared-cs"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/shared/centralservices.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
# Adding the extra network policy
echo -e "\n---\n" >> $DEST/instance/$SAMPLE.yaml
cat $SAMPLES/shared/centralservices-networkpolicy.yaml >> $DEST/instance/$SAMPLE.yaml

View File

@@ -0,0 +1,53 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: sample-shared-cs-interinstance-core
labels:
nplus/instance: sample-shared-cs
spec:
podSelector:
matchLabels:
nplus/instance: sample-shared-cs
nplus/type: nstl
policyTypes:
- Ingress
ingress:
#
# allow access from alien CORE components to a central nscale Storage Layer
#
- from:
- podSelector:
matchLabels:
nplus/instance: sample-shared
nplus/type: core
ports:
- protocol: TCP
port: 3005
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: sample-shared-cs-interinstance-mon
labels:
nplus/instance: sample-shared-cs
spec:
podSelector:
matchLabels:
nplus/instance: sample-shared-cs
nplus/type: mon
policyTypes:
- Egress
egress:
#
# allow monitoring console to monitor alien components.
# you will have to set the alien monitoring in the target namespace / instance.
# .Values.security.cni.monitoringNamespace .Values.security.cni.monitoringInstance
#
- to:
- podSelector:
matchLabels:
nplus/instance: sample-shared
nplus/type: core
ports:
- protocol: TCP
port: 3005

View File

@@ -0,0 +1,14 @@
components:
application: false
nappl: false
nappljobs: false
rs: true
mon: true
cmis: false
ilm: false
database: false
web: false
nstl: true
pipeliner: false
administrator: false
webdav: false

View File

@@ -0,0 +1,38 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: sample-shared-interinstance
labels:
nplus/instance: sample-shared
spec:
podSelector:
matchLabels:
nplus/instance: sample-shared
nplus/type: core
policyTypes:
- Egress
egress:
#
# allow access from CORE components to a central nscale Storage Layer
#
- to:
- podSelector:
matchLabels:
nplus/instance: sample-shared-cs
nplus/type: nstl
ports:
- protocol: TCP
port: 3005
#
# allow access from CORE components to a central nscale Rendition Server
#
- to:
- podSelector:
matchLabels:
nplus/instance: sample-shared-cs
nplus/type: rs
ports:
- protocol: TCP
port: 8192
- protocol: TCP
port: 8193

View File

@@ -0,0 +1,19 @@
components:
application: true
rs: false
mon: false
nstl: false
application:
enabled: true
docAreas:
- id: "DA"
nstl:
host: "sample-shared-cs-nstl.{{ .Release.Namespace }}"
rs:
host: "sample-shared-cs-rs.{{ .Release.Namespace }}"
global:
security:
cni:
monitoringInstance: sample-shared-cs

View File

@@ -0,0 +1,106 @@
# Specifics of the Sharepoint Connector
Normally, you will have different configurations if you want multiple Sharepoint Connectors. This makes the *nsp* somewhat special:
## Multi Instance HA Sharepoint Connector
This sample shows how to set up a SharePoint connector with multiple instances having **different** configurations for archival, but with **High Availability** on the retrieval side.
SharePoint is one of the few components for which it is quite common to have multiple instances instead of replicas. Replicas would imply that the configuration of all pods is identical. However, you might want multiple configurations, as you also have multiple SharePoint sites you want to archive.
Running multiple instances with ingress enabled raises the question of what the context path is for each instance. It cannot be the same for all of them, as the load balancer would not be able to distinguish between them and would refuse to add the configuration object, leading to a deadlock situation.
So *nplus* defines different context paths if you have multiple instances:
- sharepointa on `/nscale_spca`
- sharepointb on `/nscale_spcb`
- sharepointc on `/nscale_spcc`
- sharepointd on `/nscale_spcd`
If you only run one instance, it defaults to `/nscale_spc`.
## HA on retrieval
Once archived, you might want to use all instances for retrieval, since they share a common retrieval configuration (same nappl, ...). So in order to gain High Availability even across multiple instances, there are two options:
1. You turn off the services and ingresses on any sharepoint instance but sharepointa. Then you switch sharepointa's service selector to *type mode*, selecting all pods with type `sharepoint` instead of all pods of component `sharepointa`. Then you can access this one service to reach them all.
2. You can turn on the *clusterService*, which is an additional service that selects all `sharepoint` type pods and then adds an extra ingress on this service with the default context path `nscale_spc`.
However, in both scenarios, beware that the SharePoint connector can only serve one context path at a time, so you will need to change the context path accordingly.
## Sample for solution 1
On the instance, define the following:
```
components:
# -- First, we switch the default SharePoint OFF
sharepoint: false
# -- Then we enable two sharepoint instances to be used with different configurations
sharepointa: true
sharepointb: true
sharepointa:
service:
# -- Switching the service to "type" makes sure we select not only the component pods (in this case all replicas of sharepointa)
# but rather **any** pod of type sharepoint.
selector: "type"
ingress:
# -- The default contextPath for sharepointa is `nscale_spca` to make sure we have distinguishable paths for all sharepoint instances.
# however, in this case we re-use the service as cluster service and the ingress as cluster ingress, so we switch to the general
# contextPath, as if it was a single component deployment
contextPath: "/nscale_spc"
sharepointb:
service:
# -- The other SP Instance does not need a service any more, as it is selected by the cluster service above. So we switch off the component
# service which also switches off the ingress as it would not have a backing service any more
enabled: false
# -- The default contextPath for sharepointb is `nscale_spcb` to make sure we have distinguishable paths for all sharepoint instances.
# however, in this case we re-use the service as cluster service and the ingress as cluster ingress, so we switch to the general
# contextPath, as if it was a single component deployment
contextPath: "/nscale_spc"
```
## Sample for Solution 2
On the instance, define the following:
```
components:
# -- First, we switch the default SharePoint OFF
sharepoint: false
# -- Then we enable two sharepoint instances to be used with different configurations
sharepointa: true
sharepointb: true
sharepointa:
clusterService:
# -- This enables the cluster service
enabled: true
# -- the cluster Ingress needs to know the context path it should react on.
contextPath: "/nscale_spc"
ingress:
# -- we turn off the original ingress as the common context path would block the deployment
enabled: false
# -- The default contextPath for sharepointa is `nscale_spca` to make sure we have distinguishable paths for all sharepoint instances.
# however, in this case we re-use the service as cluster service and die ingress as cluster ingress, so we switch to the general
# contextPath, as if it was a single component deployment
contextPath: "/nscale_spc"
sharepointb:
clusterService:
# -- on the second SharePoint Instance, we **disable** the cluster service, as it is already created by sharepointa.
enabled: false
# -- however, we need to set the context path, as this tells the networkPolicy to open up for ingress even though we switch die Ingress off in the
# next step
contextPath: "/nscale_spc"
ingress:
# -- we turn off the original ingress as the common context path would block the deployment
enabled: false
# -- The default contextPath for sharepointb is `nscale_spcb` to make sure we have distinguishable paths for all sharepoint instances.
# however, in this case we re-use the service as cluster service and die ingress as cluster ingress, so we switch to the general
# contextPath, as if it was a single component deployment
contextPath: "/nscale_spc"
```
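Conceptually, solution 2 ends up with one extra ingress that routes the shared context path to the cluster service; a rough sketch of the resulting routing rule (host, names and port are placeholders, not actual chart output):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  # hypothetical name of the cluster ingress
  name: sample-sharepoint-cluster
spec:
  rules:
    - host: nscale.example.com              # placeholder host
      http:
        paths:
          - path: /nscale_spc               # the shared context path from solution 2
            pathType: Prefix
            backend:
              service:
                name: sample-sharepoint-cluster   # hypothetical cluster service selecting all sharepoint pods
                port:
                  number: 8080                    # placeholder port
```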

samples/sharepoint/build.sh Executable file
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
    echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
    exit 1
fi

if [ ! -d "$DEST" ]; then
    echo "ERROR Building $SAMPLE example: DEST folder not found."
    exit 1
fi

if [ ! -d "$CHARTS/instance" ]; then
    echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
    exit 1
fi
# Set the Variables
SAMPLE="sharepoint"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/sharepoint/solution2.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml

components:
  # -- First, we switch the default SharePoint OFF
  sharepoint: false
  # -- Then we enable two SharePoint instances to be used with different configurations
  sharepointa: true
  sharepointb: true

sharepointa:
  service:
    # -- Switching the service selector to "type" makes sure we select not only the component pods
    # (in this case all replicas of sharepointa) but rather **any** pod of type sharepoint.
    selector: "type"
  ingress:
    # -- The default contextPath for sharepointa is `nscale_spca`, so that all sharepoint instances have distinguishable paths.
    # However, in this case we re-use the service as cluster service and the ingress as cluster ingress,
    # so we switch to the general contextPath, as if it were a single component deployment.
    contextPath: "/nscale_spc"

sharepointb:
  service:
    # -- The other SP instance no longer needs a service, as its pods are selected by the cluster service above.
    # So we switch off the component service, which also switches off the ingress, as it would no longer have a backing service.
    enabled: false
  ingress:
    # -- The default contextPath for sharepointb is `nscale_spcb`, so that all sharepoint instances have distinguishable paths.
    # However, in this case we re-use the service as cluster service and the ingress as cluster ingress,
    # so we switch to the general contextPath, as if it were a single component deployment.
    contextPath: "/nscale_spc"

components:
  # -- First, we switch the default SharePoint OFF
  sharepoint: false
  # -- Then we enable two SharePoint instances to be used with different configurations
  sharepointa: true
  sharepointb: true

sharepointa:
  clusterService:
    # -- This enables the cluster service
    enabled: true
    # -- The cluster ingress needs to know which context path it should react on.
    contextPath: "/nscale_spc"
  ingress:
    # -- We turn off the original ingress, as the common context path would block the deployment.
    enabled: false
    # -- The default contextPath for sharepointa is `nscale_spca`, so that all sharepoint instances have distinguishable paths.
    # However, in this case we re-use the cluster service and the cluster ingress, so we switch to the general
    # contextPath, as if it were a single component deployment.
    contextPath: "/nscale_spc"

sharepointb:
  clusterService:
    # -- On the second SharePoint instance, we **disable** the cluster service, as it is already created by sharepointa.
    enabled: false
    # -- However, we still need to set the context path, as this tells the networkPolicy to open up for ingress,
    # even though we switch the ingress off in the next step.
    contextPath: "/nscale_spc"
  ingress:
    # -- We turn off the original ingress, as the common context path would block the deployment.
    enabled: false
    # -- The default contextPath for sharepointb is `nscale_spcb`, so that all sharepoint instances have distinguishable paths.
    # However, in this case we re-use the cluster service and the cluster ingress, so we switch to the general
    # contextPath, as if it were a single component deployment.
    contextPath: "/nscale_spc"

samples/sim/README.md Normal file
# Single-Instance-Mode
If you choose to separate tenants on your system not only by *nplus Instances* but also by *nplus Environments*, running each tenant in a separate Kubernetes *Namespace*, you do not need to create an *nplus Environment* first. Instead, you can enable the *nplus Environment Components* directly within your instance:
```yaml
components:
  sim:
    dav: true
    backend: true
    operator: true
    toolbox: true
```
Steps to run a SIM Instance:
1. Create the namespace and the secrets needed to access the repository and the registry, as well as the nscale license file
```
SIM_NAME="empty-sim"
kubectl create ns $SIM_NAME
kubectl create secret docker-registry nscale-cr \
--namespace $SIM_NAME \
--docker-server=ceyoniq.azurecr.io \
--docker-username=$NSCALE_ACCOUNT \
--docker-password=$NSCALE_TOKEN
kubectl create secret docker-registry nplus-cr \
--namespace $SIM_NAME \
--docker-server=cr.nplus.cloud \
--docker-username=$NPLUS_ACCOUNT \
--docker-password=$NPLUS_TOKEN
kubectl create secret generic nscale-license \
--namespace $SIM_NAME \
--from-file=license.xml=$NSCALE_LICENSE
```
2. Deploy the Instance
```
helm install \
--values lab.yaml \
--values single-instance-mode.yaml \
--namespace $SIM_NAME \
$SIM_NAME nplus/nplus-instance
```
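If you prefer to keep these objects under version control instead of creating them imperatively, the secrets from step 1 can also be expressed declaratively. A minimal sketch for the license secret, assuming the same names as in the commands above (the license content itself is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nscale-license
  namespace: empty-sim        # matches $SIM_NAME from the commands above
type: Opaque
stringData:
  license.xml: |
    <!-- placeholder: paste the contents of your nscale license file here -->
```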
If none of your Applications require assets such as scripts or apps, you are good to go with this.
However, if your Application does require assets, the *problem* is to get them into your (not yet existing) environment before the Application tries to access them.
There are three possible solutions:
1. You create an umbrella chart and have a job installing the assets into your Instance
2. You pull / download assets from your git server or an asset server before the Application deployment
3. You pull / download assets from your git server or an asset server before the Component deployment, including the Application
**Solution 1** obviously involves some implementation on your end and is not covered in this documentation.
**Solution 2** can be achieved by defining a downloader in your application chart (see `empty-download.yaml`):
```yaml
components:
  application: true

application:
  docAreas:
    - id: "Sample"
      download:
        - "https://git.nplus.cloud/public/nplus/raw/branch/master/samples/assets/sample.sh"
      run:
        - "/pool/downloads/sample.sh"
```
**Solution 3** should be used if you have assets that need to be available **before** the nscale components start, such as snippets for the web client.
You can use the *Prepper* for that purpose. The *Prepper* prepares everything required for the Instance to work as intended. It is very much like the *Application*, except that it does not connect to any nscale component (as they do not yet run by the time the prepper executes). But just like the Application, the Prepper is able to download assets and run scripts.
You can add this to your deployment:
```yaml
components:
  prepper: true

prepper:
  download:
    - "https://git.nplus.cloud/public/nplus/raw/branch/master/assets/sample.tar.gz"
  run:
    - "/pool/downloads/sample/sample.sh"
```
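The main difference between the two download hooks is *when* they run: the Prepper downloads and runs before the nscale components start, while the Application does so once they are up. If you need both, the two can simply be combined in one values file; a sketch based on the keys shown above:

```yaml
components:
  prepper: true
  application: true

prepper:
  download:
    - "https://git.nplus.cloud/public/nplus/raw/branch/master/assets/sample.tar.gz"
  run:
    - "/pool/downloads/sample/sample.sh"

application:
  docAreas:
    - id: "Sample"
      download:
        - "https://git.nplus.cloud/public/nplus/raw/branch/master/samples/assets/sample.sh"
      run:
        - "/pool/downloads/sample.sh"
```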

samples/sim/build.sh Executable file
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
    echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
    exit 1
fi

if [ ! -d "$DEST" ]; then
    echo "ERROR Building $SAMPLE example: DEST folder not found."
    exit 1
fi

if [ ! -d "$CHARTS/instance" ]; then
    echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
    exit 1
fi
# Set the Variables
SAMPLE="sim"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance-sim
helm template --debug \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
--values $SAMPLES/sim/values.yaml \
--namespace $NAME \
$NAME $CHARTS/instance > $DEST/instance-sim/$SAMPLE.yaml

samples/sim/values.yaml Normal file
components:
  sim:
    dav: true
    backend: true
    operator: true
    toolbox: true

samples/static/README.md Normal file
# Static Volumes
## Assigning PVs
For security reasons, you might want to use a storage class that does not perform automatic provisioning of PVs.
In that case, you want to reference a pre-created volume in the PVC.
In *nplus*, you can do so by setting `volumeName` in the values.
Please review `values.yaml` as an example:
```yaml
database:
  mounts:
    data:
      volumeName: "pv-{{ .component.fullName }}-data"

nstl:
  mounts:
    data:
      volumeName: "pv-{{ .component.fullName }}-data"
```
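Assuming `{{ .component.fullName }}` renders to something like `<release>-<component>`, the claim created for the database data mount would then bind to the pre-created volume by name. A rough, illustrative sketch of the effect — not the literal chart output — using the names from the `sample-static` release of this sample:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # illustrative name for the database data claim of the "sample-static" release
  name: sample-static-database-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  # with an empty storageClassName and an explicit volumeName, no dynamic provisioning takes place;
  # the claim only binds to the pre-created PV of exactly that name
  storageClassName: ""
  volumeName: pv-sample-static-database-data
```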
You can also set the environment config volume. Please refer to the environment documentation for that.
```
helm install \
    --values samples/environment/demo.yaml \
    --values samples/static/values.yaml \
    sample-static nplus/nplus-instance
```
## Creating PVs
https://github.com/ceph/ceph-csi/blob/devel/docs/static-pvc.md
### Data Disk:
1. Create a pool on your ceph cluster
```
ceph osd pool create k-lab 64 64
```
2. Initialize the pool for RBD (block devices)
```
rbd pool init k-lab
```
3. Create the images
```
rbd create -s 50G k-lab/pv-sample-static-database-data
rbd create -s 50G k-lab/pv-sample-static-nstl-data
rbd ls k-lab | grep pv-sample-static-
```
Resize:
```
rbd resize --size 50G k-lab/pv-no-provisioner-database-data --allow-shrink
```
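Note that resizing the image does not change what Kubernetes has recorded for the volume; for a static PV you may want to keep the declared capacity in sync by hand so that the manifest still reflects reality. A sketch of the relevant field (see the PV manifests below):

```yaml
spec:
  capacity:
    storage: 50Gi   # keep in sync with the actual size of the rbd image after a resize
```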
### File Share:
1. Create a Subvolume (FS)
```
ceph fs subvolume create cephfs pv-no-provisioner-rs-file --size 53687091200
```
2. Get the path of the subvolume
```
ceph fs subvolume getpath cephfs pv-no-provisioner-rs-file
```
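The command prints the path of the subvolume, which you will later need as the `rootPath` of the file share PV; the output looks roughly like this (the UUID part will differ on your cluster):

```
/volumes/_nogroup/pv-no-provisioner-rs-file/3016f512-bc19-4bfb-8eb2-5118430fbbe5
```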
### Troubleshooting
```
kubectl describe pv/pv-no-provisioner-rs-file pvc/no-provisioner-rs-file
kubectl get volumeattachment
```
### PV Manifests
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-no-provisioner-database-data
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 50Gi
  csi:
    driver: rook-ceph.rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      # node stage secret name
      name: rook-csi-rbd-node
      # node stage secret namespace where the above secret is created
      namespace: rook-ceph-external
    volumeAttributes:
      # Required options from the storageclass parameters need to be added in volumeAttributes
      clusterID: rook-ceph-external
      pool: k-lab
      staticVolume: "true"
      imageFeatures: layering
      #mounter: rbd-nbd
    # volumeHandle should be the same as the rbd image name
    volumeHandle: pv-no-provisioner-database-data
  persistentVolumeReclaimPolicy: Retain
  # The volumeMode can be either `Filesystem` or `Block`. If you are creating a Filesystem PVC it should be `Filesystem`; if you are creating a Block PV you need to change it to `Block`.
  volumeMode: Filesystem
  storageClassName: ceph-rbd
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-no-provisioner-nstl-data
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 50Gi
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      # node stage secret name
      name: rook-csi-rbd-node
      # node stage secret namespace where the above secret is created
      namespace: rook-ceph-external
    volumeAttributes:
      # Required options from the storageclass parameters need to be added in volumeAttributes
      clusterID: rook-ceph-external
      pool: k-lab
      staticVolume: "true"
      imageFeatures: layering
      #mounter: rbd-nbd
    # volumeHandle should be the same as the rbd image name
    volumeHandle: pv-no-provisioner-nstl-data
  persistentVolumeReclaimPolicy: Retain
  # The volumeMode can be either `Filesystem` or `Block`. If you are creating a Filesystem PVC it should be `Filesystem`; if you are creating a Block PV you need to change it to `Block`.
  volumeMode: Filesystem
  storageClassName: ceph-rbd
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-no-provisioner-rs-file
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 50Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-csi-cephfs-secret
      #rook-csi-cephfs-node
      namespace: rook-ceph-external
    volumeAttributes:
      # Required options from the storageclass parameters need to be added in volumeAttributes
      clusterID: rook-ceph-external
      fsName: cephfs
      pool: cephfs_data
      staticVolume: "true"
      # rootPath is obtained via: ceph fs subvolume getpath cephfs pv-no-provisioner-rs-file
      rootPath: "/volumes/_nogroup/pv-no-provisioner-rs-file/3016f512-bc19-4bfb-8eb2-5118430fbbe5"
      #mounter: rbd-nbd
    # volumeHandle should be the same as the rbd image name
    volumeHandle: pv-no-provisioner-rs-file
  persistentVolumeReclaimPolicy: Retain
  # The volumeMode can be either `Filesystem` or `Block`. If you are creating a Filesystem PVC it should be `Filesystem`; if you are creating a Block PV you need to change it to `Block`.
  volumeMode: Filesystem
  storageClassName: cephfs
```

samples/static/build.sh Executable file
#!/bin/bash
#
# This sample script builds the example as described. It is also used to build the test environment in our lab,
# so it should be well tested.
#
# Make sure it fails immediately, if anything goes wrong
set -e
# -- ENVironment variables:
# CHARTS: The path to the source code
# DEST: The path to the build destination
# SAMPLE: The directory of the sample
# NAME: The name of the sample, used as the .Release.Name
# KUBE_CONTEXT: The name of the kube context, used to build this sample depending on where you run it against. You might have different Environments such as lab, dev, qa, prod, demo, local, ...
# Check, if we have the source code available
if [ ! -d "$CHARTS" ]; then
    echo "ERROR Building $SAMPLE example: The Charts Sources folder is not set. Please make sure to run this script with the full Source Code available"
    exit 1
fi

if [ ! -d "$DEST" ]; then
    echo "ERROR Building $SAMPLE example: DEST folder not found."
    exit 1
fi

if [ ! -d "$CHARTS/instance" ]; then
    echo "ERROR Building $SAMPLE example: Chart Sources in $CHARTS/instance not found. Are you running this script as a subscriber?"
    exit 1
fi
# Set the Variables
SAMPLE="static"
NAME="sample-$SAMPLE"
# Output what is happening
echo "Building $NAME"
# Create the manifest
mkdir -p $DEST/instance
helm template --debug \
--values $SAMPLES/application/empty.yaml \
--values $SAMPLES/environment/$KUBE_CONTEXT.yaml \
--values $SAMPLES/resources/$KUBE_CONTEXT.yaml \
--values $SAMPLES/static/values.yaml \
$NAME $CHARTS/instance > $DEST/instance/$SAMPLE.yaml
# Adding the static PV to the manifest
echo -e "\n---\n" >> $DEST/instance/$SAMPLE.yaml
cat $SAMPLES/static/pv.yaml >> $DEST/instance/$SAMPLE.yaml

samples/static/pv.yaml Normal file
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-sample-static-database-data
spec:
  # -- an empty string must be set explicitly, otherwise the default StorageClass would be applied
  # see https://kubernetes.io/docs/concepts/storage/persistent-volumes/
  storageClassName: ""
  # -- make sure this PV may only be bound to a specific claim
  claimRef:
    name: sample-static-database-data
    namespace: lab
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 50Gi
  csi:
    driver: rook-ceph.rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      # node stage secret name
      name: rook-csi-rbd-node
      # node stage secret namespace where the above secret is created
      namespace: rook-ceph-external
    volumeAttributes:
      # Required options from the storageclass parameters need to be added in volumeAttributes
      clusterID: rook-ceph-external
      pool: k-lab
      staticVolume: "true"
      imageFeatures: layering
      #mounter: rbd-nbd
    # volumeHandle should be the same as the rbd image name
    volumeHandle: pv-sample-static-database-data
  persistentVolumeReclaimPolicy: Delete
  # The volumeMode can be either `Filesystem` or `Block`. If you are creating a Filesystem PVC it should be `Filesystem`; if you are creating a Block PV you need to change it to `Block`.
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-sample-static-nstl-data
spec:
  # -- an empty string must be set explicitly, otherwise the default StorageClass would be applied
  # see https://kubernetes.io/docs/concepts/storage/persistent-volumes/
  storageClassName: ""
  # -- make sure this PV may only be bound to a specific claim
  claimRef:
    name: sample-static-nstl-data
    namespace: lab
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 50Gi
  csi:
    driver: rook-ceph.rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      # node stage secret name
      name: rook-csi-rbd-node
      # node stage secret namespace where the above secret is created
      namespace: rook-ceph-external
    volumeAttributes:
      # Required options from the storageclass parameters need to be added in volumeAttributes
      clusterID: rook-ceph-external
      pool: k-lab
      staticVolume: "true"
      imageFeatures: layering
      #mounter: rbd-nbd
    # volumeHandle should be the same as the rbd image name
    volumeHandle: pv-sample-static-nstl-data
  persistentVolumeReclaimPolicy: Delete
  # The volumeMode can be either `Filesystem` or `Block`. If you are creating a Filesystem PVC it should be `Filesystem`; if you are creating a Block PV you need to change it to `Block`.
  volumeMode: Filesystem

database:
  mounts:
    data:
      volumeName: "pv-{{ .component.fullName }}-data"

nstl:
  mounts:
    data:
      volumeName: "pv-{{ .component.fullName }}-data"

# mon:
#   mounts:
#     data:
#       volumeName: "pv-{{ .component.fullName }}-data"

# pipeliner:
#   mounts:
#     data:
#       volumeName: "pv-{{ .component.fullName }}-data"

# rs:
#   mounts:
#     file:
#       volumeName: "pv-{{ .component.fullName }}-file"