# The Samples
- [Remote Management Server](administrator)

  Shows how you can set up an ***nplus Remote Management Server*** and expose a complete Instance on a single **virtual IP Address**, so that you can connect with the *nscale Administrator* from your desktop and even perform **offline configuration** tasks, such as **configuring the *nscale Pipeliner*** with its cold.xml.
- [Applications](application)

  - Shows how to deploy an ***nplus Application***, including **creating a Document Area** in the *nscale Application Layer* and **installing *nscale Business Apps*** or **custom *Generic Base Apps*** into multiple Document Areas.
  - It also demonstrates how to use the *prepper* component to **download assets from git** or any other artifact site.
- [Blobstores](blobstore)

  - Shows how to connect the *nscale Storage Layer* to an **Amazon S3** Object Store, an **Azure Blobstore**, or any other compatible Object Store, such as **CephGW** or **MinIO**.
  - It also demonstrates the use of **envMaps** and **envSecrets** to set environment variables outside the values.yaml file.
- [Certificates](certificates)

  - Shows how to **disable certificate generation** and use a **prepared static certificate** for the ingress.
  - It also discusses the pitfalls of `.this`.
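  As background, a prepared static certificate for an ingress is usually provided as a standard Kubernetes TLS Secret; in this generic sketch the secret name and file names are placeholders, not taken from the sample:

  ```
  kubectl create secret tls my-ingress-cert \
    --cert=tls.crt \
    --key=tls.key
  ```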
- [Cluster](cluster)

  Shows how to render a cluster chart and prepare the cluster for *nplus*.
- [Defaults](default)

  Renders a **minimalistic** Instance manifest without any customization.
- [Detached Applications](detached)

  Shows how to separate the application from the instance while still using them in tandem. This technique makes it possible to update instance and application separately.
- [Environments](environment)

  Holds values for **different environments**, such as a lab environment (which is used for our internal test cases) or a production environment.
- [Instance Groups](group)

  - Large Instances can easily be split and re-grouped with the `.instance.group` tag. This example shows how.
  - It also shows how to switch off any certificate or networkPolicy creation at Instance level.
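  As a minimal sketch of the tag in a values file: only the `.instance.group` path itself comes from this example's description, the group name is made up:

  ```
  instance:
    group: "group-a"
  ```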
- [High Availability](ha)

  Showcases a full **High Availability** scenario, including a dedicated **nappljobs** and redundant components to reduce the risk of outages.
- [Highest Doc ID / HID](hid)

  Shows how to enable highest ID checking in *nstl*.
- [No Waves](nowaves)

  Shows how to set up a simple ArgoCD Application without using any *waves*, relying solely on *waitFor*.
- [Version Pinning](pinning)

  There are several ways to pin the **version of *nscale* components**. This example shows how to stay flexible in terms of **nplus versions** while still pinning *nscale* to a specific version.
- [Resources](resources)

  Demonstrates how you might want to use different sets of **resource definitions for RAM, CPU**, or HD space for different environments, or for different tenants, depending on usage (such as the number of concurrent users).
- [Security](security)

  IT security is an important aspect of running an nplus Environment. This example shows

  - how to configure **HTTPS for all inter-Pod communications**, and
  - how to force **all connections** to be **encrypted** (**Zero Trust Mode**).
- [Shared Services](shared)

  You might want to have a central **nplus** Instance that can be used by multiple tenants. This example shows how to do that.
- [SharePoint Retrieval HA](sharepoint)

  Normally, you either have HA with multiple replicas, *or* you use multiple instances when you need different configurations per instance. This example shows how to combine both approaches: multiple instances with different configurations (for archiving) and a global ingress to all instances (for HA retrieval).
- [Single Instance Mode](sim)

  Some *nplus* subscribers use **maximum tenant separation**: not only **firewalling each component**, but also running each **Instance in a dedicated Namespace**. This example shows how to enable ***Single Instance Mode***, which merges environment and instance into a **single deployment**.
- [Static Volumes](static)

  Shows how to disable dynamic volume provisioning (depending on the storage class) and use **pre-created static volumes** instead.
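  As background, a pre-created static volume is a plain Kubernetes PersistentVolume; this generic sketch uses placeholder names, capacity, and path unrelated to the sample:

  ```
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nplus-static-pv
  spec:
    capacity:
      storage: 10Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
    storageClassName: ""
    hostPath:
      path: /data/nplus
  ```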
- [Chart](chart)

  This is an example of a **custom Umbrella Chart**: a custom base chart with your **own environment defaults** to build your deployments upon. This is the next step after applying value files.
# Configuration Options
There are several ways to configure and customize the *nplus* Helm charts:

1. By using the `--set` parameter at the command line:

   ```
   helm install \
     --set components.rs=false \
     --set components.mon=false \
     --set components.nstl=false \
     --set global.ingress.domain="demo.nplus.cloud" \
     --set global.ingress.issuer="nplus-issuer" \
     demo1 nplus/nplus-instance
   ```

   `kubectl get instances` will show this Instance being handled by `helm`.
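   The same settings can also live in a small values file; the keys below are copied from the `--set` flags above, only the file name is made up:

   ```
   # demo1-values.yaml
   components:
     rs: false
     mon: false
     nstl: false
   global:
     ingress:
       domain: "demo.nplus.cloud"
       issuer: "nplus-issuer"
   ```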
2. By adding one or more values files:

   ```
   helm install \
     --values empty.yaml \
     --values s3-env.yaml \
     --values lab.yaml \
     demo2 nplus/nplus-instance
   ```
3. By using `helm template` and piping the output to `kubectl`:

   ```
   helm template \
     --values empty.yaml \
     --values s3-env.yaml \
     --values lab.yaml \
     demo2 nplus/nplus-instance | kubectl apply -f -
   ```

   `kubectl get instances` will show this Instance being handled manually (`manual`).
4. By building *Umbrella Charts* that contain default values for your environment.
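   A minimal sketch of such an Umbrella Chart's `Chart.yaml`; the chart name, version constraint, and repository URL are placeholders, not taken from the samples:

   ```
   apiVersion: v2
   name: my-nplus-base
   version: 0.1.0
   dependencies:
     - name: nplus-instance
       version: ">= 0.0.0"
       repository: https://example.com/charts
   ```

   Your environment defaults then go into this chart's own `values.yaml`.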
# Using ArgoCD
Deploying through ArgoCD is identical to deploying through Helm: just use the `instance-argo` chart instead of `instance`. You can use the same value files for all deployment methods; the `instance-argo` chart will render all values into the *ArgoCD Application*, taking them fully into account.

If you deploy Instances via ArgoCD, `kubectl get instances` will show these Instances being handled by `argoCD`.
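As a sketch, deploying the earlier example through ArgoCD could look like this; the full chart name `nplus/nplus-instance-argo` is an assumption based on the `instance-argo` chart mentioned above:

```
helm install \
  --values empty.yaml \
  --values s3-env.yaml \
  --values lab.yaml \
  demo2-argo nplus/nplus-instance-argo
```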
# Stacking Values
The sample value files provided here work for standard Instances as well as for ArgoCD versions.

The values files are stacked:

- `environment/demo.yaml` contains the default values for the environment
- `empty/values.yaml` creates a sample Document Area in the *nscale Application Layer*
- `s3/env.yaml` adds an S3 storage to the *nscale Storage Layer*, in the form of simple environment variables

This stack can be installed using the `helm install` command:

```
helm install \
  --values environment/demo.yaml \
  --values empty/values.yaml \
  --values s3/env.yaml \
  empty-sample-s3 nplus/nplus-instance
```

The advantage of stacking is that you can separate and reuse parts of your configuration for different purposes:

- Reuse values for different environments such as stages or labs, where only the environment differs but the components and applications are (and have to be) the same
- Use the same storage configuration for multiple Instances
- Have one configuration for your Application / Solution and use it on many tenants to keep them all in sync
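For example, to reuse the same application and storage values with a different environment, only the first `--values` line changes; the file name `environment/prod.yaml` is illustrative:

```
helm install \
  --values environment/prod.yaml \
  --values empty/values.yaml \
  --values s3/env.yaml \
  empty-prod-s3 nplus/nplus-instance
```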