ai/jsonl/cookbook.jsonl
{"chapter": "Preparing the K8s Cluster", "level": 1, "text": "*nplus* Charts bring some custom resources, *Application*, *Instance* and *Component*. they are created during deployment of a chart and then updated by the environment operator every time the status changes.\nTo make this work, you will need to have the *Custom Resource Definitions* applied to your cluster prior to deploying any environment or component. This deployment is handled by the *Cluster Chart*.\n```bash\nhelm install nplus/nplus-cluster\n```\nThe *CRDs* are grouped into *nscale* and *nplus* (both synonym), so that you can either query for\n```bash\nkubectl get instance\nkubectl get component\nkubectl get application\n```\nor simply all at once with\n```bash\nkubectl get nscale -A\n```\nthe output looks like this (shortened output, showing the installed samples):\n```bash\n$ kubectl get nscale -A\nNAMESPACE NAME INSTANCE COMPONENT TYPE VERSION STATUS\nempty-sim component.nplus.cloud/database empty-sim database database 16 healthy\nempty-sim component.nplus.cloud/nappl empty-sim nappl core 9.2.1302 healthy\nlab component.nplus.cloud/demo-centralservices-s3-nstl demo-centralservices-s3 nstl nstl 9.2.1302 healthy\nlab component.nplus.cloud/demo-ha-web demo-ha web web 9.2.1300 redundant\nlab component.nplus.cloud/demo-ha-webdav demo-ha webdav webdav 9.2.1000 redundant\nlab component.nplus.cloud/demo-ha-zerotrust-administrator demo-ha-zerotrust administrator administrator 9.2.1300 healthy\nlab component.nplus.cloud/no-provisioner-nstl no-provisioner nstl nstl 9.2.1302 healthy\nlab component.nplus.cloud/no-provisioner-rs no-provisioner rs rs 9.2.1201 starting\nlab component.nplus.cloud/no-provisioner-web no-provisioner web web 9.2.1300 healthy\nlab component.nplus.cloud/sbs-nappl sbs nappl core 9.2.1302 healthy\nNAMESPACE NAME INSTANCE APPLICATION VERSION STATUS\nempty-sim application.nplus.cloud/application empty-sim application 9.2.1303-123 healthy\nempty-sim application.nplus.cloud/prepper empty-sim prepper 1.2.1300 healthy\nlab application.nplus.cloud/demo-ha-zerotrust-application demo-ha-zerotrust application 9.2.1303-123 healthy\nlab application.nplus.cloud/demo-shared-application demo-shared application 9.2.1303-123 healthy\nlab application.nplus.cloud/sbs-sbs sbs SBS 9.2.1303-123 healthy\nlab application.nplus.cloud/tenant-application tenant application 9.2.1303-123 healthy\nNAMESPACE NAME HANDLER VERSION TENANT STATUS\nempty-sim instance.nplus.cloud/empty-sim manual 9.2.1302 healthy\nlab instance.nplus.cloud/default manual 9.2.1302 healthy\nlab instance.nplus.cloud/demo-centralservices manual 9.2.1302 healthy\nlab instance.nplus.cloud/rms manual 9.2.1302 healthy\nlab instance.nplus.cloud/sbs manual 9.2.1302 healthy\nlab instance.nplus.cloud/tenant manual 9.2.1302 healthy\n```\n"}
{"chapter": "K8s namespace aka *nplus environment*", "level": 1, "text": "*nplus instances* are deployed into K8s namespaces. Always. Even if you do not specify a namespace, one is used: `default`.\nIn order to use this namespace for *nplus instances*, you need to deploy some shared *nplus components* into it, which are used by the instances. This is done by the environment chart:\n```\nhelm install \\\n--values demo.yaml \\\ndemo nplus/nplus-environment\n```\nAfter that, the K8s namespace is a valid *nplus environment* that can house multiple *nplus instances*.\n"}
{"chapter": "deploying assets into the environment", "level": 2, "text": "Most likely, you will need assets to be used by your instances. Fonts, for example: the *nscale Rendition Server* and the *nscale Server Application Layer* both require the Microsoft fonts, which may be distributed by neither nscale nor nplus. So this example shows how to upload some missing pieces into the environment:\n```\nkubectl cp ./apps/app-installer-9.0.1202.jar nplus-toolbox-0:/conf/pool\nkubectl cp ./fonts nplus-toolbox-0:/conf/pool\nkubectl cp ./copy-snippet.sh nplus-toolbox-0:/conf/pool/scripts\nkubectl cp ./test.md nplus-toolbox-0:/conf/pool/snippets\nkubectl cp ./snc nplus-toolbox-0:/conf/pool\n```\nAlternatively, you can use a `prepper` component, which you can activate in the environment chart, to download assets from any web site and deploy them into the environment:\n```\ncomponents:\n  prepper: true\nprepper:\n  download:\n    - \"https://git.nplus.cloud/public/nplus/raw/branch/master/assets/sample.tar.gz\"\n```\nPlease see the prepper [README.md](../../charts/prepper/README.md) for more information.\n"}
{"chapter": "Operator Web UI", "level": 2, "text": "The environment comes with the operator, which is responsible for managing and controlling the [custom resources](../cluster/README.md). It has a Web UI that can be enabled in the environment chart.\n![screenshot operator](assets/operator.png)\n"}
{"chapter": "*namespace*-less manifests", "level": 2, "text": "Speaking of namespaces: Sometimes you want to drop the namespace from your manifest. This can be done by\n```yaml\nutils:\nincludeNamespace: false\n```\nwhen you then call\n```bash\nhelm template myInstance nplus/nplus-instance > myInstance.yaml\n```\nthe manifest in `myInstance.yaml` will **not** have a namespace set, so you can apply it to multiple namespaces later:\n```bash\nkubectl apply --namespace dev -f myInstance.yaml\nkubectl apply --namespace qa -f myInstance.yaml\nkubectl apply --namespace prod -f myInstance.yaml\n```\n"}
{"chapter": "Installing Document Areas", "level": 1, "text": ""}
{"chapter": "Creating an empty document area while deploying an Instance", "level": 2, "text": "This is the simplest sample, just the core services with an empty document area:\n```\nhelm install \\\n--values samples/application/empty.yaml \\\n--values samples/environment/demo.yaml \\\nempty nplus/nplus-instance\n```\nThe empty Document Area is created with\n```yaml\ncomponents:\napplication: true\nprepper: true\n\napplication:\ndocAreas:\n- id: \"Sample\"\nrun:\n- \"/pool/downloads/sample.sh\"\nprepper:\ndownload:\n- \"https://git.nplus.cloud/public/nplus/raw/branch/master/assets/sample.tar.gz\"\n```\nThis turns on the *prepper* component, used to download a sample tarball from git. It will also extract the tarball into the `downloads` folder that is created on the *pool* automatically.\nThen, after the Application Layer is running, a document area `Sample` is created. The content of the sample script will be executed.\nIf you use **argoCD** as deployment tool, you would go with\n```\nhelm install \\\n--values samples/application/empty.yaml \\\n--values samples/environment/demo.yaml \\\nempty-argo nplus/nplus-instance-argo\n```\n"}
{"chapter": "Deploying the *SBS* Apps to a new document area", "level": 2, "text": "In the SBS scenario, some Apps are installed into the document area:\n```bash\nhelm install \\\n--values samples/applications/sbs.yaml \\\n--values samples/environment/demo.yaml \\\nsbs nplus/nplus-instance\n```\nThe values look like this:\n```yaml\ncomponents:\napplication: true\napplication:\nnameOverride: SBS\ndocAreas:\n- id: \"SBS\"\nname: \"DocArea with SBS\"\ndescription: \"This is a sample DocArea with the SBS Apps installed\"\napps:\n- \"/pool/nstore/bl-app-9.0.1202.zip\"\n- \"/pool/nstore/gdpr-app-9.0.1302.zip\"\n...\n- \"/pool/nstore/ts-app-9.0.1302.zip\"\n- \"/pool/nstore/ocr-base-9.0.1302.zip\"\n```\nThis will create a document area `SBS` and install the SBS Apps into it.\n"}
{"chapter": "Accounting in nstl", "level": 1, "text": "To collect Accounting Data in *nscale Server Storage Layer*, you can enable the nstl accounting feature by setting `accounting: true`.\nThis will create the accounting CSV files in *ptemp* under `<instance>/<component>/accounting`.\nAdditionally, you can enable a log forwarder that prints them to stdout.\n```\nnstl:\n  accounting: true\n  logForwarder:\n    - name: Accounting\n      path: \"/opt/ceyoniq/nscale-server/storage-layer/accounting/*.csv\"\n```\n"}
{"chapter": "(auto-) certificates and the pitfalls of *.this*", "level": 1, "text": "*nplus* will automatically generate certificates for your ingress. It either uses an issuer like *cert-manager* or generates a *self-signed-certificate*.\nIn your production environment though, you might want to take more control over the certificate generation process rather than leave it to *nplus* to take care of it automatically.\nIn that case, you want to switch the automation *off*.\nTo do so, you need to understand what is happening internally:\n- if `.this.ingress.issuer` is set, the chart requests this issuer to generate a TLS secret with the name `.this.ingress.secret` by creating a certificate resource with the name of the domain `.this.ingress.domain`\n- else (no issuer is set), the chart checks whether the flag `.this.ingress.createSelfSignedCertificate` is set to `true` and generates a TLS secret with the name `.this.ingress.secret`\n- else (neither issuer nor createSelfSignedCertificate is set), the chart will not generate anything\nThe way `.this` works is that it gathers the key from `.Values.global.environment`, `.Values.global` and then `.Values` and flattens them, merged, into `.this`, so that you can set your values on different levels.\nHowever, the *merge* function overwrites non-existing values, and a boolean `true` also overwrites a boolean `false`, not just the nil values. So to make sure we can still cancel functionality by setting `null` or `false`, there is a fourth merge which forcefully overwrites existing keys: `override`, which can also be set on the *environment*, *global* or *component* level.\nSo the correct way to cancel the generation process is to force the issuer to null (which will cancel the *cert-manager* generation) and also force `createSelfSignedCertificate` to false (to cancel the *self-signed-certificate* generation):\n```yaml\nglobal:\n  override:\n    ingress:\n      enabled: true\n      secret: myCertificate\n      issuer: null\n      createSelfSignedCertificate: false\n```\nThis makes sure you will get an ingress that uses the TLS certificate in the secret `myCertificate` for encryption and does not generate anything.\n"}
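{"chapter": "Where `.this` values can come from", "level": 2, "text": "To illustrate the layering described above, here is a minimal, purely hypothetical values sketch. The keys follow the explanation above, but the concrete values and the choice of the `nappl` component are placeholders only:\n```yaml\n# environment level: shared by every instance in the environment\nglobal:\n  environment:\n    ingress:\n      domain: \"demo.example.com\"\n  # instance (global) level: applies to all components of this instance\n  ingress:\n    createSelfSignedCertificate: true\n# component level: applies to this component only\nnappl:\n  ingress:\n    secret: my-nappl-tls\n```\nAll three layers end up flattened in `.this.ingress`. Remember that a plain `false` on one level may still lose against a `true` set on another level, which is why reliably cancelling a feature belongs into `override`, as shown above.\n"}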
{"chapter": "Grouping Instances", "level": 1, "text": "Sometimes Instances become quite large with many components. If you work on them with multiple team members, you end up having to synchronize the deployment of the Instances.\nYou can easily split large Instances using the `group` tag, joining multiple Instances into one group and making sure the NetworkPolicies are opened to pods from other Instances within the Instance Group.\n```yaml\nglobal:\n  instance:\n    # -- instead of the instance name, all components within this group will be prefixed\n    # with the group (unless the group name and the environment name are identical)\n    # Also this makes sure the network policies are acting on the group, not on the instance.\n    group: \"sample-group\"\n```\nYou can query the instance group in your code with `.instance.group`.\nExample: We build multiple Instances in one group:\n- sample-group-backend\n  - Database\n  - nstl\n  - rs\n- sample-group-middleware\n  - nappl\n  - application(s)\n- sample-group-frontend\n  - web\n  - cmis\nPortainer is showing the group as if it were a single instance:\n![Portainer](assets/portainer.png)\nThe nplus UI is showing the instances of the group:\n![nplus Web Monitoring](assets/monitor.png)\nAnd the nplus CLI is also showing the single instances:\n```\n% kubectl get nscale\nNAME INSTANCE COMPONENT TYPE VERSION STATUS\ncomponent.nplus.cloud/sample-group-cmis sample-group-frontend cmis cmis 9.2.1200 healthy\ncomponent.nplus.cloud/sample-group-database sample-group-backend database database 16 healthy\ncomponent.nplus.cloud/sample-group-nappl sample-group-middleware nappl core 9.2.1302 healthy\ncomponent.nplus.cloud/sample-group-rs sample-group-backend rs rs 9.2.1201 healthy\ncomponent.nplus.cloud/sample-group-web sample-group-frontend web web 9.2.1300 healthy\nNAME HANDLER VERSION TENANT STATUS\ninstance.nplus.cloud/sample-group-backend manual 9.2.1302 healthy\ninstance.nplus.cloud/sample-group-frontend manual 9.2.1302 healthy\ninstance.nplus.cloud/sample-group-middleware manual 9.2.1302 healthy\n```\n"}
{"chapter": "Sharing Instances", "level": 1, "text": "Some organisations have multiple tenants that share common services, like the *nscale Rendition Server*, or have a common IT department, thus using only a single *nscale Monitoring Console* across all tenants.\nThis is the Central Services part:\n```\nhelm install \\\n--values samples/shared/centralservices.yaml \\\n--values samples/environment/demo.yaml \\\nsample-shared-cs nplus/nplus-instance\n```\nAnd this is the tenant using the Central Services:\n```\nhelm install \\\n--values samples/shared/shared.yaml \\\n--values samples/environment/demo.yaml \\\nsample-shared nplus/nplus-instance\n```\nIf you enable security based on *Network Policies*, you need to add additional Policies to allow access. Please see `shared-networkpolicy.yaml` and `centralservices-networkpolicy.yaml` as an example.\nYou also want to set the *monitoringInstance* in the `global` section of the values file to enable the Network Policy for incoming monitoring traffic.\n```yaml\nglobal:\n  security:\n    cni:\n      monitoringInstance: sample-shared-cs\n```\n"}
{"chapter": "Using detached applications", "level": 1, "text": "All the other samples use an application that is deployed **inside of an instance**. However, you can also deploy an application **detached** from the instance as a solo chart.\nThe reasons for this are that you\n- can update the instance without running the application update\n- can update the application without touching the instance\n- can have multiple applications deployed within one instance\nThere are two major things you need to do:\n1. make sure the application chart sets the name of the instance it should connect to\n2. make sure the default values of the application match the ones it would get from an instance deployment\nThis is a sample (find the complete one in [application.yaml](application.yaml)):\n```yaml\nnameOverride: SBS\ndocAreas:\n  - id: \"SBS\"\n    name: \"DocArea with SBS\"\n    description: \"This is a sample DocArea with the SBS Apps installed\"\n    apps:\n      ...\ninstance:\n  # this is the name of the instance it should belong to\n  name: \"sample-detached\"\n\n# make sure it can wait for the nappl of the instance to be ready before it deploys\nwaitImage:\n  repo: cr.nplus.cloud/subscription\n  name: toolbox2\n  tag: 1.2.1300\n  pullPolicy: IfNotPresent\nwaitFor:\n  - \"-service {{ .component.prefix }}nappl.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 1800\"\n\n# now we define where and what to deploy\nnappl:\n  host: \"{{ .component.prefix }}nappl.{{ .Release.Namespace }}\"\n  port: 8080\n  ssl: false\n  instance: \"nscalealinst1\"\n  account: admin\n  domain: nscale\n  password: admin\n  secret:\nnstl:\n  host: \"{{ .component.prefix }}nstl.{{ .Release.Namespace }}\"\nrs:\n  host: \"{{ .component.prefix }}rs.{{ .Release.Namespace }}\"\n```\n"}
{"chapter": "High Availability", "level": 1, "text": "To gain a higher level of availability for your Instance, you can\n- create more Kubernetes Cluster Nodes\n- create more replicas of the *nscale* and *nplus* components\n- distribute those replicas across multiple nodes using anti-affinities\nThis is how:\n```\nhelm install \\\n--values samples/ha/values.yaml \\\n--values samples/environment/demo.yaml \\\nsample-ha nplus/nplus-instance\n```\nThe essence of the values file is this:\n- We use three (3) *nscale Server Application Layer* nodes, two dedicated to user access, one dedicated to jobs\n- if the jobs node fails, the user nodes take the jobs (handled by priority)\n- if one of the user nodes fails, the other one handles the load\n- Kubernetes takes care of restarting nodes should that happen\n- All components run with two replicas\n- Pod anti-affinities handle the distribution\n- any administration component only connects to the jobs nappl, leaving the user nodes to the users\n- PodDisruptionBudgets are defined for the crucial components. These are set via `minReplicaCount` for the components that can support multiple replicas, and `minReplicaCountType` for the **first** replicaSet of the components that do not support replicas, in this case nstla.\n```\nweb:\n  replicaCount: 2\n  minReplicaCount: 1\nrs:\n  replicaCount: 2\n  minReplicaCount: 1\nilm:\n  replicaCount: 2\n  minReplicaCount: 1\ncmis:\n  replicaCount: 2\n  minReplicaCount: 1\nwebdav:\n  replicaCount: 2\n  minReplicaCount: 1\nnstla:\n  minReplicaCountType: 1\nadministrator:\n  nappl:\n    host: \"{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}\"\n  waitFor:\n    - \"-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 600\"\npam:\n  nappl:\n    host: \"{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}\"\n  waitFor:\n    - \"-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 600\"\nnappl:\n  replicaCount: 2\n  minReplicaCount: 1\n  jobs: false\n  waitFor:\n    - \"-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .this.nappl.port }} -timeout 600\"\nnappljobs:\n  replicaCount: 1\n  jobs: true\n  disableSessionReplication: true\n  ingress:\n    enabled: false\n  snc:\n    enabled: true\n  waitFor:\n    - \"-service {{ .component.prefix }}database.{{ .Release.Namespace }}.svc.cluster.local:5432 -timeout 600\"\napplication:\n  nstl:\n    host: \"{{ .component.prefix }}nstl-cluster.{{ .Release.Namespace }}\"\n  nappl:\n    host: \"{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}\"\n```\n"}
{"chapter": "Assigning CPU and RAM", "level": 2, "text": "You **should** assign resources to your components, depending on the load that you expect.\nIn a dev environment, that might be very little and you may be fine with the defaults.\nIn a QA or prod environment, this should be controlled wisely, like this:\n```yaml\nnappl:\n  resources:\n    requests:\n      cpu: \"100m\" # Minimum 1/10 CPU\n      memory: \"1024Mi\" # Minimum 1 GB\n    limits:\n      cpu: \"2000m\" # Maximum 2 Cores\n      memory: \"4096Mi\" # Maximum 4 GB. Java will see this as total.\n  javaOpts:\n    javaMinMem: \"512m\" # tell Java to initialize the heap with 512 MB\n    javaMaxMem: \"2048m\" # tell Java to use max 2 GB of heap size\n```\nThere are many ongoing discussions about how much memory you should give to Java processes and how they behave; please research the topic for deeper insight.\n"}
{"chapter": "Our **current** opinion is:", "level": 4, "text": "Do not limit RAM. You are not able to foresee how much Java is really consuming, as the heap is only part of the RAM requirement. Java also needs *metaspace*, *code cache* and *thread stack*. Also the *GC* needs some memory, as well as the *symbols*.\nJava will crash when out of memory, so even if you set javaMaxMem == 1/2 limits.memory (as many do), that guarantees nothing and might be a lot of waste.\nSo what you can consider is:\n```yaml\nnappl:\n  resources:\n    requests:\n      cpu: \"1000m\" # 1 Core guaranteed\n      memory: \"4096Mi\" # 4 GB guaranteed\n    limits:\n      cpu: \"4000m\" # Maximum 4 Cores\n      memory: # No limit but hardware\n  javaOpts:\n    javaMinMem: \"1024m\" # Start with 1 GB\n    javaMaxMem: \"3072m\" # Go up to 3 GB (which is only part of it) but be able to take more (up to the limit) without a crash\n```\nDownside of this approach: if you have a memory leak, it might consume all of your node's memory without being stopped by a hard limit.\n"}
{"chapter": "A possible **Alternative**:", "level": 4, "text": "You can set the RAM limit equal to the RAM request and leave the Java memory settings on *automatic*, which basically simulates a server: Java will *see* the limit as being the size of the RAM installed in the machine and act accordingly.\n```yaml\nnappl:\n  resources:\n    requests:\n      cpu: \"1000m\" # 1 Core guaranteed\n      memory: \"4096Mi\" # 4 GB guaranteed\n    limits:\n      cpu: \"4000m\" # Maximum 4 Cores\n      memory: \"4096Mi\" # Limit equals the request\n  javaOpts:\n    javaMinMem: # unset, leaving it to Java\n    javaMaxMem: # unset, leaving it to Java\n```\n"}
{"chapter": "In a **DEV** environment,", "level": 4, "text": "you might want to do more **overprovisioning**. You could even leave it completely unlimited, as in **DEV** you want to see memory and cpu leaks, so a limit might hide them from your sight.\nSo this is a possible allocation for **DEV**, defining only the bare minimum requests:\n```yaml\nnappl:\nresources:\nrequests:\ncpu: \"1m\" # 1/1000 Core guaranteed,\n# but can consume all cores of the cluster node if required and available\nmemory: \"512Mi\" # 512MB guaranteed,\n# but can consume all RAM of the cluster node if required and available\n```\nIn this case, Java will see all node RAM as the limit and use whatever it needs. But as you are in a **dev** environment, there is no load and no users on the machine, so this will not require much.\n"}
{"chapter": "Resources you should calculate", "level": 2, "text": "The default resources assigned by *nplus* are for demo / testing only and you should definitely assign more ressources to your components.\nHere is a very rough estimate of what you need:\n| Component | Minimum (Demo and Dev) | Small | Medium | Large | XL | Remark |\n| --------------- | ---------------------- | ---------------- | ----------------- | ------------------ | ---- | ----------------------------------------------------------- |\n| ADMIN | 1 GB RAM, 1 Core | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | | |\n| **Application** | - | - | - | - | | Resources required during deployment only |\n| CMIS | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | |\n| **Database** | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 8 GB RAM, 6 Core | 16 GB RAM, 8 Core | open | |\n| ILM | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | |\n| MON | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | quite fix |\n| **NAPPL** | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 8 GB RAM, 6 Core | 16 GB RAM, 8 Core | open | CPU depending on Jobs / Hooks, RAM depending on amount user |\n| **NSTL** | 500 MB RAM, 1 Core | 1 GB RAM, 2 Core | 1 GB RAM, 2 Core | 1 GB RAM, 2 Core | | quite fix |\n| PAM | | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | | |\n| PIPELINER | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 4 GB RAM, 4 Core | 4 GB RAM, 4 Core | open | Depending on Core Mode *or* AC Mode, No Session Replication |\n| **RS** | 1 GB RAM, 1 Core | 8 GB RAM, 4 Core | 32 GB RAM, 8 Core | 64 GB RAM, 12 Core | open | CPU depending on format type, RAM depending on file size |\n| SHAREPOINT | | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | |\n| WEB | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 8 GB RAM, 4 Core | open | |\n| WEBDAV | | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | |\n**Bold** components are required by a *SBS* setup, so here are some estimates per Application:\n| Component | Minimum (Demo and Dev) | Minimum (PROD) | Recommended (PROD) | Remark |\n| --------- | ---------------------- | ----------------- | ------------------ | ------------------ |\n| SBS | 6 GB RAM, 4 Core | 16 GB RAM, 8 Core | 24 GB RAM, 12 Core | Without WEB Client |\n| eGOV | TODO | TODO | TODO | eGOV needs much more CPU than a non eGOV system |\nA word on **eGOV**: The eGOV App brings hooks and jobs, that require much more resources than a *normal* nscale system even with other Apps installed.\n"}
{"chapter": "Real Resources in DEV Idle", "level": 2, "text": "```\n% kubectl top pods\n...\nsample-ha-administrator-0 2m 480Mi\nsample-ha-argo-administrator-0 2m 456Mi\nsample-ha-argo-cmis-5ff7d78c47-kgxsn 2m 385Mi\nsample-ha-argo-cmis-5ff7d78c47-whx9j 2m 379Mi\nsample-ha-argo-database-0 2m 112Mi\nsample-ha-argo-ilm-58c65bbd64-pxgdl 2m 178Mi\nsample-ha-argo-ilm-58c65bbd64-tpxfz 2m 168Mi\nsample-ha-argo-mon-0 2m 308Mi\nsample-ha-argo-nappl-0 5m 1454Mi\nsample-ha-argo-nappl-1 3m 1452Mi\nsample-ha-argo-nappljobs-0 5m 2275Mi\nsample-ha-argo-nstla-0 4m 25Mi\nsample-ha-argo-nstlb-0 6m 25Mi\nsample-ha-argo-pam-0 5m 458Mi\nsample-ha-argo-rs-7d6888d9f8-lp65s 2m 1008Mi\nsample-ha-argo-rs-7d6888d9f8-tjxh8 2m 1135Mi\nsample-ha-argo-web-f646f75b8-htn8x 4m 1224Mi\nsample-ha-argo-web-f646f75b8-nvvjf 11m 1239Mi\nsample-ha-argo-webdav-d69549bd4-nz4wn 2m 354Mi\nsample-ha-argo-webdav-d69549bd4-vrg2n 3m 364Mi\nsample-ha-cmis-5fc96b8f89-cwd62 2m 408Mi\nsample-ha-cmis-5fc96b8f89-q4nr4 3m 442Mi\nsample-ha-database-0 2m 106Mi\nsample-ha-ilm-6b599bc694-5ht57 2m 174Mi\nsample-ha-ilm-6b599bc694-ljkl4 2m 193Mi\nsample-ha-mon-0 3m 355Mi\nsample-ha-nappl-0 3m 1278Mi\nsample-ha-nappl-1 4m 1295Mi\nsample-ha-nappljobs-0 6m 1765Mi\nsample-ha-nstla-0 4m 25Mi\nsample-ha-nstlb-0 4m 25Mi\nsample-ha-pam-0 2m 510Mi\nsample-ha-rs-7b5fc586f6-49qhp 2m 951Mi\nsample-ha-rs-7b5fc586f6-nkjqb 2m 1205Mi\nsample-ha-web-7bd6ffc96b-pwvcv 3m 725Mi\nsample-ha-web-7bd6ffc96b-rktrh 9m 776Mi\nsample-ha-webdav-9df789f8-2d2wn 2m 365Mi\nsample-ha-webdav-9df789f8-psh5q 2m 345Mi\n...\n```\n"}
{"chapter": "Defaults", "level": 2, "text": "Check the file `default.yaml`. You can set default memory limits for a container. These defaults are applied if you do not specify any resources in your manifest.\n"}
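{"chapter": "Example of a resource default", "level": 3, "text": "As a purely hypothetical sketch (the key names and structure of the real `default.yaml` may differ), such a default could mirror the `resources` layout used for the components above:\n```yaml\n# illustrative only - check default.yaml for the actual structure\nresources:\n  requests:\n    memory: \"256Mi\"   # applied when a component defines no requests of its own\n  limits:\n    memory: \"1024Mi\"  # applied when a component defines no limits of its own\n```\n"}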
{"chapter": "Single-Instance-Mode", "level": 1, "text": "If you choose to separate tenants on your system not only by *nplus Instances* but also by *nplus Environments*, thus running each tenant in a separate Kubernetes *Namespace*, you do not need to create an *nplus Environment* first, but you can rather enable the *nplus Environment Components* within your instance:\n```yaml\ncomponents:\nsim:\ndav: true\nbackend: true\noperator: true\ntoolbox: true\n```\nSteps to run a SIM Instance:\n1. Create the namespace and the necessary secrets to access the repo and registry, as well as the nscale license file\n```\nSIM_NAME=\"empty-sim\"\nkubectl create ns $SIM_NAME\nkubectl create secret docker-registry nscale-cr \\\n--namespace $SIM_NAME \\\n--docker-server=ceyoniq.azurecr.io \\\n--docker-username=$NSCALE_ACCOUNT \\\n--docker-password=$NSCALE_TOKEN\nkubectl create secret docker-registry nplus-cr \\\n--namespace $SIM_NAME \\\n--docker-server=cr.nplus.cloud \\\n--docker-username=$NPLUS_ACCOUNT \\\n--docker-password=$NPLUS_TOKEN\nkubectl create secret generic nscale-license \\\n--namespace $SIM_NAME \\\n--from-file=license.xml=$NSCALE_LICENSE\n```\n2. Deploy the Instance\n```\nhelm install \\\n--values lab.yaml \\\n--values single-instance-mode.yaml \\\n--namespace $SIM_NAME \\\n$SIM_NAME nplus/nplus-instance\n```\nIf you do not have any Application that requires assets such as scripts or apps, you are good to go with this.\nHowever, if your Application does require assets, the *problem* is to get them into your (not yet existing) environment before the Application tries to access them.\nThere are three possible solutions:\n1. You create an umbrella chart and have a job installing the assets into your Instance\n2. You pull / download assets from your git server or an asset server before the Application deployment\n3. You pull / download assets from your git server or an asset server before the Component deployment, including the Application\n**Solution 1** obviously involves some implementation on your end. That is not covered in this documentation.\n**Solution 2** can be achieved by defining a downloader in your application chart (see `empty-download.yaml`):\n```yaml\ncomponents:\napplication: true\napplication:\ndocAreas:\n- id: \"Sample\"\ndownload:\n- \"https://git.nplus.cloud/public/nplus/raw/branch/master/samples/assets/sample.sh\"\nrun:\n- \"/pool/downloads/sample.sh\"\n```\n**Solution 3** should be used if you have any assets that need to be available **before** the nscale Components start, like snippets for the web client etc.\nYou can use the *Prepper* for that purpose. The *Prepper* prepares everything required for the Instance to work as intended. It is very much like the *Application*, except that it does not connect to any nscale component (as they are not yet running by the time the prepper executes). But just like the Application, the Prepper is able to download assets and run scripts.\nYou can add this to your deployment:\n```yaml\ncomponents:\nprepper: true\nprepper:\ndownload:\n- \"https://git.nplus.cloud/public/nplus/raw/branch/master/assets/sample.tar.gz\"\nrun:\n- \"/pool/downloads/sample/sample.sh\"\n```\n"}
{"chapter": "Deploying with Argo", "level": 1, "text": ""}
{"chapter": "the argo version of the instance", "level": 2, "text": "Deploying with argoCD is straightforward, as there is a ready-to-run instance chart version for argo that takes **exactly** the same values as the *normal* chart:\n```bash\nhelm install \\\n--values samples/application/empty.yaml \\\n--values samples/environment/demo.yaml \\\nsample-empty-argo nplus/nplus-instance-argo\n```\n"}
{"chapter": "Using Waves", "level": 2, "text": "The instance chart already comes with pre-defined waves. They are good to go with (though they can be modified):\n```yaml\nnappl:\n  meta:\n    wave: 15\n```\n**But**: ArgoCD can become annoying when some services do not come up: since ArgoCD operates in Waves, later services might not be deployed at all if a service in an early wave fails.\nEspecially in DEV, this can become a testing problem.\nTo turn *off* Waves completely for a Stage, Environment or Instance, set\n```\nglobal:\n  environment:\n    utils:\n      disableWave: true\n```\n"}
{"chapter": "Pinning Versions", "level": 1, "text": ""}
{"chapter": "Old Version", "level": 2, "text": "If you would like to test rolling updates and upgrades to new minor versions, check out the *e90* sample:\nThis sample installs version 9.0.1400 for you to test. Since the Cluster Node Discovery changed due to a new jGroups version in nscale, the chart will notice the old version and turn on the legacy discovery mechanism, which allows the Pod to find its peers in versions prior to 9.1.1200.\n```\nhelm install \\\n--values samples/empty.yaml \\\n--values samples/demo.yaml \\\n--values versions/9.0.1400.yaml \\\nsample-e90 nplus/nplus-instance\n```\n"}
{"chapter": "New Version Sample", "level": 2, "text": "Some nscale Versions are License-Compatible, meaning that, for example, a Version 9.1 License File will also be able to run nscale Version 9.0 Software. But that is not always the case.\nSo you might need to set individual licenses per instance:\n```\nkubectl create secret generic nscale-license-e10 \\\n--from-file=license.xml=license10.xml\n```\nCheck if the license has been created:\n```\nkubectl get secret | grep license\nnscale-license Opaque 1 7d22h\nnscale-license-e10 Opaque 1 17s\n```\nNow, we install the instance:\n```\nhelm upgrade -i \\\n--values samples/empty.yaml \\\n--values samples/demo.yaml \\\n--values versions/10.0.yaml \\\n--set global.license=nscale-license-e10 \\\nsample-e10 nplus/nplus-instance\n```\n"}
{"chapter": "Security", "level": 1, "text": ""}
{"chapter": "All the standards", "level": 2, "text": "There are several features that will enhance the security of your system:\n- all components are running rootless by default\n- all components drop all privileges\n- all components deny escalation\n- all components have read only file systems\n- Access is restricted by NetworkPolicies\n"}
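{"chapter": "What these defaults look like on a Pod (sketch)", "level": 3, "text": "For illustration, this is a minimal sketch of the standard Kubernetes `securityContext` settings that correspond to the points above; the image name is a placeholder and the exact values rendered by the *nplus* charts may differ:\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: security-context-sketch\nspec:\n  containers:\n    - name: app\n      image: example/image:latest   # placeholder image\n      securityContext:\n        runAsNonRoot: true                 # rootless\n        allowPrivilegeEscalation: false    # deny escalation\n        readOnlyRootFilesystem: true       # read-only file system\n        capabilities:\n          drop:\n            - ALL                          # drop all privileges\n```\nAccess restrictions via NetworkPolicies are separate resources and not part of the Pod spec.\n"}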
{"chapter": "Additional: The backend Protocol", "level": 2, "text": "Additionally, you can increase security by encrypting communication in the backend. Depending on your network driver, this might already be done automatically between the Kubernetes Nodes. But you can double up on that even within a single node by switching the backend protocol to https:\n```yaml\nglobal:\n  nappl:\n    port: 8443\n    ssl: true\n\n# Web and PAM do not speak https by default yet, CRs have been filed.\nnappl:\n  ingress:\n    backendProtocol: https\ncmis:\n  ingress:\n    backendProtocol: https\nilm:\n  ingress:\n    backendProtocol: https\nwebdav:\n  ingress:\n    backendProtocol: https\nrs:\n  ingress:\n    backendProtocol: https\nmon:\n  ingress:\n    backendProtocol: https\nadministrator:\n  ingress:\n    backendProtocol: https\n```\nThis will turn every communication to https, **but** leave the unencrypted ports (http) **open** for inter-pod communication.\n"}
{"chapter": "Zero Trust Mode", "level": 2, "text": "This will basically do the same as above, **but** also turn **off** any unencrypted port (like http) and also implement NetworkPolicies to drop unencrypted packets.\nThis also affects the way *probes* check the pods' health: *nplus* will switch them to use https instead, so even the very internal Health Check infrastructure will be encrypted in *zero trust mode*:\n```yaml\ncomponents:\n  pam: false #TODO: ITSMSD-8771: PAM does not yet support https backend.\nglobal:\n  security:\n    zeroTrust: true\n  nappl:\n    port: 8443\n    ssl: true\n```\n"}
{"chapter": "(virtual-) Remote Management Server", "level": 1, "text": "The *nplus RMS* creates a virtual IP Address in your subnet. On this IP, you will find an *nscale Remote Management Service* and a Layer 4 Proxy, forwarding the ports of the components to the corresponding pods.\nThe result is that under this VIP it looks as if there were a real server with a bunch of *nscale* components installed. So you can use the desktop admin client to connect to it and configure it, including offline configuration.\nThe offline configuration writes settings to the configuration files of the components. These files are injected into the Pods by *nplus*, making the legacy magic work again.\nAlso, the Shutdown, Startup and Restart buttons in the Admin client will work, as they will be translated to Kubernetes commands by *nplus*.\nThere are, however, some restrictions:\n- In a HA scenario, you need multiple virtual servers, as nscale does not allow some components to deploy more than one instance per server (like nstl) and they would then also block the default ports. So it is better to have more RMS.\n- Log Files are not written, so the Admin cannot grab them. So there is no log file viewing in Admin.\n> Please note that this is a BETA Feature not released for Production use.\nThis is a sample of RMS in a HA environment with two virtual servers:\n```yaml\ncomponents:\n  rmsa: true\n  rmsb: true\nrmsa:\n  ingress:\n    domain: \"server1.{{ .instance.group | default .Release.Name }}.lab.nplus.cloud\"\n  comps:\n    nappl:\n      enabled: true\n      restartReplicas: 2\n    nstl:\n      enabled: true\n      name: nstla\n      restartReplicas: 1\n      host: \"{{ .component.prefix }}nstla.{{ .Release.Namespace }}.svc.cluster.local\"\n    rs:\n      enabled: true\n      restartReplicas: 2\n    web:\n      enabled: true\n      restartReplicas: 2\nrmsb:\n  ingress:\n    domain: \"server2.{{ .instance.group | default .Release.Name }}.lab.nplus.cloud\"\n  comps:\n    nappl:\n      enabled: true\n      name: nappljobs\n      restartReplicas: 1\n      replicaSetType: StatefulSet\n      host: \"{{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local\"\n    nstl:\n      name: nstlb\n      enabled: true\n      restartReplicas: 1\n      host: \"{{ .component.prefix }}nstlb.{{ .Release.Namespace }}.svc.cluster.local\"\n```\n"}
{"chapter": "Using Object Stores", "level": 1, "text": "Blobstores aka Objectstores have a REST Interface that you can upload your payload to and receive an ID for it. They are normally structured into *Buckets* or *Containers* to provide some sort of payload pooling within the store.\nThe *nscale Server Storage Layer* supports multiple brands of objectstores, the most popular being Amazon S3 and Microsoft Azure Blobstore.\nIn order to use them, you need to\n- get an account for the store\n- configure the *nstl* with the url, credentials etc.\n- add firewall rules to access the store\nHave a look at the sample files:\n- `s3-env.yaml` for Amazon S3 compatible storage\n- `azureblob.yaml` for Azure Blobstore\nFor S3 compatible storage, there are multiple S3 flavours available.\n"}
{"chapter": "Custom Environment Variables", "level": 1, "text": "There are multiple ways to set custom environment variables in addition to the named values you set in helm:\n"}
{"chapter": "Using `env`", "level": 2, "text": "Please have a look at `s3-env.yaml` to see how custom environment variables can be injected into a component:\n```\nnstl:\n  env:\n    # Archive type\n    NSTL_ARCHIVETYPE_900_NAME: \"S3\"\n    NSTL_ARCHIVETYPE_900_ID: \"900\"\n    NSTL_ARCHIVETYPE_900_LOCALMIGRATION: \"0\"\n    NSTL_ARCHIVETYPE_900_LOCALMIGRATIONTYPE: \"NONE\"\n    NSTL_ARCHIVETYPE_900_S3MIGRATION: \"1\"\n```\nThis will set the environment variables in the storage layer to add an archive type with ID 900.\n"}
{"chapter": "Using `envMap` and `envSecret`", "level": 2, "text": "As an alternative to the standard `env` setting, you can also use configmaps and secrets for additional environment variables.\nThe file `s3-envres.yaml` creates a configmap and a secret with the same variables as used in the `s3-env.yaml` sample. `s3-envfrom.yaml` shows how to import them.\nPlease be aware that data in secrets needs to be base64 encoded:\n```\necho \"xxx\" | base64\n```\nSo in order to use the envFrom mechanism,\n- prepare the resources (as in `s3-envres.yaml`)\n- upload the resources to your cluster\n```\nkubectl apply -f s3-envres.yaml\n```\n- add them to your configuration\n```\nnstl:\n  # These resources are set in the s3-envres.yaml sample file\n  # you can set single values (envMap or envSecret) or lists (envMaps or envSecrets)\n  envMaps:\n    - env-sample-archivetype\n    - env-sample-device\n  envSecret: env-sample-device-secret\n```\n"}
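{"chapter": "What the resources could look like", "level": 3, "text": "This is a minimal sketch of the kind of ConfigMap and Secret that `s3-envres.yaml` could contain. The resource names follow the sample above, while the keys and values (including `S3_SECRET_ACCESS_KEY`) are purely illustrative and not taken from the actual file:\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: env-sample-archivetype\ndata:\n  # plain text values, injected as environment variables\n  NSTL_ARCHIVETYPE_900_NAME: \"S3\"\n  NSTL_ARCHIVETYPE_900_ID: \"900\"\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: env-sample-device-secret\ntype: Opaque\ndata:\n  # values in a Secret must be base64 encoded, e.g. echo -n \"xxx\" | base64\n  S3_SECRET_ACCESS_KEY: \"eHh4\"\n```\n"}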
{"chapter": "Specifics of the Sharepoint Connector", "level": 1, "text": "Normally, you will have different configurations if you want multiple Sharepoint Connectors. This makes the *nsp* somewhat special:\n"}
{"chapter": "Multi Instance HA Sharepoint Connector", "level": 2, "text": "This sample shows how to set up a sharepoint connector with multiple instances having **different** configurations for archival, but with **High Availability** on the retrieval side.\nSharePoint is one of the few components for which it is quite common to have multiple instances instead of replicas. Replicas would imply that the configuration of all pods is identical. However, you might want to have multiple configurations, as you also have multiple sharepoint sites you want to archive.\nRunning multiple instances with ingress enabled leads to the question of what the context path is for each instance. It cannot be the same, as the load balancer would not be able to distinguish between them and thus refuses to add the configuration object, leading to a deadlock situation.\nSo *nplus* defines different context paths if you have multiple instances:\n- sharepointa on `/nscale_spca`\n- sharepointb on `/nscale_spcb`\n- sharepointc on `/nscale_spcc`\n- sharepointd on `/nscale_spcd`\nIf you only run one instance, it defaults to `/nscale_spc`.\n"}
{"chapter": "HA on retrieval", "level": 2, "text": "Once archived, you might want to use all instances for retrieval, since they share a common retrieval configuration (same nappl, ...). So in order to gain High Availability even across multiple instances, there are two options:\n1. You turn off the services and ingresses on any sharepoint instance but sharepointa. Then you switch sharepointa's service selector to *type mode*, selecting all pods with type `sharepoint` instead of all pods of component `sharepointa`. Then you can access this one service to reach them all.\n2. You can turn on the *clusterService*, which is an additional service that selects all `sharepoint` type pods and then adds an extra ingress on this service with the default context path `nscale_spc`\nHowever, in both scenarios, beware that the sharepoint connector can only service one context path at a time, so you will need to change the context path accordingly.\n"}
{"chapter": "Sample for solution 1", "level": 2, "text": "On the instance, define the following:\n```\ncomponents:\n# -- First, we switch the default SharePoint OFF\nsharepoint: false\n# -- Then we enable two sharepoint instances to be used with different configurations\nsharepointa: true\nsharepointb: true\nsharepointa:\nservice:\n# -- Switching the service to \"type\" makes sure we select not only the component pods (in this case all replicas of sharepointa)\n# but rather **any** pod of type sharepoint.\nselector: \"type\"\ningress:\n# -- The default contextPath for sharepointa is `nscale_spca` to make sure we have distinguishable paths for all sharepoint instances.\n# however, in this case we re-use the service as cluster service and the ingress as cluster ingress, so we switch to the general\n# contextPath, as if it were a single component deployment\ncontextPath: \"/nscale_spc\"\nsharepointb:\nservice:\n# -- The other SP Instance does not need a service any more, as it is selected by the cluster service above. So we switch off the component\n# service which also switches off the ingress as it would not have a backing service any more\nenabled: false\n# -- The default contextPath for sharepointb is `nscale_spcb` to make sure we have distinguishable paths for all sharepoint instances.\n# however, in this case we re-use the service as cluster service and the ingress as cluster ingress, so we switch to the general\n# contextPath, as if it were a single component deployment\ncontextPath: \"/nscale_spc\"\n```\n"}
{"chapter": "Sample for Solution 2", "level": 2, "text": "On the instance, define the following:\n```\ncomponents:\n# -- First, we switch the default SharePoint OFF\nsharepoint: false\n# -- Then we enable two sharepoint instances to be used with different configurations\nsharepointa: true\nsharepointb: true\nsharepointa:\nclusterService:\n# -- This enables the cluster service\nenabled: true\n# -- the cluster Ingress needs to know the context path it should react on.\ncontextPath: \"/nscale_spc\"\ningress:\n# -- we turn off the original ingress as the common context path would block the deployment\nenabled: false\n# -- The default contextPath for sharepointa is `nscale_spca` to make sure we have distinguishable paths for all sharepoint instances.\n# however, in this case we re-use the service as cluster service and the ingress as cluster ingress, so we switch to the general\n# contextPath, as if it were a single component deployment\ncontextPath: \"/nscale_spc\"\nsharepointb:\nclusterService:\n# -- on the second SharePoint Instance, we **disable** the cluster service, as it is already created by sharepointa.\nenabled: false\n# -- however, we need to set the context path, as this tells the networkPolicy to open up for ingress even though we switch the Ingress off in the\n# next step\ncontextPath: \"/nscale_spc\"\ningress:\n# -- we turn off the original ingress as the common context path would block the deployment\nenabled: false\n# -- The default contextPath for sharepointb is `nscale_spcb` to make sure we have distinguishable paths for all sharepoint instances.\n# however, in this case we re-use the service as cluster service and the ingress as cluster ingress, so we switch to the general\n# contextPath, as if it were a single component deployment\ncontextPath: \"/nscale_spc\"\n```\n"}
{"chapter": "Static Volumes", "level": 1, "text": ""}
{"chapter": "Assigning PVs", "level": 2, "text": "For security reasons, you might want to use a storage class that does not perform automatic provisioning of PVs.\nIn that case, you want to reference a pre-created volume in the PVC.\nIn nplus, you can do so by setting the volumeName in the values.\nPlease review `values.yaml` as an example:\n```yaml\ndatabase:\n  mounts:\n    data:\n      volumeName: \"pv-{{ .component.fullName }}-data\"\nnstl:\n  mounts:\n    data:\n      volumeName: \"pv-{{ .component.fullName }}-data\"\n```\nYou can also set the environment config volume. Please refer to the environment documentation for that.\n```\nhelm install \\\n--values samples/environment/demo.yaml \\\n--values samples/static/values.yaml \\\nsample-static nplus/nplus-instance\n```\n"}
{"chapter": "Creating PVs", "level": 2, "text": "https://github.com/ceph/ceph-csi/blob/devel/docs/static-pvc.md\n"}
{"chapter": "Data Disk:", "level": 3, "text": "1. Create a pool on your ceph cluster\n```\nceph osd pool create k-lab 64 64\n```\n2. Create a block device pool\n```\nrbd pool init k-lab\n```\n3. Create an image\n```\nrbd create -s 50G k-lab/pv-sample-static-database-data\nrbd create -s 50G k-lab/pv-sample-static-nstl-data\nrbd ls k-lab | grep pv-sample-static-\n```\nResize:\n```\nrbd resize --size 50G k-lab/pv-no-provisioner-database-data --allow-shrink\n```\n"}
{"chapter": "File Share:", "level": 3, "text": "1. Create a Subvolume (FS)\n```\nceph fs subvolume create cephfs pv-no-provisioner-rs-file --size 53687091200\n```\n2. Get the path of the subvolume\n```\nceph fs subvolume getpath cephfs pv-no-provisioner-rs-file\n```\n"}
{"chapter": "Troubleshooting", "level": 3, "text": "```\nkubectl describe pv/pv-no-provisioner-rs-file pvc/no-provisioner-rs-file\nkubectl get volumeattachment\n```\n"}
{"chapter": "PV Manifests", "level": 3, "text": "```yaml\napiVersion: v1\nkind: PersistentVolume\nmetadata:\nname: pv-no-provisioner-database-data\nspec:\naccessModes:\n- ReadWriteOnce\ncapacity:\nstorage: 50Gi\ncsi:\ndriver: rook-ceph.rbd.csi.ceph.com\nfsType: ext4\nnodeStageSecretRef:\n# node stage secret name\nname: rook-csi-rbd-node\n# node stage secret namespace where above secret is created\nnamespace: rook-ceph-external\nvolumeAttributes:\n# Required options from storageclass parameters need to be added in volumeAttributes\nclusterID: rook-ceph-external\npool: k-lab\nstaticVolume: \"true\"\nimageFeatures: layering\n#mounter: rbd-nbd\n# volumeHandle should be same as rbd image name\nvolumeHandle: pv-no-provisioner-database-data\npersistentVolumeReclaimPolicy: Retain\n# The volumeMode can be either `Filesystem` or `Block` if you are creating Filesystem PVC it should be `Filesystem`, if you are creating Block PV you need to change it to `Block`\nvolumeMode: Filesystem\nstorageClassName: ceph-rbd\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\nname: pv-no-provisioner-nstl-data\nspec:\naccessModes:\n- ReadWriteOnce\ncapacity:\nstorage: 50Gi\ncsi:\ndriver: rook-ceph.cephfs.csi.ceph.com\nfsType: ext4\nnodeStageSecretRef:\n# node stage secret name\nname: rook-csi-rbd-node\n# node stage secret namespace where above secret is created\nnamespace: rook-ceph-external\nvolumeAttributes:\n# Required options from storageclass parameters need to be added in volumeAttributes\nclusterID: rook-ceph-external\npool: k-lab\nstaticVolume: \"true\"\nimageFeatures: layering\n#mounter: rbd-nbd\n# volumeHandle should be same as rbd image name\nvolumeHandle: pv-no-provisioner-nstl-data\npersistentVolumeReclaimPolicy: Retain\n# The volumeMode can be either `Filesystem` or `Block` if you are creating Filesystem PVC it should be `Filesystem`, if you are creating Block PV you need to change it to `Block`\nvolumeMode: Filesystem\nstorageClassName: ceph-rbd\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\nname: pv-no-provisioner-rs-file\nspec:\naccessModes:\n- ReadWriteMany\ncapacity:\nstorage: 50Gi\ncsi:\ndriver: cephfs.csi.ceph.com\nnodeStageSecretRef:\nname: rook-csi-cephfs-secret\n#rook-csi-cephfs-node\nnamespace: rook-ceph-external\nvolumeAttributes:\n# Required options from storageclass parameters need to be added in volumeAttributes\nclusterID: rook-ceph-external\nfsName: cephfs\npool: cephfs_data\nstaticVolume: \"true\"\n# rootPath kriegt man per ceph fs subvolume getpath cephfs pv-no-provisioner-rs-file\nrootPath: \"/volumes/_nogroup/pv-no-provisioner-rs-file/3016f512-bc19-4bfb-8eb2-5118430fbbe5\"\n#mounter: rbd-nbd\n# volumeHandle should be same as rbd image name\nvolumeHandle: pv-no-provisioner-rs-file\npersistentVolumeReclaimPolicy: Retain\n# The volumeMode can be either `Filesystem` or `Block` if you are creating Filesystem PVC it should be `Filesystem`, if you are creating Block PV you need to change it to `Block`\nvolumeMode: Filesystem\nstorageClassName: cephfs\n```\n"}