{"chapter": "Common Storage Configuration", "level": 2, "text": "This works just the same way as the Ingress settings: The Configuration can be performed at various levels:\n- Per Component / Chart\n`storage.`\n- Per Instance\n`global.storage.`\n- Per Environment\n`global.environment.storage.`\nThis enables you to have configuration yaml files per environment (e.g. for DEV, QA and PROD) setting environment defaults.\nYou then do not have to touch the Instance configuration.\nFor storage, there are several volume types:\n- **conf**, Shared File, RWX, global per environment\n- **data**, Disk, RWO, optional per component\n- **file**, Shared File, RWX, optional per ReplicaSet\n- **temp**, EmptyDir\n- **ptemp**, Shared File, RWX, global per environment\n- **log**, EmptyDir, should be empty, so just in case\n- **pool**, optional path on the conf share mounted by some components\n- **generic**, allows to mount any pre-defined PV into a container\n"}
{"chapter": "conf", "level": 3, "text": "The *conf* storage is a global PVC with RWX (file) shared by every component in the environment. The component creates a sub directory\non the share and mounts it to the config directory in the container.\n`storage.conf.name` sets the name of the PVC to be created and used.\n`mounts.conf.path` defines the target directory in the container.\nAs the environment normally provides the *conf* share, you can set the class and the size in the environment.\nIf you habe your RWX storage class provided by a CIFS / SMB shared file system, you need to disable linux commands like *chmod*.\nThis can be done in the storage environment settings:\n```\nglobal:\nenvironment:\nstorage:\nconf:\ncifs: true\n```\n"}
{"chapter": "data", "level": 3, "text": "Every component can create a data PVC with RWO (disk). You can set the `class` for this disk directly at the mount definition `mounts.data.class`. If unset, it uses the definition for the data class from `global.storage.data.class` or from the environment definition at `global.environment.storage.data.class`.\nIf the class is not defined, it is not included in the manifest and so the cluster default is taken.\nSet the size at `mounts.data.size`. No default for the size.\n"}
{"chapter": "file", "level": 3, "text": "Every component can create a file PVC with RWX (shared file). You can set the `class` for this share directly at the mount definition `mounts.file.class`. If unset, it uses the definition for the file class from `global.storage.file.class` or from the environment definition at `global.environment.storage.file.class`.\nIf the class is not defined, it is not included in the manifest and so the cluster default is taken.\nSet the size at `mounts.file.size`. No default for the size.\nThis file mount is used for example for the *nscale Rendition Server* to create a common workload directory for all PODs across cluster nodes.\n"}
{"chapter": "temp", "level": 3, "text": "If a *temp* mount point is given in the values file, it creates an `emptyDir` volume with the `sizeLimit` of `mounts.temp.size`. If no limit is given, the volume will have no limit and the cluster node default is used.\nIf you want to back this volume by memory, specify `mounts.temp.medium: memory`. Be aware, that this will utilize a RAM disk and count against your PODs resources.\n> The *nscale Application Layer* caches fulltext data in temp. Please be aware of your component behaviour when setting medium and size. Your plugins might be requireing speed or size.\n"}
{"chapter": "ptemp", "level": 3, "text": "*ptemp* is a shared, persistant version of temp. It is used to store temporary data, that needs to live beyond the life of a pod, like exports from the database or account logs from storage layer.\nThe ptemp is created by the environment and all pods are free to use it, just like conf.\n"}
{"chapter": "logs", "level": 3, "text": "If a *logs* mount point is given in the values file, it creates an `emptyDir` volume with the `sizeLimit` of `mounts.logs.size`. If no limit is given, the volume will have no limit and the cluster node default is used.\nThe components are writing logs to `stdout` and `stderr`, so the logs directory should not be necessary. This is just in case any plugin writes something to the contaainers file system.\nAdditionally, if you use the *nplus Remote Management Server* component, you might want the legacy way of reading log files, and this would be the storage for that.\n"}
{"chapter": "pool", "level": 3, "text": "You can define a path at `mount.pool`, then this component will have access. This is used to hand binary data to the components, such as plugins or *nscale Generic Base Apps* along with the *nscale App Installer*.\n"}
{"chapter": "Pre-Created Persistent Volumes", "level": 3, "text": "For security reasons, Persistent Volumes can be pre-created and then referenced by the PVC. In order to do so, you can set\n- `storage.conf.volumeName` in the environment configuration to set a specific volume reference for the config share, and\n- `mounts.data.volumeName` in each components values to set a specific volume reference for the (optional) data volume, as well as\n- `mounts.file.volumeName` in each components values to set a specific volume reference for the (optional) file volume\nAs the volume is specific to a certain volume, it cannot be set globally.\n"}
{"chapter": "Setting storage values", "level": 3, "text": "| Key | Component | Instance | Environment |\n| ---- | ----------- | ----------- | ----------- |\n| conf.name | ✔︎ | ✔︎ | `conf` |\n| data.class | ✔︎ | ✔︎ | ✔︎ |\n| data.size | ✔︎ | - | - |\n| data.paths | predefined list | - | - |\n| data.volumeName | ✔︎ | - | - |\n| file.class | ✔︎ | ✔︎ | ✔︎ |\n| file.size | ✔︎ | - | - |\n| file.paths | predefined list | - | - |\n| file.volumeName | ✔︎ | - | - |\n| temp.size | ✔︎ | - | - |\n| temp.medium | ✔︎ | - | - |\n| temp.path | predefined | - | - |\n| logs.size | ✔︎ | - | - |\n| logs.medium | ✔︎ | - | - |\n| logs.path | predefined | - | - |\nAvoid to change the values marked as *predefined*.\n"}
{"chapter": "Working with Certificates", "level": 3, "text": "There are two types of certificates than you might want to customize in your deployment:\n- (Root-) Certificate Authorities\n- Private Certificates and Key Files\n**Root CA** extensions will be needed if you want to access other services via https (egress), that have certificates signed by a non-default authority.\nIn that case, you can upload the authority (public) certificate to trust it.\nThe process differs from component to component, as some are written in java (and require the certificate to be inside a keystore)\nand others are written in C++ or else and might require a PEM certificate store (like the Storage Layer).\nFirst thing is to create the store in whatever format it is needed and then upload it into a secret. Within the helm values, you can then\nset the destination path and file name next to the secret where you stored the certificate. There can be multiple certificates.\n```\nmounts:\ncaCerts:\npaths:\n- \"/etc/pki/tls/certs/ca-bundle.crt\"\n- \"/usr/lib/jvm/jre/lib/security/cacerts\"\nsecret: ca-secret\n```\nIn this example, the secret *ca-secret* needs to hold two files:\n- a cacerts file (under that key), which is a java keystore file and will\nbe placed as the cacerts file in the Java deployment of the component (In this case the NAPPL).\n- a *ca-bundle.crt* file which is a PEM format file that holds all trusted CAs you need.\nThe *paths* list defines the path and filename of the target as well as the key of the files within the secret.\nIn Storage Layer, this might look like this:\n```\nmounts:\ncaCerts:\npaths:\n- \"/opt/ceyoniq/nscale-server/storage-layer/etc/CA.CER\"\nsecret: ca-secret\n```\nIn this case, the Sorage Layer requires the root ca certs to be a file of exactly this name in the etc directory of the deployment.\nPlease consult the storage layer manual for more information.\n**component Certificates** and Key files are normally used to hold private tls certificates to encrypt https traffic (ingress).\nThe configuration of these keystores is identical to the ca stores:\n```\nmounts:\ncomponentCerts:\npaths:\n- \"/opt/ceyoniq/nscale-server/application-layer/conf/certificates.store\"\nsecret:\n```\nIn this case, the secret must have a key named *certificates.store* that holds the java keystore with the required certificates.\n> Please note, that alternatively, you can also upload this file to the conf directory of the application layer. If you do not specify a secret, this\nmount will not be implemented.\nUploading to this file to the conf would be like this:\n```\nkubectl cp certificates.store nplus-toolbox-0:/conf/<myInstance>/nappl\n```\n**Alternatively, you can also define a configMap** for the public CA certificates, then the configuration would be like this:\n```\nmounts:\ncaCerts:\npaths:\n- \"/opt/ceyoniq/nscale-server/storage-layer/etc/CA.CER\"\nconfigMap: ca-map\n```\n"}
{"chapter": "Using the generic mount interface", "level": 3, "text": "This allows you to mount any pre-provisioned PVs, secret or configMap as a directory or single file into any container.\nIt can be used e.g. to mount migration nfs, cifs / samba shares into a pipeliner container.\nUse the following format:\n```\nmounts:\ngeneric:\n- name: <name>:\npath: <the path in the container, where you want to mount this>\nvolumeName: <the name of the PV to be mounted>\nconfigMap: <the name of the configMap to bemounted>\nsecret: <the name of the secret to bemounted>\nsubPath: [a (optional) subpath to be used inside the PV]\naccessMode: <ReadWriteMany|ReadWriteOnce|ReadOnlyMany|ReadWriteOncePod>\nsize: <size request>\n```\nPlease see the *generic* sample in the samples directory for detailes.\n"}