# nplus-component-sharepoint

nscale SharePoint Connector, providing SharePoint archiving to the instance.

# Single Server with multiple Replicas (and same Config)

In the instance, configure:

```
components:
  sharepoint: true

sharepoint:
  replicas: 2
```

# Multiple Servers with different Configs

```
components:
  sharepointa: true
  sharepointb: true

sharepointa:
  replicas: 1

sharepointb:
  replicas: 1
```

This way, you will have two SharePoint instances with different configs.

# Cluster Service

In multi-server mode, the instance chart deploys a SharePoint cluster service that balances across SharePoint a and b. You **can** use it if you want, or you can use the services for sharepointa and sharepointb and load balance manually.

# Working with multiple SharePoint Connectors and Ingresses

If you are working with multiple SharePoint Connectors (a, b, c, d), you do not want them all to define the same ingress path, as that would block the ingress from being created. There are two solutions to this:

1. Disable the ingress on the SharePoint connectors for which you do not want an ingress to be created. You can still create an ingress manually, either routing traffic via the SharePoint component service, or alternatively routing the traffic via the global cluster service (see above) that comes with the instance chart.
2. Instead of disabling the ingress, you can also change the path for each SharePoint instance. Then you could still connect to an individual SP service via the path given, or alternatively route the traffic via the global cluster service that comes with the instance chart (same as above).

**When would you consider this?** You would want this if you have multiple instances, each with a completely different archiving configuration, **but** a common retrieval schedule. You would gain a high availability retrieval scenario, even if you only have a single service for archiving, as archiving does not necessarily need to be HA.
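The two solutions above can be sketched in a values file. This is a minimal sketch, assuming two connectors as in the example above; the paths `/nscale_spc_a` and `/nscale_spc_b` are hypothetical choices, not defaults:

```
# Solution 2: give each connector its own ingress path
sharepointa:
  ingress:
    contextPath: /nscale_spc_a

sharepointb:
  ingress:
    contextPath: /nscale_spc_b

# Solution 1 (alternative): disable the ingress on one connector
# and route traffic via its service or the global cluster service
# sharepointb:
#   ingress:
#     enabled: false
```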
## nplus-component-sharepoint Chart Configuration

You can customize / configure nplus-component-sharepoint by setting configuration values on the command line or in values files that you pass to helm. Please see the samples directory for details.

In case there is no value set, the key will not be used in the manifest, resulting in values being taken from the config files of the component.

### Template Functions

You can use template functions in the values files. If you do so, make sure you quote correctly (single quotes if you have double quotes in the template, or escaped quotes).

### Global Values

All values can be set per component, per instance or globally per environment.

Example: `global.ingress.domain` sets the domain on instance level. You can still set a different domain on a component, such as administrator. In that case, simply set `ingress.domain` for the administrator chart and that setting will have priority:

- Prio 1 - Component Level: `ingress.domain`
- Prio 2 - Instance Level: `global.ingress.domain`
- Prio 3 - Environment Level: `global.environment.ingress.domain`

### Using Values in Templates

As it would be a lot of typing to write `.Values.ingress.domain | default .Values.global.ingress.domain | default .Values.global.environment.ingress.domain` in your template code, this is automatically done by nplus. You can simply type `.this.ingress.domain` and you will get a condensed and defaulted version of your values.

An example in your `values.yaml` would be:

```
administrator:
  waitFor:
    - '-service {{ .component.prefix }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:\{{ .this.nappl.port }} -timeout 600'
```

This example shows `.this.nappl.port`, which might come from a component, instance or global setting. You do not need to care. The `.Release.Namespace` is set by helm. You have access to all Release and Chart metadata, just like in your chart code.
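As a concrete sketch of this precedence (the domain names are placeholders):

```
# Instance level (Prio 2): default domain for all components
global:
  ingress:
    domain: nscale.example.com

# Component level (Prio 1): override for the administrator chart only
administrator:
  ingress:
    domain: admin.example.com
```

With these values, `.this.ingress.domain` resolves to `admin.example.com` inside the administrator chart and to `nscale.example.com` in all other components of the instance.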
The `.component` structure is calculated by nplus and gives you some handy shortcuts to internal variables:

- `.component.chartName` The name of the chart as in `.Chart.Name`, but with override by `.Values.nameOverride`
- `.component.shortChartName` A shorter version of the name: `nappl` instead of `nplus-component-nappl`
- `.component.prefix` The instance prefix used to name the resources, including `-`. This prefix is dropped if the `.Release.Name` equals `.Release.Namespace`, for those of you that only run one nplus instance per namespace
- `.component.name` The name of the component, including `.Values.nameOverride` and some logic
- `.component.fullName` The full name, including `.Values.fullnameOverride` and some logic
- `.component.chart` Mainly the `Chart.Name` and `Chart.Version`
- `.component.storagePath` The path where the component config is stored in the conf PVC
- `.component.handler` The handler (either helm, argoCD or manual)
- `.instance.name` The name of the instance, but with override by `.Values.instanceOverride`
- `.instance.group` The group this instance belongs to. Override by `.Values.groupOverride`
- `.instance.version` The *nscale* version (mostly taken from the Application Layer) this instance is deploying.
- `.environment.name` The name of the environment, but with override by `.Values.environmentNameOverride`

### Keys

You can set any of the following values for this component:

| Key | Description | Default |
|-----|-------------|---------|
| **clusterService**​.contextPath | set the contextPath (url) for the SharePoint Cluster Service (for GET requests to a group of sharepoint instances) | |
| **clusterService**​.enabled | | `false` |
| **connector**​.cTagPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"cTag"` |
| **connector**​.eTagPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"eTag"` |
| **connector**​.idPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"sharePointId"` |
| **connector**​.listItemIdPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"SharePointListItemId"` |
| **connector**​.nscaleExpirationPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **connector**​.nscaleGdprRelevantPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **connector**​.nscaleLegalHidePropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **connector**​.nscaleLegalHoldPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **connector**​.nscaleRetentionPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **connector**​.parentIdPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"sharePointParentId"` |
| **connector**​.sharePointChangeTokenPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **connector**​.sharePointCreatedPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"SharePointCreated"` |
| **connector**​.sharePointCreatorPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"SharePointCreator"` |
| **connector**​.sharePointEditedPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"SharePointLastModified"` |
| **connector**​.sharePointEditorPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"SharePointEditor"` |
| **connector**​.stubIdPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"SharePointStubId"` |
| **connector**​.stubListItemIdPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"SharePointStubListItemId"` |
| **connector**​.webUrlPropertyName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"sharePointWebUrl"` |
| doInitialCrawl | toggle initial crawling. This value is mandatory. | `"false"` |
| env | Sets additional environment variables for the configuration. | |
| envMap | Sets the name of a configMap which holds additional environment variables for the configuration. It is added as envFrom configMap to the container. | |
| envSecret | Sets the name of a secret which holds additional environment variables for the configuration. It is added as envFrom secretRef to the container. | |
| fullnameOverride | This overrides the output of the internal fullname function | |
| **image**​.name | the name of the image to use | `"sharepoint-connector"` |
| **image**​.pullSecrets | you can provide your own pullSecrets, in case you use a private repo. | `["nscale-cr", "nplus-cr"]` |
| **image**​.repo | if you use a private repo, feel free to set it here | `"ceyoniq.azurecr.io/release/nscale"` |
| **image**​.tag | the tag of the image to use | `"latest"` |
| **ingress**​.annotations | Adds extra Annotations to the ingress | |
| **ingress**​.backendProtocol | Overrides the default backend protocol. The default is http, unless in zeroTrust mode, then it is switched to https automatically. | `http` (`https` in zero trust mode) |
| **ingress**​.class | The ingressClass to use for this ingress. Most likely this is provided globally by the instance, but you are free to override it here if this component should use a different class, e.g. if you have separated ingress controllers, like a public and an internal one | `public` |
| **ingress**​.contextPath | The default service context path for this ingress. Some components allow changing this (e.g. SharePoint); for most, though, this is only a constant used in the scripts. | `"/nscale_spc"` |
| **ingress**​.cookie | on component level, set cookie affinity for the ingress. Example: `XtConLoadBalancerSession` for nscale Web | |
| **ingress**​.deny | deny is used to exclude specific paths from public access, such as administrative paths. For example, in nappl, webc is the hessian protocol, webb is the burlap protocol. The configuration service is the endpoint used by the Admin client. | |
| **ingress**​.domain | Sets the domain to be used. This domain should be provided by the instance globally for all components, but you are free to override it here | |
| **ingress**​.enabled | You can toggle the ingress on whether you'd like this component to be reachable through an ingress or not. | `true` |
| **ingress**​.namespace | Specify the namespace in which the ingress controller runs. This sets the firewall rule / networkPolicy to allow traffic from this namespace to our pods. This may be a comma separated list | `"ingress, kube-system, ingress-nginx"` |
| **ingress**​.proxyReadTimeout | Sets the annotation `nginx.ingress.kubernetes.io/proxy-read-timeout` on the ingress object, if set. | |
| **ingress**​.secret | Sets the name of the tls secret to be used for this ingress that contains the private and public key. These secrets can optionally be provided by the instance | `{{ .this.ingress.domain }}-tls` |
| **ingress**​.whitelist | optionally sets a whitelist of ip ranges (CIDR format, comma separated) from which ingress is allowed. This is an annotation for nginx, so it won't work with other ingress controllers | |
| **javaOpts**​.javaMaxMem | set the maximum memory Java will consume. Attention: This is NOT the real maximum and it does not include any non-Java memory. Please do your own research, as this is highly discussed | |
| **javaOpts**​.javaMaxRamPercentage | set the percentage of RAM Java will use of the total. The total amount is the amount installed in the K8s cluster node, OR the memory limit set (see resources), if any. | |
| **javaOpts**​.javaMinMem | set the minimum memory Java will consume | |
| **javaOpts**​.javaMisc | Any misc Java options that need to be passed to the container | |
| **management**​.port | see mail from Manuel, 30.7.2024 | `"18098"` |
| **management**​.security | see mail from Manuel, 30.7.2024 | `"false"` |
| **management**​.ssl | see mail from Manuel, 30.7.2024 | `"false"` |
| **meta**​.language | Sets the language of the main service (in the *service* container). This is used, for instance, if you turn OpenTelemetry on, to know which agent to inject into the container. | `"java"` |
| **meta**​.ports​.http | The http port this component uses (if any). In zero trust mode, this will be disabled. This is a constant value of the component and should not be changed. | `8098` (**info only**, do not change) |
| **meta**​.ports​.https | The tls / https port this component uses (if any). This is a constant value of the component and should not be changed. | `8498` (**info only**, do not change) |
| **meta**​.provider | sets provider (partner, reseller) information to be able to invoice per use in a cloud environment | |
| **meta**​.serviceContainer | The container name of the main service for this component. This is used to define where to inject the telemetry agents, if any | `"sharepoint-connector"` |
| **meta**​.stage | An optional parameter to indicate the stage (DEV, QA, PROD, ...) this component, instance or environment runs in. This can be used in template functions to add the stage to, for instance, the service name of telemetry services like OpenTelemetry. (see telemetry example) | |
| **meta**​.tenant | sets tenant information to be able to invoice per use in a cloud environment | |
| **meta**​.type | the type of the component. You should not change this value, except if you use a pipeliner in core mode. In core mode, it should be *core*, else *pipeliner*. This type is used to create cluster communication for nappl and nstl and potentially group multiple replicaSets into one service. | `"sharepoint"` |
| **meta**​.wave | Sets the wave in which this component should be deployed within an ArgoCD deployment. If unset, it uses the default wave, thus all components are installed in one wave, relying on correct wait settings just like in a helm installation | |
| minReplicaCountType | if you set minReplicaCountType, a podDisruptionBudget will be created with this value as minAvailable, using the component type as selector. This is useful for components that are spread across multiple replicaSets, like sharepoint or storage layer | |
| **mounts**​.caCerts​.configMap | Alternative 2: the name of the configMap to use. The key has to be the file name used in the path setting | |
| **mounts**​.caCerts​.secret | Alternative 1: the name of the secret to use. The key has to be the file name used in the path setting | |
| **mounts**​.componentCerts​.configMap | Alternative 2: the name of the configMap to use. The key has to be the file name used in the path setting | |
| **mounts**​.componentCerts​.paths | Sets the path to the component certs. Do not change this value. | `["/opt/ceyoniq/sharepoint-connector/conf/apicert.pfx", "/opt/ceyoniq/sharepoint-connector/conf/apicert.pem", "/opt/ceyoniq/sharepoint-connector/conf/keystore.ks"]` (**info only**, do not change) |
| **mounts**​.componentCerts​.secret | Alternative 1: the name of the secret to use. The key has to be the file name used in the path setting | |
| **mounts**​.conf​.path | Sets the path to the conf files. Do not change this value. | `"/opt/ceyoniq/sharepoint-connector/conf"` (**info only**, do not change) |
| **mounts**​.data​.class | Sets the class of the data disk | |
| **mounts**​.data​.size | Sets the size of the data disk | |
| **mounts**​.data​.volumeName | If you do not want to have a volume created by the provisioner, you can set the name of your volume here to attach to this pre-existing one | |
| **mounts**​.disk​.class | Sets the class of the disk | |
| **mounts**​.disk​.enabled | enables the use of the second data disk. If enabled, all paths defined will end up on this disk. In case of the (default) disabled, the paths will be added to the primary data disk. | `false` |
| **mounts**​.disk​.migration | Enables the migration init container. This will copy the data in paths from the primary data disk to the newly enabled secondary disk. This is done only once and only if there is legacy data at all. No files are overwritten! | `false` |
| **mounts**​.disk​.size | Sets the size of the disk | |
| **mounts**​.disk​.volumeName | If you do not want to have a volume created by the provisioner, you can set the name of your volume here to attach to this pre-existing one | |
| **mounts**​.file​.class | Sets the class of the shared disk | |
| **mounts**​.file​.size | Sets the size of the shared disk | |
| **mounts**​.file​.volumeName | If you do not want to have a volume created by the provisioner, you can set the name of your volume here to attach to this pre-existing one | |
| **mounts**​.generic | Allows defining generic mounts of pre-provisioned PVs into any container. This can be used e.g. to mount migration nfs, cifs / samba shares into a pipeliner container. | |
| **mounts**​.logs​.path | Sets the path to the log files. Do not change this value. | `"/opt/ceyoniq/sharepoint-connector/bin/logs"` (**info only**, do not change) |
| **mounts**​.logs​.size | Sets the size of the log disk (all paths) | `"1Gi"` |
| **mounts**​.temp​.paths | Sets a list of paths to the temporary files. Do not change this value. | `["/opt/ceyoniq/sharepoint-connector/temp", "/tmp"]` (**info only**, do not change) |
| **mounts**​.temp​.size | Sets the size of the temporary disk (all paths) | `"1Gi"` |
| nameOverride | This overrides the output of the internal name function | |
| **nappl**​.account | The technical account to login with | |
| **nappl**​.baseFolder | The base folder this component should write to | |
| **nappl**​.docArea | The document area this component should write to | |
| **nappl**​.domain | The domain of the technical account | |
| **nappl**​.host | nappl host name | |
| **nappl**​.instance | instance of the Application Layer, likely `instance1` | |
| **nappl**​.password | The password of the technical account (if not set by secret) | |
| **nappl**​.port | nappl port (http 8080 or https 8443) | |
| **nappl**​.secret | An optional secret that holds the credentials (the keys must be `account` and `password`) | |
| **nappl**​.ssl | sets the Advanced Connect to tls | |
| nodeSelector | select specific nodes for this component | |
| parallelRequests | amount of parallel requests | `5` |
| **resources**​.limits​.cpu | The maximum allowed CPU for the container | |
| **resources**​.limits​.memory | The maximum allowed RAM for the container | |
| **resources**​.requests​.cpu | Set the share of guaranteed CPU to the container. | |
| **resources**​.requests​.memory | Set the share of guaranteed RAM to the container | |
| **security**​.containerSecurityContext​.allowPrivilegeEscalation | Some functionality may need the possibility to allow privilege escalation. This should be very restrictive; you should not change this. | `false` (**info only**, do not change) |
| **security**​.containerSecurityContext​.readOnlyRootFilesystem | sets the container root file system to read only. This should be the case in production environments; you should not change this. | `true` (**info only**, do not change) |
| **security**​.podSecurityContext​.fsGroup | The file system group as which new files are created. There is normally no need to change this. | `1001` (**info only**, do not change) |
| **security**​.podSecurityContext​.fsGroupChangePolicy | Under which condition the fsGroup should be changed. There is normally no need to change this. | `"OnRootMismatch"` (**info only**, do not change) |
| **security**​.podSecurityContext​.runAsUser | The user under which the container is run. Avoid 0 / root; the container should run in a non-root context for security. There is normally no need to change this. | `1001` (**info only**, do not change) |
| **security**​.zeroTrust | turns on *Zero Trust* mode, disabling *all* http communication, even the internal http probes | `false` |
| **service**​.annotations | adds extra Annotations to the service | |
| **service**​.enabled | enables the service to be consumed by group components and a potential ingress. Disabling the service also disables the ingress. | `true` |
| **service**​.selector | The selector can be `component` or `type`. *component* selects only pods that are in the replicaSet. *type* selects any pod that has the given type | `"component"` |
| **sharepoint**​.clientCertPw | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **sharepoint**​.clientId | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **sharepoint**​.doCheckOut | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `false` |
| **sharepoint**​.secret | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **sharepoint**​.serviceBusConnectionString | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **sharepoint**​.serviceBusQueueName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **sharepoint**​.serviceBusRetentionConnectionString | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **sharepoint**​.serviceBusRetentionQueueName | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **sharepoint**​.serviceBusTopicNameConfigUpdate | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **sharepoint**​.spHost | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"https://example.com"` |
| **sharepoint**​.tenantId | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **sharepoint**​.triggerProperty | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"toBeArchived"` |
| **sharepoint**​.webUserPw | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **ssl**​.keyAlias | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"https"` |
| **ssl**​.keyPassword | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"secret"` |
| **ssl**​.keystore | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **ssl**​.keystorePassword | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | `"secret"` |
| **ssl**​.keystoreSecret | Documentation pending until official release of *nscale SharePoint Connector* by *Ceyoniq* | |
| **telemetry**​.openTelemetry | turns Open Telemetry on | |
| **telemetry**​.serviceName | Sets the service name for the telemetry service to more conveniently identify the displayed component. Example: `"{{ .this.meta.type }}-{{ .instance.name }}"` | |
| **template**​.annotations | set additional annotations for pods | |
| **template**​.labels | set additional labels for pods | |
| terminationGracePeriodSeconds | Sets the terminationGracePeriodSeconds for the component. If not set, it uses the Kubernetes defaults | |
| timezone | set the time zone for this component to make sure log output has a specific timestamp, internal dates and times are correct (like the creationDate in nappl) etc. | `Europe/Berlin` |
| tolerations | Set tolerations for this component | |
| **utils**​.debug | Turning debugging *on* will give you stack traces etc. Please check out the Chart Developer Guide | `false` |
| **utils**​.disableWait | in case you use the argoCD wave feature, you might think about switching off the waitFor mechanism that makes sure pods are only started after prerequisites are fulfilled. You can disable the standard wait mechanism, but at your own risk, as this might start components even if they are not intended to run yet. | `false` |
| **utils**​.disableWave | If you use argoCD, you most likely want to use the argo wave feature as well, making sure the components of an instance are deployed in order. However, in DEV you might want to disable this to allow live changing components while previous waves are not finished yet. | `false` |
| **utils**​.includeNamespace | By default, the namespace is rendered into the manifest. However, if you want to use `helm template` and store manifests for later applying them to multiple namespaces, you might want to turn this `false` to be able to use `kubectl apply -n -f template.yaml` later | `true` |
| **utils**​.maintenance | in Maintenance Mode, all *waitFor* actions will be skipped, the *Health Checks* are ignored and the pods will start idle, not starting the service at all. This will allow you to gain access to the container to perform recovery and maintenance tasks while having the real container up. | `false` |
| **utils**​.renderComments | You can turn comment rendering *on* to get descriptive information inside the manifests. It will also fail on deprecated functions and keys, so it is recommended to only switch it off in PROD | `true` |
| waitFor | Defines a list of conditions that need to be met before this component starts. The condition must be a network port that opens when the master component is ready. Mostly, this will be a service, since a component is only added to a service if the probes succeed. | |

## Secrets in this Component

If you use secrets to store credentials, please make sure that you create the secrets in the same namespace as the deployment.

In this component, the nappl secret also needs to contain the domain of the accessing user:

- account
- password
- domain

Also, there is a second secret that can be used to store credentials for the SharePoint system. If used, it needs to contain:

- clientId
- tenantId
- clientCertPw
- webUserPw

The https / tls certificates are stored in a keystore to which we might need a password. You can set it by value, by env variable, by envSecret, by env configMap or by secret (`.Values.ssl.keystoreSecret`). If you set it by secret, please have

- keyPassword (`SERVER_SSL_KEYPASSWORD`)
- keystorePassword (`SERVER_SSL_KEYSTOREPASSWORD`)

as keys in the secret.
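A minimal sketch of such a nappl credentials secret, following the key list above. The secret name and all values are placeholders; reference the name via the `nappl.secret` value:

```
apiVersion: v1
kind: Secret
metadata:
  name: sharepoint-nappl-credentials   # hypothetical name, referenced in nappl.secret
  namespace: my-nplus-namespace        # must be the same namespace as the deployment
type: Opaque
stringData:
  account: spc-service-user            # placeholder technical account
  password: change-me                  # placeholder password
  domain: NSCALE                       # domain of the accessing user
```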