{"question": "How do I add my custom Generic Base App (GBA) to the deployment?", "answer": "You can use the application chart to add your GBAs to a deployment. Please follow the instructions\nin the [chart README](../charts/application/README.md)."}
{"question": "I do not find any of my custom objects (roles, classes, ...) from my GBA in the system. Is there an install log file that I can check?", "answer": "Yes. You can either check the log of the application job with\n```\nkubectl logs -l nplus/instance=sbs,nplus/component=application\n```\nor you can check the log at `/conf/<instance>/application/10init.log` from the environment toolbox.\nPlease check out the [chart README](../charts/application/README.md) for more information.\n> Please note, that the job/pod is automatically removed shortly after app installation, so the `kubectl logs` command might not find the ressource any more."}
{"question": "Network Policies", "answer": "Kubernetes CNI supports the use of `NetworkPolicy` resources. Every resource, that has a NetworkPolicy attached is monitored by a compatible CNI driver such as Calico oder Cilium and Network Filter Rules are implemented.\nBy this means, one pod can only communicate with other pods, if a network rule has explicely been applied.\nnplus supports NetworkPolicies by the following control structures:\nsecurity.cni. (on component, instance or environment level)\n- defaultIngressPolicy\n can be set to *deny*, *allow* or none.\n *deny* will drop all undefined inbound packages,\n *allow* will forward all undefined inbound packages\n If not defined, the Policy will not be created.\n- defaultEgressPolicy\n can be set to *deny*, *allow* or none.\n *deny* will drop all undefined outbound packages,\n *allow* will forward all undefined outbound packages\n If not defined, the Policy will not be created.\n- createNetworkPolicy\n toggles the policy creation in general\nFor larger projects, it is likely to have a *Central Services* Instance that hold e.g. the *Administrator* and the *Monitoring Console*. If these services are in the same namespace and within the same instance, nothing need to be done (default).\nHowever, if you use *Central Services* you can define the Namespace and the Instance of these services in order to have NetworkPolicies created for inter-namespace and inter-instance traffic.\n- administratorNamespace\n- administratorInstance\n- monitoringNamespace\n- monitoringInstance\n- pamNamespace\n- pamInstance\n> If you use a centralized *Storage Layer* and *Rendition Server*, you will have to apply extra Policies to allow access. Please remember to write ingress and egress rules.\nExample:\n```\nglobal:\n environment:\n security:\n cni:\n defaultIngressPolicy: deny\n defaultEgressPolicy: deny\n createNetworkPolicy: true\n```"}
{"question": "How can I use snc in NAPPL to access my SAP System?", "answer": "To use *snc* in NAPPL, you need to\n1. Enable it in NAPPL (`nappl.snc.enabled: true`)\n2. Add the IP Range of your SAP Systems to allow egress access (`nappl.snc.sapIpRange: \"0.0.0.0/0\"`)\n3. Copy the *snc* files to the nplus environment (`kubectl cp snc nplus-toolbox-0:/conf/pool`)\nPlease find more information in the [nappl chart README](../charts/nappl/README.md)"}
{"question": "How can I use extra fonts for rendition or OCR?", "answer": "Extra fonts, like the *mscorefonts* can be installed by copying them into the *nplus environment*. The fonts are then automatically applied to all *rendition Server* and *Application Layer* components within all *nplus Instances* within this environment.\nTo copy fonts to the pool, use\n```\nkubectl cp test/fonts nplus-toolbox-0:/conf/pool\n```\nThis copies the local *fonts* directory to the environment pool.\nThe target is `pool/fonts`, where all extra fonts must reside.\nThis is then picked up by the components."}
{"question": "How can I completely remove any trace of *nplus* from my cluster?", "answer": "1. Remove all *nplus Instances* from your *nplus Environment*:\nIf you installed with helm:\n```\nhelm uninstall myInstance\n```\nIf you installed using Argo:\n```\nhelm install myInstance-argo\n```\nor whatever the name of your instance is.\nIf you installed by kubectl:\n```\nkubectl delefe -f myInstance.yaml\n````\nDo this for all instances.\n2. Remove the *nplus Environment* from the Kubernetes Namespace\nif installed by helm:\n```\nhelm uninstall <name>\n```\nwhere *name* is the name you used when installing\n3. Remove the *nplus Cluster* from the Kubernetes Cluster\nif installed by helm:\n```\nhelm uninstall <name>\n```\nwhere *name* is the name you used when installing"}
{"question": "I would like to connect to the environment dav server to access the config files", "answer": "You can access the *nplus Environment conf dav server* either\n- through an ingress, if you enable it. But you might want to keep it disabled for security reasons. Instead you can access it\n- via a port forwarding from your local machine, in case you have kubectl access to the cluster:\n```\nkubectl port-forward pods/nplus-davserver-0 8080:8080\n```\nThen, you can connect to the server via http://localhost:8080/dav"}
{"question": "How can I manually delete all Resources belonging to a specific instance?", "answer": "To delete everything belonging to a specific instance, you can use:\n```\nkubectl delete $(kubectl get svc,sts,deployment,cm,secret,networkpolicy,ing,pvc,certificate,nscale -l nplus/instance=<instance> -o name)\n```"}
{"question": "I changed the image tag of *nscale Web*, but when I apply, the component stays healthy", "answer": "Even though it might seem *nscale Web* would not restart, it actually does.\n*nscale Web* is configured as a *Rolling Update DeamonSet*, so it first creates a new Pod and waits till that is ready. Then it stops the old one.\nDuring the update cycle, the services stays healthy.\nNotice, that the *Application* job (if defined) runs as well. That is, because updating the Web component might require new Snippets etc. to be installed,\nto *nplus* is giving the *Application* the chance to do so."}
{"question": "Can I check out a nappl image?", "answer": "Yes, you can:\n```sh\ndocker run --rm -it ceyoniq.azurecr.io/release/nscale/application-layer:ubi.9.2.1200.2024052713 /bin/bash\n```"}
{"question": "Can I bash into my nappljobs?", "answer": "Indeed:\n```sh\nkubectl exec --stdin --tty demo-ha-nappljobs-0 -- /bin/bash\n```"}
{"question": "I keep getting errors, that *chmod* is not allowed on the conf file system", "answer": "This might be because you might be using a CIFS / smb shared file system (like Microsoft Azure File).\nYou can switch off all internal chmod commands by setting `.Values.global.environment.storage.conf.cifs` to `true`."}
{"question": "We use multiple ingress controllers in different namespaces. How do we set that?", "answer": "You can set the ingress class per enviroment, per instance or per component.\nComponent bein the highest priority.\nAdditionally, you might want to set the namespace of your controller to allow ingress traffic from that namespace to the pods. Since you probably have multiple namespaces, this is a comma separated list:\n```\n# Set Ingress namespace per component\ningress:\n namespace: \"nginx-ingress\"\n```\nor\n```\n# Set Ingress namespaces for all instances in an environment\nglobal:\n environment:\n ingress:\n namespace: \"ingress, kube-system, external-ingress, internal-ingress, backup-ingress\"\n```"}
{"question": "How do I know which tags exist in the registry?", "answer": "You can use Skopeo:\n```\nskopeo list-tags docker://ceyoniq.azurecr.io/release/nscale/application-layer\n```\nThis lists all nappl tags in the registry"}
{"question": "We use a forward proxy in our DMZ and have problems with OAuth (or others)", "answer": "If you use a forward proxy, such as in a DMZ Scenario, you will probably need to configure your cluster Load Balancer so it forwards the real IP adress of your clients.\nIn nginx, this is done by the setting `use-forwarded-headers` which needs to be put into the clusterwide config (this is a global option):\n```\nkind: ConfigMap\napiVersion: v1\nmetadata:\n name: nginx-load-balancer-microk8s-conf\n namespace: ingress\ndata:\n use-forwarded-headers: \"true\"\n proxy-real-ip-cidr: \"<Your Reverse Proxy IP>\"\n```\nApply this config map to your nginx LB namespace setting the IP Adress CIDR of your DMZ Reverse Proxy.\nIn the DMZ nginx configuration, make sure you submit all necessary information:\n```\nserver {\n server_name demo.nscale.cloud;\n client_max_body_size 10G;\n proxy_set_header X-Forwarded-For $remote_addr;\n proxy_set_header X-Forwarded-Host $host;\n proxy_set_header X-Forwarded-Proto $scheme;\n if ( $is_bot ) { return 410; }\n location = / { return 301 \"/nscale_web\"; }\n location = /me { return 301 \"/auth/realms/cloud/account\"; }\n location /robots.txt { return 200 \"User-agent: *\\nDisallow: /\"; }\n location /nscale_web { proxy_pass https://dmz.lan; }\n location ~ ^/(auth/realm|auth/login|auth/resources) { proxy_pass https://centralservices.lan; }\n location /nscalealinst1 { proxy_pass https://dms.lan; }\n listen 443 ssl;\n ssl_certificate fullchain.pem;\n ssl_certificate_key privkey.pem;\n}\n```"}
{"question": "How yan I set Ressources (CPU / RAM) for the components?", "answer": "You can set the ressources in the Values:\n```yaml\nresources:\n requests:\n cpu: \"100m\" # Minimum 1/10 CPU\n memory: \"500Mi\" # Minimum 500 MB\n limits:\n cpu: \"2000m\" # Maximum 2 Cores\n memory: \"4096Mi\" # Maximum 4 GB. Java will see this as total.\n```\nIf you want to set Java Memory Options:\n```yaml\njavaOpts:\n javaMinMem: \"1024m\"\n javaMaxMem: \"2048m\"\n```"}
{"question": "How can I bash into nappl?", "answer": "This is an example of how to bash into a nappl, in this case empty-nappl-0:\n```\nkubectl exec --stdin --tty empty-nappl-0 -- /bin/bash\n```"}
{"question": "How can I set the timezone?", "answer": "You can set the timezone per component, instance and/or environment, using the `timezone` value. Please refer to the\ncomponent README.md for more information."}
{"question": "How can I use priorityClasses for the components?", "answer": "You can use an existing priorityClass by setting `priority.className: <your class>` on the component, instance or environment.\nIf you want to have the class created for you, you can set `priority.createClass: true`.\nYou can also set the desired value.\nExample:\n```yaml\npriority:\n className: '{{ .component.fullName }}'\n createClass: true\n value: \"1000000\"\n```\n> If you omit the quotes for value, you will end up having a float64 like `1e+06` in your values, which will cause problems.\nTo forcefully switch off any previously set priority for a specific instance, you can override:\n```yaml\nglobal:\n override:\n priority:\n```\nThe **default** is to have no priorityClass at all."}
{"question": "How can I enable and access the Web Administrator?", "answer": "To enable the nscale Administrator (Web, aka *RapAdmin*), you have to first enable the *administrator* chart in your instance:\n```yaml\ncomponents.administrator: true\n```\nBy default, the Administrator will use the standard Application Layer for login. You can change that by setting\n```yaml\nadministrator:\n nappl:\n host: '{{ include \"nplus.prefix\" . }}nappljobs.{{ .Release.Namespace }}'\n waitFor:\n - '-service {{ include \"nplus.prefix\" . }}nappljobs.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.global.nappl.port }} -timeout 600'\n```\nThis is an example, where we use multiple Application Layer and one designated Application Layer for Jobs. And we use this `nappljobs` for administration as well. So the above configuration changes the default and lets the admin client access nappljobs.\nIf you run the Administrator in another instance (Central Services or something alike), you can also cross namespaces and/or instances here to access multiple tenants if desired. But in that case you might need to add individual *networkPolicies* to allow access.\nOnce the Admin Client is running, you can reach it at `https://<Your Domain>/rapadm`."}
{"question": "I want to use the same domain for my environment and my instance, so the certificates are created twice", "answer": "First of all, are you sure you want the same domain? Because the environment ingress is used by admins to access the config by dav or the monitoring data from the operator. You normally would not want that to use the same domain / ingress as the users of your services.\nHowever, if you decide to use the same domain, you can easily switch off certificate generation: Certificates are either generated by an issuer like cert-manager or are self-signed and generated by helm.\n- if `.this.ingress.issuer` is set, the chart requests this issuer to generate a tls secret with the name `.this.ingress.secret`\n by creating a certificate resource with the name of the domain `.this.ingress.domain`\n- else, so no issuer is set, the chart checks wether the flag `.this.ingress.createSelfSignedCertificate` is set to `true` and\n generates a tls secret with the name `.this.ingress.secret`\n- else, so neither issuer nor createSelfSignedCertificate are set, the charts will not generate anything\nAfter the instance or environment ran through the generation process, the components use the name of the tls\nsecret `.this.ingress.secret` for their ingresses, in case `.this.ingress.enabled` is `true`.\nSo to cut a long story short:\n1. You better not have the same domain for end users and admins. Please re-consider and try something like\n - `admin.my-domain.internal` for admin access and\n - `my-domain.cloud` for public access\n2. If you do want the same domain, you need to switch off the generation process in either the instance or the environment.\n You can still use the same secret. As the environment is deployed before the instance, it might be a good idea to switch off the instance:\n ```yaml\n global:\n ingress:\n issuer: null\n createSelfSignedCertificate: false\n ```"}
{"question": "How can I access my services with a browser?", "answer": "Well, that of course depends on\n- which services you enabled\n- if these services gain access through a web interface\n- this access (ingress) is enabled.\nYou can check like this:\n```bash\nkubectl get ingress -l nplus/instance=<your instance>\n```\nExample using the *demo-ha* example:\n```bash\n% kubectl get ingress -l nplus/instance=demo-ha\nNAME CLASS HOSTS ADDRESS PORTS AGE\ndemo-ha-administrator public demo-ha.lab.nplus.cloud 127.0.0.1 80, 443 10h\ndemo-ha-cmis public demo-ha.lab.nplus.cloud 127.0.0.1 80, 443 10h\ndemo-ha-ilm public demo-ha.lab.nplus.cloud 127.0.0.1 80, 443 10h\ndemo-ha-mon public demo-ha.lab.nplus.cloud 127.0.0.1 80, 443 10h\ndemo-ha-nappl public demo-ha.lab.nplus.cloud 127.0.0.1 80, 443 10h\ndemo-ha-pam public demo-ha.lab.nplus.cloud 127.0.0.1 80, 443 10h\ndemo-ha-web public demo-ha.lab.nplus.cloud 127.0.0.1 80, 443 10h\ndemo-ha-webdav public demo-ha.lab.nplus.cloud 127.0.0.1 80, 443 10h\n```\nThen, you can drill into an ingress, to get the paths:\n```bash\nkubectl describe ingress <ingress>\n```\nYou can also get a list of all hosts + paths:\n```bash\n% kubectl get ingress -l nplus/instance=demo-ha -o json 2> /dev/null| jq -r '.items[] | .spec.rules[] | .host as $host | .http.paths[] | ( $host + .path)' | sort | grep -v ^/\ndemo-ha.lab.nplus.cloud/cmis\ndemo-ha.lab.nplus.cloud/dav\ndemo-ha.lab.nplus.cloud/engine.properties\ndemo-ha.lab.nplus.cloud/index.html\ndemo-ha.lab.nplus.cloud/modeler\ndemo-ha.lab.nplus.cloud/nscale_web\ndemo-ha.lab.nplus.cloud/nscalealinst1\ndemo-ha.lab.nplus.cloud/nscalealinst1/webb/configuration\ndemo-ha.lab.nplus.cloud/nscalealinst1/webc/configuration\ndemo-ha.lab.nplus.cloud/nscalemc\ndemo-ha.lab.nplus.cloud/rapadm\ndemo-ha.lab.nplus.cloud/res\ndemo-ha.lab.nplus.cloud/sap_ilm\n```"}
{"question": "I would like to disable the ingress on the operator, but access it through a NodePort Service", "answer": "Sure. Just disable the ingress first on your environment deployment:\n```yaml\noperator:\n ingress:\n enabled: false\n```\nThen add a NodePort Service to access it:\n```bash\ncat << EOF | kubectl apply -f -\napiVersion: v1\nkind: Service\nmetadata:\n name: nplus-operator-nodeport-access\nspec:\n type: NodePort\n selector:\n nplus/component: operator\n ports:\n - port: 8080\n targetPort: 8080\n nodePort: 31976\nEOF\n```\nAccess it:\n- `http://<Your Cluster Node IP>:31976/monitoring`\n- `https://<Your Cluster Node IP>:31977/monitoring`\nhttps://10.17.1.31:31977/monitoring/index.html?page=overview"}
{"question": "During Desaster Recovery tests we noticed that we cannot change the Document ID in runtime. What should we do?", "answer": "You can switch the component (in this case the Storage Layer as you mention the Document ID, but this method work for any component) into *Maintenance Mode*. Maintenance Mode will\n- start pods without starting the service, providing the possibility to gain access to the container to perform recovery tasks that need to be done offline. In order to do this:\n - All *waitFor* definitions are ignored\n - All *Health Checks* are ignored\n - The container starts in idle\n - Application Jobs are disabled\nYou can put a component, an instance or the whole environment into maintenance.\n```yaml\nutils:\n maintenance: true\n```\nor global for the instance:\n```yaml\nglobal:\n utils:\n maintenance: true\n```"}
{"question": "Why can't I specify pullSecrets on the waitImage?", "answer": "pullSecrets are defined at pod level, not at container level. WaitFor is a container, so it doesn't have its own pullSecrets but rather takes the pod ones."}
{"question": "We do not want to use argoCD Waves, can we switch it off?", "answer": "Yes, just add the following to the `values.yaml` to globally turn off the argoCD Wave feature:\n```yaml\nglobal:\n utils:\n disableWave: true\n```\nPlease also see the *nowaves* example"}
{"question": "Out Instances became pretty large with lots of components and multiple team members working on parts of it. Can we somehow slices it into smaller chunks?", "answer": "Yes, you can. Simply create multiple Instances with the components you like and then join them all together using a common `.instance.group` tag.\nThis will open the firewall (Network Policies) to allow traffic within the group / between multiple Instances.\nPlease see the *group* example for details"}
{"question": "I get frequent DV/DA HID check failures in nstl in my dev Environment", "answer": "In the lab / dev environment, you probably quite often throw away the data disk while keeping the conf folder. The default for the DA_HID.DAT is the conf folder, so they do not match any more. You can easily switch the check off:\n```yaml\nnstl:\n checkHighestDocId: \"0\"\nnstla:\n checkHighestDocId: \"0\"\nnstlb:\n checkHighestDocId: \"0\"\n```\nif you do this in the environment, you have globally switched all nstl da checks off."}
{"question": "We use the postgres DB for DEV and would like to get a dump. How can we do that?", "answer": "You can call pg_dump from the command line. Make sure you have the right password and pod.\n```\nkubectl exec --stdin --tty sample-empty-database-0 -- env PGPASSWORD=\"postgres\" pg_dump -U postgres -w nscale > test.dump\n```"}