{"chapter": "Day 1 Ops: Install, Update, Uninstall", "level": 1, "text": "1. Install instance *sample*\nTo demonstrate, we use the sample-tenant chart we find in the samples directory. The main difference\nto the default instance chart is, that a domain is set to `*.sample.nplus.cloud`, so we will be able to\nlog into the web client right away if we redirected this domain correctly.\nYou can easily adopt the examples to your environment.\n```\nhelm install sample nplus/sample-tenant --version 9.0.1400\n```\n2. **Rolling update** of instance *sample* to a later monthly release\nAll nscale components support rolling updates, **but** the *nscale Application Layer*.\nAs the Application Layer has the connection to the database, and this depends on the DB scheme,\nonly cluster members with the same version can work with that DB at the same time.\nThere are no scheme updates in monthly releases, so we can use the default rolling updates here.\n```\nhelm upgrade sample nplus/sample-tenant --version 9.0.1501\n```\n3. **Minor / Major Update** of instance *sample*\nMinor or Major updates require the *nscale Application Layer* to have the same version on all cluster nodes. And since the *nscale Pipeliner* may also have an integrated *nappl* in core mode, we also need to update the pipeliner at the same time.\nWe first need to shut down all *nappl* cluster members, so set the *nscale Application Layer*, the potential *nappl Jobs Node* and the *nscale Pipeliner* stateful sets to replica 0.\nIn *nplus*, these replicaSets are labeled with `nplus/type=core`, so we can easily select them:\n```\nkubectl scale statefulset -l nplus/type=core,nplus/instance=sample --replicas=0\n```\nAfter that, the update is just like a monthly release:\n```\nhelm upgrade sample nplus/sample-tenant --version 9.1.1001\n```\n> As nplus does not know if you run the Pipeliner in core mode, make sure you change the default type `pipeliner` to `core` when installing, indicating that this pipeliner node needs to be scaled down as well.\n4. **Uninstall** the instance *sample*\n\n```\nhelm uninstall sample\n```\n"}
{"chapter": "Install, Update, Uninstall *with argoCD*", "level": 1, "text": "1. Install instance *sample-argo*\n```\nhelm install sample-argo nplus/sample-tenant-argo --version 9.0.1400\n```\n2. **Rolling update** of instance *sample-argo* to a later monthly release\n```\nhelm upgrade sample-argo nplus/sample-tenant-argo --version 9.0.1501\n```\n3. **Minor / Major Update** of instance *sample-argo*\nThe difference to a deployment without argoCD is, that if we manually scale down the *nappl* cluster nodes,\nargoCD tries to immediately **heal** this discrepancy between the description and the status.\nSo we first switch off this healing mechanism, to be able to scale down:\n```\nkubectl -n argocd patch --type='merge' application sample-argo -p \"{\\\"spec\\\":{\\\"syncPolicy\\\":null}}\"\n```\nAfter that, it is the same update procedure as we have with a standard deployment:\n```\nkubectl scale statefulset -l nplus/type=core,nplus/instance=sample-argo --replicas=0\nhelm upgrade sample-argo nplus/sample-tenant-argo --version 9.1.1001\n```\nWhen done, we switch the healing back on which will start to re-sync and recreate all cluster members\nwith the new version:\n```\nkubectl -n argocd patch --type=merge application sample-argo -p \"{\\\"spec\\\":{\\\"syncPolicy\\\":{\\\"automated\\\":{\\\"prune\\\":true,\\\"selfHeal\\\":true}}}}\"\n```\n4. **Uninstall** the instance *sample-argo*\n\n```\nhelm uninstall sample-argo\n```\n"}