{"chapter": "Adding ArgoCD", "level": 1, "text": "In order to be able to deploy *nplus instances* using ArgoCD, you need to add the Chart Repository to Argo:\n```\ncat << EOF | kubectl apply -f -\napiVersion: v1\nkind: Secret\nmetadata:\nname: nplus-repo\nnamespace: argocd\nlabels:\nargocd.argoproj.io/secret-type: repository\nstringData:\ntype: helm\nurl: https://git.nplus.cloud\npassword: $NPLUS_TOKEN\nusername: $NPLUS_ACCOUNT\nEOF\n```\n> This requires the Environment Variables for the *NPLUS_ACCOUNT* and *NPLUS_TOKE* to be set. Check the Quickstart Guide if you are uncertain\nNow you are good to go adding an instance using ArgoCD. We will re-use the myinstance.yaml we created during the Quickstart Guide. You will also find it in [Samples](../samples/myinstance.yaml).\n```\nhelm upgrade -i \\\n--values myinstance.yaml \\\nmyinstance-argo nplus/nplus-instance-argo\n```\nThe only difference with ArgoCD is, that we use a different Chart for the instance: *nplus-instance-argo*.\nThe settings / values file is identical.\n![ ](assets/argo1.png)\nArgoCD will automatically pick up the new instance and start installing it.\nYou can check via command line\n```\n# kubectl get instance\nNAME HANDLER VERSION TENANT STATUS\nmyinstance Helm 9.1.1501 default healthy\nmyinstance-argo argoCD 9.1.1501 default healthy\n```\nOr via agroCD Web UI the current status of the deployment\n![ ](assets/argo2.png)\n> The Instance will report *healthy* in argoCD as well as using command line, even though the SBS Installer is not ready yet (as Applications are installed asynchronously as soon as the instance is healthy)\nAs soon as the Application Installer is done, it looks like this:\n![ ](assets/argo3.png)\n"} {"chapter": "Monitoring ArgoCD", "level": 1, "text": "ArgoCD also has a custom resource, called *application*. The nscale argoCD Resources are created in the *argocd* Namespace. You can get them by\n```\n"} {"chapter": "kubectl get app -n argocd", "level": 1, "text": "NAME SYNC STATUS HEALTH STATUS\nmyinstance-argo Synced Healthy\n```\nOf course you can also check with\n```\n"} {"chapter": "kubectl get instances", "level": 1, "text": "NAME HANDLER VERSION TENANT STATUS\nmyinstance Helm 9.1.1501 default healthy\nmyinstance-argo argoCD 9.1.1501 default healthy\n```\nBut if you require detailed information, the best is to start describing the argoCD App:\n```\n"} {"chapter": "kubectl describe app myinstance-argo -n argocd", "level": 1, "text": "```\nThis gives you a much higher level of detail.\n"} {"chapter": "Troubleshooting ArgoCD", "level": 1, "text": ""} {"chapter": "Cache", "level": 2, "text": "ArgoCD caches helm Chart content. This can be a problem especially during development, when you might now always increase version numbers.\nThen, you might want to hard reset an argoCD Appication to void the cache:\n```\nkubectl patch app/myinstance-argo -n argocd --type merge -p='{\"metadata\": {\"annotations\":{\"argocd.argoproj.io/refresh\": \"hard\"}}}'\n```\n"} {"chapter": "Finalizer", "level": 2, "text": "Finalizers in Kubernetes are taking care of cleanup tasks. Sometimes, these finalizers in argoCD get stuck on deleting complex nplus instances. 
## Finalizer

Finalizers in Kubernetes take care of cleanup tasks. Sometimes these finalizers in ArgoCD get stuck when deleting complex nplus instances. As a last resort, you can remove the finalizer and then clean up the instance manually:
```
kubectl patch app/myinstance-argo -n argocd \
  --type json \
  --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'
```
Then delete the ArgoCD Application:
```
kubectl delete app/myinstance-argo -n argocd
```
Since the finalizer did not clean up, all *nplus instance* parts are still there. Luckily, they are labeled and therefore easy to identify:
```
kubectl get all,pvc,ing -l nplus/instance=myinstance-argo
```
We can now use this list to delete everything:
```
kubectl delete $(kubectl get all,pvc,ing -l nplus/instance=myinstance-argo -o name)
```
> ArgoCD does not install through Helm; it fetches the Helm chart and renders the templates internally. So there is no Helm release to clean up after removing the Argo Application.

# Default Waves

The instance chart has some default waves defined. You can use them as they are or override the values to match your own requirements (see the sketch after this list):
- **wave 1**: prepper
- **wave 2**: requirements: nstl, database
- **wave 3**: essential services: rs, nappljobs, nappl (standalone, if jobs are enabled)
- **wave 4**: hook: free to use for anything that needs to be done before the cluster starts
- **wave 5**: consumer services: nappl (serving consumers) if jobs are disabled
- **wave 6**: consumer services: web
- **wave 7**: peripheral services: mon, pipeliner, ilm, cmis, webdav, sharepoint
- **wave 8**: tools: administrator, pam
- **wave 9**: tools: rms (Remote Management Server)
- **wave 10**: solutions: application (incl. GBAs)
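For example, to move the web service into a later wave you could override the corresponding value in your myinstance.yaml. The key layout below is an assumption for illustration only; check the chart's values.yaml for the authoritative structure:
```
# Hypothetical values override -- the exact key names depend on the
# nplus-instance-argo chart; consult its values.yaml before using this:
web:
  wave: 8
```
Under the hood, ArgoCD orders syncs via the argocd.argoproj.io/sync-wave annotation on the rendered resources, so a lower wave number is applied earlier.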