# Assigning CPU and RAM

You should assign resources to your components, depending on the load that you expect.

In a dev environment, that might be very little and you may be fine with the defaults.

In a QA or PROD environment, resources should be controlled deliberately, for example:

```yaml
nappl:
  resources:
    requests:
      cpu: "100m"         # Minimum 1/10 CPU
      memory: "1024Mi"    # Minimum 1 GB
    limits:
      cpu: "2000m"        # Maximum 2 Cores
      memory: "4096Mi"    # Maximum 4 GB. Java will see this as total.
  javaOpts:
    javaMinMem: "512m"    # tell Java to initialize the heap with 512 MB
    javaMaxMem: "2048m"   # tell Java to use max 2 GB of heap size
```

There is a lot of ongoing discussion about how much memory you should give to Java processes and how they behave; plenty of material on this is available online.

Our current opinion is:

Do not limit RAM. You cannot reliably foresee how much memory Java really consumes, because the heap is only part of the requirement: Java also needs metaspace, code cache and thread stacks, and the GC and symbol tables need memory as well.

The JVM is killed when the container runs out of memory, so even setting javaMaxMem to half of limits.memory (as many do) guarantees nothing and may waste a lot of RAM.
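
To see why the heap is only part of the footprint, here is a rough, purely illustrative calculation for a pod configured with `javaMaxMem: "2048m"`; the individual numbers vary widely per workload:

```yaml
# Illustration only -- real values depend on the application:
# heap (javaMaxMem)                    2048 MB
# metaspace                            ~256 MB
# code cache                           ~240 MB
# thread stacks (200 threads x 1 MB)   ~200 MB
# GC structures, symbols, native I/O   ~200-500 MB
# ------------------------------------------------
# realistic process total              ~3.0-3.2 GB  -> uncomfortably close to a 4096Mi limit
```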

So what you can consider is:

```yaml
nappl:
  resources:
    requests:
      cpu: "1000m"        # 1 Core guaranteed
      memory: "4096Mi"    # 4 GB guaranteed
    limits:
      cpu: "4000m"        # Maximum 4 Cores
#     memory:             # no limit other than the node's hardware
  javaOpts:
    javaMinMem: "1024m"   # start the heap with 1 GB
    javaMaxMem: "3072m"   # up to 3 GB of heap (which is only part of the footprint);
                          # the process can take more RAM without crashing, as there is no limit
```

Downside of this approach: if you have a memory leak, it might consume all of your node's memory without being stopped by a hard limit.

A possible alternative:

You can set the RAM limit equal to the RAM request and leave the Java memory settings unset (automatic), which basically simulates a dedicated server: Java will treat the limit as the amount of RAM installed in the machine and act accordingly.

```yaml
nappl:
  resources:
    requests:
      cpu: "1000m"        # 1 Core guaranteed
      memory: "4096Mi"    # 4 GB guaranteed
    limits:
      cpu: "4000m"        # Maximum 4 Cores
      memory: "4096Mi"    # limit equal to the request; Java sees 4 GB as "installed RAM"
# javaOpts:
#   javaMinMem:           # unset, leaving it to Java
#   javaMaxMem:           # unset, leaving it to Java
```

## In a DEV environment

You might want to do more overprovisioning. You could even leave resources completely unlimited: in DEV you want to see memory and CPU leaks, and a limit might hide them from your sight.

So this is a possible allocation for DEV, defining only the bare minimum requests:

```yaml
nappl:
  resources:
    requests:
      cpu: "1m"           # 1/1000 Core guaranteed,
                          # but can consume all cores of the cluster node if required and available
      memory: "512Mi"     # 512 MB guaranteed,
                          # but can consume all RAM of the cluster node if required and available
```

In this case, Java will treat the whole node RAM as its limit and use whatever it needs. But since a DEV environment has no load and no users on the machine, this will not amount to much.
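
As an illustration of what an unconstrained JVM does here (this is standard JVM behaviour, not something the chart configures):

```yaml
# Illustration only: with javaMinMem/javaMaxMem unset, the JVM sizes its heap
# from the RAM it can see, by default MaxRAMPercentage = 25%.
#   RAM visible to Java : 32 GiB   (the whole node, since no memory limit is set)
#   default max heap    : ~8 GiB   (25% of 32 GiB)
# With limits.memory: "4096Mi" the same JVM would pick a max heap of ~1 GiB.
```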

## Resources you should calculate

The default resources assigned by nplus are for demo / testing only, and you should definitely assign more resources to your components. Here is a very rough estimate of what you need:

| Component | Minimum (Demo and Dev) | Small | Medium | Large | XL | Remark |
|---|---|---|---|---|---|---|
| ADMIN | 1 GB RAM, 1 Core | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | | |
| Application | - | - | - | - | | Resources required during deployment only |
| CMIS | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | |
| Database | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 8 GB RAM, 6 Core | 16 GB RAM, 8 Core | open | |
| ILM | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | |
| MON | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | quite fix | |
| NAPPL | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 8 GB RAM, 6 Core | 16 GB RAM, 8 Core | open | CPU depending on Jobs / Hooks, RAM depending on number of users |
| NSTL | 500 MB RAM, 1 Core | 1 GB RAM, 2 Core | 1 GB RAM, 2 Core | 1 GB RAM, 2 Core | quite fix | |
| PAM | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | 2 GB RAM, 1 Core | | | |
| PIPELINER | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 4 GB RAM, 4 Core | 4 GB RAM, 4 Core | open | Depending on Core Mode or AC Mode, no Session Replication |
| RS | 1 GB RAM, 1 Core | 8 GB RAM, 4 Core | 32 GB RAM, 8 Core | 64 GB RAM, 12 Core | open | CPU depending on format type, RAM depending on file size |
| SHAREPOINT | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | | |
| WEB | 1 GB RAM, 1 Core | 2 GB RAM, 2 Core | 4 GB RAM, 4 Core | 8 GB RAM, 4 Core | open | |
| WEBDAV | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | 2 GB RAM, 2 Core | | | |

Components shown in bold are required for an SBS setup, so here are some estimates per application:

| Component | Minimum (Demo and Dev) | Minimum (PROD) | Recommended (PROD) | Remark |
|---|---|---|---|---|
| SBS | 6 GB RAM, 4 Core | 16 GB RAM, 8 Core | 24 GB RAM, 12 Core | Without WEB Client |
| eGOV | TODO | TODO | TODO | eGOV needs much more CPU than a non-eGOV system |

A word on eGOV: the eGOV App brings hooks and jobs that require much more resources than a normal nscale system, even with other Apps installed.

## Real Resources in DEV (Idle)

```
% kubectl top pods
...
sample-ha-administrator-0                                2m           480Mi           
sample-ha-argo-administrator-0                           2m           456Mi           
sample-ha-argo-cmis-5ff7d78c47-kgxsn                     2m           385Mi           
sample-ha-argo-cmis-5ff7d78c47-whx9j                     2m           379Mi           
sample-ha-argo-database-0                                2m           112Mi           
sample-ha-argo-ilm-58c65bbd64-pxgdl                      2m           178Mi           
sample-ha-argo-ilm-58c65bbd64-tpxfz                      2m           168Mi           
sample-ha-argo-mon-0                                     2m           308Mi           
sample-ha-argo-nappl-0                                   5m           1454Mi          
sample-ha-argo-nappl-1                                   3m           1452Mi          
sample-ha-argo-nappljobs-0                               5m           2275Mi          
sample-ha-argo-nstla-0                                   4m           25Mi            
sample-ha-argo-nstlb-0                                   6m           25Mi            
sample-ha-argo-pam-0                                     5m           458Mi           
sample-ha-argo-rs-7d6888d9f8-lp65s                       2m           1008Mi          
sample-ha-argo-rs-7d6888d9f8-tjxh8                       2m           1135Mi          
sample-ha-argo-web-f646f75b8-htn8x                       4m           1224Mi          
sample-ha-argo-web-f646f75b8-nvvjf                       11m          1239Mi          
sample-ha-argo-webdav-d69549bd4-nz4wn                    2m           354Mi           
sample-ha-argo-webdav-d69549bd4-vrg2n                    3m           364Mi           
sample-ha-cmis-5fc96b8f89-cwd62                          2m           408Mi           
sample-ha-cmis-5fc96b8f89-q4nr4                          3m           442Mi           
sample-ha-database-0                                     2m           106Mi           
sample-ha-ilm-6b599bc694-5ht57                           2m           174Mi           
sample-ha-ilm-6b599bc694-ljkl4                           2m           193Mi           
sample-ha-mon-0                                          3m           355Mi           
sample-ha-nappl-0                                        3m           1278Mi          
sample-ha-nappl-1                                        4m           1295Mi          
sample-ha-nappljobs-0                                    6m           1765Mi          
sample-ha-nstla-0                                        4m           25Mi            
sample-ha-nstlb-0                                        4m           25Mi            
sample-ha-pam-0                                          2m           510Mi           
sample-ha-rs-7b5fc586f6-49qhp                            2m           951Mi           
sample-ha-rs-7b5fc586f6-nkjqb                            2m           1205Mi          
sample-ha-web-7bd6ffc96b-pwvcv                           3m           725Mi           
sample-ha-web-7bd6ffc96b-rktrh                           9m           776Mi           
sample-ha-webdav-9df789f8-2d2wn                          2m           365Mi           
sample-ha-webdav-9df789f8-psh5q                          2m           345Mi           
...
```

## Defaults

Check the file `default.yaml`: you can set default memory limits for a container there. These defaults are applied if you do not specify any resources in your manifest.
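
As a purely hypothetical sketch of the idea (the key names below are invented for illustration; check the actual `default.yaml` shipped with the chart for the real structure), such a default could look like this:

```yaml
# Hypothetical illustration -- not the real keys from default.yaml:
defaultResources:
  limits:
    memory: "1024Mi"    # applied to every container that does not declare its own resources
```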

## Setting Resources for sidecar containers and init containers

You can also set resources for sidecar containers and init containers. However, you should only set these if you know exactly what you are doing and what implications they have.

```yaml
nstl:
  sidecarResources:
    requests:
      cpu: "100m"        # 0.1 Core guaranteed
      memory: "1024Mi"   # 1 GB guaranteed
    limits:
      memory: "2048Mi"   # limit to 2 GB
      # we do NOT limit the CPU (see https://home.robusta.dev/blog/stop-using-cpu-limits for details)
```

Init container resources can be set using the `initResources` key.
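
A minimal sketch, assuming `initResources` sits at the same level as `sidecarResources` in a component block; the numbers are placeholders, not recommendations:

```yaml
nstl:
  initResources:
    requests:
      cpu: "50m"         # small guaranteed share for the init step
      memory: "256Mi"
    limits:
      memory: "512Mi"    # cap the init container at 512 MB
      # as above, the CPU is intentionally left without a limit
```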