Maximum amount of memory to allocate to huge pages
Issue
- Essentially, what we want to know is how much of our compute node's memory can be assigned to the tenant's workload in DPDK environments where CPU pinning and huge pages are enabled.
- In our specific environment, we currently have the following settings:
# Compute Gen 9 44core, HT, DPDK, CPU-Pinning, 240 x 1G Hugepages
ComputeKernelArgs: "intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=240"
ComputeHostIsolatedCoreList: '1-21,45-65,23-43,67-87'
NeutronComputeSocketMemory: "8192,8192"
ComputeHostCpusList: "'0,44,22,66'"
NeutronComputeCoreList: "'1,2,45,46,23,24,67,68'"
# Compute Extraconfig
ComputeExtraConfig:
  nova::compute::reserved_host_memory: 4096
  nova::compute::vcpu_pin_set: ['3-21','47-65','25-43','69-87']
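After deployment, these settings can be verified directly on the compute node. A minimal check, assuming 1 GB pages as set by default_hugepagesz=1GB (the hugepages-1048576kB sysfs directory only exists for that page size):

# Kernel boot arguments should contain the huge page settings above
cat /proc/cmdline
# Per-NUMA-node 1G huge page counters (allocated and free)
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/free_hugepages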
- As you can see, the 240 x 1 GB huge pages are split evenly across the two NUMA nodes (120 GB each), and we have taken 8 GB from each NUMA node (16 GB in total) for DPDK socket memory.
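For reference, a back-of-the-envelope budget per NUMA node, using only the numbers above and assuming DPDK and guest instances are the only huge page consumers:

# Per NUMA node, with 1 GB pages (1 page = 1 GB):
#   120 GB  huge pages allocated at boot (240 pages / 2 NUMA nodes)
#  -  8 GB  DPDK socket memory (NeutronComputeSocketMemory: "8192,8192")
#  ------
#   112 GB  huge pages left per node for guest instances (224 GB in total)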
- This is how things look on the compute node:
Output of cat /sys/devices/system/node/node*/meminfo:
Node 0 MemTotal: 201197900 kB
Node 0 MemFree: 68061740 kB
Node 0 MemUsed: 133136160 kB
Node 0 Active: 952580 kB
Node 0 Inactive: 336688 kB
Node 0 Active(anon): 422760 kB
Node 0 Inactive(anon): 1784 kB
Node 0 Active(file): 529820 kB
Node 0 Inactive(file): 334904 kB
Node 0 Unevictable: 266444 kB
Node 0 Mlocked: 266444 kB
Node 0 Dirty: 152 kB
Node 0 Writeback: 0 kB
Node 0 FilePages: 872480 kB
Node 0 Mapped: 88040 kB
Node 0 AnonPages: 683784 kB
Node 0 Shmem: 2412 kB
Node 0 KernelStack: 11184 kB
Node 0 PageTables: 15932 kB
Node 0 NFS_Unstable: 0 kB
Node 0 Bounce: 0 kB
Node 0 WritebackTmp: 0 kB
Node 0 Slab: 235152 kB
Node 0 SReclaimable: 104892 kB
Node 0 SUnreclaim: 130260 kB
Node 0 AnonHugePages: 2048 kB
Node 0 HugePages_Total: 120
Node 0 HugePages_Free: 16
Node 0 HugePages_Surp: 0
Node 1 MemTotal: 201326592 kB
Node 1 MemFree: 68746324 kB
Node 1 MemUsed: 132580268 kB
Node 1 Active: 730732 kB
Node 1 Inactive: 377576 kB
Node 1 Active(anon): 390968 kB
Node 1 Inactive(anon): 472 kB
Node 1 Active(file): 339764 kB
Node 1 Inactive(file): 377104 kB
Node 1 Unevictable: 26732 kB
Node 1 Mlocked: 26732 kB
Node 1 Dirty: 16 kB
Node 1 Writeback: 0 kB
Node 1 FilePages: 727480 kB
Node 1 Mapped: 112644 kB
Node 1 AnonPages: 408296 kB
Node 1 Shmem: 2660 kB
Node 1 KernelStack: 6784 kB
Node 1 PageTables: 10084 kB
Node 1 NFS_Unstable: 0 kB
Node 1 Bounce: 0 kB
Node 1 WritebackTmp: 0 kB
Node 1 Slab: 180252 kB
Node 1 SReclaimable: 59036 kB
Node 1 SUnreclaim: 121216 kB
Node 1 AnonHugePages: 4096 kB
Node 1 HugePages_Total: 120
Node 1 HugePages_Free: 24
Node 1 HugePages_Surp: 0
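To turn those counters into actual consumption figures, a small loop over the per-node sysfs entries is enough. A minimal sketch, again assuming 1 GB pages so that one page equals 1 GB:

for n in /sys/devices/system/node/node*; do
    total=$(cat $n/hugepages/hugepages-1048576kB/nr_hugepages)
    free=$(cat $n/hugepages/hugepages-1048576kB/free_hugepages)
    # With 1 GB pages, used page counts translate directly to GB in use
    echo "$(basename $n): total=${total} free=${free} used=$((total - free))"
done

With the meminfo output above, this reports 104 pages in use on node 0 and 96 on node 1; after subtracting the 8 GB of DPDK socket memory per node, guests are currently consuming 96 GB and 88 GB respectively.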
- Output of lscpu:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 88
On-line CPU(s) list: 0-87
Thread(s) per core: 2
Core(s) per socket: 22
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 2200.000
CPU max MHz: 2200.0000
CPU min MHz: 1200.0000
BogoMIPS: 4394.98
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 56320K
NUMA node0 CPU(s): 0-21,44-65
NUMA node1 CPU(s): 22-43,66-87
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_ppin intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
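To confirm how the pinned and isolated core ranges above line up with the two NUMA nodes, lscpu can emit a machine-readable CPU-to-node table (one line per logical CPU; the header comment lines are stripped):

lscpu -p=CPU,NODE | grep -v '^#'

Every CPU listed in nova::compute::vcpu_pin_set should belong to the NUMA node whose huge pages and DPDK socket memory its guests are expected to use.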
- Our tenant wants to maximize the memory available to them in our production environment, but we are not sure that is the best idea. How much more memory (if any) can we assign to them?
Environment
- Red Hat OpenStack Platform 10.0 (RHOSP)