Each hypervisor hosts about 45-50 instances.
address@hidden:/etc/libvirt/qemu# free -g
             total       used       free     shared    buffers     cached
Mem:           251        250          1          0          0          1
-/+ buffers/cache:        248          2    <------------ this number
Swap:           82         25         56
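In case it helps with cross-checking, the same figures can be read straight
from /proc/meminfo (values in KiB, so divide by 1048576 to compare with free -g):
grep -E '^(MemTotal|MemFree|Buffers|Cached)' /proc/meminfo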
RSS sum of all the qemu processes:
address@hidden:/etc/libvirt/qemu# ps -eo rss,cmd|grep qemu|awk '{ sum+=$1} END {print sum}'
204191112
RSS sum of the non-qemu processes:
address@hidden:/etc/libvirt/qemu# ps -eo rss,cmd|grep -v qemu|awk '{ sum+=$1} END {print sum}'
2017328
As you can see, the combined RSS (qemu plus everything else) only adds up to about 196G.
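(ps reports RSS in KiB, so a variant of the one-liner above can print GiB
directly; the [q]emu pattern is only there to keep the awk process itself out
of the match:
ps -eo rss,cmd | awk '/[q]emu/ { sum+=$1 } END { printf "%.1f GiB\n", sum/1048576 }'
For the numbers above that works out to roughly 194.7G of qemu RSS, plus ~1.9G
for everything else.)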
slabtop usage:
 Active / Total Objects (% used)    : 473924562 / 480448557 (98.6%)
 Active / Total Slabs (% used)      : 19393475 / 19393475 (100.0%)
 Active / Total Caches (% used)     : 87 / 127 (68.5%)
 Active / Total Size (% used)       : 10482413.81K / 11121675.57K (94.3%)
 Minimum / Average / Maximum Object : 0.01K / 0.02K / 15.69K

      OBJS     ACTIVE  USE  OBJ SIZE      SLABS OBJ/SLAB  CACHE SIZE NAME
 420153856  420153856   7%     0.02K   18418442      256   73673768K kmalloc-16
  55345344   49927985  12%     0.06K     864771       64    3459084K kmalloc-64
    593551     238401  40%     0.55K      22516       28     360256K radix_tree_node
   1121400    1117631  99%     0.19K      26700       42     213600K dentry
    680784     320298  47%     0.10K      17456       39      69824K buffer_head
     10390       9998  96%     5.86K       2078        5      66496K task_struct
   1103385     901181  81%     0.05K      12981       85      51924K shared_policy_node
     48992      48377  98%     1.00K       1531       32      48992K ext4_inode_cache
      4856       4832  99%     8.00K       1214        4      38848K kmalloc-8192
     58336      33664  57%     0.50K       1823       32      29168K kmalloc-512
     13552      11480  84%     2.00K        847       16      27104K kmalloc-2048
    146256      81149  55%     0.18K       3324       44      26592K vm_area_struct
    113424     109581  96%     0.16K       2667       48      21336K kvm_mmu_page_header
     18447      13104  71%     0.81K        473       39      15136K task_xstate
     26124      26032  99%     0.56K        933       28      14928K inode_cache
      3096       3011  97%     4.00K        387        8      12384K kmalloc-4096
    106416     102320  96%     0.11K       2956       36      11824K sysfs_dir_cache
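For comparison with the free output, the kernel's own slab totals are also in
/proc/meminfo (values in KiB):
grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo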
According to virsh dommemstat, the guests' RSS only adds up to about 194GB:
rss:
address@hidden:/etc/libvirt/qemu#
for i in instance-0000*.xml; do inst=$(echo $i|sed s,\.xml,,); virsh dommemstat $inst; done|awk '/rss/ { sum+=$2} END {print sum}'
204193676
allocated:
address@hidden:/etc/libvirt/qemu#
for i in instance-0000*.xml; do inst=$(echo $i|sed s,\.xml,,); virsh dommemstat $inst; done|awk '/actual/ { sum+=$2} END {print sum}'
229111808
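For what it's worth, the same totals can be gathered without globbing the XML
files by asking libvirt for the running domains directly (assuming virsh list
--name returns the same set of instances), converted to GiB in one go:
for d in $(virsh list --name); do virsh dommemstat $d; done | awk '/^actual/ { a+=$2 } /^rss/ { r+=$2 } END { printf "actual: %.1f GiB  rss: %.1f GiB\n", a/1048576, r/1048576 }'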
Basically, the math doesn't add up: the qemu processes are using less than
what has been allocated to them. In the example above, node-2 has 250G of RAM
with only 2G effectively free. qemu has been allocated 218G, of which 194G is
resident (RSS). That means about 24G (218 - 194) of allocated guest memory
hasn't been touched yet, while I only have 2G free. You can guess what would
happen if the instances decided to use that 24G...
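For reference, the gap follows directly from the dommemstat sums above (both in KiB):
echo 229111808 204193676 | awk '{ printf "%.1f GiB\n", ($1-$2)/1048576 }'
i.e. roughly 23.8G of allocated-but-not-yet-resident guest memory, against
only ~2G actually free on the host.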
thx