Re: [Qemu-discuss] Memory calculation guidance


From: Jerry Stuckle
Subject: Re: [Qemu-discuss] Memory calculation guidance
Date: Thu, 24 Nov 2016 09:58:54 -0500
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 Thunderbird/45.5.0

On 11/24/2016 6:55 AM, Henti Smith wrote:
> Good day all. 
> 
> We have a server with 64G of memory. 
> 
> We deployed VMs with a total allocation of 62.464G; of the remainder, the
> host OS uses about 285MB. This seemed more than enough memory for the setup.
> It had been working without any problems; however, in the last 2 weeks we
> have been getting OOM killer events on the server.
> 
> Upon investigation I have found that the RSS used by some of the VMs
> can be up to 107% of the allocation for the VM. For instance:
> 
> borin.internal is allocated 1024MB. The RSS of its process is
> 1104.5703125MB.
>

1024M is the amount allocated to the client.  The hypervisor requires
memory in addition to that.
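
As a rough way to see that extra usage per guest, here is a minimal sketch,
assuming a Linux host with /proc; the PID and the 1024MB allocation below are
placeholder values for your own setup, not anything QEMU reports itself:

def rss_mb(pid: int) -> float:
    """Resident set size of `pid` in MiB, read from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0  # VmRSS is reported in kB
    raise RuntimeError(f"VmRSS not found for pid {pid}")

pid = 12345          # hypothetical PID of the guest's qemu-system-x86_64 process
allocated_mb = 1024  # the -m value given to QEMU for this guest

rss = rss_mb(pid)
print(f"allocated {allocated_mb} MiB, host RSS {rss:.1f} MiB, "
      f"QEMU overhead {rss - allocated_mb:.1f} MiB")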

> dev-windows-01.internal.semmle.com is allocated 4096MB.
> The RSS is 4195.02734375MB.
> 

Ditto.

> In total we have 62464MB allocated for VMs, but the RSS total for
> all qemu-system-x86_64 processes is 63003MB, which obviously
> changes with usage.
>

Yes, it can, depending on how much memory QEMU is using at the time.
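
One way to watch that total over time is to sum VmRSS across every
qemu-system-x86_64 process; a sketch, again assuming a Linux host with /proc:

import os

def total_qemu_rss_mb() -> float:
    """Sum the resident set size of all qemu-system-x86_64 processes, in MiB."""
    total_kb = 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                argv0 = f.read().split(b"\0", 1)[0]
            if not argv0.endswith(b"qemu-system-x86_64"):
                continue
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        total_kb += int(line.split()[1])
                        break
        except FileNotFoundError:
            continue  # the process exited while we were scanning
    return total_kb / 1024.0

print(f"total qemu-system-x86_64 RSS: {total_qemu_rss_mb():.0f} MiB")

Sampled periodically (e.g. from cron), the trend shows how close the host is
getting to the point where the OOM killer steps in.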

> To avoid possible OOM killer events in future, what is the recommended way
> of calculating the real memory allocation per VM, to ensure we don't have
> more OOM kill situations?
> 
> Regards
> Henti 
> 

Don't try to allocate everything to your VMs.  Leave enough room for
QEMU.  And I've found the memory required can vary widely, depending on
many factors including but not limited to the platform being emulated,
the host system, devices being used, and from what I can figure, the
positions of the moons of Jupiter.
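
For planning purposes, one sketch of that "leave room" approach is to pad each
guest's allocation with an assumed overhead margin and keep the total under
host RAM minus an OS reserve. The 10% + 256MiB per-guest margin and the 2048MiB
host reserve below are illustrative assumptions, not QEMU-documented figures;
measure your own guests (e.g. with the RSS checks above) and adjust:

HOST_RAM_MB = 64 * 1024
HOST_RESERVE_MB = 2048  # assumed reserve for the host OS and page cache

def with_overhead(alloc_mb: int) -> float:
    # Assumed overhead model: QEMU itself, device emulation, video RAM, etc.
    return alloc_mb * 1.10 + 256

guests = {  # guest name -> -m allocation in MiB (figures from this thread)
    "borin.internal": 1024,
    "dev-windows-01.internal.semmle.com": 4096,
}

budget = sum(with_overhead(a) for a in guests.values())
available = HOST_RAM_MB - HOST_RESERVE_MB
print(f"estimated worst case {budget:.0f} MiB of {available} MiB available")
if budget > available:
    print("over budget: shrink guest allocations or move a guest elsewhere")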


