
[Gluster-devel] memory leaks


From: Brent A Nelson
Subject: [Gluster-devel] memory leaks
Date: Thu, 8 Mar 2007 12:52:35 -0500 (EST)

In my setup, it appears that the performance translators are all well-behaved on the server side, unlike on the client side. Hopefully, this will provide some useful clues...

Chaining all the performance translators off my protocol/server volume, all the translators seem to load in glusterfsd, and they don't appear to be causing any harm. Data-only transfers don't trigger a huge memory leak in read-ahead, and metadata transfers cause glusterfsd to leak only at the usual rate (it grows slowly whether or not I use performance translators). io-threads does not cause glusterfsd to die.

Do I chain the performance translators for the server the same way as for the client? E.g.:

volume server
  type protocol/server
  subvolumes share0 share1 share2 share3 share4 share5 share6 share7 share8 share9 share10 share11 share12 share13 share14 share15
  ...
end-volume

volume statprefetch
  type performance/stat-prefetch
  option cache-seconds 2
  subvolumes server
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 131072 # in bytes
  subvolumes statprefetch
end-volume

volume readahead
  type performance/read-ahead
  option page-size 65536 # in bytes
  option page-count 16 # memory cache size is page-count x page-size per file
  subvolumes writebehind
end-volume

volume iot
  type performance/io-threads
  option thread-count 8
  subvolumes readahead
end-volume

Is that correct/appropriate?
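
For comparison, here is a minimal sketch of the client-side counterpart I have in mind, chained in the same pattern; the transport type, remote-host address, and remote-subvolume name below are placeholders, not my actual values:

volume client
  type protocol/client
  option transport-type tcp/client # placeholder transport
  option remote-host 10.0.0.1 # placeholder server address
  option remote-subvolume share0 # placeholder remote volume name
end-volume

volume statprefetch
  type performance/stat-prefetch
  option cache-seconds 2
  subvolumes client
end-volume

# writebehind, readahead, and iot then chain on top exactly as in the
# server spec above, each naming the previous volume in its subvolumes line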

Thanks,

Brent



