Re: [Gluster-devel] How namespace works in mainline-2.5?


From: Anand Babu Periasamy
Subject: Re: [Gluster-devel] How namespace works in mainline-2.5?
Date: Fri, 22 Jun 2007 03:12:10 -0700

Dale, your assumptions are correct. Here is more info:

PREVIOUS DESIGN:
GlusterFS/unify queries every sub-volume (brick) in the cluster in
parallel. The brick that actually holds the file responds successfully
with a file descriptor. This model is simple and fast enough.

NEW DESIGN:
GlusterFS/unify uses a cache to look up the namespace (to find where
a file is located). Though this adds an extra call before open, it
minimizes lookup operations. There are other reasons why we
implemented the namespace cache and made it mandatory:
* GlusterFS can avoid creating files that already exist on a failed
  (temporarily down) brick.
* Rarely, some tools use inode numbers instead of file names.
  GlusterFS issues globally unique inodes consistently through the
  cache.

The cache is stored on a regular POSIX volume. It can be local,
remote, or even an AFR'd remote volume. Hosting the cache volume
remotely is better because the cache is then shared across all the
clients. Remember, it is just a cache: if you delete the cache
directory, it will rebuild itself transparently.
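For illustration, a client-side volume spec for this design might look
roughly like the sketch below. The host names, volume names, and
exports here are hypothetical, and the option names should be checked
against your GlusterFS release:

  # client.vol -- sketch only; hosts and names are made up
  volume brick1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-subvolume brick
  end-volume

  volume brick2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2
    option remote-subvolume brick
  end-volume

  volume brick-ns
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-subvolume brick-ns   # the shared namespace cache
  end-volume

  volume unify
    type cluster/unify
    option namespace brick-ns          # mandatory in the new design
    option scheduler rr                # round-robin file placement
    subvolumes brick1 brick2
  end-volume

Pointing brick-ns at a remote server (rather than a local directory)
is what makes the cache shared by every client, as described above.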

--
Anand Babu Periasamy

Dale Dude writes:
I'm giving input even though I'm not positive. The namespace volume
is needed because all the files/dirs are created there, with the
files being 0 bytes. It seems to be used as the lookup "database" for
a set of volumes, so the space requirements are low, and I don't
think it's disposable. I believe they will be using this to keep the
AFR volumes in sync.
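(For example, listing a namespace export directly on a server should
show the directory tree mirrored as zero-byte files; the path below
is made up:)

  $ ls -l /export/namespace/projects
  -rw-r--r-- 1 root root 0 Jun 22 02:58 data.csv
  -rw-r--r-- 1 root root 0 Jun 22 02:58 report.txt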

Regards,
Dale

Brent A Nelson wrote:
On Thu, 21 Jun 2007, Dale Dude wrote:

As for the doc/example... I see the cluster-client.vol was fixed,
but the bricks-ns isn't configured in any of the
cluster-server#.vol files.
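(A server-side export for the namespace brick might look roughly like
the sketch below; the directory and names are assumptions, not the
shipped example:)

  # cluster-server1.vol -- hypothetical sketch
  volume brick-ns
    type storage/posix
    option directory /export/namespace
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option auth.ip.brick-ns.allow *   # allow all clients (sketch only)
    subvolumes brick-ns
  end-volume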


Speaking of which, what are the requirements/recommendations for
the namespace cache volume? How big should it be? Any special
considerations if we're targeting fault-tolerance (e.g., RAID or
AFR underneath), or is it truly disposable?

Thanks,

Brent



--
Anand Babu
GPG Key ID: 0x62E15A31
Blog [http://ab.freeshell.org]
The GNU Operating System [http://www.gnu.org]






