
Re: [Gluster-devel] gluster, gfs, ocfs2, and lustre (lustre.org)


From: gordan
Subject: Re: [Gluster-devel] gluster, gfs, ocfs2, and lustre (lustre.org)
Date: Fri, 2 May 2008 18:08:09 +0100 (BST)
User-agent: Alpine 1.10 (LRH 962 2008-03-14)

On Fri, 2 May 2008, Brandon Lamb wrote:

> On Fri, May 2, 2008 at 9:30 AM, Brandon Lamb <address@hidden> wrote:
>> On Fri, May 2, 2008 at 9:21 AM, Shaofeng Yang <address@hidden> wrote:
>>> Can anybody share some thoughts about those cluster file systems?
>>> We are trying to compare the pros and cons for each solution.
>>>
>>> Thanks,
>>> Shaofeng

>> Tough question, as it depends on what you need. I've messed around
>> with 3 of those over the last 2 years; so far I am still just using
>> 2 NFS servers, one for mail and one for web, for my 14 or so client
>> machines, until I figure out how to use glusterfs.
>>
>> I tried GFS (Red Hat) and I don't remember if I ever got it to
>> actually run; I was trying it out on Fedora distros. It seemed very
>> overcomplicated and not very user-friendly (just my experience).

The key to your problems is Fedora. It _REALLY_ isn't fit for anything more than a hobbyist home setup; it is the alphaware version of RHEL. For example, FC{7,8} ships only with GFS2, which is not yet stable, and nobody claims it to be. RHEL5 comes with both GFS1 and GFS2, GFS2 being there just as a tech preview, not for use in production systems.

RHCS has a somewhat steep learning curve, but it's not one that can't be overcome in half a day with assistance from the mailing list. Once you figure out what you're doing, it's pretty straightforward, and I've deployed quite a few clusters based on it for various clients.

>> OCFS2 seemed very clean and I was able to use it with iSCSI, but man,
>> the load on my server was running at 7 and it was on the slow side.
>> What I was trying to do with it was create a single drive to put my
>> maildir data onto (millions of small mail files). The way it worked
>> was that you actually mounted the file system like it was a local
>> file system on all machines that needed it, and the cluster part
>> would handle the locking and whatnot. Cool concept, but overkill for
>> what I needed.
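
For anyone wondering what that kind of setup involves, the moving parts are roughly as below. This is a sketch from memory, with made-up node names, addresses and device path, so check the OCFS2 docs before copying it:

    # /etc/ocfs2/cluster.conf - identical on every node
    cluster:
        node_count = 2
        name = mailcluster

    node:
        ip_port = 7777
        ip_address = 192.168.0.1
        number = 0
        name = node1
        cluster = mailcluster

    node:
        ip_port = 7777
        ip_address = 192.168.0.2
        number = 1
        name = node2
        cluster = mailcluster

    # bring the cluster stack up on each node, then mount
    /etc/init.d/o2cb online mailcluster
    mkfs.ocfs2 -N 2 -L maildata /dev/sdb1    # once, from one node only
    mount -t ocfs2 /dev/sdb1 /srv/mail       # on every node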

ANY shared storage FS will suffer major performance penalties for this. A file write requires a directory lock. If you start getting contention (e.g. shared IMAP folders), performance will go through the floor, because you're dealing with distributed lock management overhead and network latencies on top of normal disk latencies. Having said that, most cluster FSes that support POSIX locking will suffer from this issue, some more than others.
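
To put rough numbers on that (illustrative, not measured):

    lock round-trip over a quiet LAN : ~0.2 ms
    seek + write of one small file   : ~8 ms on a single SATA spindle

    Every delivery to a contended maildir serialises on the directory
    lock, so throughput tops out around 1000 / 8.2 = ~120 files/sec
    for that one directory, no matter how many nodes you add.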

>> Also I believe both GFS and OCFS2 are these "specialized" file
>> systems. What happens if it breaks or goes down? How do you access
>> your data? Well, if GFS or OCFS2 is broken, you can't.

That's a bit like saying that if ext3 breaks, you can't access your data. The only specialized thing about them is that they are designed for shared storage, i.e. a SAN or a DRBD replicated volume.
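
For reference, the DRBD side of that is a single resource definition. A rough sketch for the DRBD 8 series, with hostnames, disks and IPs made up:

    # /etc/drbd.conf - same on both nodes
    resource r0 {
        protocol C;                  # synchronous replication
        net {
            allow-two-primaries;     # both nodes may mount at once
        }
        on node1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.0.1:7788;
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.0.2:7788;
            meta-disk internal;
        }
    }

You then put a cluster FS such as GFS or OCFS2 on /dev/drbd0; an ordinary FS must never be mounted read-write from two nodes at once.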

>> With glusterfs, you have direct access to your underlying data. So
>> you can have your big RAID mounted on a server using the XFS file
>> system; glusterfs just sits on top of this, so if for some reason
>> you break your glusterfs setup you *could* revert to some other form
>> of serving files (such as NFS). Obviously this totally depends on
>> your situation and how you are using it.

Indeed, but it is fundamentally a distributed rather than a centralized storage approach. This isn't a bad thing, but it is an important distinction. GlusterFS is essentially a cluster-oriented NAS; GFS and OCFS2 are SAN-oriented. That is a major difference.

>> Hence the reason that *so far* I am still using NFS. It comes with
>> every Linux installation, and it's fairly easy to set up by editing
>> what, 4 lines or so. GlusterFS takes the same simple approach, and
>> if you do break it, you still have access to your data.
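
Those 4 lines really are about all there is to it. A minimal example, with the path and subnet made up:

    # /etc/exports on the server
    /srv/mail  192.168.0.0/24(rw,sync,no_subtree_check)

    # re-export on the server
    exportfs -ra

    # on each client (or the equivalent /etc/fstab line)
    mount -t nfs server1:/srv/mail /srv/mail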

>> The learning curve for glusterfs is much better than the others from
>> my experience so far. The biggest thing is just learning all of the
>> different ways you can configure spec files.

IME, RHCS/GFS didn't take me any more head-scratching than GlusterFS did when I first got into it. They are designed for different purposes, and choosing one over the other should be based on project requirements, not on simplicity.
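
Speaking of spec files, a minimal single-server pair looks roughly like this. This is 1.3-series syntax from memory, with the directory, host and volume names made up, so treat it as a sketch rather than a reference:

    # server.vol
    volume posix
      type storage/posix
      option directory /data/export
    end-volume

    volume locks
      type features/posix-locks      # POSIX locking on top of the store
      subvolumes posix
    end-volume

    volume server
      type protocol/server
      option transport-type tcp/server
      option auth.ip.locks.allow *
      subvolumes locks
    end-volume

    # client.vol
    volume remote
      type protocol/client
      option transport-type tcp/client
      option remote-host 192.168.0.1
      option remote-subvolume locks
    end-volume

The client then mounts it with something like: glusterfs -f /etc/glusterfs/client.vol /mnt/mail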

> I just wanted to add a word stressing simplicity.

> When the *#($ hits the fan, I would much rather be fixing something
> that is on the simple side from the start, rather than wondering what
> the ### is going on with a specialized filesystem and all the extra
> pieces it adds, and not having access to my data.

Calling GFS or OCFS2 specialized for this reason is bogus, as I explained earlier. You might as well call ext3 specialized then, along with every other FS.

> That is what my company finally decided on. I was looking into buying
> iSCSI HBAs and seeing about upgrading our network, using DRBD and
> OCFS2 to sync our two RAID servers, and after two weeks we just
> looked at each other and said: you know what, NFS may not be the most
> kickass thing, or lightning fast, or have built-in replication, but
> it WORKS. And if a server failed, well, it would suck, but we could
> copy from a backup onto the other NFS server and be running again.

*shrug*
I got it all working, and without any nightmarish effort. Sure, it's more than the 4 lines of config that NFS requires, but the benefits are worth it. You only have to put in the setup time once, and any O(1) effort now is preferable to dealing with downtime in the future.

> This is the reason I am down to only investing time into glusterfs.
> It's simple but powerful! It does all kinds of cool stuff, and if the
> worst happens, I'm not really all THAT worried, because I know I can
> still get my files and have a SIMPLE backup plan.

If GlusterFS works for you - great. I use both GlusterFS and GFS, and use them for fundamentally different tasks. They simply aren't interchangeable in a reasonably thought-out systems architecture.

Gordan



