
Re: [Gluster-devel] ls: .: no such file or directory


From: Amar S. Tumballi
Subject: Re: [Gluster-devel] ls: .: no such file or directory
Date: Thu, 12 Jul 2007 10:52:22 +0530

Daniel,
That's the reason why we say the namespace is the 'single point' of failure :|
The FUSE filesystem works based on inode numbers, and (when unify is used) we
send the inode number from the namespace brick to the FUSE layer, so if the
namespace is down, we return -1 with a file-not-found error. One solution
right now is making an AFR'd namespace.

In future, we are planning to come up with a distributed namespace so that the
problems of a single point of failure, lack of inodes, etc. can be solved. But
with the 1.3.x releases, it will be like this :|
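
To illustrate the AFR'd-namespace workaround mentioned above, here is a
sketch in the same 1.3-era volume-spec syntax Daniel uses below. The brick
names, ports, and hosts are hypothetical; the idea is simply to replicate the
namespace across two bricks and point unify's `option namespace` at the AFR
volume instead of a single client:

```
# Two namespace bricks (hypothetical hosts/ports/subvolume names)
volume ns-1
        type protocol/client
        option transport-type tcp/client
        option remote-host 127.0.0.1
        option remote-port 6999
        option remote-subvolume brick-ns1
end-volume

volume ns-2
        type protocol/client
        option transport-type tcp/client
        option remote-host 127.0.0.1
        option remote-port 6998
        option remote-subvolume brick-ns2
end-volume

# Replicate the namespace so one brick going down is not fatal
volume afr-ns
        type cluster/afr
        subvolumes ns-1 ns-2
        option replicate *:2
end-volume
```

In the unify volume, `option namespace client-ns` would then become
`option namespace afr-ns`, so the namespace survives the loss of either brick.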

-amar

On 7/12/07, Daniel van Ham Colchete <address@hidden> wrote:

On 7/11/07, DeeDee Park <address@hidden> wrote:
>
> if all the bricks are not up at the time of the gluster client startup
> i get the above error message. if all bricks are up, things are fine.
> if the brick goes down after a client is up, things are fine -- it is
> only at startup.
> i'm still seeing this in the latest patch-299
>

I was able to reproduce the problem here.

I will have the error message if, and only if, the namespace cache brick is
offline. I have the error even if the directory is full of files. If I try
to open() a file while the namespace cache brick is down, I get the
"Transport endpoint is not connected" error.

Also with patch-299.

Client spec file:

volume client-1
        type protocol/client
        option transport-type tcp/client
        option remote-host 127.0.0.1
        option remote-port 6991
        option remote-subvolume brick1
end-volume

volume client-2
        type protocol/client
        option transport-type tcp/client
        option remote-host 127.0.0.1
        option remote-port 6992
        option remote-subvolume brick2
end-volume

volume client-ns
        type protocol/client
        option transport-type tcp/client
        option remote-host 127.0.0.1
        option remote-port 6999
        option remote-subvolume brick-ns
end-volume

volume afr
        type cluster/afr
        subvolumes client-1 client-2
        option replicate *:2
        option self-heal on
        option debug off
end-volume

volume unify
        type cluster/unify
        subvolumes afr
        option namespace client-ns
        option scheduler rr
        option rr.limits.min-free-disk 5
end-volume

volume writebehind
        type performance/write-behind
        option aggregate-size 131072
        subvolumes unify
end-volume

Best regards,
Daniel Colchete
_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel




--
Amar Tumballi
http://amar.80x25.org
[bulde on #gluster/irc.gnu.org]

