
Re: [Gluster-devel] upstream: Symbolic link not getting healed


From: Vijay Bellur
Subject: Re: [Gluster-devel] upstream: Symbolic link not getting healed
Date: Sat, 21 Dec 2013 23:11:57 +0530
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.1.0

On 12/21/2013 12:32 AM, Harshavardhana wrote:

On Fri, Dec 20, 2013 at 4:36 AM, Vijay Bellur <address@hidden> wrote:

    On 12/19/2013 02:28 PM, Harshavardhana wrote:

        GFAPI observes ENOENT with glfs_stat() - so the fix is necessary.


    I agree that the fix is necessary. We will address it for
    release-3.5 and master now. Getting this into release-3.4 at this
    point in time is dicey as we are planning to release 3.4.2 on
    Monday. Given that the libgfapi problem has existed in 3.4.1 and is
    not a new regression in 3.4.2, we can target the complete fix for
    3.4.3. At the moment, I am inclined to revert that fix for getting
    3.4.2 out.

    -Vijay


There is a business case from Bluedata (http://www.bluedata.com/), a
consumer of GFAPI, which needs that fix - they are running the
community version 3.4.1qa1 with the fix applied.

Hadoop "TestDFS_IO" jobs fail without this fix.
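
For anyone wanting to reproduce the glfs_stat() ENOENT outside of a
Hadoop run, here is a minimal sketch. The volume name "testvol", the
host "server1", and the symlink path "/link" are hypothetical
placeholders, not taken from this thread; build with
"gcc repro.c -lgfapi".

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");   /* hypothetical volume name */
    if (!fs)
        return 1;

    /* hypothetical management host; 24007 is the standard glusterd port */
    glfs_set_volfile_server(fs, "tcp", "server1", 24007);
    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init: %s\n", strerror(errno));
        glfs_fini(fs);
        return 1;
    }

    struct stat st;
    /* On affected builds this reportedly fails with ENOENT even though
       the symbolic link exists on the bricks. */
    if (glfs_stat(fs, "/link", &st) != 0)
        fprintf(stderr, "glfs_stat: %s\n", strerror(errno));
    else
        printf("st_mode = %o\n", (unsigned int) st.st_mode);

    glfs_fini(fs);
    return 0;
}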



Sure, there could be a custom 3.4.2 build on which they can continue to operate until the problem is fixed and tested thoroughly in 3.4.3. As I mentioned in my original post, the current fix causes self-healing of symbolic links to fail, which can potentially lead to data loss. That has far more serious consequences than a particular workload failing through libgfapi.

We have already delayed 3.4.2 by a fair while and I don't think we can afford to hold it back further. We can consider releasing 3.4.3 soon after the errno issue is fixed properly and we have adequate test coverage for it. For now, the only option is to revert this patch.

Regards,
Vijay



