From: Eli Zaretskii
Subject: Re: Switching from CVS to GIT
Date: Tue, 16 Oct 2007 02:25:21 -0400

> Date: Tue, 16 Oct 2007 07:14:56 +0200
> From: Andreas Ericsson <address@hidden>
> CC: Daniel Barkalow <address@hidden>,  address@hidden, 
>  address@hidden,  address@hidden,  address@hidden, 
>  address@hidden
> 
> > Sorry I'm asking potentially stupid questions out of ignorance: why
> > would you want readdir to return `README' when you have `readme'?
> > 
> 
> Because it might have been checked in as README, and since git is
> case-sensitive, that is what it'll think should be there when it
> reads the directories. If it's not, users get to see
> 
>       removed: README
>       untracked: readme

This is a non-issue, then: Windows filesystems are case-preserving, so
if `README' became `readme', someone deliberately renamed it, in which
case it's okay for git to react as above.
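
For illustration, here is a minimal sketch (mine, not git's actual
code; the helper name is made up) of how a porting layer that wanted
to flag this case explicitly could treat such a delete/untracked pair
as a case-only rename on a case-preserving filesystem:

    #include <string.h>
    #include <strings.h>  /* strcasecmp; MSVC spells it _stricmp */

    /* Hypothetical helper: on a case-preserving filesystem, a tracked
       name that differs from the on-disk name only in letter case must
       be a deliberate rename, not an unrelated delete + add. */
    static int is_case_only_rename(const char *tracked, const char *on_disk)
    {
        return strcmp(tracked, on_disk) != 0
            && strcasecmp(tracked, on_disk) == 0;
    }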

> could be an intentional rename, but we don't know for sure.

It _must_ have been an intentional rename.  Years ago there were DOS
programs that could cause such a rename as a side effect of modifying
a file, but that time is long gone.  There's no longer any need to
cater to such programs, as even DOS programs can use the
case-preserving APIs on Windows.

> To be honest though, there are so many places which do the readdir+stat
> that I don't think it'd be worth factoring it out

Something for Windows users to decide, I guess.  It's not hard to
refactor this; it just needs a motivated volunteer.
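
For the record, here's a bare sketch (mine, not from the git sources)
of what the Windows side of such a refactoring could look like.
FindFirstFile/FindNextFile already return size, timestamps, and
attributes with each directory entry, so a combined traversal needs no
separate `stat' per file:

    #include <windows.h>
    #include <stdio.h>

    /* Enumerate one directory.  The metadata that readdir+stat fetches
       in two system calls per file arrives here with the entry itself. */
    static void scan_dir(const char *pattern)   /* e.g. "src\\*" */
    {
        WIN32_FIND_DATAA fd;
        HANDLE h = FindFirstFileA(pattern, &fd);

        if (h == INVALID_HANDLE_VALUE)
            return;
        do {
            /* fd.nFileSizeLow/High, fd.ftLastWriteTime, and
               fd.dwFileAttributes are already filled in here. */
            printf("%s\n", fd.cFileName);
        } while (FindNextFileA(h, &fd));
        FindClose(h);
    }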

> especially since it
> *works* on Windows. It's just slow, and only slow compared to various
> Unices.

I think only Linux filesystems are as fast as you say.  But I may be
wrong (has anyone compared with *BSD, say?).

> I *think* (correct me if I'm wrong) that git is still faster
> than a whole bunch of other SCMs on Windows, but to one who's used to
> its performance on Linux, waiting several seconds to scan 10K files
> just feels wrong.

Unless that 10K is a typo and you really meant 100K, I don't think 10K
files should take several seconds to scan on Windows.  I just tried
"find -print" on a directory tree with 32K files in 4K subdirectories,
and it took 8 sec elapsed with a hot cache.  Scaling that down, 10K
files should take about 2.5 seconds, even without optimizing the file
traversal code.  Doing the same with native Windows system calls
("dir /s") brings that down to 4 seconds for the same 32K files.
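
For anyone who wants to reproduce such numbers, a walk like the one
below (my own sketch, not git code) exercises the same readdir+stat
pattern; time it over a tree of known size and divide by the file
count:

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    /* Recursively count files using the classic readdir+stat pattern. */
    static long walk(const char *path)
    {
        DIR *d = opendir(path);
        struct dirent *e;
        long n = 0;

        if (!d)
            return 0;
        while ((e = readdir(d)) != NULL) {
            char name[4096];
            struct stat st;

            if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, ".."))
                continue;
            snprintf(name, sizeof name, "%s/%s", path, e->d_name);
            if (stat(name, &st) == 0 && S_ISDIR(st.st_mode))
                n += walk(name);
            else
                n++;
        }
        closedir(d);
        return n;
    }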

On the other hand, what packages have 100K files?  If there's only one
-- the Linux kernel -- then I think this kind of performance is for
all practical purposes unimportant on Windows, because while it is
reasonable to assume that someone would like to use git on Windows,
assuming that someone will develop the Linux kernel on Windows is --
how should I put it -- _really_ far-fetched ;-)

As for the speed of file ops ``just feeling wrong'': that's not
limited to git in any way.  You will see the same with "tar -x", with
"find", and even with "cp -r" when you compare these operations on
Linux filesystems, especially on a fast 64-bit machine, with their
Windows counterparts.  A Windows user who occasionally works on
GNU/Linux already knows that, so seeing the same in git will not come
as a surprise.  Again, I wonder how this compares with other free
OSes, like FreeBSD (unless they use the same filesystem), and with
proprietary Unices, like AIX and Solaris.



