bug-fileutils

Re: ls, mv, rm and directories with over ~13000 files.


From: Bob Proulx
Subject: Re: ls, mv, rm and directories with over ~13000 files.
Date: Thu, 29 Nov 2001 21:09:17 -0700

> I'd like to know where the limit in the ability of ls, mv and rm to handle
> more than approximately 13000 files is from. Is it from the shell (i.e.
> bash)? Is it an arbitrary number that can be changed? 

See the (recently posted) FAQ for an answer to your question.

  
http://www.gnu.org/software/fileutils/doc/faq/core-utils-faq.html#Argument%20list%20too%20long
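
The short version, for the archives: the "Argument list too long"
error comes from the kernel's fixed limit on the size of the argument
list passed to exec, not from ls, mv or rm themselves.  The usual
cure is to hand the names to the tool in smaller batches, for example
with find and xargs.  A rough sketch (the directory and the pattern
here are only placeholders):

    # Delete a huge pile of files without expanding them all onto one
    # command line; find streams the names and xargs runs rm in
    # batches small enough to fit under the kernel's limit.
    find /data/frames -maxdepth 1 -type f -name '*.ppm' -print0 \
      | xargs -0 rm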

> We do wearable computing research and thus generate a lot of images (e.g.
> 30fps * 60secs * 60min = 108000 files for an hour). Any help would be
> appreciated.

It depends on the filesystem, of course, but traditional filesystems
slow down tremendously when the number of files in any single
directory level grows beyond a couple of thousand.  Directories have
traditionally been implemented as linear lists, so every lookup
requires a linear search, which leads to long directory access times
when a directory contains a large number of files.

Some newer filesystems implement directories as B-trees instead of
linear lists and avoid this.  But if I were you I would avoid the
problem by design in user space and build a hierarchy of names such
that no single directory can grow too large.  A simple example of
this is /usr/share/terminfo (/usr/lib/terminfo on older systems),
which does a radix encoding using the first letter of the name.  That
alone is not sufficient for your case, but it should give an idea of
one way to proceed.  Forewarned is forearmed.
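
To make that concrete, here is a rough bash sketch; it assumes image
names of the form frame-HHMMSS-NN.ppm (a naming scheme invented for
the example) and buckets them into one subdirectory per minute of
footage, about 1800 files apiece at 30fps:

    cd /data/frames || exit 1
    for f in frame-*.ppm; do
        bucket=${f:6:4}         # the HHMM part of the name
        mkdir -p "$bucket"      # one directory per minute of footage
        mv "$f" "$bucket/"
    done

The exact split does not matter much; the point is just that no
single directory level ever ends up holding more than a couple of
thousand entries.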

Bob


