emacs-devel

Re: using finalizers


From: Tomas Hlavaty
Subject: Re: using finalizers
Date: Sat, 01 Jan 2022 23:36:10 +0100

On Fri 31 Dec 2021 at 15:23, Rudolf Schlatte <rudi@constantly.at> wrote:
> I'm sure you knew this already, but in general, using gc for non-memory
> resource management (e.g., "please close this file when this Lisp object
> is GCed") is not a good idea--depending on the GC behavior, you'll run
> out of file handles or whatnot.  The RAII pattern in C++
> deterministically calls a destructor when a stack-allocated object goes
> out of scope; in Lisp, the various `with-foo' macros serve the same
> purpose.

This is another topic.

Somehow it seems very controversial.  Some even vehemently oppose the
idea of using gc for non-memory resources, but I have not yet found
anybody who would help me understand the real, deep pros and cons and
let me form my own opinion without forcing theirs on me.

Following stack discipline is great, most of the time.  But it comes
with severe restrictions and consequences; there is a reason gc was
invented.  For use cases not covered by the with-foo macros, one can
implement one's own ad-hoc resource management, or reuse the gc.  So
there does seem to be a need for finalizers.
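As a concrete sketch of the finalizer route: Emacs has `make-finalizer'
(since Emacs 25), which runs a function some time after its return
value becomes unreachable.  The function name below is hypothetical,
for illustration only:

```
;; A minimal sketch, assuming the cleanup is killing a buffer.  The
;; finalizer runs at some point after gc, not deterministically.
(defun my-open-resource (file)
  "Open FILE; arrange for its buffer to be killed after gc."
  (let* ((buf (find-file-noselect file))
         ;; FUNCTION runs once the finalizer object is unreachable.
         (fin (make-finalizer (lambda () (kill-buffer buf)))))
    ;; Keep the finalizer reachable exactly as long as the handle.
    (list :buffer buf :finalizer fin)))
```

The caller never has to say "close"; the cleanup is tied to the
object's lifetime rather than to a dynamic extent.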

The negative consequences of following the stack discipline can be seen
in many places.

Let's have a look at the function directory-files.  It is arguably the
worst possible API for accessing the filesystem.  (Common Lisp has the
same flaw.)  It opens a directory, does its work and closes the
directory.  Nice, it is done "properly": open, work, close.  The
problem is that the amount of work done between open and close is not
under the programmer's control and is not bounded in space or time.
Additionally, the work must either complete entirely or be aborted
entirely.  This means that unless the amount of work is trivial, the
whole thing, and everything built on top of it, is useless.

That sounds like an extreme claim, so let's test it with something
real: M-x find-lisp-find-dired /nix/store [.]service$.  While Emacs
blocked, no results were shown; C-g after several seconds.  That was
not very useful.  Now let's try M-x find-dired /nix/store -name
'*.service'.  That works nicely.  Why?  Because the directory
traversal runs in a second process (whose control loop pushes the
results out step by step), and the first Emacs process displays the
results step by step as they arrive, with directory-entry granularity.

How could find-lisp-find-dired be fixed?  Could it be fixed while
still using directory-files?  I do not think so.  I guess it can only
be fixed by pushing directory entries out from another process (as
find-dired does) or by using a "stream" of directory entries pulled on
demand.  But when should close be called if such a stream is lazy?  I
am sure a reasonable ad-hoc solution could be implemented.  But what
if the directory is traversed recursively?  When should each close be
called?  A case for yet another ad-hoc solution?  It looks like
finalizers would be great here.
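The pull-on-demand side of this can be sketched with generator.el
(bundled since Emacs 25).  This is only a sketch of the laziness, and
`my-dir-entries' is a hypothetical name; it still calls
directory-files once per directory, but the caller decides how far the
traversal proceeds:

```
(require 'generator)

;; Lazily yield the files under DIR, recursing into subdirectories.
;; Each call to `iter-next' does a bounded amount of work.
(iter-defun my-dir-entries (dir)
  (dolist (f (directory-files dir t "^[^.]"))
    (if (file-directory-p f)
        (iter-yield-from (my-dir-entries f))
      (iter-yield f))))

;; Pull entries one at a time, without traversing the whole tree:
;; (let ((it (my-dir-entries "/nix/store")))
;;   (iter-next it))
```

If such a stream held a real OS directory handle, abandoning the
iterator mid-traversal is exactly the case where a finalizer, rather
than an explicit close, would do the cleanup.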

Yes, there would be issues, like hitting the open-file limit.  But are
those issues showstoppers?  Could there be a useful solution?  Maybe
run gc and try to open the file again?  Or something more
sophisticated?  Has anybody explored this area?
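The "run gc and retry" idea could look something like this.  The
function name is hypothetical, and treating any file-error as
exhaustion of descriptors is a deliberate simplification:

```
;; A minimal sketch: on failure to open, collect garbage once (which
;; runs the finalizers of unreachable handles, possibly releasing
;; file descriptors) and retry.
(defun my-open-with-gc-retry (file)
  "Try to open FILE; on file-error, run gc and retry once."
  (condition-case nil
      (find-file-noselect file)
    (file-error
     (garbage-collect)
     (find-file-noselect file))))
```

A more sophisticated version would inspect the error to confirm it
really is descriptor exhaustion before paying for a full gc.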


