
Unmemoization of macros


From: Mikael Djurfeldt
Subject: Unmemoization of macros
Date: 20 Dec 2000 05:40:03 +0100
User-agent: Gnus/5.0807 (Gnus v5.8.7) Emacs/20.7

Marius Vollmer <address@hidden> writes:

> I can see two lines of argumentation.

Yet, I'd like to suggest a third:

While I understand the intuitions behind your "static" and "dynamic"
views, I think it might be misleading to present the situation that
way.  For example, from your presentation of the properties of the
"dynamic" view, one can get the impression that nothing happening
elsewhere in the system when a macro is redefined is a natural
property of being interactive or dynamic.  It
seems like there are some natural "actions" of the system and while
the "static" view is hiding these, thereby attaining its form of
declarative consistency, the "dynamic" view is instead making these
actions explicit, thereby attaining its form of consistency.

I don't think there are any natural actions.  I think we have multiple
choices.  I think the important goal is that the user can form a
mental model of the system which is workable, not too complex, and
that the system is consistent with this model, so that he can predict
how the system will behave.  I think this last point, the ability to
predict, is the most important.

If you read my description of the computer exercise with the OO
system, you know that I don't think it is natural in an interactive
development environment to have to reload everything after having
redefined a macro.

Similarly, it is my experience that the instance update protocol in
GOOPS enhances the interactivity tremendously: I have my entire
simulator up and running and notice that I need to add a slot in a
class.  I can simply add it, send the new definitions over, and
continue to run.  I think Craig agrees with me that this protocol
amply repays the complexity it adds.
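
To make this concrete, here is roughly what such a session looks
like (a sketch only; the <neuron> class and its slots are invented
for the illustration):

  (use-modules (oop goops))

  ;; Initial class, with the simulator already up and running:
  (define-class <neuron> ()
    (potential #:init-value 0.0 #:accessor potential))

  (define n (make <neuron>))

  ;; A slot turns out to be missing.  Just redefine the class and
  ;; continue; GOOPS updates existing instances to the new layout:
  (define-class <neuron> ()
    (potential #:init-value 0.0 #:accessor potential)
    (threshold #:init-value 1.0 #:accessor threshold))

  (threshold n) ; => 1.0, even though n predates the redefinition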

Notice also that it is possible to arrange these kinds of models in
hierarchies: In Scheme, we can use the substitution model when
thinking about many problems.  For other problems we may need the less
superficial environment model.  In GOOPS, we can use the superficial
model of the class simply being the "declarative" description of each
instance in most cases, while we have the model of the update MOP when
we have a need for more detail.

The superficial model I propose is that of a simple interpreter
with macros but without compilation or memoization (like a Guile or
SCM with memoization disabled): a macro use is expanded every time
it is evaluated.
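
For instance (a toy sketch using `defmacro'; the side-effect in the
transformer is there only to make the model observable):

  (defmacro swap! (a b)
    (display "expanding swap!\n")       ; runs at each expansion
    `(let ((tmp ,a))
       (set! ,a ,b)
       (set! ,b tmp)))

  (define (f x y)
    (swap! x y)
    (list x y))

  ;; Under the simple model, every call to f prints "expanding swap!",
  ;; since the use site is re-expanded on each evaluation.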

My proposed detailed model is that we want to store extra information
along with the source in order to speed up evaluation (memoization in
the current interpreter, byte-codes in Keisuke's VM, machine code when
we have a compiler).  This information is based on various sources and
when such a source changes, the extra information needs to be
invalidated (we can postpone recomputing it if we want).  More
specifically, this is performed by the unmemoization protocol or some
generalization thereof.
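
In pseudo-Scheme, the invalidation part of such a protocol could look
something like this (a sketch only; `expand-macro', the epoch counter
and the cache layout are all invented here, while the real
interpreter instead unmemoizes the memoized code in place):

  (define expansion-cache (make-hash-table))  ; use site -> (epoch . expansion)
  (define macro-epoch 0)                      ; bumped at each macro redefinition

  (define (cached-expansion form)
    (let ((entry (hash-ref expansion-cache form)))
      (if (and entry (= (car entry) macro-epoch))
          (cdr entry)                         ; extra information still valid
          (let ((expansion (expand-macro form)))
            (hash-set! expansion-cache form (cons macro-epoch expansion))
            expansion))))

  ;; "Unmemoization": we only invalidate here; recomputing the
  ;; expansion is postponed until the form is next evaluated.
  (define (macro-redefined!)
    (set! macro-epoch (+ macro-epoch 1)))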

Now, of course, such models break down if we move outside their region
of validity.  Defining this region can be a little subjective, a
matter of taste.  For example, some people might find that the
superficial model of the simple interpreter breaks down if it takes
much longer to evaluate code immediately after a macro has been
redefined.  Some might accept it because of the simplicity and
convenience of development which it brings.

Perhaps a more serious violation of this particular model is that we
can't count upon the macro being expanded every time the code is
evaluated.  (Using SCM terminology, the macro behaves somewhere along
the continuum between a macro and an mmacro.)

Now, as I've said, this is just a proposal; I'm not sure how good the
idea is, but it seems to me that this last violation isn't very
serious.  In modern macro systems, macros are concerned with
expressing syntactic transformations and have fairly limited capacity
to depart from this view by producing side-effects.  And it makes
little difference whether this transformation is invoked once or
several times for a given expression.  If you use syntax-rules,
there's no problem.  If you use syntax-case, you can store away
information in a local environment which the transformer lambda has
closed over.  In addition, you can do pathological things like using
I/O.
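
The contrast is easy to demonstrate (a sketch; syntax-case as in the
psyntax expander):

  ;; A pure rewrite: re-expanding it any number of times is invisible.
  (define-syntax my-or
    (syntax-rules ()
      ((_) #f)
      ((_ e) e)
      ((_ e1 e2 ...) (let ((t e1)) (if t t (my-or e2 ...))))))

  ;; A transformer closed over local state: this one can observe how
  ;; many times translation takes place.
  (define-syntax counted
    (let ((n 0))
      (lambda (x)
        (set! n (+ n 1))                ; side-effect at translation time
        (syntax-case x ()
          ((_ e) (syntax e))))))

The first is untouched by re-expansion; the second is exactly the
kind of pathological case whose behaviour the rules below leave
unspecified.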

Macros are expanded at "translation-time".  We already have two
rules: there is no guarantee 1. of when translation time is, or
2. of what the translation-time environment is (apart from it
containing the R5RS + syntax-case bindings).  My suggestion is to add
a third rule, guaranteeing nothing about 3. how many times
translation can take place.

I really don't see this third rule as a big loss.  As long as one
regards the macro transformer as a syntactic transformation, there's
no problem.

This motivates why it is OK to modify the simple interpreter model by
adding that the interpreter *may* be smart and not do the translation
every time, but promises to do so if the macro has changed.

Now I've described the constraints on the user level.  I'm sure
everybody can see that it is *possible* to implement the protocol
supporting this simple model.  Three types of questions remain:

1. Are the constraints on the user level acceptable?

2. What detailed behaviours of the underlying protocol are possible
   and/or desirable?

3. Is the added complexity acceptable or will it easily lead to the
   simple model breaking down?


