bug-bison

Re: C++.bison


From: Hans Aberg
Subject: Re: C++.bison
Date: Sat, 20 Apr 2002 10:46:10 +0200

At 17:28 +0200 2002/04/19, Akim Demaille wrote:
>| This still does not contain bison.c++ (after run through m4); is it CVS
>| Bison that runs the skeleton file through m4 in order to produce multi-file
>| output?
>
>This sentence doesn't really make sense.  bison.c++ is fed to m4
>together with another m4 file which reflects your parser.  So these
>files are what is created after the m4 expansion, _and_ segmentation
>by bison.

As I do not have the setup, I merely made a guess: from the stuff you sent
me, it looked as though a simple -S bison.c++ could produce multiple-file
output. How do you achieve that? Why can't I simply compile the standard
Bison and then feed a pre-processed bison.c++ skeleton file to that?

>| - The standard choice for std::stack is std::deque, perhaps because it for
>| some reason is more efficient. So perhaps you should use that default as
>| well.
>
>I don't understand your point here.  You suggest that we drop vectors?

One should be able to use any container; the question is about the default.

>I think vectors are certainly more efficient, but you may benchmark it :)
>That would be nice input...

The C++ standard ANSI-ISO+IEC+14882-1998 has a class std::stack (23.2.3.3,
"Template class stack", [lib.stack]), and as default container, it uses
std::deque.

There ought to be a reason for that. -- I thought it might be more
efficient, for some reason I do not know.

Also, note that std::deque does not have a reserve function as std::vector
has. In the C++ newsgroups, I was told that std::deque did not really need
it, but I do not recall the reason.

I merely point this out, giving you a chance to look it up.
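For what it is worth, the container choice is just a template argument of
std::stack, so a skeleton could expose it. A minimal sketch (the element
type int is only for illustration; a real parser would use its own state
and value types):

```cpp
#include <cassert>
#include <deque>
#include <stack>
#include <vector>

// std::stack<T> defaults to std::deque<T> ([lib.stack] in the C++
// standard); any sequence offering back(), push_back(), pop_back()
// can be substituted via the second template argument.
int top_after_pushes() {
    std::stack<int, std::vector<int>> vec_stack;  // vector-backed variant
    std::stack<int> deq_stack;                    // default: deque-backed
    for (int i = 0; i < 3; ++i) {
        vec_stack.push(i);
        deq_stack.push(i);
    }
    assert(vec_stack.top() == deq_stack.top());   // identical observable behavior
    return vec_stack.top();
}
```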

>Nevertheless, the SGI's STL documentation seems clear: we don't need deque:
>
>        A deque [1] is very much like a vector: like vector, it is a
>        sequence that supports random access to elements, constant
>        time insertion and removal of elements at the end of the
>        sequence, and linear time insertion and removal of elements in
>        the middle.
>
>        The main way in which deque differs from vector is that deque
>        also supports constant time insertion and removal of elements
>        at the beginning of the sequence [2]. Additionally, deque does
>        not have any member functions analogous to vector's capacity()
>        and reserve(), and does not provide any of the guarantees on
>        iterator validity that are associated with those member
>        functions. [3]

This is essentially what the C++ standard says as well (but please rely on
the standard first, especially when it comes to documentation about the
original STL, which was later adapted into the C++ standard).

As I said above, I do not know why the C++ standard uses std::deque as
std::stack default, I only noted it does.

>| - It is more efficient to use only one stack (I did that in my C++ skeleton
>| file).
>
>This is really a surprise to me.  I suppose it depends on how heavily
>you use $n etc. since you pay an additional indirection for each
>member.  It is definitely cuter/simpler with a single stack, but I
>expected it to be less efficient.

The truth will be revealed by profiling. :-)

The current C stack is like a single stack spread over three blocks within
the same allocation, which makes re-allocation efficient.

But if one copies that directly over to C++, one gets three stacks, each of
which will call its own re-allocation function. In addition, push and pop
will be called two or three times for every stack operation.
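To make the one-stack idea concrete, here is a minimal sketch (all names
are assumed, not taken from any actual skeleton) of a single stack whose
frames bundle state, semantic value, and location, so one container handles
all re-allocation and each shift is a single push:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Single-stack sketch: one frame per shifted symbol.
struct Location { int line = 0, column = 0; };

template <typename Semantic>
struct Frame {
    int      state;   // parser state number
    Semantic value;   // semantic value ($n)
    Location loc;     // location value (@n)
};

template <typename Semantic>
class ParserStack {
public:
    void push(int state, const Semantic& v, const Location& l) {
        frames_.push_back(Frame<Semantic>{state, v, l});
    }
    // Pop a whole rule's frames at once, instead of per-stack pops.
    void pop(std::size_t n) { frames_.resize(frames_.size() - n); }
    const Frame<Semantic>& operator[](std::size_t i) const { return frames_[i]; }
    std::size_t size() const { return frames_.size(); }
private:
    std::vector<Frame<Semantic>> frames_;  // the single backing container
};
```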

>Anyway, it has a serious impact of the actions (we have to recode the
>translation of $n etc.), which we have not language-independentized :)

This is how I arrived at those macros I described before:
// Given a reference x to a stack value
//   YYVAL(x) = a reference to the semantic value.
//   YYLOC(x) = a reference to the location value.
// Since the stacks are different, these expand to the argument:
#ifndef YYVAL
# define YYVAL(x) x
#endif
#ifndef YYLOC
# define YYLOC(x) x
#endif
// The macro YYVSP should produce a reference to $n,
// given n, the rule length, and:
//   x = stack pointer, to $0 (resp. @0) in this implementation.
#ifndef YYVSP
# define YYVSP(x, n, rule_length) (*((x) + (n)))
#endif

My Bison version (perhaps "HABison" :-) ) translates $n into YYVSP(yyvsp,
n, yylen).
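For illustration, here is how that YYVSP expansion behaves on a toy
semantic-value array (the array and its values are made up; yyvsp points
at $0 as described above):

```cpp
#include <cassert>

// The macros from the message above, with the arguments parenthesized
// for macro hygiene; yyvsp points at $0.
#ifndef YYVAL
# define YYVAL(x) x
#endif
#ifndef YYVSP
# define YYVSP(x, n, rule_length) (*((x) + (n)))
#endif

// For a rule of length 3, $2 selects the second value after $0.
int second_semantic_value() {
    int stack[] = {10, 20, 30, 40};  // pretend semantic values on the stack
    int* yyvsp = stack;              // $0 (resp. @0) sits here
    int yylen = 3;                   // rule length; unused by this expansion
    (void)yylen;
    return YYVAL(YYVSP(yyvsp, 2, yylen));  // $2 -> 30
}
```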

In addition, in order to handle a C++ polymorphic class hierarchy (instead
of %union), I had to introduce an additional macro:

// If cast_name has been indicated in the input grammar,
// YYCAST($$, cast_name) resp. YYCAST($n, cast_name) will be written
// in the rule actions.
//   x = reference to $$ or $n value
//   cast_name = the type used in the Bison grammar (in the .y file).
#ifndef YYCAST
# define YYCAST(x, cast_name) (yycast<cast_name>(x))
#endif

My Bison writes $$ and $n, when a cast_name has been indicated in the .y
sources, as YYCAST($$, cast_name) resp. YYCAST($n, cast_name), where $$
and $n are given the favorite translation as before.
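One possible implementation of such a yycast, sketched with dynamic_cast
over an assumed polymorphic value hierarchy (Node and IntNode are
hypothetical names, not from any actual grammar or skeleton):

```cpp
#include <cassert>

// Hypothetical polymorphic semantic-value hierarchy.
struct Node { virtual ~Node() = default; };
struct IntNode : Node {
    int value;
    explicit IntNode(int v) : value(v) {}
};

// One possible yycast: a checked downcast from the common base class.
template <typename T>
T& yycast(Node& x) { return dynamic_cast<T&>(x); }

#ifndef YYCAST
# define YYCAST(x, cast_name) (yycast<cast_name>(x))
#endif

// What an action using YYCAST($1, IntNode) would boil down to:
int read_int(Node& n) {
    return YYCAST(n, IntNode).value;
}
```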

Now, if you only translate bison.simple fairly verbatim to C++, as you have
now done, you end up with three stacks, and that way you can avoid the
YYVSP etc. macros.

But I doubt you can figure out a way to handle C++ polymorphism without an
extra macro like YYCAST. -- There are simply too many different ways to
implement C++ polymorphism, and it is hard to find, straight off, a model
that suits everyone.

So I think one ends up introducing such macros anyway.

One drawback is then that the old bison.simple files will no longer work
without those macros defined somewhere.

Another solution might be to later integrate M4 with Bison. -- Then one
lands on my idea of a "formatter" language for Bison skeleton files.

>| - You should use the C++ IO standard streams, not the C compatibility ones
>| in <cstdio>. Even though the streams are the same, I think that when using
>| both, their buffers must be synchronized (or so I recall); which can cause
>| a performance penalty. (I did this change in my skeleton file.)
>
>Correct.  But printf is sooooo much more pleasant...

Not really: I made this change in my C++ skeleton files.

For an official Bison C++ version, in view of the synchronization problem,
I think you may need to offer a choice: the default would then be the true
C++ IOStreams, with the C compatibility streams as an option.

Also note that C++ has three standard output streams, std::cout, std::cerr,
and std::clog, which can be redirected to different destinations. std::cerr
differs from std::clog in that it has less buffering.

Thus user output should go to std::cout, errors to std::cerr, and debugging
to std::clog, I believe.
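A minimal sketch of that split (the message formats are assumed, purely
for illustration):

```cpp
#include <iostream>
#include <string>

// User output to std::cout, errors to std::cerr (unit-buffered, shown
// promptly), debug traces to std::clog (fully buffered).
std::string format_error(const std::string& msg) {
    return "error: " + msg;
}

void report(const std::string& what)  { std::cout << what << '\n'; }
void complain(const std::string& msg) { std::cerr << format_error(msg) << '\n'; }
void trace(const std::string& msg)    { std::clog << "debug: " << msg << '\n'; }
```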

>| - In the case of the zero length rule default action, I think that you can
>| change to $$ = YYSTYPE(): Under C++, unlike (old) C, basic types have
>| default constructors. -- I think this was added in order to ensure various
>| template functions working. So one can just as well assume that the types
>| used have such a default constructor.
>
>We are aware of this big problem.  It is to be quicked, indeed.  But I
>fear backward compatibility issues here :(  I rely on $$ = $1 being
>performed.  I'm not a problem: I _will_ adjust my code.  But I don't
>know if out there, some people don't rely on this.

Under C++, it makes no difference, as there is no official C++ version out
there yet. So I think these can differ: keep C as it is (because it works
under C), and change it for C++.
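This relies on value-initialization: in C++, T() yields a well-defined
value even for built-in types. A small sketch (YYSTYPE here is just a
template parameter name, standing in for the grammar's semantic-value
type):

```cpp
#include <cassert>

// T() value-initializes: zero for arithmetic types, null for pointers,
// default-constructed for class types. So "$$ = YYSTYPE();" is
// well-defined whatever YYSTYPE turns out to be.
template <typename YYSTYPE>
YYSTYPE default_semantic_value() {
    return YYSTYPE();
}
```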

  Hans Aberg




