RE: [avr-libc-dev] Re: bug #22163 (atomic not working as expected in C++)

From: Stu Bell
Subject: RE: [avr-libc-dev] Re: bug #22163 (atomic not working as expected in C++)
Date: Wed, 9 Jun 2010 13:00:03 -0600

> > There's no good reason why the user would want the compiler to 
> > re-order the assembly around cli() and sei(), as that's just asking 
> > for trouble.
> There's a common miscomprehension: a memory barrier ensures 
> that all write operations are committed to memory, and memory 
> locations will be read again (so it's actually quite a big 
> pessimization, as if all variables were declared "volatile"), 
> but there is currently *no* method to prevent the compiler 
> from reordering code.  Something like that is simply missing 
> in the C language.

*That* is an interesting comment.  I don't know about anyone else, but
I've got to reply.

Before the rant below, let me make sure I understand:  I interpret this
comment to mean that there is no way for the AVR GCC compiler writers to
tell the optimizer, "Thou Shalt Not Reorder Code Around This Boundary".
Even more, there is no mechanism (not even a #pragma?) by which C source
can tell the compiler this.  If this is wrong, just say so and please
ignore the rest of this rant.

First, I will be a little pedantic.  I would say that there is no method
to prevent the *optimizer* from reordering code.  If I understand it
correctly, the compiler parses the code and uses templates to generate a
baseline code base.  That "baseline" may be in internal pseudo-code, but
it seems to show up quite nicely with -O0.  Though a template may move
the test for a "while" from the beginning of the loop to the end, in
general if I list the assembly from a -O0 run and compare it to the
C code that went in, there seems to be a one-to-one correspondence, with
no reordering.

As far as C goes, I mostly agree; the language provides no mechanism,
per se, to prevent an optimizer (any optimizer) from moving code around.
And the definition of an optimizer is to produce code that is
smaller/faster, according to some unspecified standard, so long as the
*logical* function is the same.  Ergo, I would not expect the language
to specify how an optimizer should work.  Further, I expect that the
optimizer *must* reorder code, to some extent, to accomplish its job.

However, even the Big Iron guys talk to hardware and will have the same
problem as we Tiny Iron guys have when it comes to doing things in
exactly the right sequence.  Most of the time, we run into this problem
when we are trying to make sure that something is computed *after* we
grab it, without interference from the hardware.  That interference can
come from either the hardware itself (timers, etc.) or from an
interrupt service routine.
So, blaming the language by saying, "well, it just happens" is specious.
GCC allows Linux to run, somehow.  And I cannot imagine that they run
unoptimized code everywhere.  (Perhaps there are (tiny) parts of the
system that they compile unoptimized?  I dunno, but again, I doubt it.)

At any rate, I have to suspect that GCC's optimization machinery
supports this concept, somehow.  Our problem is: how do we access it?

Best regards, 

Stu Bell 
DataPlay (DPHI, Inc.) 
