
Re: [avr-libc-dev] RFC: avr/bits.h


From: Erik Walthinsen
Subject: Re: [avr-libc-dev] RFC: avr/bits.h
Date: Tue, 01 Mar 2005 12:25:18 -0800
User-agent: Debian Thunderbird 1.0 (X11/20050116)

E. Weddington wrote:
The reason why this should be done is to get around automatic integer promotion. The C language bit operators automatically promote their operands to an int, 16 bits. This is unacceptable for operations on 8-bit values, which are commonly used when operating on the AVR registers. Typecasting is necessary to tell the compiler to optimize the generated assembly. There's a FAQ item in the avr-libc user manual about this issue.

No, the compiler does not automatically over-promote the constants, as I stated in the RFC:

#include <stdint.h>

int main() {
  volatile uint8_t u8;
  volatile uint16_t u16;
  bit_set(u8,5);    /* bit_set() from the proposed avr/bits.h */
  bit_set(u16,11);
}

compiles to:

  10:test.c        ****   bit_set(u8,5);
  67                    .LM2:
  68 0008 2B81                  ldd r18,Y+3
  69 000a 2062                  ori r18,lo8(32)
  70 000c 2B83                  std Y+3,r18
  11:test.c        ****   bit_set(u16,11);
  72                    .LM3:
  73 000e 8981                  ldd r24,Y+1
  74 0010 9A81                  ldd r25,Y+2
  75 0012 9860                  ori r25,hi8(2048)
  76 0014 8983                  std Y+1,r24
  77 0016 9A83                  std Y+2,r25

This is with a basic 3.4.3 compiler, built from rod.info's script.

OTOH, 32-bit operations are very touchy. I've managed to find a sequence that works (none of the others generate a single instruction at all if the bit is >15):

  (uint32_t)u32 |= ((uint32_t)1<<18);

Specifically, the (1) itself has to be cast *before* being shifted: the constant defaults to a 16-bit int and is not auto-promoted to 32 bits, so the shift by 18 overflows before the OR ever happens (which could very well be correct C behavior, but is a bit screwy).
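
To make that concrete, here's a minimal sketch of my own (not from the RFC itself); I've dropped the cast on the left-hand side, since it's the cast on the constant that matters:

#include <stdint.h>

int main() {
  volatile uint32_t u32;

  /* Wrong: the constant 1 is a 16-bit int, so the shift by 18 overflows
     and bit 18 never gets set. */
  /* u32 |= (1 << 18); */

  /* Right: widen the constant first, then shift (1UL works as well). */
  u32 |= ((uint32_t)1 << 18);
}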

Now, gcc has a keyword called 'typeof':
http://gcc.gnu.org/onlinedocs/gcc/Typeof.html

This allows us to define bit_set() as:

#define bit_set(var, bit) \
    ((typeof(var))(var) |= ((typeof(var))1 << (bit)))

This should produce code that works in all cases, no matter what the width of the operand is. GCC will forcibly cast both the variable (which is totally redundant afaict) and the constant 1 that forms the bit mask to the appropriate width:

#include <stdint.h>

int main() {
  volatile uint32_t u32;
  bit_set(u32,27);   /* bit_set() using the typeof() definition above */
}

   9:test.c        ****   bit_set(u32,27);
  67                    .LM2:
  68 0008 8981                  ldd r24,Y+1
  69 000a 9A81                  ldd r25,Y+2
  70 000c AB81                  ldd r26,Y+3
  71 000e BC81                  ldd r27,Y+4
  72 0010 B860                  ori r27,hhi8(134217728)
  73 0012 8983                  std Y+1,r24
  74 0014 9A83                  std Y+2,r25
  75 0016 AB83                  std Y+3,r26
  76 0018 BC83                  std Y+4,r27

Of course there's still the potential debate about whether it should optimize out the unnecessary loads/stores (especially in the 32-bit case, where you can be sure there are no 2-byte register access sequences to obey), but that's for the avr-gcc-list to figure out.
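
For reference, here's my reading of the "2-byte register sequence" remark (a sketch, not anything from the RFC): 16-bit I/O registers such as TCNT1 go through the shared TEMP register, so the hardware dictates a byte order the compiler must preserve (read the low byte first, write the high byte first). A plain 32-bit variable in RAM has no such constraint, so in principle only the load/store of r27 is needed in the listing above.

#include <avr/io.h>
#include <stdint.h>

/* Assumes a part with a 16-bit Timer1, e.g. an ATmega8/16/32. */
uint16_t read_timer1(void) {
  /* avr-gcc must read TCNT1L before TCNT1H here and can't reorder
     or drop either byte access. */
  return TCNT1;
}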



