
Re: [Avr-libc-corelib] Style guidelines up! Let's get coding...


From: Ruddick Lawrence
Subject: Re: [Avr-libc-corelib] Style guidelines up! Let's get coding...
Date: Tue, 22 Dec 2009 11:29:47 -0500



2009/12/16 Frédéric Nadeau <address@hidden>
2009/12/16 Ruddick Lawrence <address@hidden>:
> I included multislave mode because Ron mentioned it might be useful to have
> a callback (that would select the slave) that would be called before
> transmitting the first byte in a buffer. I think it can probably be removed
> because people can just select the slave before calling the send function.

There is no interrupt when the SS pin goes from high to low. Therefore,
when the interrupt occurs, one byte has already been sent. That is why
most slaves will send 0xFF as the first byte.


Another good reason not to have a multislave mode.
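For illustration, the client side could look something like this
(spi_send_buffer() and the PB0 chip-select are placeholders, not part of
the proposed API):

#include <avr/io.h>
#include <stdint.h>

void spi_send_buffer(const uint8_t *buf, uint8_t len);  /* whatever send call we settle on */

/* Hypothetical client code: select the slave by hand around the send. */
void send_to_slave(const uint8_t *buf, uint8_t len)
{
    PORTB &= ~(1 << PB0);       /* assert this slave's SS (active low) */
    spi_send_buffer(buf, len);
    PORTB |=  (1 << PB0);       /* release it */
}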
 

> spi_change_mode would really only be used for a multimaster system. The idea
> is that the device can be a slave until something (which the client code
> would determine) signals that it should become the master. Then
> spi_change_mode would switch from slave to multimaster mode. Not sure if
> that's the best way to do it, but it seemed like a fairly flexible way to
> deal with multimaster systems.

Here is my point of view: in a multi-master topology (let's assume two
AVRs), both should be configured as masters with the SS pin as an input
and the pull-up activated. The SS pins are not connected together; each
is wired to a GPIO output on the other device. When the first AVR pulls
the other's SS pin low and then sends one byte, the slave is only
notified via interrupt at the end of the byte, and likewise for the
master, and so on for every byte transferred. When the master pulls the
slave's SS pin high again, the slave returns to the master state.

See the ATmega16A datasheet, http://atmel.com/dyn/resources/prod_documents/doc8154.pdf,
page 141, section 18.3.2.

As I understand it, in multimaster there is no need to change the mode.


As I understood the datasheet, when an AVR is configured as an SPI master and the SS pin is an input, pulling SS low clears the MSTR bit (so the device becomes a slave) and triggers the interrupt right away, not after a byte is sent. Is this not how it works? (I've never done multimaster.)
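If that's right, the init for this topology would presumably look
something like the sketch below (ATmega16A pin and bit names; untested
on my part):

#include <avr/io.h>

/* Rough sketch of the multimaster init described above. */
void spi_init_multimaster(void)
{
    DDRB  &= ~(1 << PB4);                 /* SS as input...          */
    PORTB |=  (1 << PB4);                 /* ...with pull-up enabled */
    DDRB  |= (1 << PB5) | (1 << PB7);     /* MOSI and SCK as outputs */

    /* Enable SPI, its interrupt, and master mode.  If the other master
     * drives our SS low, the hardware clears MSTR and sets SPIF. */
    SPCR = (1 << SPE) | (1 << SPIE) | (1 << MSTR);
}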
 

> I like the idea of a non-interrupt transfer (maybe call it blocking send or
> something like that).

A non-interrupt transfer is handy for fast SPI with a low byte count per
transfer. At busclock/2 it takes 16 clock cycles to send one byte, which
is faster than the interrupt entry/exit overhead alone. Another case:
when the SPI transfer acts as your idle task, all your other,
interrupt-driven tasks will only add delay to the SPI transfer, which
you may not care about.


Agreed.
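I'm picturing a minimal polled helper along these lines (assuming the
SPI is already configured as master; the name is just a suggestion):

#include <avr/io.h>
#include <stdint.h>

/* Polled (non-interrupt) transfer.  At busclock/2 the byte finishes in
 * ~16 cycles, so spinning on SPIF is cheaper than taking the interrupt. */
uint8_t spi_transfer_blocking(uint8_t data)
{
    SPDR = data;                      /* start the transfer */
    while (!(SPSR & (1 << SPIF)))     /* wait for it to finish */
        ;
    return SPDR;                      /* reading SPDR clears SPIF */
}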
 
> I guess the delay would have to use a timer interrupt, which would make it
> very messy... Any other suggestions?

Could you give an example of when a delay between bytes is needed? I
have only worked with a few devices, and all of them were too fast for
the AVR, even at 16 MHz.


I think the delay was meant to give the slave time to process the byte it just received. I don't know if this is something that is regularly needed.
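If it does turn out to be needed, a busy-wait between bytes in the
blocking path would avoid the timer-interrupt mess entirely. A sketch,
reusing the polled helper from above (the 10 us figure and the function
names are made up):

#define F_CPU 16000000UL     /* util/delay.h needs the clock; 16 MHz assumed */
#include <util/delay.h>
#include <stdint.h>

uint8_t spi_transfer_blocking(uint8_t data);   /* the polled helper above */

/* Hypothetical delayed send: busy-wait between bytes, no timer needed. */
void spi_send_buffer_delayed(const uint8_t *buf, uint8_t len)
{
    while (len--) {
        spi_transfer_blocking(*buf++);
        _delay_us(10);   /* give the slave time to process the byte */
    }
}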
 
> I forgot to include the interrupts in the API, but basically each module
> would have a static inline function for each interrupt vector it needs to
> use. The user would then call each function from the appropriate interrupt
> vector. This allows us to abstract how the interrupt works, let the user
> retain control of the ISR, and have it run as quickly as if the code were
> just in the vector.

Could you elaborate? I'm not sure what you mean.


So basically the module would define a function along the lines of:

static inline void spi_SPI_STC_vect(void) {
  SPDR = nextByte;  /* nextByte is staged by the module's send code */
}

and the client code would be responsible for calling it from the actual ISR:

ISR(SPI_STC_vect) {
  spi_SPI_STC_vect();
}
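In a real buffer send the handler would walk the module's buffer rather
than a single staged byte; a hypothetical fleshed-out version (spi_buf
and spi_remaining are made-up names for the module's state):

#include <avr/io.h>
#include <stdint.h>

extern const uint8_t *volatile spi_buf;   /* next byte to send, set by the send call */
extern volatile uint8_t spi_remaining;    /* bytes left in the buffer */

static inline void spi_SPI_STC_vect(void)
{
    if (spi_remaining) {
        SPDR = *spi_buf++;    /* queue the next byte */
        spi_remaining--;
    }
}

Because the function is static inline and lives in the module's header,
the compiler folds its body straight into the user's vector, so it costs
the same as writing the code in the ISR directly.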
 

--
Frédéric Nadeau ing. jr



