

Re: [avr-libc-dev] Printing octal (brain dump)

From: George Spelvin
Subject: Re: [avr-libc-dev] Printing octal (brain dump)
Date: 19 Dec 2016 18:51:19 -0500

> Is 8000 ticks too slow?
> Is 3000 ticks acceptable? And for what reason? Are 3000 acceptable just
> because we have an algorithm that performs in 3000 ticks?
> My strong preference is still to have a one-fits-all algorithm that
> might very well be slower than an optimal one.  But hey, an ordinary
> division of a 64-bit value by 10 already costs 2300 cycles, so why
> should we hunt cycles just for printf...?
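For context, the divide-by-10 cost quoted above is paid once per digit in the usual decimal-formatting loop. A minimal sketch of that loop (hypothetical helper, not the actual avr-libc code; on AVR the 64-bit `/` and `%` compile to software routines, each call costing on the order of the ~2300 cycles mentioned, and a full 64-bit value can need up to 20 digits):

```c
#include <stdint.h>
#include <string.h>

/* Sketch: format a 64-bit value in decimal by repeated division by 10.
 * Digits are produced least-significant first, so we fill the buffer
 * from the end.  buf must hold at least 21 bytes (20 digits + NUL). */
static char *u64_to_dec(uint64_t v, char *buf)
{
    char *p = buf + 20;
    *p = '\0';
    do {
        *--p = (char)('0' + (v % 10));  /* remainder is the next digit */
        v /= 10;                        /* the expensive 64-bit divide */
    } while (v != 0);
    return p;                           /* points at the first digit */
}
```

With one divide per digit, 20 digits at ~2300 cycles each is why naive 64-bit printf costs tens of thousands of cycles, and why combined div/mod or reciprocal-multiply tricks are worth discussing at all.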

Well, I went and asked the customer.

As I mentioned, the motivating application is the TAPR time interval
counter (TICC).
Info:   http://tapr.org/kits_ticc.html
Source: https://github.com/TAPR/TICC            (Not up to date.)
Manual: http://www.tapr.org/~n8ur/TICC_Manual.pdf

Basically, it timestamps input events to sub-nanosecond resolution.
It prints them with picosecond (12 decimal place) resolution.

E.g. fed a 1 Hz input signal, it might print:


It would like to be able to run until the 64-bit picosecond counter
wraps around, after about 213 days.

Anyway, although it prints only once per input transition, the main
processing loop has a 1 ms deadline to meet (it *can* print at up to
1 kHz, synchronized with the USB polling interval).  Of the 16,000
clock cycles available in that millisecond, 8000 are currently spoken
for, leaving 8000 for formatting and the output device drivers.

So yeah, they'd definitely prefer 4000 cycles to 8000.

But they're going to use custom code *anyway*, since they don't want
to wait for an avr-libc release, so that doesn't have to determine what
avr-libc does.
