"Piet van Oostrum" <address@hidden> writes:
"Eli Zaretskii" <address@hidden> (EZ) wrote:
From: "Piet van Oostrum" <address@hidden>
Date: Wed, 28 Apr 2004 13:14:59 +0200
(gdb) frame 1
#1 0x0012eeac in sys_select (n=126, rfds=0x38e7e4, wfds=0x0,
efds=0x0, timeout=0xbfffc770) at mac.c:2787
2787 return select(n, rfds, wfds, efds, timeout);
(gdb) print *timeout
$3 = {
  tv_sec = 0,
  tv_usec = 999996
}
So this looks normal.
EZ> Well, yes and no: how come it's 999996 microseconds instead of a
EZ> full second?  That is, why don't you see this instead?
EZ> $3 = {
EZ> tv_sec = 1,
EZ> tv_usec = 0
EZ> }
EZ> This higher frame in the backtrace:
#2 0x00119948 in wait_reading_process_input (time_limit=1,
microsecs=0, read_kbd=3506604, do_display=0) at process.c:4311
EZ> seems to imply that wait_reading_process_input was called to wait
EZ> for 1 second and 0 microseconds, so where from did the small
EZ> inaccuracy creep in?
There is some code just above the select that manipulates the usecs
(the ADAPTIVE_READ_BUFFERING stuff), but I think it always makes the
value a multiple of READ_OUTPUT_DELAY_INCREMENT, which this isn't.
Then there's also timer_delay, which is calculated from something
involving the current time, so maybe that's where the inaccuracy
comes from.
The Linux kernel adjusts the timeout value upon return from select to
contain the amount of time "remaining to the specified timeout". This
is very useful in some situations.
Maybe OS/X does that too.
--
Kim F. Storm <address@hidden> http://www.cua.dk
_______________________________________________
Emacs-devel mailing list
address@hidden
http://mail.gnu.org/mailman/listinfo/emacs-devel