
Re: Re[4]: [Gnash-commit] gnash ChangeLog gui/NullGui.cpp


From: Martin Guy
Subject: Re: Re[4]: [Gnash-commit] gnash ChangeLog gui/NullGui.cpp
Date: Sat, 7 Jul 2007 17:41:44 +0100

2007/7/7, Udo Giacomozzi <address@hidden>:
> it's just 5 lines of code... don't we have other problems?

Of course. I was just having a rant because Gnash is full of awful
code and I hate to see more of it growing like fungus :). I
particularly liked the
    switch (foo) {
    case 0x00: bar = 0; break;
    case 0x01: bar = 1; break;
    ...
    }
for all possible values of foo :O/

> usleep() is *not* a timing function. Its primary purpose is to tell
> the system scheduler that the process is not going to do anything
> within the next X microseconds. The scheduler then gives the next
> process the running state.

You accurately describe how that effect is achieved on a certain
multitasking operating system, but that's not the same as the
function's meaning.

Check the POSIX definition e.g.
http://www.segmentationfault.org/man/man3p/usleep.3p.html

It suspends process execution for a certain amount of real time.
There's a paragraph near the end about "An implementation may impose a
granularity on the argument in which case the value shall be rounded
up" but you can't rely on every system mis-implementing the function
in the same way as the one or two systems you happen to be familiar
with. If you work from definitions rather than by probing
implementations, your code is more likely to work everywhere.

> AFAIK all normal i386 systems use a 100 Hz timer

That used to be true in Linux 2.2, but the tick rate is now a
parameter you set when you compile the kernel; the configurator
suggests values of 100, 250 and 1000 Hz, where "normal" is now 250 Hz
for desktop systems (for better mouse-motion response, I presume).

> In theory, when absolutely no process on your system wants to do
> anything (i.e. all are usleep()'ing) then usleep() might return
> earlier.

Earlier than 1/100th of a second, you mean?
Indeed it does: after 1/250th of a second with this kernel on this machine.
Compile

    #include <unistd.h>

    int main(void)
    {
        int i;
        for (i = 1000; i > 0; i--)
            usleep(1);
        return 0;
    }

and run "time a.out". Unless you are running a 2.2 kernel I think you
will be surprised. I was!

However, any OS would be free to return control after 1 usec if it wished.
The POSIX definition says usleep(0) is a no-op, whatever Linux and
Windows may actually do!

> usleep(1) is a common way to cause a task switch that
> won't return until all other processes with the same priority have got
> their chance.

How about sched_yield()? It's an optional POSIX feature that is meant
to do exactly what you describe, rather than achieving it by accident
on certain non-conformant operating systems.

> while (1) { }               // burns CPU
> while (1) { usleep(1); }    // process will show 0.00% CPU usage

Yes, that does surprise me.

> Pre-1.11 versions did just wait _interval time between the end of one
> frame and the start of the next one.

SDL did worse: it used to SDL_pause(10) regardless of frame rate, so
videos always ran at 100 FPS minus renderer CPU time :-/

I guess the developer in question only ran interactive games where the
actual frame rate doesn't really matter.
My test-pieces are mostly free-running animations where long-term
timing precision is crucial to staying in sync with the sound.

> MG> Better yet if we used framedropping

> That could be done, but would not match the PP behaviour

"PP"?

> Any user would notice the difference, so I don't think it's a good idea.

It's better than the visuals lagging behind the sound. Users notice
that far more!

  M



