iiwusynth-devel

Re: [iiwusynth-devel] Fw: Re: [linux-audio-dev] more about iiwusynth


From: Peter Hanappe
Subject: Re: [iiwusynth-devel] Fw: Re: [linux-audio-dev] more about iiwusynth
Date: Tue, 11 Jun 2002 03:09:46 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.8) Gecko/20020205

Juan Linietsky wrote:


Unfortunately, my biggest problem with iiwusynth is the speed and
overall slowness of it (it's just _very_ slow). I've been messing with
the code and doing some tests, and found out that the source of the
slowness is the iiwu_run_dsp function, which is HIGHLY unoptimized.
I feel like improving the mixer's speed by converting it fully
to fixed point, but I'll need some feedback and info about the
format/ranges of some buffers/variables. Attached is the mail I've
sent to linux-audio-dev with my thoughts about the mixer code.


Hi Juan,
Hi all,

I was at LinuxTag this weekend showing Swami and iiwusynth, so I
missed the whole discussion. I'll try to catch up a bit.

First, I remember we discussed this already on the LAD mailing list,
and I started doing some testing with fixed-point DSP (new to me).
In fact, the iiwu_voice.c file in CVS still includes my first try at
implementing the iiwu_run_dsp function in fixed point, and I agree we
should try to use fixed point. (Moreover, I'd like to see the synth
running on handhelds such as the iPaq or the Zaurus, which don't have
an FPU.) We might even be able to do the reverb or the chorus in fixed
point.
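[As a rough illustration of the fixed-point approach under discussion, here is a minimal 16.16 fixed-point sketch. The format, type, and function names are assumptions for illustration, not the actual iiwu_voice.c code.]

```c
#include <stdint.h>

typedef int32_t fix16_t;            /* signed 16.16 fixed point */
#define FIX16_ONE  (1 << 16)        /* 1.0 in 16.16             */

/* Convert between float and 16.16 fixed point. */
static inline fix16_t fix16_from_float(float f) { return (fix16_t)(f * FIX16_ONE); }
static inline float   fix16_to_float(fix16_t x) { return (float)x / FIX16_ONE; }

/* Multiply two 16.16 numbers using a 64-bit intermediate
 * so the product doesn't overflow before the shift. */
static inline fix16_t fix16_mul(fix16_t a, fix16_t b)
{
    return (fix16_t)(((int64_t)a * b) >> 16);
}

/* Linear interpolation between two 16-bit samples with a fractional
 * phase in [0, FIX16_ONE): the core operation of a fixed-point
 * resampling mixer, done entirely in integer arithmetic. */
static inline int16_t fix16_lerp(int16_t s0, int16_t s1, fix16_t phase)
{
    return (int16_t)(s0 + (((int32_t)(s1 - s0) * phase) >> 16));
}
```

The 64-bit intermediate in the multiply is the usual price of 32-bit fixed point; on FPU-less ARM handhelds like the iPaq this is still far cheaper than software floating point.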

I also started profiling the synthesizer, trying to figure out where it
spends its time. I've put some results below. They were taken by running
iiwuplay on a well-made, standard, techno-style MIDI file.

First, a simple check on how long voices live on average:

on/off                 246msec
release phase          1s077msec

The first number shows the average time between a note-on and
the corresponding note-off event. This is the time a voice spends in the
delay and attack phases, up to the sustain phase of the envelope.

The second number shows the average time the voices spend
in the release phase.
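[The two averages above only need a few counters in the voice code; a minimal sketch follows. The struct and function names are invented for illustration, not the real iiwusynth API, and times are taken as millisecond tick counts.]

```c
#include <stdint.h>

/* Running totals for how long voices spend between note-on and
 * note-off, and how long they spend in the release phase. */
struct voice_stats {
    uint64_t onoff_total_msec;    /* sum of note-on -> note-off times */
    uint64_t release_total_msec;  /* sum of release-phase durations   */
    uint32_t onoff_count;
    uint32_t release_count;
};

/* Called when the note-off arrives for a voice started at t_on. */
static void stats_note_off(struct voice_stats *s, uint32_t t_on, uint32_t t_off)
{
    s->onoff_total_msec += t_off - t_on;
    s->onoff_count++;
}

/* Called when the release envelope finally reaches silence. */
static void stats_release_done(struct voice_stats *s, uint32_t t_off, uint32_t t_end)
{
    s->release_total_msec += t_end - t_off;
    s->release_count++;
}

static uint32_t stats_avg_onoff(const struct voice_stats *s)
{
    return s->onoff_count ? (uint32_t)(s->onoff_total_msec / s->onoff_count) : 0;
}

static uint32_t stats_avg_release(const struct voice_stats *s)
{
    return s->release_count ? (uint32_t)(s->release_total_msec / s->release_count) : 0;
}
```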

So you see that, on average, notes spend four times longer in the
release phase than in the actual note! I did a quick test this
evening, halved the release phase, and now the times are:

on/off                254msec
release phase         588msec

Between the first and the second test, the CPU usage dropped from 42%
to 28%. So if we try to be smarter about cutting off notes sooner, we
can gain a lot.
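[One way to cut notes off sooner, sketched below, is to free a releasing voice as soon as its envelope drops below audibility instead of waiting out the full release time. The names, structure, and threshold value are assumptions for illustration, not the actual iiwusynth code.]

```c
#define CUTOFF_THRESHOLD 0.001f   /* roughly -60 dB: treat as inaudible */

enum voice_state { VOICE_PLAYING, VOICE_RELEASING, VOICE_OFF };

struct voice {
    enum voice_state state;
    float env_amp;      /* current envelope amplitude, 0..1        */
    float release_mul;  /* per-block exponential decay factor, < 1 */
};

/* Advance the release envelope by one block; return 1 if the voice
 * became inaudible and can be returned to the free pool early. */
static int voice_release_tick(struct voice *v)
{
    if (v->state != VOICE_RELEASING)
        return 0;
    v->env_amp *= v->release_mul;
    if (v->env_amp < CUTOFF_THRESHOLD) {
        v->state = VOICE_OFF;   /* inaudible: stop synthesizing it */
        return 1;
    }
    return 0;
}
```

Since roughly 60 of the 76 average voices were in their release phase, freeing them a few blocks earlier is where the measured CPU gain would come from.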

I also measured the time used to synthesize the voices.
One voice takes about 8usec (iiwu_synth_write). Synthesizing
all voices takes 614usec (iiwu_synth_one_block). This means
that, on average, for that MIDI song, there are 76 voices playing!
Or rather, 15 sounding voices and 60 voices in their release phase.

In the test where I reduced the release phase, the numbers are,
respectively, 8usec (duh!) and 406usec. So "only" 50 voices are playing
on average.
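[The voice counts follow directly from the two timings: the average number of active voices is the time to synthesize one block divided by the time per voice, as this trivial helper shows.]

```c
/* Average voice count implied by the profiling numbers:
 * time for one block / time per voice (truncated, as in the text). */
static int avg_voices(double block_usec, double voice_usec)
{
    return (int)(block_usec / voice_usec);
}
```

With the measured values, 614/8 gives the 76 voices of the first test and 406/8 the 50 voices of the second.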

So I repeat my point: yes, we should go for fixed point if that gains
us CPU. But there is also CPU to be gained from managing the voice
releases better.


Cheers!
Peter








