iiwusynth-devel

RE: [iiwusynth-devel] Fw: Re: [linux-audio-dev] more about iiwusynth


From: M. Nentwig
Subject: RE: [iiwusynth-devel] Fw: Re: [linux-audio-dev] more about iiwusynth
Date: Fri, 07 Jun 2002 13:00:24 +0300

> First the amp/amp_incr vars (for volume smoothing and volume ramping).
> From what I see, the values in these vars are amazingly low. What kind
> of final value ranges is the mixing buffer expecting? I suppose quite
> low, since floats can only store 23 bits of mantissa...
>

The short-to-float conversion is done at the same time (from
iiwu_voice.c):
  /* correct the amplitude for the short to float conversion. */
  amp /= 32768.0f;
  amp_incr = (amp - voice->amp) / IIWU_BUFSIZE;
  amp = voice->amp;
With the limited number of bits, it may well be that this is the cause
of some quantization noise I keep hearing (when notes are fading out)...
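
For reference, the per-sample ramp is then applied inside the DSP loop
roughly like this (a sketch only, not the literal run_dsp() code; 'buf'
and 'sample' are hypothetical names, while amp/amp_incr/IIWU_BUFSIZE are
taken from the snippet above):

  /* Sketch: apply the linear amplitude ramp sample by sample.          */
  int i;
  for (i = 0; i < IIWU_BUFSIZE; i++) {
      buf[i] += amp * sample[i];   /* amp already includes the 1/32768 scale */
      amp += amp_incr;             /* ramp linearly towards the target level */
  }
  voice->amp = amp;                /* remember the level for the next buffer */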

>
> > The DSP loop works in chunks of 64 samples (more if you increase the
> > buffer size), so anything outside of run_dsp will not affect the
> > performance much, even if it's still float. So I'd approach it like
> > this, for a start:
>
> Isn't 64 samples per chunk incredibly low?
> Like, if I play at 44100 Hz, 44100/64 is somewhere around 700
> iterations per second, which is insane. If we take into consideration
> that the internal MIDI timer resolution is around 10 ms (limited to the
> x86 clock), that means around 100 iterations. As for "response" times
> (the lapse within which you can change variables), not even trackers go
> that low. This is kind of pointless, especially if you use volume ramps.
>

In my opinion 1 ms (about 44 samples at 44100 Hz) would still be
acceptable for real-time playing. MIDI interfaces are that fast and use
interrupts.
If the buffer becomes too big, we end up with a major problem in the
synth: the modulation envelope may sweep the filter, sometimes very
rapidly. The filter coefficients are only updated once per buffer, which
results in audible jumps ('zipper' noise) if the internal buffer size is
too big.
But you are right, the default buffer size may be a bit low; that should
be checked. There are two sounds in the 'Vintage Dreams Vers2' SoundFont
which are 'hard to handle' for the filter ('warbling bird').
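
To illustrate the zipper problem: the coefficients are recalculated only
at buffer boundaries and then held for the whole buffer, so a fast
envelope sweep turns into audible steps. A minimal sketch (the helper
names are made up, this is not the iiwusynth code):

  /* Coefficients are computed ONCE per internal buffer...             */
  int i;
  float cutoff = envelope_cutoff(voice);   /* hypothetical helper      */
  calc_filter_coeffs(voice, cutoff);       /* hypothetical helper      */

  /* ...and then held constant for 'bufsize' samples. With 64 samples
   * the steps are ~1.5 ms apart; with a much larger buffer a rapid
   * filter sweep becomes an audible staircase.                        */
  for (i = 0; i < bufsize; i++)
      buf[i] = run_filter(voice, buf[i]);  /* hypothetical helper      */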

>
> > - Retrieve amplitude, phase, ... and convert to fixed point
> > - run_dsp in fixed point
> > - Convert back to float and store
> >
> > That isn't optimal, but doesn't require a complete rewrite of the
> > synth core.
>
> That's my approach, which I have almost finished... my only problem is
> that I don't know the ranges expected by the destination buffers...
>

I think it's -1..+1.
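
Just for the record, the float <-> fixed-point round trip you describe
could look roughly like this, assuming signed Q15 samples and the -1..+1
float range (a sketch, not taken from the iiwusynth sources):

  #include <stdint.h>

  /* -1..+1 float  ->  Q15 (-32768..32767), with clipping */
  static inline int16_t float_to_q15(float x)
  {
      if (x >  0.999969f) x =  0.999969f;   /* 32767/32768 */
      if (x < -1.0f)      x = -1.0f;
      return (int16_t)(x * 32768.0f);
  }

  /* Q15  ->  -1..+1 float, same 1/32768 scale as in iiwu_voice.c */
  static inline float q15_to_float(int16_t x)
  {
      return (float)x / 32768.0f;
  }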

> >
> > One place to save is the filter; there are resonant two-pole LPs out
> > there, which use fewer coefficients than my off-the-shelf,
> > standard-issue low-cost heavy-duty filter implementation :-)
> >
>
> For now I don't think it's much of an issue; I have a filter
> implementation in cheesetracker which only uses 1 gain and 2
> coefficients, but I doubt it's worth changing it for now.

The filter uses roughly as many multiplications as the interpolation, so
that may save some time.
...
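
As an aside, the kind of two-pole resonant lowpass with 'one gain and two
coefficients' mentioned above is, in textbook form, something like the
following (a sketch; not taken from iiwusynth or cheesetracker):

  #include <math.h>

  typedef struct {
      float b1, b2, gain;   /* two feedback coefficients plus one gain */
      float y1, y2;         /* previous two output samples (state)     */
  } lp2_t;

  /* cutoff and samplerate in Hz; r (0..1) is the pole radius, i.e. the
   * resonance */
  static void lp2_set(lp2_t *f, float cutoff, float samplerate, float r)
  {
      float w = 2.0f * (float)M_PI * cutoff / samplerate;
      f->b1   = 2.0f * r * cosf(w);
      f->b2   = -r * r;
      f->gain = 1.0f - f->b1 - f->b2;       /* unity gain at DC */
  }

  static inline float lp2_run(lp2_t *f, float x)
  {
      float y = f->gain * x + f->b1 * f->y1 + f->b2 * f->y2;
      f->y2 = f->y1;
      f->y1 = y;
      return y;                   /* three multiplications per sample */
  }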

> > And thinking about interpolation: EMU uses eight (9?) point
> > interpolation and the specs are tailored to that. Linear
> > interpolation could probably be a low-quality option, but the
> > aliasing is just too bad.
>
> Hehe, well, as I said before (wow, 9 is insane), linear is not really
> that bad. Linear interpolation aliases high frequencies, which is
> something that doesn't matter much if you are using multisampled
> instruments. For now I'm going to do that implementation, and later I
> will probably add a Hermite curve interpolation for high quality...
> and maybe even a FIR interpolation would be nice. (I still think that
> linear is around 85% of the quality of cubic, or more, so that's
> why I'm not worried.)

I have implemented a FIR interpolation for the variable delay line in
the chorus (bandlimited interpolation with a windowed sinc function).
It's straight from the DSP textbook, so the implementation is in no way
optimized.
There will be different optimum tradeoffs between speed and quality,
depending on what you want to use the synth for. I use it for real-time
playing, never more than 10 fingers, so I see myself tending more
towards the 'quality' corner. But let's find a good compromise in the
middle and optimize that; we'll probably get a better result than by
implementing different options.
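
For reference, the windowed-sinc interpolation in the chorus delay line
follows the textbook recipe; a rough sketch of the idea (the tap count,
names and power-of-two buffer length are assumptions, this is not the
actual chorus code):

  #include <math.h>

  #define SINC_TAPS 8   /* assumed tap count */

  /* Read from the delay buffer 'buf' (length 'len', a power of two) at
   * 'pos' samples behind 'write_idx', where 'pos' has a fractional part. */
  static float delay_read_sinc(const float *buf, int len, int write_idx,
                               float pos)
  {
      int   ipos = (int)floorf(pos);
      float frac = pos - (float)ipos;
      float out  = 0.0f;
      int   k;

      for (k = 0; k < SINC_TAPS; k++) {
          int   j = k - SINC_TAPS / 2 + 1;  /* tap offset around the read point */
          float t = (float)j - frac;
          /* band-limited kernel: sinc(t) times a Hamming window */
          float s = (fabsf(t) < 1e-6f) ? 1.0f
                    : sinf((float)M_PI * t) / ((float)M_PI * t);
          float w = 0.54f + 0.46f * cosf((float)M_PI * t / (SINC_TAPS / 2));
          out += buf[(write_idx - ipos - j) & (len - 1)] * s * w;
      }
      return out;
  }
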
>
>
> Regards!
>
> Juan Linietsky
>
Cheers

Markus



