octave-maintainers

Re: Performance tests in Octave's unit testing


From: Carnë Draug
Subject: Re: Performance tests in Octave's unit testing
Date: Wed, 3 Aug 2016 19:04:48 +0100

On 3 August 2016 at 15:12, Nir Krakauer <address@hidden> wrote:
> On 3 August 2016 at 00:14, Barbara Lócsi <address@hidden> wrote:
>> Dear all,
>>
>> As far as I know, there are no tests in Octave that check elapsed time
>> with tic/toc or cputime.
>>
>> I am working on new calling forms for eig() (GSoC).  In Matlab, in the
>> case of a generalized eigenvalue problem, you can choose which algorithm
>> to use ('chol' or 'qz').
>>
>>
>> 'chol' is supposed to be faster for large-scale (symmetric) inputs, and
>> sometimes it is more accurate (and sometimes not).  But when it is not,
>> the only reason to use it is that it is supposed to be faster, so it
>> makes sense to test that.
>> [...]
>
> I think that timing, while certainly worth checking (via profiling and
> benchmarks) when speed is a concern, is generally not part of unit testing,
> for good reasons: it is likely to be quite machine-dependent, and can
> fluctuate even on the same machine depending on load and other factors.

But the difference in performance is one of the features.  If we document
that for input of type X, foo is faster than bar, why not test that?
Benchmarking and profiling are for choosing an algorithm for a new type of
data, and that indeed is not a concern of Octave.  But this is testing with
the known best- and worst-case scenarios, to validate that Octave does what
it claims to do.
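As a sketch of what such a check might look like: since the eig (A, B,
"chol") calling form is still being added, this times the factorizations
the two code paths are built on (Cholesky vs. the full QZ decomposition)
as a stand-in, and alternates repeated runs so a one-off load spike on the
machine does not decide the outcome:

```octave
## Hypothetical shape of a timing test.  Cholesky factorization of a
## symmetric positive definite matrix should be much cheaper than a
## full QZ decomposition of the same-sized pair.
n = 300;
A = randn (n);
B = randn (n);  B = B*B' + n*eye (n);   # symmetric positive definite
nrep = 5;
t_chol = zeros (1, nrep);
t_qz = zeros (1, nrep);
for i = 1:nrep                  # alternate the two to damp load spikes
  tic; chol (B);  t_chol(i) = toc;
  tic; qz (A, B); t_qz(i) = toc;
endfor
## a real test would then assert on the medians, e.g.:
##   assert (median (t_chol) < median (t_qz))
```

Only the relative comparison would be asserted, never an absolute time,
and the margin would have to be generous for the reasons Nir gives.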

I can see issues with testing this if the difference in performance between
the two algorithms is heavily machine-dependent, e.g., if one algorithm is
faster at the expense of memory usage.  Is that the case for the different
algorithms for eig?

We can account for load spikes on the machine by running the test multiple
times while alternating between the algorithms.  There is always some
variance, but if the complexity of one algorithm is linear in the matrix
size while the other's is quadratic, we should see some difference.  Is the
complexity of these algorithms well studied?
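For the complexity side, a rough probe in the same spirit: time one input
size and its double, and look at the ratio.  The linear-cost cumsum here is
only a stand-in for whatever operation is under test, and a real test would
need generous slack around the expected ratio:

```octave
1;  # script file, not a function file

## Rough scaling probe: for a linear-cost operation, doubling the input
## size should roughly double the time; a quadratic one would quadruple it.
function t = median_time (f, n)
  nrep = 5;
  t = zeros (1, nrep);
  x = randn (n, 1);
  for i = 1:nrep
    tic; f (x); t(i) = toc;
  endfor
  t = median (t);
endfunction

t1 = median_time (@cumsum, 1e6);
t2 = median_time (@cumsum, 2e6);
ratio = t2 / t1;    # expect a ratio around 2 for linear cost
```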

I may be massively simplifying the issue of comparing performance, but I
would guess that if there is a performance difference between two
algorithms, a difference large enough to make it worth offering this as an
option, then that difference should be easily measurable.

A more general question, then: how could we test that Octave is doing what
it is supposed to be doing, when the only difference between two algorithms
is speed?  Is it even worth testing for this?  (It seems it is starting to
become hip to reject unit testing.)

Carnë


