On Thu, Oct 16, 2008 at 04:31:17PM -0400, John W. Eaton wrote:
On 16-Oct-2008, Thomas Weber wrote:
| On Thu, Oct 16, 2008 at 01:45:02PM -0400, John W. Eaton wrote:
| > We seem to be having a lot of regressions in the graphics code
| > lately, so maybe it is time to start thinking of some way to provide
| > tests for these functions. Unfortunately, I don't have any good way
| > to do automatic tests for most of the graphics functions since the
| > results are visual.
|
| Is this the new backend or the gnuplot one?
|
| It seems gnuplot produces identical images (md5sum matches) for
| identical input.

If the code were more or less complete and stable, this might work.
But at this point, with frequent changes that affect the output or the
commands sent to gnuplot, I don't think this approach will be all that
useful: we would find ourselves spending a lot of time regenerating
the "correct" test results.  Plus, wouldn't this limit all of us to
using precisely the same version of gnuplot in order to get all the
tests to pass?

I'm thinking more along the lines of subdirectories
20081014/
20081015/
20081016/
where the tests/demos simply save their images.  A script runs through
today's images and "compares" them with yesterday's (or earlier)
images.  If there's a difference, the script copies both images into a
temporary directory.  That way, you only need to inspect images where
a difference is actually reported.
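
The workflow described above could be sketched roughly as follows.
This is just an illustration, not an actual script from the Octave
tree; the directory layout, the checksum-based comparison, and all
function names are my own assumptions:

```python
import hashlib
import shutil
from pathlib import Path

def file_md5(path):
    """Return the MD5 hex digest of a file's contents."""
    return hashlib.md5(Path(path).read_bytes()).hexdigest()

def compare_image_dirs(today_dir, yesterday_dir, diff_dir):
    """Compare same-named images in two dated directories.

    Any image whose checksum differs from the older copy (or that has
    no older copy at all) has both versions copied into diff_dir for
    visual inspection.  Returns the sorted list of differing names.
    """
    today = Path(today_dir)
    yesterday = Path(yesterday_dir)
    diff = Path(diff_dir)
    diff.mkdir(parents=True, exist_ok=True)

    changed = []
    for new_img in sorted(today.iterdir()):
        old_img = yesterday / new_img.name
        if not old_img.exists() or file_md5(new_img) != file_md5(old_img):
            changed.append(new_img.name)
            # Copy both versions side by side for manual comparison.
            shutil.copy(new_img, diff / ("new-" + new_img.name))
            if old_img.exists():
                shutil.copy(old_img, diff / ("old-" + old_img.name))
    return changed
```

For example, `compare_image_dirs("20081016", "20081015", "diff")`
would leave only the changed image pairs in `diff/`, so the reviewer
never has to look at images that are byte-identical between runs.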