[Ecls-list] Benchmarking (and versioning)
Juan Jose Garcia-Ripoll
juanjose.garciaripoll at googlemail.com
Tue Dec 6 09:02:09 UTC 2011
On Tue, Dec 6, 2011 at 1:44 AM, Matthew Mondor <mm_lists at pulsar-zone.net> wrote:
> On Tue, 6 Dec 2011 00:27:26 +0100
> The few tests to look closer at seem to be: BIGNUM/PARI-200-5,
> MANDELBROT/DFLOAT, FACTORIAL, FFT, WALK-LIST/SEQ, SUM-PERMUTATIONS.
> I can also see a very slight performance regression shown by the PI-*
> benchmarks, nothing major.
There was a problem with gnuplot: I was printing floating-point numbers
with the wrong exponent character, and the plots got messed up. Now they
should be OK; FFT is one of the benchmarks that improved the most!
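For the record, the mismatch comes from Common Lisp printing double-floats with a `d` exponent marker (e.g. `1.5d0`), which gnuplot cannot parse. A minimal sketch of a workaround that rewrites the data files before plotting (the function name is hypothetical and not part of the actual benchmark scripts):

```python
import re

def normalize_float_markers(text):
    """Rewrite Lisp-style float exponent markers (1.5d0, 2.0D-3, 1s2, ...)
    into the 'e' notation that gnuplot understands."""
    # A digit, followed by a Lisp exponent marker (s/f/d/l, any case),
    # followed by an optionally signed exponent.
    return re.sub(r'(\d)[sSfFdDlL]([+-]?\d+)', r'\1e\2', text)

print(normalize_float_markers("1.5d0 3.14D-3"))  # -> 1.5e0 3.14e-3
```

Binding `*read-default-float-format*` to `'double-float` on the Lisp side avoids the problem at the source, since the printer then emits `e` (or no marker) for double-floats.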
> Is there an easy way for users to reproduce these benchmarks and
> generate plots? I remember that when trying some benchmark suite I had
> some trouble setting it up, but it was a while ago. It'd be nice to
> have a comparison between official release versions too once the next
> release is out.
It is a collection of hacks, plus fixes of Eric's benchmarks, so that they
also run on ECL, catch errors, etc. I uploaded a tarball with the current
version of those files here
> This reminds me that the releases are rare, and that the CVS/GIT
> snapshots continue to pretend to be the same version over long periods
> of time, which is confusing when users submit bug reports
This is only partly true. Right now ECL also uses the VCS (git) commit id
to identify itself. This distinguishes the different versions and lets me
find out more precisely what changes the user has.
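For anyone wanting to do something similar, the commit id can be obtained from git itself at build time; a minimal shell sketch (the header file and macro names are hypothetical, not ECL's actual build machinery):

```shell
# Query the current commit id; fall back to "unknown" outside a checkout.
GIT_ID=$(git rev-parse --short HEAD 2>/dev/null || echo unknown)
# Embed it in a generated header so the built binary can report
# exactly which snapshot it was built from.
echo "#define ECL_VERSION_GIT \"$GIT_ID\"" > version-git.h
```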
OTOH, the release-engineering problem continues to be just that: a problem.
The only reason there are no releases is that testing on all platforms
takes time and the current infrastructure I have does not work well
enough. See for instance how many of the platforms here have outdated
Instituto de Física Fundamental, CSIC
c/ Serrano, 113b, Madrid 28006 (Spain)