[Ecls-list] wrong value for (float-sign -0.0)

Raymond Toy toy.raymond at gmail.com
Mon Sep 29 00:13:42 UTC 2008


Gabriel Dos Reis wrote:
> Raymond Toy <toy.raymond at gmail.com> writes:
>
> | Gabriel Dos Reis wrote:
> | > Raymond Toy <toy.raymond at gmail.com> writes:
> | >
> | > | Gabriel Dos Reis wrote:
> | > | > Hi Juanjo,
> | > | >
> | > | >   ECL reports the wrong value for the sign of -0.0.
> | > | >
> | > | >   > *features*
> | > | >   (:LINUX :FORMATTER :IEEE-FLOATING-POINT :RELATIVE-PACKAGE-NAMES :DFFI
> | > | >    :CLOS-STREAMS :CMU-FORMAT :UNIX :ECL-PDE :DLOPEN :CLOS :BOEHM-GC :ANSI-CL
> | > | >    :COMMON-LISP :ECL :COMMON :PENTIUM3 :FFI :PREFIXED-API)
> | > | >   > (float-sign -0.0)
> | > | >   0.0
> | > | >
> | > | >
> | > | > The correct value is -1.0.
> | > | >
> | > | >   
> | > | Signed zeroes aren't required to be supported.
> | >
> | > Notice :IEEE-FLOATING-POINT on the *features*.
> | What does that mean?
>
> What do you think it would mean?  ECL assumes IEEE floating point on
> all systems, as far as I can tell from the source code.
>   
AFAIK, the spec doesn't really say what having :ieee-floating-point
implies.  So it could mean many different things.
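About the only portable use I know of is as a read-time guard.  Here is
a sketch (negative-zero-p is just a made-up name, and it assumes
float-sign returns -1.0 for -0.0 whenever the feature is present, which
is exactly what's in question here):

  (defun negative-zero-p (x)
    ;; Trust the sign bit of a float zero only when the implementation
    ;; advertises :IEEE-FLOATING-POINT; otherwise treat every zero as +0.
    (and #+ieee-floating-point t #-ieee-floating-point nil
         (floatp x) (zerop x) (minusp (float-sign x))))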
>
>
> | > ECL is using the machine native `double'; there is no point in
> | > pretending it does not have signed 0.0.
> | >
> | > |  It looks like ecl
> | > | doesn't support signed zeroes.  (eql -0.0 0.0) -> T.
> | >
> | > That equality has to hold no matter whether signed zeros are supported
> | > or not.  
> | >   
> | 
> | No, that's incorrect.  -0.0 and 0.0 have distinct representations
> | (they differ only in the sign bit), and eql can tell them apart.
>
> OK, that is an unfortunate literal contradiction between CLHS and
> IEEE-754.  I'm talking of the IEEE 754 semantics.  In IEEE-754, 
> -0.0 and 0.0 must compare equal; that is not open for interpretation
> or debate. The only way you distinguish -0.0 from 0.0 is to ask for
> its signbit. 
>   
You misunderstand the difference between eql and =.  = is the numeric
comparison, so it follows the IEEE rule and (= -0.0 0.0) is true; eql
compares the objects and is allowed to distinguish them when the
implementation supports signed zeros.
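For what it's worth, here is roughly what I'd expect from an
implementation that really does distinguish signed zeros (CMUCL does;
this is a sketch, not ECL output):

  > (= -0.0 0.0)       ; numeric comparison, follows the IEEE rule
  T
  > (eql -0.0 0.0)     ; compares the objects, sees the sign bit
  NIL
  > (float-sign -0.0)  ; the portable way to ask for the sign bit
  -1.0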
>
> | 
> | There are quite a few people who think signed zeroes without an unsigned
> | zero are bad.  (I actually like signed zeroes since they make branch cuts
> | easier.)
>
> What do you call `unsigned zero'? And why is it necessary?  We have been 
> successfully writing scientific applications with signed zeros as
> described by IEEE 754 for decades.  Why is it bad to have signed zero
> | without the other one?  In what concrete situations is it bad to have
> signed zeros but not `unsigned zero'?  
>   
I am not an expert in this area.  I know Kahan has argued strongly for
signed zeroes.  In some discussions on comp.arch.arithmetic,
knowledgeable people have argued that signed zeroes without an unsigned
zero are bad.  An unsigned zero would be a zero exactly on the axis, as
opposed to +0, which is essentially just above the axis, and -0, which
is just below it.
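To make the branch-cut point concrete: along the negative real axis the
sign of the imaginary zero picks which side of the sqrt (and log) branch
cut you land on.  In an implementation with IEEE signed zeros I'd expect
something like this (a sketch; the exact printed zeros may vary):

  > (sqrt #c(-1.0 0.0))   ; approach the cut from above
  #C(0.0 1.0)
  > (sqrt #c(-1.0 -0.0))  ; approach the cut from below
  #C(0.0 -1.0)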

Ray




