Zsh Mailing List Archive
Messages sorted by: Reverse Date, Date, Thread, Author

Re: zsh converts a floating-point number to string with too much precision

2019-12-21 01:50:05 +0100, Vincent Lefevre:
> On 2019-12-20 18:12:18 +0100, Roman Perepelitsa wrote:
> > I think what Vincent meant is that zsh should produce the shortest
> > string that, when parsed, results in a value equal to the original.
> > 
> > For your example, "1.1" is the shortest string that parses into
> > floating point value equal to the original, hence this (according to
> > Vincent) is what zsh should produce.
> Yes, this is exactly what I meant, and what Java's System.out.println
> seems to do. This is also specified like that in XPath.
> I think that's the best compromise in practice.

OK, I think I see what you mean.

So on a system (with a compiler) where C doubles are implemented
as IEEE 754 double precision, both 1.1 and 1.1000000000000001
are represented as the same binary double (whose exact value is
1.100000000000000088817841970012523233890533447265625).
So you're saying that echo $((1.1000000000000001)) and echo $((1.1))
should both output 1.1: even though 1.1000000000000001 is closer
to that value than 1.1000000000000000, zsh should pick the latter
because people prefer to see shorter number representations, and
in this case it doesn't matter which one we pick, as both parse
back to the same double.

How would we do that?

Is there a standard C API for that?

Or would we take the output of sprintf("%.17g"), look at the last
two significant digits, and, if the second-to-last is 9 or 0, see
whether rounding it and running strtod() again yields the same
double?

That seems a bit overkill (and I suspect that's not even a valid
approach in all cases).

Or should we implement the conversion to decimal string
representation from scratch, without using sprintf(), and adapt it
to every system's double representation? Or assume doubles are
IEEE 754 ones, as is more or less already done?

How are those other languages doing it?

