[Maxima] strange behaviour with simple decimals
Andrey G. Grozin
A.G.Grozin at inp.nsk.su
Wed Apr 11 15:05:09 CDT 2007
On Wed, 11 Apr 2007, Jay Belanger wrote:
> Base 10 most certainly does stand out.
> While other bases have their uses, I would guess most people enter
> data in base 10.
How data are input is irrelevant to the choice of the best way to
represent them internally for calculations.
> It may well be the case that getting small errors when doing decimal
> arithmetic is an acceptable cost, but it was previously implied that
> it is a silly thing to talk about. I disagree. What's more, I think
> that if getting small errors when computing 1.4^2 is the cost of using
> Maxima, the manual should clearly state that.
I think that everybody should know that when one writes any number with a .
in it (a floating-point number), all subsequent calculations will be
approximate. Results which differ from each other by something of the
order of the precision used for the calculations are *equally* good. Saying
that one result is better than another is completely meaningless. And I
think that seeing things like 0.9999999999999998 or 1.0000000000000001
sometimes is a good thing: it is a healthy reminder to the user that,
having input something with a dot in it (hence inexact), the user has
accepted the consequences - that the result will be approximate. Hiding
this fundamental fact only leads to unnecessary misunderstandings.
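
For example, here is a sketch of what a Maxima session might show (the
exact printed digits depend on the Maxima version and on the underlying
Lisp's float printer; 1.9599999999999997 is the IEEE double nearest to
the true square of the double nearest to 1.4):

    (%i1) 1.4^2;
    (%o1)                 1.9599999999999997
    (%i2) 1.4^2 - 1.96;
    (%o2)               - 2.220446049250313e-16

The discrepancy is one unit in the last place of a double float - exactly
the kind of harmless difference discussed above.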
And as for teaching junior school children, I am sure many things are done
wrong. A classical example is x^(1/3). School children (and even teachers)
believe that it is real and negative for x<0. Maxima uses a more
consistent definition - a branch cut along the negative real half-axis,
with the additional rule that when we are exactly on the cut, the value
from its upper side is used. So, for x<0 the result is complex. I'd say
that here (as very often) Maxima is right, and the school education is
wrong.
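
To illustrate (again only a sketch; output formatting may differ between
versions), rectform shows the principal value that this convention gives
for the cube root of -8 - the point on the upper side of the cut, not the
real root -2:

    (%i1) rectform((-8)^(1/3));
    (%o1)                sqrt(3) %i + 1
    (%i2) float(%);
    (%o2)          1.7320508075688772 %i + 1.0

Indeed, 2*exp(%i*%pi/3) = 1 + sqrt(3)*%i, and its cube is -8.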