# [Maxima] bfloat precision question

Barton Willis willisb at unk.edu
Sat Jun 20 15:41:24 CDT 2009

As an alternative to abs(((4.0/3.0 rounded) - 1.0)*3.0 - 1.0), you
can use the functions in floatproperties:

(%i48) bigfloat_eps(), fpprec : 42;
(%o48) 1.79366203433576585078237386661109264803874b-43

Also

(%i97) fpprec : 17;
(%o97) 17

(%i98) ?fpprec;
(%o98) 59

I'm not positive, but I think bigfloat_eps() returns the least
positive number x such that 1 + x # 1:

(%i102) rationalize(1 + bigfloat_eps());
(%o102) 288230376151711745/288230376151711744

(%i103) rationalize(1 + bigfloat_eps() * (9/10));
(%o103) 1
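
The analogous threshold can be checked for IEEE double precision (a
sketch in Python rather than Maxima; here 2^-53, half an ulp of 1.0,
plays the role of bigfloat_eps()):

```python
import sys

ulp = sys.float_info.epsilon  # 2**-52, spacing of doubles just above 1.0
half_ulp = ulp / 2            # 2**-53

# Exactly half an ulp is a tie, and round-to-even sends it back to 1.0:
print(1.0 + half_ulp == 1.0)           # True

# Anything strictly above half an ulp rounds up to the next double:
print(1.0 + half_ulp * 1.125 != 1.0)   # True
```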

The number of bits in the fractional part of a bigfloat is ?fpprec - 1,
and bigfloat_eps() is a tad larger than 1/2^?fpprec:

(%i109) is(rationalize(0.5b0^59 < bigfloat_eps()));
(%o109) true
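
The same comparison comes out exactly in double precision, where the
4/3 trick from Kahan's note can be evaluated directly (a Python
sketch, not Maxima):

```python
# Kahan's 4/3 trick in IEEE double precision:
kahan = abs((4.0 / 3.0 - 1.0) * 3.0 - 1.0)
print(kahan == 2.0 ** -52)   # True: exactly one ulp of 1.0

# The half-ulp rounding bound is strictly smaller, mirroring
# the 0.5b0^59 < bigfloat_eps() comparison above:
print(2.0 ** -53 < kahan)    # True
```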

For round to even, bigfloat_eps() might be a tad larger than needed
for y fop z = (y op z)/(1-a), |a| < bigfloat_eps(). But I've never
checked.

If you are trying to prove theorems using floating point numbers, such
things matter, but for most work, I think the different definitions of
the machine epsilon (or unit roundoff) don't matter all that much.  I
have a hunk of code that uses the bounds for the relative difference
between y fop z and y op z hundreds or thousands of times. After a few
hundred overestimations, the bounds are almost always far too big
anyway.
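
For what it's worth, the model y fop z = (y op z)/(1-a) is easy to
spot-check in double precision by solving for a with exact rational
arithmetic (a sketch; 2^-52 is the double-precision stand-in for
bigfloat_eps()):

```python
from fractions import Fraction

# Spot-check  y fop z = (y op z)/(1 - a)  for division in double
# precision; the exact quotient is computed with Fraction.
for y, z in [(1.0, 3.0), (10.0, 7.0), (355.0, 113.0)]:
    computed = y / z                    # rounded: y fop z
    exact = Fraction(y) / Fraction(z)   # exact: y op z
    a = 1 - exact / Fraction(computed)  # solve the model for a
    assert abs(a) < Fraction(1, 2**52)
print("|a| < 2**-52 for all samples")
```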

Barton

-----maxima-bounces at math.utexas.edu wrote: -----

>To: "maxima at math.utexas.edu" <maxima at math.utexas.edu>
>From: Sheldon Newhouse <sen1 at math.msu.edu>
>Sent by: maxima-bounces at math.utexas.edu
>Date: 06/20/2009 12:57PM
>Subject: [Maxima] bfloat precision question
>
>Hello,
> I was looking at some of W. Kahan's papers on round-off errors and
>found the following test for estimation of the roundoff error in a
>computer.
>
>In some notes on his web page entitled
> "OLD Notes on Errors and Equation-Solving"
>
>he states the following
>
>If y fop z denotes the floating point representation of the mathematical
>operation  y op z where
>    op is one of +,-,*,/, then
>
>y fop z = (y op z)/(1-a)
>
>where
>
>abs(a) < abs( ((4.0/3.0 rounded) - 1.0)*3.0 - 1.0 )
>
> There is a difference between the stated estimate for standard double
>precision and the output using 'bfloat'.
>Kahan mentions that his estimate is an over-estimation, so there is no
>contradiction.
>
>My question is whether the computed outcome with 'bfloat' is also a
>correct upper estimate for 'a'.  Note: I did not check the mathematics
>involved. I thought someone (maybe RJF) would know the answer immediately.
>
>Here are some outputs (maxima 5.18.1 with cmucl).
>
>(%i5) fpprec;
>(%o5)                                 16
>(%i6) abs(((4.0/3.0) - 1.0)*3.0 -1.0);
>(%o6)                        2.220446049250313e-16
>
>Here is the computation using 'bfloats'.
>(%i7) abs(((bfloat(4.0)/bfloat(3.0)) - bfloat(1.0))*bfloat(3.0)
>-bfloat(1.0));
>(%o7)                        2.775557561562891b-17
>
>Is this smaller number an accurate estimate ?
>
>More generally,   does the method produce valid estimations for any
>precision?
>
>In particular,  are the following upper bounds valid?
>
>(%i10) fpprec: 32;
>(%o10)                                32
>(%i11) abs(((bfloat(4.0)/bfloat(3.0)) - bfloat(1.0))*bfloat(3.0)
>-bfloat(1.0));
>(%o11)               3.0814879110195773648895647081359b-33
>(%i12) fpprec: 64;
>(%o12)                                64
>(%i13) abs(((bfloat(4.0)/bfloat(3.0)) - bfloat(1.0))*bfloat(3.0)
>-bfloat(1.0));
>(%o13)
>3.798227098303919498989296907824782861688386333447977986511911996b-65
>
>TIA,
> -sen
>
>_______________________________________________
>Maxima mailing list
>Maxima at math.utexas.edu
>http://www.math.utexas.edu/mailman/listinfo/maxima