help-octave

Re: If pi is so accurate why it's not producing that accurate result.


From: Nicholas Jankowski
Subject: Re: If pi is so accurate why it's not producing that accurate result.
Date: Fri, 23 Mar 2018 17:57:41 -0400

On Fri, Mar 23, 2018 at 3:57 PM, Nicholas Jankowski <address@hidden> wrote:
On Fri, Mar 23, 2018 at 3:44 PM, Nicholas Jankowski <address@hidden> wrote:
On Fri, Mar 23, 2018 at 3:16 PM, Dildar Sk <address@hidden> wrote:
Sorry,
I know it's very hard to deal with floating point arithmetic.
But I am just asking why tan(pi/2) is not close to infinity. Though Octave's
max is about 10^308, it produces something around 10^17.
And I wonder how pi is so accurate then!!



procrastination tangent. so here are some (incomplete) details I've gleaned over time on floating point representation. someone better at this can correct my mistakes:

so none of our responses really explained why 'infinity' stopped at 1.6331e+016, and 'zero' at ~6e-17. since floating point _can_ represent numbers closer to 0 and Inf, why doesn't it get there?

here's another conversation about realmin, realmax, and eps:
https://blogs.mathworks.com/loren/2009/08/20/precision-and-realmax/

and a good explanation of IEEE Standard 754 floating point numbers:
http://steve.hollasch.net/cgindex/coding/ieeefloat.html


So, floating point can represent very large numbers (10^308), but the relative precision of the floating point math changes with the magnitude of the number you're working with.

epsval = eps(x)

gives you the smallest discernible increment around the value x, i.e., epsval is the smallest number such that x + epsval doesn't round back down to x

eps by itself returns the eps around 1.0, and is what we usually consider machine precision, which is 2^-52 on my machine.

eps(1.0) = 2.2204e-16

log2(eps(1.0)) = -52
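For anyone who wants to cross-check these numbers outside Octave, Python's standard library exposes the same quantity as math.ulp (Python 3.9+). A minimal sketch, not Octave code:

```python
import math

# math.ulp(x) is the spacing between x and the next representable
# double, i.e. the same thing as Octave's eps(x).
print(math.ulp(1.0))           # 2.220446049250313e-16
print(math.log2(math.ulp(1.0)))  # -52.0

# machine epsilon for IEEE 754 double precision is exactly 2^-52
assert math.ulp(1.0) == 2.0 ** -52
```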

as the IEEE explanation above states, this value corresponds to the 64-bit (double precision) floating point layout: 1 bit for the sign, 11 bits for the exponent, leaving 52 bits for the fraction (with an implicit leading 1). So a single bit change in the fraction represents a value change of 2^-52 around 1.0. Larger and smaller values of x will have larger and smaller values of eps, since a change in the exponent shifts the magnitude of the smallest bit. Octave doesn't document this, but the first article above and the Matlab help for eps give eps at different values. some of these, from small to large, are:
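The 1/11/52 split can be seen directly by unpacking a double's bits. A Python sketch (the helper name is my own, not from any library):

```python
import struct

def double_fields(x):
    """Split a double into its IEEE 754 sign / exponent / fraction fields."""
    bits = int.from_bytes(struct.pack('>d', x), 'big')
    sign = bits >> 63                  # 1 bit
    exponent = (bits >> 52) & 0x7FF    # 11 bits, biased by 1023
    fraction = bits & ((1 << 52) - 1)  # 52 bits, implicit leading 1
    return sign, exponent, fraction

# 1.0 = +1.0 * 2^0: sign 0, biased exponent 1023, fraction 0
print(double_fields(1.0))   # (0, 1023, 0)
# flipping the lowest fraction bit of 1.0 changes it by eps(1.0) = 2^-52
```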

>> a = realmin, eps(a), log2(eps(a))
a =   2.2251e-308
ans =   4.9407e-324
ans = -1074

>> a = eps, eps(a), log2(eps(a))
a =   2.2204e-016
ans =   4.9304e-032
ans = -104

>> a=0.1,eps(a),log2(eps(a))
a =  0.10000
ans =   1.3878e-017
ans = -56

>> a=1,eps(a),log2(eps(a))
a =  1
ans =    2.2204e-016
ans = -52

>> a=10^2,eps(a),log2(eps(a))
a =  100
ans =   1.4211e-014
ans = -46

>> a=10^10,eps(a),log2(eps(a))
a =   1.0000e+010
ans =   1.9073e-006
ans = -19

>> a=10^15,eps(a),log2(eps(a))
a =   1.0000e+015
ans =  0.12500
ans = -3

>> a=10^20,eps(a),log2(eps(a))
a =   1.0000e+020
ans =  16384
ans =  14

>> a=realmax,eps(a),log2(eps(a))
a =   1.7977e+308
ans =   1.9958e+292
ans =  971
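The pattern in that list follows a simple rule for normal (non-denormal) doubles: eps(x) = 2^(floor(log2(|x|)) - 52), i.e. one unit in the last of the 52 fraction bits, scaled by x's binary exponent. A Python check of a few of the values above, using math.ulp as a stand-in for Octave's eps:

```python
import math

def eps(x):
    # eps(x) for normal doubles: 2^(binary exponent of x - 52).
    # (math.log2 can misbehave for x just below a power of two;
    # fine for these sample values.)
    return 2.0 ** (math.floor(math.log2(abs(x))) - 52)

for x in (0.1, 1.0, 100.0, 1e10):
    print(x, eps(x), math.ulp(x))   # the last two columns match
```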


NOW, more germane to your question: why does Octave/Matlab stop at ~10^-17 and ~10^17 with pi?

well:

>> a=cos(pi/2),eps(a),log2(eps(a))
a =   6.1230e-017
ans =   1.2326e-032
ans = -106

>> a=tan(pi/2),eps(a),log2(eps(a))
a =   1.6331e+016
ans =  2
ans =  1

on the high side, 1.6331e+16 is close to the largest value whose smallest discernible change is 2^1; on the low side, the cos(pi/2) result is close to eps(1)/4. I believe (although I haven't found a simple reference to confirm, so would love confirmation) that these are the smallest and largest values that can be represented with 52 fractional bits while including the unit value.
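To make the connection concrete: fl(pi/2), the double nearest pi/2, misses the true value by about 6.12e-17, and near the pole tan(x) behaves like 1/(pi/2 - x), so tan(fl(pi/2)) lands at roughly 1/6.12e-17, about 1.63e+16. A Python sketch of the same arithmetic:

```python
import math

x = math.pi / 2   # fl(pi/2), not the exact pi/2
c = math.cos(x)   # ~6.12e-17: roughly the rounding error in fl(pi/2)
t = math.tan(x)   # ~1.63e+16: finite, nowhere near Inf

# near the pole sin(x) rounds to 1.0, so tan(x) ~ 1/cos(x):
# the "infinity" is capped at 1 / (rounding error of fl(pi/2))
print(c, t, 1.0 / c)
```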

nickj

