Re: Octave "precision"/sig. digits ?
From: Ted Harding
Subject: Re: Octave "precision"/sig. digits ?
Date: Sun, 13 Jul 1997 15:44:38 +0100 (GMT+0100)
( Re Message From: Daniel Friedman )
>
> Summary: Is there a way to get Octave to regard x, where 0<x<eps,
> as > 0 (where eps is the Octave built-in variable)?
Octave does consider that eps > 0 (indeed, quite high powers of eps are
still > 0: e.g. eps.^20 = 8.4880e-314, but eps.^21 = 0, since it underflows).
The point of eps is that it is the smallest positive number such that
(1 + eps) has a different machine floating-point representation from (1).
Any engine (not just Octave) that computes directly with the machine
representation of numbers will find that (1 +/- x) = 1 whenever abs(x) is
small enough (below about eps/2, because of rounding). "bc", however, uses
its own internal number representation and can therefore avoid this
identification.
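The effect is easy to reproduce in any IEEE double-precision environment;
here is a quick sketch in Python (whose floats are the same doubles Octave
uses):

```python
import sys

eps = sys.float_info.epsilon   # 2.220446049250313e-16, same as Octave's eps

# Adding eps to 1 produces a distinct double...
print(1.0 + eps > 1.0)         # True
# ...but anything at or below eps/2 rounds straight back to 1
# (round-to-nearest-even makes the exact threshold eps/2, not eps).
print(1.0 + eps / 2 == 1.0)    # True
print(1.0 + eps / 4 == 1.0)    # True
```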
> I am trying to compute something which, by its nature, it seems
> Octave has a little trouble with. In particular, I wish to compute
>
>            M
>           ---
>           \
>  f(M,p) = /   bincoeff(M,i) * ((-1)^(i+1)) / (1-p^i)
>           ---
>           i=1
>
> where M is a natural number and 0<p<0.25 .
Here, therefore, (1 - p^i) will become an exact 1 once p^i is small enough
(for p = 0.1, as soon as i >= 17). There is nothing you can do to prevent
this, because it occurs in the floating-point processor itself: it has
nothing to do with Octave as such. The error is of the order of p^i.
However, "bincoeff(M,i)" can get very large:
bincoeff(50, 18) = 1.805e+13
bincoeff(50, 25) = 1.264e+14
and so the accumulation of small errors multiplied by large numbers can
make a substantial difference (though I am a bit surprised at how large
the differences you in fact get are: I think it would be worth examining
the flow of errors in detail, in this case).
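To see how large those intermediate quantities get, here is a sketch in
Python (doubles, like Octave) of the original alternating sum for M = 50,
p = 0.1; the helper name is mine, not from the original post. The partial
sums swing up past 1e13 while the true answer is about 2.44, so even a
relative rounding error of 1e-16 in one large term already costs around
1e-3 in the final result:

```python
from math import comb  # exact binomial coefficients (Python 3.8+)

def f_naive(M, p):
    """Sum the formula exactly as written, in double precision,
    tracking the largest partial sum encountered along the way."""
    total = 0.0
    largest = 0.0
    for i in range(1, M + 1):
        total += comb(M, i) * (-1.0) ** (i + 1) / (1.0 - p ** i)
        largest = max(largest, abs(total))
    return total, largest

value, largest = f_naive(50, 0.1)
print(value)           # an O(1) number, polluted by cancellation
print(largest > 1e13)  # True: partial sums reach roughly 6e13
```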
> For example, Octave tells me f(25, 0.1) = 2.177866. Good.
> But, Octave gives me this:
>
> f(47, 0.1) = 2.419922 f(48, 0.1) = 1.427734
> f(49, 0.1) = 4.437500 f(50, 0.1) = -2.554688
>
> Now p^i is particularly small as i grows, so I suspect[ed] that Octave
> was just calling it 0 (zero) once i became sufficiently large.
See above. It's not that p^i = 0, but that (1 - p^i) = 1.
> [ See:
> octave:36> 1-(0.1 ^ 16) ans = 1.00000000000000e-00
> octave:37> 1-(0.1 ^ 17) ans = 1.00000000000000e+00
>
> Now tell me why the answers for 1-(0.1 ^ 16) and 1-(0.1 ^ 17) are
> slightly different? ]
Because 0.1^16 (= 1e-16), though smaller than eps itself, is still large
enough that (1 - 0.1^16) rounds to the nearest double below 1,
while 0.1^17 (= 1e-17) is so small that (1 - 0.1^17) rounds back to an exact 1.
See:
1 - (1-(0.1 ^ 16)) = 1.11022e-16
1 - (1-(0.1 ^ 17)) = 0.00000e+00
[also see the remark preceding program f3(M,p) below].
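The two cut-offs can be checked directly; this is a small Python sketch
(same IEEE doubles), where 2**-53 is eps/2, the gap between 1 and its
nearest neighbour below:

```python
# 1e-16 is below eps (2.22e-16) but above the rounding threshold,
# so 1 - 1e-16 lands on the nearest double below 1, which is 1 - 2**-53:
print(1.0 - (1.0 - 1e-16))   # 1.1102230246251565e-16, i.e. 2**-53
# 1e-17 is below the threshold, so the subtraction is absorbed entirely:
print(1.0 - (1.0 - 1e-17))   # 0.0
# Note the ~11% relative error: we subtracted 1e-16 but "lost" 1.11e-16.
```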
> Hence I performed the same calculation using the "arbitrary precision
> calculator", 'bc' (version 1.03 (Nov 2, 1994)).
>
> f(47, 0.1) = 2.420551 f(48, 0.1) = 2.428558
> f(49, 0.1) = 2.436431 f(50, 0.1) = 2.444177
>
> I'm not so concerned that the Octave results, when correct, differ
> slightly from the bc results. I *am* concerned that, for some inputs,
> Octave doesn't give the correct answer.
>
> ---> What am I doing wrong/what should I do?
In all computations of this kind, when using a fixed-precision engine,
you have to be alert for this kind of thing (for instance, it can also
happen with things like Bessel functions of imaginary order (i.e. "of the
second kind") if you try to compute them using the infinite series. The
series method works fine for "first kind" BFs (of real order), but massive
numerical errors arise if you naively sum the infinite series for "second kind"
BFs).
In such cases, the original mathematical representation has to be re-cast
for the purposes of computation into a form where, when the program is
run, the rounding errors due to finite machine precision do not get
magnified and accumulate so as to give nonsense results. Though you don't
give your Octave program, I suspect you simply "programmed the formula",
thereby exposing your computation to the accumulation of errors described
earlier. What you should do, therefore, is recast the problem.
With a bit of algebra (drop a line privately if you want the details),
f(M,p) = 1 + SUM[r=1:inf] ( 1 - (1 - p^r)^M )
and this (pretty clearly) is a convergent series. A naive Octave program
to compute this would be:
function [f,r] = f2(M,p,tol)
  if nargin < 3, tol = 1e-8; endif
  f = 1; r = 1; delta = 1;
  while (delta > tol)
    u = 1 - (1 - p.^r).^M;
    delta = u;
    f = f + u; r = r + 1;
  endwhile
  r = r - 1;
endfunction
When you run this, you get
octave-2:2> [f,r] = f2(25,0.1)
f = 2.1779 r = 10
octave-2:3> [f,r] = f2(47,0.1)
f = 2.4206 r = 10
octave-2:4> [f,r] = f2(48,0.1)
f = 2.4286 r = 10
octave-2:5> [f,r] = f2(49,0.1)
f = 2.4364 r = 10
octave-2:6> [f,r] = f2(50,0.1)
f = 2.4442 r = 10
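As a cross-check (mine, not from the original post), the same series is
easy to verify in Python; the values agree with the "bc" figures to well
within the tolerance used:

```python
def f2(M, p, tol=1e-8):
    """Sum f(M,p) = 1 + SUM_{r>=1} (1 - (1 - p^r)^M),
    stopping once a term drops to the tolerance."""
    f, r = 1.0, 1
    while True:
        u = 1.0 - (1.0 - p ** r) ** M
        f += u
        if u <= tol:
            break
        r += 1
    return f

for M in (25, 47, 48, 49, 50):
    print(M, round(f2(M, 0.1), 4))
```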
which agrees with your results from "bc". Less naively, I would want to be
more careful about the "u = 1 - (1 - p.^r).^M" statement, since it is
itself subject to the "1 - x = 1" phenomenon when x < eps: for large M this
could matter. Something like the following could be envisaged, though it
would need a more detailed consideration of what errors might be present
in the final result. (I know this is not the best way to account for all
possible cases; remember that, even if x > eps, (1 - x) may carry quite a
large relative error: compare the result for 1 - (1 - 0.1^16) above,
which is about 11% out.)
function [f,r] = f3(M,p,tol)
  if nargin < 3, tol = 1e-8; endif
  f = 1; r = 1; delta = 1;
  while (delta > tol)
    p0 = p.^r;
    if (p0 < 1000*eps)
      u = M*p0;              # first-order approximation to 1 - (1-p0)^M
    else
      u = 1 - (1 - p0).^M;
    endif
    delta = u;
    f = f + u; r = r + 1;
  endwhile
  r = r - 1;
endfunction
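For what it is worth, an alternative guard (my suggestion, not part of the
original post) is to compute 1 - (1 - x)^M as -expm1(M*log1p(-x)), which
stays accurate for every x in (0,1) without choosing a cross-over
threshold; sketched here in Python:

```python
from math import expm1, log1p

def term(M, x):
    """1 - (1 - x)**M, computed without the '1 - x = 1' absorption.

    log1p(-x) evaluates log(1-x) accurately for tiny x, and expm1
    evaluates exp(y)-1 accurately for tiny y, so no cancellation occurs."""
    return -expm1(M * log1p(-x))

def f3(M, p, tol=1e-8):
    f, r = 1.0, 1
    while True:
        u = term(M, p ** r)
        f += u
        if u <= tol:
            break
        r += 1
    return f

print(f3(50, 0.1))      # close to the bc value 2.444177
print(term(50, 1e-20))  # about 5e-19, where the naive form gives 0.0
```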
Hoping this is of some use,
Ted. (address@hidden)