help-octave

Re: Uniform partition of an interval


From: Dirk Laurie
Subject: Re: Uniform partition of an interval
Date: Wed, 31 May 2000 12:07:57 +0200

J.C. Gonzalez writes:
> Dirk Laurie wrote:
> > and people will write things like 'y=1.8:0.05:1.9'.  We should agree what
> > that should do.  Intuitively one feels that a:h:b with h>0 should be
> > equivalent to:
> >   y=[]; x=a;
> >   while x<=b, y=[y x]; x += h; end
> > And indeed, if I run the above in Octave on my i686 machine, I get
> > [1.8000 1.8500].  Yet it is unsatisfactory, because with pencil and
> > paper, or on a decimal machine, or on some binary machines, I would have
> > got [1.8000 1.8500 1.9000].
> > 
> > One can get round the problem by saying it should be equivalent to:
> >   r=(b-a)/h; n=round(r);
> >   if h*abs(n-r)>max(a,b)*eps, n=floor(r); end
> >   y=a+h*(0:n);
> > 
> > But doing so would treat one case of a pervasive problem: the
> > non-intuitiveness of floating-point comparison.  A good cure should work
> > in other places too.
> > 
> > I think Octave should borrow an idea from the grandfather of interactive
> > matrix languages, namely APL.  This language has a built-in variable which
> > in Octave we would call 'comparison_tolerance'.  Then we could write:
> > 
> 
> I agree, but ... wouldn't it be better to use "linspace"?
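
As an aside, here is a small sketch of the difference between the two
constructions (the result of the colon form may differ between machines, as
the quoted message notes; linspace is the standard Octave function Gonzalez
refers to):

  % colon form: if it behaves like the while loop quoted above, the step
  % 0.05 is added repeatedly; 0.05 has no exact binary representation, so
  % the running value can overshoot 1.9 and the endpoint is dropped
  y1 = 1.8:0.05:1.9             % may give [1.8000 1.8500]

  % linspace form: the endpoints and the number of points are given
  % explicitly, so the question of how many steps fit never arises
  y2 = linspace (1.8, 1.9, 3)   % gives [1.8000 1.8500 1.9000]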

My point is *not* 
"what is the best way to make 1.8:0.05:1.9 deliver [1.8 1.85 1.9]"

My point is: Octave should have a technique that allows all floating-point
tests to be tolerant.  Then the previous question does not even arise.
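
A minimal sketch of what such a tolerant test could look like, written as an
ordinary Octave function.  The name 'comparison_tolerance' is the one
suggested in the quoted message, not an existing Octave built-in, and the
relative-tolerance form and the value used here are only illustrative:

  % comparison_tolerance is the variable name proposed above; since it is
  % not a real built-in, it is modelled here as a global
  global comparison_tolerance
  comparison_tolerance = 1e-10;

  function t = tol_le (x, y)
    % tolerant "x <= y": allow x to exceed y by a small relative amount
    global comparison_tolerance
    t = x <= y + comparison_tolerance * max (abs (x), abs (y));
  endfunction

  % With such a test, the while-loop form of a:h:b quoted above keeps the
  % endpoint even though the accumulated sum slightly overshoots 1.9:
  y = []; x = 1.8;
  while tol_le (x, 1.9), y = [y x]; x += 0.05; end
  y                             % prints [1.8000 1.8500 1.9000]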

Dirk





