gcl-devel

Re: +-Inf and NaN


From: Raymond Toy
Subject: Re: +-Inf and NaN
Date: Wed, 21 Feb 2024 15:14:34 -0800



On Wed, Feb 21, 2024 at 2:13 PM Richard Fateman <fateman@gmail.com> wrote:

> One rationale for NaNs is based on the assumption that you might have
> pipelined/vector/etc. computers where an "interrupt" is impossible -- it may
> happen at a time and place that no longer exists. So the NaN is carried along
> and, as Stavros says, it colors all future results as NaNs. Since the NaN-ness
> of the data is encoded in the exponent, the fraction part can be a payload of
> some sort explaining what happened, if that is possible -- e.g. the program
> counter at the time...
I don't think anyone does that. And AFAIK, all CPUs except x87 and Alpha had precise traps, so you knew exactly which instruction caused the interrupt. IIRC, for x87 the trap happened at the NEXT FPU instruction, but for x86 SSE and friends the traps happen at the instruction that caused them.
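[For concreteness, here is a minimal sketch (my illustration, not anything from the thread) of what a payload-carrying NaN looks like at the bit level, assuming IEEE 754 binary64: exponent field all ones, bit 51 as the quiet bit, and the remaining 51 fraction bits free to carry a diagnostic value. Note that IEEE 754-2008 only recommends, and does not require, that operations propagate an input NaN's payload; x86 does so in practice.

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build a quiet NaN whose low fraction bits carry `payload` (a hypothetical
   diagnostic value, e.g. an error code or a truncated program counter). */
static double nan_with_payload(uint64_t payload) {
    uint64_t bits = 0x7FF8000000000000ULL             /* exponent all ones + quiet bit */
                  | (payload & 0x0007FFFFFFFFFFFFULL); /* low 51 fraction bits */
    double d;
    memcpy(&d, &bits, sizeof d);
    return d;
}

/* Read the payload back out of a NaN. */
static uint64_t nan_payload(double d) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    return bits & 0x0007FFFFFFFFFFFFULL;
}

int main(void) {
    double x = nan_with_payload(0xBEEF);
    double y = x + 42.0;               /* the NaN "colors" the result */
    printf("isnan(y)=%d payload=%#llx\n", isnan(y),
           (unsigned long long)nan_payload(y));
    return 0;
}
]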

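[On the trap side, a short sketch (again mine, and glibc-specific) of the SSE behavior described above: once the invalid-operation trap is unmasked with feenableexcept(), the SIGFPE is delivered precisely at the faulting instruction instead of a NaN quietly flowing onward.

#define _GNU_SOURCE
#include <fenv.h>    /* feenableexcept() is a glibc extension */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void on_fpe(int sig) {
    (void)sig;
    /* With SSE math, we arrive here at the 0.0/0.0 below, not later. */
    fprintf(stderr, "SIGFPE at the faulting instruction\n");
    _exit(1);
}

int main(void) {
    signal(SIGFPE, on_fpe);
    feenableexcept(FE_INVALID);        /* unmask the invalid-operation trap */

    volatile double zero = 0.0;
    volatile double r = zero / zero;   /* raises FE_INVALID -> SIGFPE here */
    printf("not reached: %g\n", r);    /* with traps masked, r is just a NaN */
    return 0;
}
]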
> On another topic, it was my understanding that stuff like TensorFlow CPU float
> precision was way, way lower than IEEE -- like a 10-bit mantissa. I don't know
> about the various GPU instruction sets, but it seems like way overkill for
> graphics to have double (or even full single) precision.

Probably half precision, with 5 bits for the exponent and 11 bits for the mantissa (with a hidden bit). See https://en.wikipedia.org/wiki/Half-precision_floating-point_format
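[To make the layout concrete, here is a small decoder (my sketch, following the Wikipedia page above, not code from the thread) for the binary16 format: 1 sign bit, 5 exponent bits with bias 15, and 10 stored fraction bits, so 11 significand bits once the hidden bit is counted.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Convert an IEEE 754 binary16 bit pattern to double by hand.
   Compile with: cc half.c -lm */
static double half_to_double(uint16_t h) {
    int sign = (h >> 15) & 0x1;
    int exp  = (h >> 10) & 0x1F;   /* 5-bit exponent, bias 15 */
    int frac = h & 0x3FF;          /* 10 stored fraction bits */
    double s = sign ? -1.0 : 1.0;

    if (exp == 0)            /* zero or subnormal: no hidden bit */
        return s * ldexp((double)frac, -24);         /* frac * 2^(1-15-10) */
    if (exp == 0x1F)         /* infinity or NaN */
        return frac ? NAN : s * INFINITY;
    /* normal: the hidden bit supplies the 11th significand bit */
    return s * ldexp((double)(0x400 | frac), exp - 25);  /* 2^(exp-15-10) */
}

int main(void) {
    printf("%g\n", half_to_double(0x3C00));  /* 1.0 */
    printf("%g\n", half_to_double(0x7BFF));  /* 65504, largest finite half */
    printf("%g\n", half_to_double(0xC000));  /* -2.0 */
    return 0;
}
]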

> But who uses GPUs for graphics?

My coworkers across the hall worked on WebGPU, so yeah, they spent a lot of time using the GPU to do graphics. :-)

--
Ray
