From: Raymond Toy
Subject: Re: +-Inf and NaN
Date: Wed, 21 Feb 2024 15:14:34 -0800
One rationale for NaNs is based on the assumption that you might have pipelined/vector/etc. computers where an "interrupt" is impossible -- it may happen at a time and place that no longer exists. So the NaN is carried along and, as Stavros says, it colors all future results as NaNs. Since the NaN-ness of the data is encoded in the exponent, the fraction part can be a payload of some sort explaining what happened, if that is possible -- e.g., the program counter at the time...
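A minimal sketch of both points -- NaN propagation through arithmetic, and the fraction field carrying a payload -- assuming IEEE binary64 doubles; the payload value here (an error code) is purely illustrative:

```python
import math
import struct

# Any arithmetic involving a NaN produces a NaN, so the
# "poisoned" status is carried along without an interrupt.
x = float("nan")
assert math.isnan(x + 1.0)
assert math.isnan(x * 0.0)

# NaN-ness is encoded in the exponent field (all ones) plus a
# nonzero fraction, so the rest of the 52-bit fraction is free
# to carry a payload.
QNAN_BITS = 0x7FF8000000000000       # sign 0, exponent all ones, quiet bit set
PAYLOAD_MASK = 0x0007FFFFFFFFFFFF    # fraction bits below the quiet bit

def nan_with_payload(payload: int) -> float:
    bits = QNAN_BITS | (payload & PAYLOAD_MASK)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def nan_payload(x: float) -> int:
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return bits & PAYLOAD_MASK

n = nan_with_payload(0x42)           # hypothetical error code
assert math.isnan(n)
assert nan_payload(n) == 0x42
```

Whether the payload survives a chain of operations is hardware- and implementation-dependent; IEEE 754 recommends but does not require that operations propagate payloads.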
On another topic, it was my understanding that the float precision used by stuff like TensorFlow on CPUs was way, way lower than IEEE double -- like a 10-bit mantissa. I don't know about the various GPU instruction sets, but it seems like way overkill for graphics to have double (or even full single) precision. But who uses GPUs for graphics anymore?
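The 10-bit mantissa mentioned above matches IEEE binary16 ("half") precision; a quick sketch of how coarse that is, round-tripping a value through Python's native binary16 support (the struct format "e"):

```python
import struct

def round_to_half(x: float) -> float:
    # binary16 has a 10-bit fraction, vs 23 bits for binary32
    # and 52 for binary64; packing and unpacking rounds x to
    # the nearest representable half-precision value.
    return struct.unpack("<e", struct.pack("<e", x))[0]

# 0.1 rounds to 1638/1024 * 2**-4 = 0.0999755859375 in binary16,
# an error of about 2.4e-5 -- fine for graphics or ML weights,
# far short of double precision.
print(round_to_half(0.1))
```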