From: Paul Eggert
Subject: Re: Removing some workarounds for big integers
Date: Tue, 4 Aug 2020 23:11:31 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0
On 8/1/20 1:09 PM, Philipp Stephani wrote:
> On Mon, Apr 22, 2019 at 20:45 Paul Eggert <eggert@cs.ucla.edu> wrote:
>> On 4/22/19 9:59 AM, Philipp Stephani wrote:
>>> +#define INTEGER_TO_INT(num, type) \
>>> +  (TYPE_SIGNED (type) \
>>> +   ? ranged_integer_to_int ((num), TYPE_MINIMUM (type), TYPE_MAXIMUM (type)) \
>>> +   : ranged_integer_to_uint ((num), TYPE_MINIMUM (type)))
>>                                       ^^^^^^^^^^^^
>> This should be TYPE_MAXIMUM.
>
> Thanks, fixed.
>
>> More important, INTEGER_TO_INT's type conversion messes up and can
>> cause a signal on picky platforms.
>
> How so?
The type conversion is messed up because the conditional's branches have types intmax_t and uintmax_t, so by the usual arithmetic conversions the whole expression has type uintmax_t on conventional platforms. That means an expression like 'INTEGER_TO_INT (n, t) < 0' will always be false, even if N is negative and T is a signed type.
The "picky platform" is one where conversion from unsigned to signed signals when the value is out of range for the signed type; this behavior is allowed by POSIX and the C standard, and I imagine some debugging implementations might check for it. On these implementations, storing INTEGER_TO_INT's uintmax_t result into a signed destination could raise that signal whenever the original value was negative.
To work around this problem, the macro could take an extra argument naming the lvalue destination and assign to it; that would avoid these problems. However, it'd be more awkward to use. At some point it's easier to avoid the macro and use the underlying functions.