
Re: [lmi] [PATCH] Add a cast to fix compilation in 64 bits


From: Greg Chicares
Subject: Re: [lmi] [PATCH] Add a cast to fix compilation in 64 bits
Date: Mon, 13 Mar 2017 23:12:07 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.6.0

On 2017-03-13 13:39, Vadim Zeitlin wrote:
> On Mon, 13 Mar 2017 04:36:10 +0000 Greg Chicares <address@hidden> wrote:

This crossed in the mail with my reply to myself. I had made tentative
modifications in my local tree, and my extensive testing of:
  Disable gcc -Wcast-qual warning for wx in the code, not makefile
covered this change as well, so I went ahead and committed it at the
same time. But now let's look back...

> GC> Okay, now that it compiles, how should we rewrite it for clarity?
> GC> 
> GC> Even after all these years, I still think in APL, so my first idea is
> GC>   data ← original_length ↑ shorter_length ↑ data
> GC> thus:
> GC> 
> GC>     ledger_map_t const& l_map_rep = ledger_map_->held();
> 
>  This is minor, but I don't think we need this variable, we could just iterate
> directly over the RHS.

This pattern
  /ledger_map_t.*& l_map_rep = ledger_map_->held/
occurs many times in the same file, in these other functions as well:
  Ledger::SetRunBases()
  Ledger::PlusEq()
  Ledger::SetOneLedgerVariant()
  Ledger::GetMaxLength()
  Ledger::AutoScale()
  Ledger::CalculateCRC()
  Ledger::Spew()
The reason I kept it was to preserve commonality, making it easier to refactor
this entire set of functions someday.

> GC>     using T = decltype(ledger_invariant_->InforceLives)::size_type;
> GC>     T original_size = ledger_invariant_->InforceLives.size();
> GC>     T lapse_year = T(0);
> GC>     for(auto const& i : l_map_rep)
> GC>         {
> GC>         lapse_year = std::max(lapse_year, static_cast<T>(i.second.LapseYear));
> GC>         }
> GC>     if(lapse_year < original_size)
> GC>         {
> GC>         ledger_invariant_->InforceLives.resize(lapse_year);
> GC>         ledger_invariant_->InforceLives.resize(original_size);
> GC>         }
> GC> 
> GC> (I don't see a much less annoying way to get size_type; ptrdiff_t isn't
> GC> guaranteed to work forever, and I don't ever want to have to correct
> GC> this type again.)
> 
>  What about this:
> 
>       auto original_size = ledger_invariant_->InforceLives.size();
>       decltype(original_size) lapse_year{0};

I did consider that, but...

> ? This would require changing the assignment to lapse_year inside the loop
> to either cast lapse_year to double or use "decltype(lapse_year)" instead
> of "T" in the currently written cast,

...it seemed that it would just move complexity around rather than reduce it.
However:

> but I still prefer using "auto var =
> ..." to "using T = decltype(...); T var = ..." if only because it avoids
> repeating "...".

Reexamining what I committed:

    using T = decltype(ledger_invariant_->InforceLives)::size_type;
    T original_length = ledger_invariant_->InforceLives.size();

in that light, I see an obstacle to understanding that I'll emphasize by
blanking out what is not visually similar between the two lines:

          T            ledger_invariant_->InforceLives   size     ;
    T                   ledger_invariant_->InforceLives size  ;

After moving my eyes repeatedly between the (unexpurgated) two lines to
make sure that the long common part in the middle is identical, I develop
a bias that the rest is identical, which is confirmed when I see "size"
twice; but "size" and "size_type" are not the same.

I think this is an improvement:

-    using T = decltype(ledger_invariant_->InforceLives)::size_type;
-    T original_length = ledger_invariant_->InforceLives.size();
+    auto original_length = ledger_invariant_->InforceLives.size();
+    using T = decltype(original_length);

because it's less confusing, shorter, and simpler. The type T we want is
simply the type that size() returns. It's less clear to find the type of
the thing upon which we're going to call size(), and then ask for that
type's ::size_type type-alias.

>  Or perhaps lapse_year should use the "correct" type in the first place,
> i.e. the same as LedgerVariant::LapseYear, i.e. "double". Then the call to
> std::max wouldn't require any casts and we'd just need to cast it once to
> the decltype(original_size) at the end.

Indeed. That suggests a further refactoring, which I'll push presently.

> GC> Is that really likely to be far short of optimally efficient?
> 
>  Frankly, I don't think we care about efficiency at all here. AFAICS the
> typical number of elements in this vector is just "a few" and the maximal
> number is probably not more than a hundred (but if it were a thousand, it
> still wouldn't change anything). So for me what should count most here is
> the clarity of the code.

Yes. The length could plausibly be 121 for arcane problem-domain reasons,
but it's never going to be nearly a thousand.

> GC> resize() doesn't reduce the vector's capacity, so there's no memory
> GC> allocation. Sure, we're destroying the last (original_size-lapse_year)
> GC> elements and default-inserting the same number, but notionally at least
> GC> that's no more complicated than explicitly zeroing the elements.
> 
>  But is this really more clear than using std::fill()? I don't think so,

The technique
  A←(⍴A)↑N↑A
is so obvious in APL that it's an idiom, and std::vector::resize() is just
another way of spelling '↑' (in a single dimension). I guess I'm not going
to understand why it hasn't become idiomatic STL usage.

> e.g. I'd definitely insert a comment saying that the 2 resize() calls above
> are used in order to zero the existing elements and not to resize the
> vector at all.

Okay, done.

> So I'd simply write
> 
>       if(lapse_year < original_size)
>           {
>           std::fill(lives.begin() + lapse_year, lives.end(), 0.0);
>           }
> 
> which is IMHO clear enough

The last time I used std::fill(), I got a segfault.

The first time I wrote this resize...resize code, I got the shortened
length wrong--it was off by one. It didn't crash; it just gave wrong
answers, which regression tests found immediately.

Why use an ill-tempered tool when a robust one is available?

Why operate on ranges when we can operate on the container?



