
[lmi] Dramatic speedup [Was: Inversion of control]


From: Greg Chicares
Subject: [lmi] Dramatic speedup [Was: Inversion of control]
Date: Wed, 19 Sep 2018 14:59:18 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1

On 2018-09-17 14:15, Vadim Zeitlin wrote:
> 
>  In the "functional style" you don't have a virtual print_a_data_row() to
> override in the first place. Instead, you have just a print() function,
> which takes the same parameters as paginate::init() takes now, as well as a
> number of callbacks corresponding to the different virtual methods.
> 
>  So the code would basically look like

[...snip lambdas...]

> Note that this requires passing position and year as parameters to the
> functions and returning the new position from them because they, being
> functions and not objects, can't have any state now. But IMHO this is not a
> drawback in this case as the state is simple and it's easy to manage it
> inside print_with_page_breaks() itself rather than in some object
> (functional programming fans would tell you that it's never a drawback, but
> I'd be content with a less sweeping statement).
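
For concreteness, here is a rough sketch of how I read the callback
style you describe; print_with_page_breaks() and print_a_data_row()
are names taken from your message, while the 'position' struct,
print_page_header(), and all the bodies are invented for
illustration (this is not lmi code):

#include <functional>
#include <iostream>

// Hypothetical pagination state, threaded through the callbacks
// explicitly instead of being held in an object.
struct position {int page; int row;};

// The free function takes the data it needs plus callbacks; each
// callback receives the current position (and year) and returns the
// updated position.
position print_with_page_breaks
    (int                                    total_years
    ,int                                    rows_per_page
    ,std::function<position(position)>      print_page_header
    ,std::function<position(position, int)> print_a_data_row
    )
{
    position pos {0, 0};
    pos = print_page_header(pos);
    for(int year = 0; year < total_years; ++year)
        {
        if(rows_per_page <= pos.row)
            {
            // Start a new page: the header callback is reinvoked.
            pos = print_page_header({pos.page + 1, 0});
            }
        pos = print_a_data_row(pos, year);
        }
    return pos;
}

int main()
{
    auto header = [](position p)
        {std::cout << "== page " << p.page << " ==\n"; return p;};
    auto row    = [](position p, int year)
        {std::cout << "  year " << year << '\n'; ++p.row; return p;};
    print_with_page_breaks(10, 4, header, row);
}
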
Perhaps it's never a drawback in a language designed to be functional.
But consider these startling measurements:

milliseconds to produce a PDF illustration
 - using 'all-samples.cns', as shared and recently discussed here
 - "old" = 2018-07-17
 - "ratio" = HEAD / old
 - I discarded the first timing and report the mean of the next five here

           HEAD     old   ratio
  gpp       467     846    55%
  ipp       785    1527    51%
  naic      579     ---    ---
  finra     718    1309    55%

The changes you applied yesterday made it about twice as fast. How did that happen?

The b2ec7acde commit message gives part of the answer:
    This avoids parsing the same HTML twice, which is relatively
    time-consuming: for the total illustration generation time of ~1000ms,
    parsing HTML took ~200ms before this change and ~120ms after it,
    resulting in 8% speed up.
but the total speedup is far more than eight percent.

My theory is that source code in a functional language deliberately
disregards efficiency, ostensibly doing the same work over and over
each time a function is called with the same arguments, yet compiles
into binaries far more efficient than the source suggests (because of
runtime memoization). Your changes applied yesterday, in moving away
from a functional style, effected such a memoization by hand (in the
HTML parsing noted above, for example).

Thus, storing intermediate results in C++ class members is like thunking.
We have to do it by hand, in the source. But it makes the source easier
to understand, and the binaries faster.
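
To make that concrete, here is a trivial sketch of the by-hand caching
I have in mind, with invented names (illustration_cell, parsed_html)
rather than anything from lmi:

#include <cstddef>
#include <iostream>
#include <optional>
#include <string>

// Stand-in for an expensively computed parse result.
struct parsed_html {std::size_t element_count;};

class illustration_cell
{
  public:
    explicit illustration_cell(std::string html) : html_{std::move(html)} {}

    // The first call parses; later calls return the stored result,
    // so the expensive work is done at most once.
    parsed_html const& parsed() const
        {
        if(!cache_)
            {
            std::cout << "parsing (expensive)\n";
            cache_ = parsed_html{html_.size()}; // pretend this is costly
            }
        return *cache_;
        }

  private:
    std::string html_;
    mutable std::optional<parsed_html> cache_; // memo, filled on demand
};

int main()
{
    illustration_cell cell {"<html><body>...</body></html>"};
    cell.parsed(); // parses
    cell.parsed(); // reuses the cached parse; no second parse
}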


