
Re: [lmi] testing actuarial_table performance


From: Václav Slavík
Subject: Re: [lmi] testing actuarial_table performance
Date: Wed, 30 May 2012 18:17:00 +0200

Hi,

On 30 May 2012, at 12:32, Greg Chicares wrote:
> It's safe to assume that using xml tables will make the guideline-premium
> server much slower. It runs only a single scenario at a time, and performs
> only a small part of the illustration calculations. But that's not a big
> enough concern to obstruct the goal of migrating to xml: that server is
> not currently used in production; and it uses only a few small tables,
> which could be hard-coded if we want (e.g., table_42() in the unit test
> for tables is already hard coded, and adding a few dozen like it wouldn't
> take much space).

If I understand it correctly, the server is a short-lived process, correct? We 
could still cache the data on disk, e.g. as a memory-mapped dump, if 
performance proves to be a problem. Or at least automate the translation of an 
XML actuarial table into corresponding C++ code (a "hard-coded table")... I can 
look into xmlwrapp performance too if you want.

> Physically, 'data_' is a std::vector. Correct me if I'm wrong, but I think
> that *notionally* it's a matrix in the 'e_table_select_and_ultimate' case.
> This is important because, if it's true, then we're caching the entire
> contents of the table: all rows and columns of the notional matrix.

Yes, exactly.

I'll focus on fixing the performance first, then. The two obvious approaches I 
can see are

(a) cache globally (as in this prelim. patch); or

(b) remove the actuarial_table_rates*() functions, replace them with 
actuarial_table method calls, and manage the lifetime of actuarial_table 
instances in the callers

Do you have any preference? The first one is certainly much simpler and 
maximizes "cache hits", but it never frees memory [1] and is somewhat ugly 
compared to (b). I haven't investigated deeply enough to know how doable (b) 
is.
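For clarity, approach (a) might look roughly like this; 'table_data' and every name below are placeholders standing in for whatever actuarial_table actually stores, not the real lmi interfaces:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch of approach (a): a process-wide cache of loaded
// tables, keyed by (filename, table number). Entries live until process
// exit, which is exactly the "never frees memory" drawback noted above.
struct table_data
{
    std::vector<double> rates; // parsed rates; real structure elided
};

typedef std::pair<std::string,int> table_key;

table_data const& cached_table(std::string const& filename, int table_number)
{
    static std::map<table_key,table_data> cache;
    table_key const key(filename, table_number);
    std::map<table_key,table_data>::iterator i = cache.find(key);
    if(cache.end() == i)
        {
        table_data t;
        // ...load and parse the XML table here (elided)...
        i = cache.insert(std::make_pair(key, t)).first;
        }
    return i->second;
}
```

Repeated lookups for the same (filename, table number) pair then return a reference to the same cached object instead of reparsing the XML.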

Regards,
Vaclav

[1] Although this could be addressed with periodic pruning, of course.
