
Re: [lmi] rate_table_tool: merge a whole directory


From: Greg Chicares
Subject: Re: [lmi] rate_table_tool: merge a whole directory
Date: Wed, 4 Jan 2017 00:22:47 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.4.0

On 2016-11-23 01:29, Greg Chicares wrote:
> Proprietary rate tables are distributed along with our free software.
> These tables can be stored in two interconvertible formats:
>  - binary: not human-readable, which is good for distribution to
>    outsiders who should be discouraged from extracting proprietary
>    data, but inconvenient for internal maintenance; and
>  - human-readable text, which is better for maintenance but not for
>    external distribution.
> Our plan is to store tables as text in a proprietary git database,
> and use 'rate_table_tool' to convert them to binary for distribution.
> 
> For this purpose, it seems like a good idea to make 'rate_table_tool'
> accept '--merge=/some/directory'.

It is even better (commit 4cc9f6659215871b12e821e567be9bac3cbf8fa8)
to make that command add tables in sorted order. We have about 600
tables in a proprietary git repository, which are combined into a
six-megabyte binary database. If Kim and I each regenerate that
blob after committing a tiny change, then our blobs should now be
identical, and we can confirm that by comparing md5sums.
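The determinism this buys can be illustrated with ordinary files (a sketch only: 'rate_table_tool --merge' is the real mechanism, and the file names below are invented):

```shell
# Merge a directory of per-table text files into one blob, in sorted
# filename order, so every machine produces byte-identical output.
mkdir -p tables
printf 'q1\n' > tables/t825.txt
printf 'q2\n' > tables/t826.txt
cat $(ls tables/*.txt | sort) > blob.dat
md5sum blob.dat   # everyone who runs this sees the same sum
```

Because the inputs are concatenated in a fixed order, two people who regenerate the blob from the same commit can confirm agreement by comparing a single checksum.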

This is important because we're aware of numerous flaws in historical
tables, which we've forborne to fix because any change was cumbersome
and difficult to verify, and required sharing multiple-megabyte email
attachments. Now a small change requires only a small git bundle and
a couple of md5sums, so piecemeal corrections are safe and easy.
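The bundle-plus-checksum exchange can be sketched like this (the repository and bundle names are illustrative, not lmi's actual layout):

```shell
# Sender: bundle just the newest commit and note its checksum.
cd proprietary-tables
git bundle create fix-table-910.bundle HEAD~1..HEAD
md5sum fix-table-910.bundle        # quote this sum in the email
# Recipient: confirm the sum matches, then verify and apply:
#   git bundle verify fix-table-910.bundle
#   git pull fix-table-910.bundle HEAD
```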

BTW, we spent a good deal of effort cleansing old proprietary tables
so that they all pass our recently beefed-up validation, but never
discussed whether tables downloaded from the SOA validate. They're
all okay for our purposes:

- The original 'qx_cso' validates perfectly. Evidently great care
  was taken in its preparation. Only four people contributed to it,
  and two of them include their CompuServe IDs (roughly, a US
  parallel of Minitel)--so this work would have been done in the
  1990s, and presumably they all used the same early version of the
  SOA program.

- 'qx_ann' is less perfect, e.g.:
      Verification failed for table #819: After loading and saving
        the original table binary contents differed.
      Table #910 specifies 5 decimals, but 6 were necessary
  However, we use only tables 825 and 826, which do validate.

- Validating 'qx_ins' also produces some "binary contents differed"
  diagnostics, but we use it only for the 'sample' product, which
  is not used in production.
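The "binary contents differed" diagnostic is a round-trip check: load the binary table, save it again, and compare the bytes. In miniature, with base64 standing in for the tool's text format (a sketch; this is not rate_table_tool's actual encoding):

```shell
printf '\000\001\002' > original.bin      # stand-in for a binary rate table
base64 original.bin > table.txt           # binary -> editable text
base64 -d table.txt > roundtrip.bin       # text -> binary again
cmp -s original.bin roundtrip.bin && echo 'round trip is lossless'
```

A table fails this check when some detail of the binary form (such as the "5 decimals, but 6 were necessary" case above) cannot survive the text representation and back.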



