Re: [lmi] Generating pages with tables in the new PDF generation code


From: Greg Chicares
Subject: Re: [lmi] Generating pages with tables in the new PDF generation code
Date: Sun, 27 Aug 2017 16:13:53 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.1

On 2017-08-27 12:50, Vadim Zeitlin wrote:
> On Sun, 27 Aug 2017 12:23:16 +0000 Greg Chicares <address@hidden> wrote:
> 
> GC> On 2017-08-17 16:13, Vadim Zeitlin wrote:
> GC> [...]
> GC> > get the direct-pdf-gen branch from my GitHub repository (you don't
> GC> > need a GitHub account for this) and build it yourself
> GC> 
> GC> I'm trying to devise a high-level strategy for merging this into lmi master.
> 
>  Sorry, I should have been more clear: this is not the branch you're
> supposed to merge! This branch is based on my local "tt" branch which
> contains both changes not yet merged to master and some that are not ever
> supposed to go into it. If you'd like to have something mergeable, let me
> rebase this branch on master -- and then you would be able to merge it much
> more easily.

OTOH, there's really nothing objectionable about adding another
'config_*.hpp' file. We already have two to support other non-free
compilers that were useful in the past, and even some GNU projects
contain files that exist solely to support non-free compilers.

To take another example, the 'wx_checks.cpp' changes might be
something I could get comfortable with, and using a precompiled
wx library might be an important benefit. But we probably should
spin that off as an independent task.

> GC> I did this:
> GC> 
> GC> mkdir --parents /opt/lmi/pdf/
> GC> cd /opt/lmi/pdf/
> GC> git clone https://github.com/vadz/lmi.git --branch direct-pdf-gen --single-branch
> GC> cd lmi
> 
>  I know you're allergic to branches, but this is really, really not the
> optimal way to do this. Instead of creating a whole new repository, just
> create the branch in your existing git repository using
> 
>       % git fetch https://github.com/vadz/lmi.git direct-pdf-gen
>       % git branch direct-pdf-gen FETCH_HEAD
> 
> (the first command makes the latest commit fetched by it available under
> this special FETCH_HEAD name and the second one just creates a local branch
> with the same name as remote one -- this is not, strictly speaking,
> necessary, but as FETCH_HEAD is implicitly updated by any "git pull" you
> do, it can be confusing not to give it a more permanent name).
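For reference, the two commands quoted above can be exercised end-to-end against a local stand-in repository. Everything below except the branch name direct-pdf-gen is invented for the sketch: the scratch paths, file names, and commit messages are illustrative, and a local directory stands in for the GitHub URL.

```shell
# Exercise "git fetch <url> <branch>" + "git branch <name> FETCH_HEAD"
# against a local stand-in repository.
set -e
work=$(mktemp -d)
cd "$work"

# Stand-in for the remote repository with a direct-pdf-gen branch.
git init -q upstream
(
  cd upstream
  git config user.name demo
  git config user.email demo@example.invalid
  trunk=$(git symbolic-ref --short HEAD)  # default branch name varies by git version
  echo base > base.txt && git add base.txt && git commit -q -m "initial"
  git checkout -q -b direct-pdf-gen
  echo pdf > pdf.txt && git add pdf.txt && git commit -q -m "pdf work"
  git checkout -q "$trunk"
)

# Stand-in for the existing local working repository.
git clone -q upstream lmi
cd lmi

# First command: records the fetched branch tip under the transient
# name FETCH_HEAD.
git fetch -q ../upstream direct-pdf-gen

# Second command: pins that tip down as a local branch, so the next
# "git pull" (which rewrites FETCH_HEAD) cannot lose track of it.
git branch direct-pdf-gen FETCH_HEAD

tip=$(git rev-parse direct-pdf-gen)
upstream_tip=$(git -C ../upstream rev-parse direct-pdf-gen)
```

After this, `tip` and `upstream_tip` name the same commit: the local branch is an exact, durable alias for what was fetched.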

Whether it's optimal depends on the goal. If the goal is simply
to get this merged, then an automated merge command is optimal.

My goal is to conserve stability while evolving, by reviewing
and understanding each change before committing it. The actual
act of committing is trivially mechanical, and might take one
percent of the total time, so it doesn't need to be optimized.
What I do want to optimize is the thinking process, and in my
experience the best way is to break a large whole into more
tractable pieces that I can integrate one at a time with
comprehensive testing.

>  And once you have this branch locally, you can just do
> 
>       % git merge direct-pdf-gen
> 
> which is what I would strongly prefer, because it would keep the history of
> my commits, which can be very useful when returning to this code later. But
> this would create a real merge and if you absolutely want to avoid this
> (although I still believe that this wouldn't have any drawbacks whatsoever
> for you), you can do
> 
>       % git merge --squash direct-pdf-gen
>       % git commit
> 
> which would merge all my changes as a single commit, squashing them. Again,
> this loses history and would be unfortunate IMO, but if you don't want to
> make a real merge, squash-merge like above is still much better than
> applying the changes manually.
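To make the trade-off concrete, here is the same choice run both ways on a throwaway repository (the branch name, files, and commit messages are invented for the sketch): a true merge keeps every individual commit reachable from the trunk, while the squash variant collapses them into a single flat commit with no ancestry link back to the branch.

```shell
# Contrast a true merge with a squash merge on a scratch repository.
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo && cd repo
git config user.name demo
git config user.email demo@example.invalid
trunk=$(git symbolic-ref --short HEAD)  # default branch name varies by git version

echo a > f1 && git add f1 && git commit -q -m "initial"
git checkout -q -b feature
echo b > f2 && git add f2 && git commit -q -m "feature: step 1"
echo c > f3 && git add f3 && git commit -q -m "feature: step 2"
git checkout -q "$trunk"

# True merge: history keeps both feature commits plus one merge commit.
git merge -q --no-ff -m "merge feature" feature
true_count=$(git rev-list --count HEAD)     # initial + 2 steps + merge = 4

# Squash merge, replayed from the pre-merge state: the feature work
# arrives as one flat commit, with no record of the individual steps.
git reset -q --hard HEAD~1
git merge -q --squash feature >/dev/null
git commit -q -m "squashed feature"
squash_count=$(git rev-list --count HEAD)   # initial + 1 squashed commit = 2
```

The commit counts tell the story: four reachable commits after the true merge, two after the squash. That difference is exactly the history that "git merge --squash" discards.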

Eradicating history is not a goal: it's an anti-goal, which we should
avoid if we can do so without undue inconvenience. For example, if you
can write a simple command that would import all of the new files at
once into lmi HEAD, preserving their history, that's fine with me
as long as
 - this doesn't break anything in any way, and
 - I don't have to spend hours learning about git branches.

OTOH, 64 hunks changed in 'group_quote_pdf_gen_wx.cpp', and we can't
integrate that into HEAD without careful review and testing, lest we
destabilize the production system. And it would take considerable
analysis for me to get comfortable with the change to 'ledger.hpp':
it looks pretty radical on the face of it, though maybe it's just the
natural way to get rid of 'ledger_xml_io.cpp'--I can't tell at a
quick glance. The changes to existing production code are the most
worrisome because they may destabilize features other than
PDF generation; that's why I'd like to treat them separately.

At any rate, the two strategies I see are:

(1) Let all PDF development continue to its conclusion in your
personal repository, then test that thoroughly, and finally merge
it all at once; or

(2) Incrementally adopt changes from your personal repository into
lmi HEAD, proceeding one step at a time until no step remains.

Of those two, isn't the second clearly preferable?

>  But to return to the main question, if you'd like to test merging the new
> code right now, please let me know and I'll rebase my branch on master. My
> own plan was to do it later, when I consider it to be really ready for the
> merge -- which is not the case yet.

I'm in no hurry to merge the whole thing. You've already provided
a binary for us to test, and we'll certainly have observations to
share along the way. Meanwhile, I don't need to build it myself:
there's no point in that, because I'm sure it will build.

If we adopt strategy (2) above, then there's a lot of code to
integrate, and it's best for me to start now. Or do you have a
different strategy to recommend that doesn't require changing
to a bazaar philosophy?


