Re: [lmi] PCH
From: Greg Chicares
Subject: Re: [lmi] PCH
Date: Thu, 17 Dec 2015 18:04:45 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Icedove/31.3.0
On 2015-12-17 16:48, Vadim Zeitlin wrote:
> On Thu, 17 Dec 2015 05:14:31 +0000 Greg Chicares <address@hidden> wrote:
>
> GC> On 2015-12-17 02:28, Greg Chicares wrote:
> GC> > gcc-3.4.5 was released 2005-11-30, just over ten years ago. It's high
> GC> > time we upgraded. It's also time to forsake the moribund mingw.org and
> GC> > migrate to MinGW-w64
> GC>
> GC> My first impression from looking at our unit tests is that MinGW-w64's
> GC> snprintf() and rounding functions may be of lower quality than
> GC> mingw.org's. We might want to import those parts of the mingw.org
> GC> sources into lmi.
>
> This is really strange, don't both of them just use snprintf() from the
> Microsoft CRT?
No, mingw.org completely replaced it with something better. With whatever
MinGW-w64 uses, lmi's 'numeric_io_test' fails, which strongly suggests
that lmi's numeric results would change and potentially become incorrect.
> GC> The build time changes dramatically. Here's a recent clean build with
> GC> gcc-3.4.5:
> GC> dual E5520, HDD = 2TB WD Caviar Black, 3 Gbps SATA II motherboard
> GC> 32-bit msw-xp guest in qemu-kvm
> GC> 3:19 with '--jobs=16' vs. 17:55 without parallelism
> GC> Contrast that with this clean build I just ran on the same system with
> GC> gcc-4.9.2, using the same command:
> GC> $time make $coefficiency install check_physical_closure >../log 2>&1
> GC> 1078.15s user 808.14s system 193% cpu 16:12.84 total
>
> This is horrible but at this point not using precompiled headers becomes
> just an exercise in self-flagellation. MSVS compiles even faster than g++
> 3.4.5, but I still use PCH with it. And compiling with PCH and g++ 5 takes
> much, much less than 1000 seconds on a much slower system. Can I please
> resurrect the patch adding PCH support to lmi once the build works?
I thought you withdrew it because there was no benefit with g++-3.4.5.
> In addition to the performance gains, it would also save me some time I
> have to spend to carefully manually remove the PCH-related code from any
> new files I add on my working branches before submitting them or, on the
> contrary, to add it to any new files on the trunk before I can build them
> with MSVS.
It would certainly be best to avoid that extra labor.
I want to get everything to build before making a sweeping change like this.
"First make it right, then make it fast." This could come soon after. OTOH,
maybe it's simpler than I had feared...
> So I'd really like to be able to do it, especially as the changes are
> really minor and are basically just this, for all files:
>
> ---------------------------------- >8 --------------------------------------
> diff --git a/emit_ledger.cpp b/emit_ledger.cpp
> index 71d3de1..a784d23 100644
> --- a/emit_ledger.cpp
> +++ b/emit_ledger.cpp
> @@ -21,8 +21,8 @@
>
> // $Id$
>
> +#include LMI_PCH_HEADER
> #ifdef __BORLANDC__
> -# include "pchfile.hpp"
> # pragma hdrstop
> #endif // __BORLANDC__
>
> ---------------------------------- >8 --------------------------------------
Is that exactly right? Today that file contains:
#ifdef __BORLANDC__
# include "pchfile.hpp"
# pragma hdrstop
#endif // __BORLANDC__
and that patch would replace it with:
#include LMI_PCH_HEADER
#ifdef __BORLANDC__
# pragma hdrstop
#endif // __BORLANDC__
But in the extremely likely case that you're not using borland for lmi,
# include "pchfile.hpp"
is harmless anyway, because it's guarded by #ifdef __BORLANDC__.
OTOH, I guess it might be time to drop support for borland, because I
haven't used it in a year or two. So could we do the following instead?
+#include LMI_PCH_HEADER
-#ifdef __BORLANDC__
-# include "pchfile.hpp"
-# pragma hdrstop
-#endif // __BORLANDC__
Then nothing is lost as long as we continue not to use borland, and
I can temporarily insulate myself from any possible damage with
  #define LMI_PCH_HEADER some_empty_header
for the time being. Can you propose an actual macro that works? IIRC,
I tried "#include SOME_MACRO" years ago, and it failed with some
compiler (borland perhaps). Having just learned that #elif is not the
same as #else #if, I no longer trust my preprocessor skills.
I'm sure there's more to it than this (you must be defining the macro
LMI_PCH_HEADER some way, somewhere), but it seems that the patch above
would help you immediately without any risk of harm to me, as long as
we can make "#include LMI_PCH_HEADER" a NOP.
> I will run the tests with PCH to let you know exactly how much
> faster this makes the build go.
Thanks. I may have to stop using my pentium4 museum piece for testing
clean builds if we can't do something about this awful speed issue.