From: Greg Chicares
Subject: Re: [lmi] Continuing deboostification with removing dependency on Boost.Regex
Date: Sun, 30 May 2021 18:04:16 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.9.0

On 5/30/21 1:08 PM, Vadim Zeitlin wrote:
> On Sun, 30 May 2021 11:43:50 +0000 Greg Chicares <gchicares@sbcglobal.net>
> wrote:
[...]
> GC> +concinnity_check_files := $(addsuffix -concinnity-check,$(prefascicle_dir)/.)
> GC>
> GC> +.PHONY: %-concinnity-check
> GC> +%-concinnity-check:
> GC> + @-$(PERFORM) $(TEST_CODING_RULES) $*
> GC>
> GC> .PHONY: check_concinnity
> GC> -check_concinnity: source_clean custom_tools
> GC> +check_concinnity: source_clean custom_tools $(concinnity_check_files)
> GC>
> GC> ...thereby using 'make' to take care of parallelism, without
> GC> making 'test_coding_rules$(EXEEXT)' multithreaded?
[...]
> Yes, this is indeed a possible solution and I've actually thought about
> doing something like this myself immediately _after_ posting my previous
> message, but I wouldn't say it's perfect.
>
> One reason for it is that launching a new copy of the program for each
> file seems inefficient, and it's definitely going to be noticeably slower
> when using Wine (whose process startup overhead is not negligible at all);
> it might even be noticeable when using native processes: even if launching
> them is fast, doing it half a thousand times more than necessary still
> seems wasteful.
Okay, 500 processes will cost (1+θ) times as much as one process with
500 threads, and I'm inclined to think that θ is greater than zero;
but by how much?
Putting msw aside, how big is θ if we fork() a process 500 times?
[I don't think we can answer such questions without measuring.
I ask simply because the questions seem intriguing.]
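For a rough native number, a throwaway shell loop might suffice (a sketch,
not a benchmark: /bin/true merely stands in for a small program, so this
measures little beyond fork() and exec() cost, plus the loop's own overhead):

  # Time 500 trivial process launches (fork+exec of /bin/true).
  time sh -c 'i=0; while [ "$i" -lt 500 ]; do /bin/true; i=$((i+1)); done'

If that completes in a small fraction of a second, θ is probably negligible
natively; rerunning it under wine would show how much worse the msw case is.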
> The other reason is that I'd like to be able to run test_coding_rules
> quickly by hand, or from CI scripts, too, and doing something in the
> makefile is not going to help with this at all.
Let ζ be the overhead added by 'make', and η be the savings
realized by using per-file targets as above. Is ζ - η generally
positive, or negative? [Just another "wonder question".]
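Both quantities seem measurable as well, e.g. (illustrative commands only:
'*.?pp' stands for whatever file set we actually check, and the second line
assumes test_coding_rules accepts a list of files):

  # Compare make-mediated parallel checking with one serial process.
  time make --jobs="$(nproc)" check_concinnity
  time ./test_coding_rules *.?pp

That comparison conflates ζ, η, and the parallel speedup, but it would at
least bound the answer.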
> But, as I've also realized after sending the previous message, in
> principle both of these problems could be addressed by using GNU Parallel
> (https://www.gnu.org/software/parallel/), so maybe we should just do this.
GNU Parallel would let us run 500 jobs with as much parallelism
as our hardware allows; but wouldn't it still launch 500 processes?
And wouldn't one multithreaded process be faster?
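[Perhaps not 500: if I read its man page correctly, parallel's -X option
packs as many arguments as fit into each invocation, so it would launch
roughly one process per job slot rather than one per file. A sketch, with
an illustrative file list:

  ls *.?pp | parallel -X ./test_coding_rules

Whether that beats one multithreaded process is, again, a question that
only measurement can answer.]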