From: Christopher Baines
Subject: Re: Mid-December update on
Date: Thu, 16 Dec 2021 12:48:34 +0000
User-agent: mu4e 1.6.6; emacs 27.2

zimoun <> writes:

> Hi Chris,
> On Thu, 16 Dec 2021 at 00:20, Christopher Baines <> wrote:
>> zimoun <> writes:
>>> Do you think that Bordeaux could run
>>>    <>
>> The Guix Build Coordinator just builds derivations. I haven't got it to
>> build a manifest before, but that's possible I guess.
> I am not sure I understand, since Cuirass also builds derivations, and
> the purpose of this source-manifest.scm is to let Cuirass ingest all
> the sources,
>     <>
>> I think it's unnecessary though, since I believe derivations for all
>> origins of all packages are already being built, that happens through
>> just asking the coordinator to build derivations for all packages, you
>> don't need to specify "source" derivations separately.
> Your assumption is wrong, IMHO.  We have seen many failures, and in the
> end Ludo wrote this source-manifest.scm precisely to make sure that
> everything gets ingested.

What assumption?

I believe the reason source-manifest.scm is useful with Cuirass is that
it has something to do with Cuirass copying the outputs from those
builds back to where they're served from, and maybe registering GC
roots as well.  I could be wrong, though.

I'm saying that this additional step is unnecessary when using the Guix
Build Coordinator to build packages, since at least with the
configuration for, it'll build the sources, store them,
and serve them without any extra effort.
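As a concrete illustration of checking whether a server stores and serves a
given item: the Guix substitute protocol exposes metadata at
`<server>/<hash>.narinfo`, where the hash is the base32 prefix of the store
file name.  A minimal sketch (the store path below is made up for
illustration, not a real item):

```python
# Sketch: build the narinfo URL used to check whether a substitute server
# has a build for a given /gnu/store item.  The URL convention
# (<base-url>/<hash>.narinfo) is the standard Guix substitute layout; the
# example store path is hypothetical.

def narinfo_url(base_url: str, store_path: str) -> str:
    """Return the narinfo URL for a /gnu/store item."""
    name = store_path.rsplit("/", 1)[-1]   # e.g. "0f3v...-hello-2.10"
    hash_part = name.split("-", 1)[0]      # 32-char base32 hash prefix
    return f"{base_url.rstrip('/')}/{hash_part}.narinfo"

print(narinfo_url("https://bordeaux.guix.gnu.org",
                  "/gnu/store/0f3vkbqfmc9antiqmqfdkmmsbzzw1dy2-hello-2.10"))
# -> https://bordeaux.guix.gnu.org/0f3vkbqfmc9antiqmqfdkmmsbzzw1dy2.narinfo
```

An HTTP 200 on that URL means the server has (or can serve) the item; a 404
means it does not.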

>>> ?  Having redundancy for all origins would avoid breakage.  For
>>> instance, because Berlin was down yesterday morning, “guix pull” was
>>> broken by the missing ’datefuge’ package, which had disappeared upstream.
>> I would hope that has a substitute for that; could you
>> check the derivation against, and see if there's a
>> build?  Use a URL like:
>> There is one issue though, doesn't provide content-addressed
>> files in the same way guix publish does.  I hope to add that through
>> the nar-herder, and once that's added, can hopefully be
>> added to the list of content-addressed mirrors:
>> That would mean that the bytes of a tar archive, for example, would be
>> available by their sha256 hash, not just as a nar.  I'm not sure to
>> what extent this would help, but it's probably useful.
> Thanks for explaining the details.  From a pragmatic point of view as
> an end-user, “guix pull” must Just Work, whatever the plumbing.
> For instance, some of us spend energy “evangelizing” in scientific
> communities about how awesome Guix is.  The next morning, people take a
> look and, bang!, it is broken.  Obviously, “shit happens”(*), and such
> outages are really rare, but by then the damage is done.
> My feeling here is that both build farms work independently instead of
> coordinating the use of resources and exploiting the strength of having two.

I too want to coordinate, although I think that having two independent
build farms is actually good for reliability.
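On the content-addressed files mentioned above, the idea is that a file is
looked up by the hash of its bytes rather than by its name or origin, so any
mirror holding the same bytes can serve it.  A minimal sketch; the URL layout
and mirror hostname here are hypothetical (Guix's real content-addressed
mirrors use their own path scheme and nix-base32 rather than hex):

```python
import hashlib

# Hypothetical content-addressed lookup: address a file by the SHA-256 of
# its contents.  The mirror name and /file/<name>/sha256/<hash> layout are
# illustrative assumptions, not the actual mirror API.

def content_address(base_url: str, filename: str, data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    return f"{base_url.rstrip('/')}/file/{filename}/sha256/{digest}"

url = content_address("https://example-mirror.test",
                      "hello-2.10.tar.gz", b"hello")
print(url)
```

Because the address depends only on the bytes, a tarball that disappears
upstream stays fetchable from any mirror that ever stored it.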

>>> I remember discussions about CDNs [2,3,4,5,6].  I do not know whether
>>> a CDN solves the issue, but from my understanding it would at least
>>> improve delivery performance.  It seems worth a try to me.
>>> 2: <>
>>> 3: 
>>> <>
>>> 4: <>
>>> 5: <>
>>> 6: <>
>> Effectively this is moving towards building a CDN. With the nar-herder,
>> you could deploy reverse proxies (or edge nodes) in various
>> locations. Then the issue just becomes how to have users use the ones
>> that are best for them. This might require doing some fancy stuff with
>> GeoIP based DNS, and somehow sharing TLS certificates between the
>> machines, but I think it's quite feasible.
> Weighing human resources against money, it seems better to me to invest
> human energy in things that do not exist yet and to rely, for now, on
> existing solutions, even if the project has to pay.  For what my
> opinion is worth here.

I'm not against paying for some CDN, although I'd prefer not to pay, or
pay less.
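For the reverse-proxy (edge node) approach mentioned above, a caching proxy
in front of the nar server is enough for a first cut.  A minimal sketch of
one edge node; the hostname, cache sizes, and TLS details are placeholders,
not a deployed configuration:

```nginx
# Hypothetical edge-node cache in front of bordeaux.guix.gnu.org.
# ssl_certificate directives are omitted; sharing certificates between
# edge machines is the open question discussed in the thread.
proxy_cache_path /var/cache/nginx/nars levels=1:2
                 keys_zone=nars:100m max_size=200g inactive=30d;

server {
    listen 443 ssl;
    server_name edge1.example.org;   # placeholder edge host

    location / {
        proxy_pass https://bordeaux.guix.gnu.org;
        proxy_cache nars;
        proxy_cache_valid 200 30d;   # nars are immutable, so cache long
        proxy_cache_use_stale error timeout;
    }
}
```

Pointing users at the nearest such node is then the GeoIP-based DNS problem
described above.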

>>> To me, a first general question about backup coordination is to define
>>> a retention window:
>>>  - sources: forever, until the complete fallback to SWH is robust;
>>>  - all the substitutes needed to run “guix time-machine --commit=<> -- help”
>>>    for any commit reachable by an inferior: forever;
>>>  - package substitutes: decide on some rule.
>> The idea I've been working with so far is simply to store everything
>> that's built, forever.
> For sure, everyone wants that in the end.  My point is about raising
> the priority of the intermediate steps.
>> Currently, that amounts to 561,043 nars totaling ~2.5 TB.
>> How feasible this is depends on a number of factors, but I don't have
>> any reason to think it's not feasible yet.
> That’s exactly my point!  It is not about feasibility (everything is
> doable with enough time and energy) but about controlling those factors
> to “robustify” what the project considers highly important, such as
> keeping all sources or never breaking ‘guix pull’, whatever the status
> of the infrastructure.
> Again, thanks for all the work.  Because, in any case, the situation is
> improving daily. :-)
> Cheers,
> simon
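For scale, the figures quoted above imply a fairly small average nar size.
A quick back-of-the-envelope check, assuming decimal terabytes:

```python
# Sanity check on the quoted numbers: 561,043 nars totaling ~2.5 TB.
nars = 561_043
total_bytes = 2.5e12                 # ~2.5 TB, decimal units assumed
avg = total_bytes / nars
print(f"{avg / 1e6:.1f} MB per nar on average")   # roughly 4.5 MB
```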

Attachment: signature.asc
Description: PGP signature
