
Re: CI status

From: Maxim Cournoyer
Subject: Re: CI status
Date: Sun, 19 Dec 2021 21:16:20 -0500
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/27.2 (gnu/linux)

Hi Mathieu,

Mathieu Othacehe <> writes:

> Hello,
> You must have noticed that the CI is currently struggling a bit. Here is
> a small recap of the situation.
> * The IO operations on Berlin are mysteriously slow. Removing files from
>   /gnu/store/trash is taking ages. This is reported here:
>   We have to kill the garbage collector frequently to keep things
>   going. The downside is obviously that we can't do that forever: we
>   are down to 9.3T free and falling, while we aim to keep 10T
>   available.
> * The PostgreSQL database behind also became super slow
>   and I decided to drop it. I don't know if there's a connection with
>   the above point. I'm missing the appropriate tools/knowledge to
>   monitor IO and file-system performance.
> * The php package isn't building anymore, reported here:
> This means that we cannot
>   reconfigure zabbix. I temporarily removed it from the Berlin
>   configuration.
> * The cuirass-remote-server Avahi service is no longer visible when
>   running "avahi-browse -a". I strongly suspect this is related to
>   the static-networking update, even though I have no proof yet. This
>   means that the remote workers relying on Avahi for discovery (the
>   hydra-guix-* machines) can no longer connect. The list is thus
>   quite empty.
> * Facing those problems, I tried to roll back to a previous system
>   generation, but that brings even more issues: for instance, the
>   older Cuirass package struggles with the new database structure and
>   other niceties. I think our best course of action is to stick to
>   master and fix the above problems.
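Regarding the garbage-collection point above, a minimal sketch of the kind of check and bounded collection involved (the 10T target is from the report; the `guix gc` line is only meaningful on the Berlin host itself, so it is left commented):

```shell
# Check free space on the filesystem backing /gnu/store (fall back to
# the root filesystem if /gnu/store does not exist on this machine).
df -h /gnu/store 2>/dev/null || df -h /

# On a Guix host, one can collect garbage only until a target amount
# of free space is reached, instead of letting a full GC run for ages:
#   guix gc --free-space=10T
```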

Ooof, thanks a lot for reporting (and fixing) the above problems on top
of babysitting Cuirass while things stabilize...  I'll try to keep an
eye on Berlin's IO activity to see if there are any offenders consuming
an abnormal amount of IO.
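In case it helps, here is a rough sketch for spotting per-process IO offenders on a Linux host using only /proc (the usual tools for this are `iotop`, or `pidstat -d` from sysstat, when they are installed):

```shell
# Rank processes by total bytes written so far, read from
# /proc/<pid>/io.  Reading other users' entries typically requires
# root; processes that vanish mid-loop are simply skipped.
for p in /proc/[0-9]*; do
  [ -r "$p/io" ] || continue
  wb=$(awk '/^write_bytes/ {print $2}' "$p/io" 2>/dev/null)
  comm=$(cat "$p/comm" 2>/dev/null)
  [ -n "$wb" ] && printf '%12s  %-6s %s\n' "$wb" "${p#/proc/}" "$comm"
done | sort -rn | head -n 10
```

Note these are cumulative counters since process start; sampling twice and diffing (or `pidstat -d 5`) gives rates, which is what you want for "who is hammering the disk right now".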

