Right. Perhaps the onus is on the developers (i.e. us) to make things
a bit easier, then?
To be honest, I barely understand how the GNUnet project is put
together at the source-code level, let alone how packaging is done.
One of the things I'm going to do with the Sphinx docs is provide a
high-level overview of how the main repo is structured.
On the subject of complexity, I attempted to disentangle that awful
internal dependency graph a while ago, to get a better idea of how
GNUnet works. I noticed that it's possible to divide the subsystems up
into closely-related groups:
* a "backbone" (CADET, DHT, CORE, and friends),
* a VPN suite,
* a GNS suite,
* and a set operations suite (SET, SETI, SETU).
A bunch of smaller "application layer" things (psyc+social+secushare,
conversation, fs, secretsharing+consensus+voting) then rest on top of
one or more of those suites.
I seem to recall that breaking up the main repo has been discussed
before, and I think it got nowhere because no agreement was reached on
where the breaks should be made. My position is that those
"applications" (which, IIRC, are in various states of "barely
maintained") should be moved to their own repos, and the main repo be
broken up into those four software suites.
As Maxime says, GNUnet takes a long time to compile (when it actually
does - I'm having problems with that right now), and presumably quite
a while to test too. The obvious way to reduce those times is to
simply *reduce the amount of code being compiled and tested*. Breaking
up the big repo would achieve that quite nicely.
More specifically related to packaging, would it be a good idea to
look into CD (Continuous Delivery) to complement our current CI setup?
It could make things easier on package maintainers. It looks like
Debian has a CI system we might be able to make use of, and all we'd
need to do is declare the test suite in the package that goes into the
Debian archive.
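If we went that route, the hook would be declaring tests in the package
itself. A minimal sketch of what that might look like, assuming Debian's
autopkgtest format (the file would live at debian/tests/control in the
packaging; the gnunet-config invocation is just an illustrative
placeholder, not a worked-out test suite):

```
# debian/tests/control -- picked up by Debian's CI (debci), which runs autopkgtest
# Trivial smoke test: check that an installed binary actually runs.
Test-Command: gnunet-config --version
Depends: @
Restrictions: superficial
```

Here `Depends: @` pulls in all binary packages built from the source
package, and `Restrictions: superficial` marks this as a sanity check
rather than a real functional test; a proper suite would replace the
Test-Command with something that exercises the installed services.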