From: Denis 'GNUtoo' Carikli
Subject: [GNU-linux-libre] Against immediate authoritarian-style FSDG enforcement for third party package managers Was: Can we slow down?
Date: Wed, 26 Jul 2023 18:22:39 +0200

On Mon, 24 Jul 2023 06:57:22 -0400
bill-auger <bill-auger@peers.community> wrote:
> there is an important difference - there is only one FSDG distro with
> any desire to distribute non-free firmware; and they keep it in a
> separate repo, which is considered to be not part of the libre distro
Which distribution is that?

> - that separation was done before the FSF endorsed the distro - there
> is no imperative to liberate firmwares; because distros have yet
> always complied voluntarily on that criteria
I think we still badly need to liberate firmware, otherwise it will
become harder and harder to get and/or use hardware that works with
these distributions.

> the TPPMs situation is very different - distros which are not
> interested in them, already do not distribute them; and so they have
> no imperative to liberate any

I don't think this is true because at least some distributions really
work to fix issues:
- In Parabola, docker and phoronix-test-suite were liberated. I worked
  on both. In Parabola, some people also work on icecat and make sure
  that it doesn't refer to nonfree addon repositories. A lot of
  software has also been fixed (like virt-manager) or removed (like
  fwupd, gnome-firmware, etc.).

- In Guix, phoronix-test-suite was liberated too, with a more
  maintainable patch than Parabola's. Note that I wasn't involved in
  packaging phoronix-test-suite in Guix beyond bug reporting.

But they tend to go after the easy fixes. For instance, they usually
don't remove programming-language repositories when removing them
could break users' setups or distribution packages.

So my supposition is that the amount of work is too big for
distributions to fix everything in a way that doesn't impact users
too much, so things progress slowly on a package-by-package basis,
prioritizing the easy fixes, and progress also depends on the people
involved and their interests.

So we either need more time, or more help, or both.

> if we want to invoke the "commitment to correct mistakes" criteria,
> the first step would be to convince distros that most TPPMs are unfit
> - so how do we do that, when distros are free to refute any
> alleged "mistakes" - distros are even free to re-commit past mistakes
> and re-open the original freedom bug report which had once prompted
> an acknowledged correction, essentially admitting that the "mistake"
> is now intentional - 
What happens right now is that mistakes are usually recognized for
software that is relatively easy to fix and doesn't have a big impact
on users, for instance when adding new packages in Guix. This is also
recognized in Parabola, where several packages were fixed or removed
in non-controversial ways (like fwupd).

So the fact that the low-hanging fruit is tackled first is not
inconsistent with this commitment to correct mistakes, especially
given the scale of the problem.

And if GNU starts helping and providing ready-to-use solutions that
have extremely minimal impact on users' workflows, then I don't see
why distributions would refuse to use them. In fact, most of them
already use ready-to-use solutions for browsers and kernels (there
are small exceptions to both, but that doesn't call into question the
general trend, which is good).

If we go the extremely authoritarian way (for instance, ordering all
distros to remove every non-compliant third-party package manager
with very little delay (like one month) or be removed from the list),
then it would have a very big human cost, and in the end it might
turn out to be counter-productive. A lot of time would be spent
arguing in likely heated discussions, and people would burn out from
too much arguing and/or from trying to fix everything.

So if we go this route, in practice we might very well lose our
chance to fix the issue, and it will likely be orders of magnitude
worse than the current situation, which progresses very slowly but
might improve if GNU steps up to help and manages to get good enough
work done too.

So if there is some authority, it must be more accommodating and
lightweight, and what I'm proposing here has a much better chance of
working than an extremely authoritarian solution.

The issue here is what to do when the problem is big and we have very
few resources to tackle it. Resources aren't going to appear out of
nowhere. And forcing people who are already very busy to fix a
problem that big cannot work either.

So instead we can try to find ways to organize better and share the
work. We can also try to convince new people to help, because the
problem is important. People using certain tools or languages can
also help a bit with the things they care about.

And the way of interpreting the FSDG for third-party package managers
that ship nonfree software, and of dealing with things like ScummVM,
isn't incompatible with that at all; in fact it enables better
coordination.

For instance, we could propose "fixes" for specific software that
uses third-party package managers, and point to examples of how to
fix it (patches, forks, etc.).
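
To give an idea of what such an example could look like, here is a
minimal sketch of an audit script (the script name, the repository
URLs and the patterns are made up for illustration; this is not an
existing Parabola or Guix tool). It lists the places where a source
tree hard-codes nonfree third-party repositories, so a maintainer
knows what to patch out or redirect:

  #!/usr/bin/env python3
  # Sketch: list the places where a source tree hard-codes URLs of
  # third-party repositories known to serve nonfree software.
  # The patterns below are illustrative, not a vetted blocklist.
  import re
  import sys
  from pathlib import Path

  NONFREE_REPO_PATTERNS = [
      re.compile(rb"https?://[\w.-]*nonfree-addons\.example\.org"),
      re.compile(rb"https?://[\w.-]*nonfree-profiles\.example\.com"),
  ]

  def scan(tree):
      # Walk the tree and print every file/line matching a pattern.
      hits = 0
      for path in tree.rglob("*"):
          if not path.is_file():
              continue
          try:
              data = path.read_bytes()
          except OSError:
              continue  # unreadable file (permissions, etc.)
          for lineno, line in enumerate(data.splitlines(), start=1):
              if any(p.search(line) for p in NONFREE_REPO_PATTERNS):
                  text = line.decode(errors="replace").strip()
                  print(f"{path}:{lineno}: {text}")
                  hits += 1
      return hits

  if __name__ == "__main__":
      if len(sys.argv) != 2:
          sys.exit("usage: find-nonfree-repos.py <source-tree>")
      # Exit nonzero when matches were found, so the script could
      # also gate a distribution's packaging CI.
      sys.exit(1 if scan(Path(sys.argv[1])) else 0)

Each match is then a candidate spot for a patch that removes the
reference or points it at a known-good repository.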

Another way to fix all this mess is to spend a bit of time on it and
fix maybe one or two packages from time to time, to encourage people
to do the same, and maybe to ask for help as well when working on
that, to get things done with less personal time.

This way, if/when GNU starts fixing some programming-language package
managers, we'd already have tackled many of the easy packages, and
things will start looking far more achievable.

> that is the FSDG we have now - distros are required to follow the
> guidelines only until endorsed; and there is no authority to decide
> what constitutes "a mistake" later on, or to ensure that mistakes
> will ever become corrected
As I understand it, if distributions willingly go against the
criteria in a very visible way, they can be contacted for discussion
and then removed if no solution is found. But I haven't seen that
really happen recently.

> this is essential - users of an FSDG distro should be reasonably
> certain that it actually follows the FSDG, and that the criteria are
> precise enough to be applicable - if the FSDG can not assure that
> with any confidence, it is not doing very much of value for anyone
> the value of the FSDG is not for the FSF as a showcase, nor for
> distro maintainers as a trophy - its primary value is for users -
> just as users need FSDG distros to avoid the hassle and uncertainty
> of curating their own libre software collection perpetually, users
> need the FSDG to assure them which distros will actually do that job
> for them perpetually
At the beginning, to use GNU you needed other, nonfree software (a
nonfree kernel at least), computers also had nonfree BIOSes, etc. If
at the time everybody had thought it was not good enough to be worth
using, we would not be here today. Instead, people worked to fill the
gaps, recruited new people to help fill more gaps, and so on.

So if we look at the FSDG, it also enables us to organize toward
solving issues like the third-party package manager situation we're
discussing right now. Without it, we wouldn't be discussing this at
all.

So if we fail with third-party package managers, we can (and need to)
warn users about it, telling them that solving it is a work in
progress, pointing to known-good third-party package managers (GNU
ELPA, NonGNU ELPA, Guix, maybe Hackage), and listing fixed programs
and/or the limitations of the fixes.

We already have all the building blocks for that, at least for
Parabola, where the wiki has an explanation of the limits of the
distribution, and LibrePlanet has articles on third-party
repositories, so we could also list only the known-good
packages/repositories there.

Doing that in the LibrePlanet wiki also enables users to be safe when
using Guix in Parabola, or to choose the distribution that protects
them best.

Another practical aspect here is that FSDG distributions enable us to
understand the status of free software.

Take an RYF-compliant laptop plus an FSDG distribution, and you can
understand what works and what doesn't with free software.

Even just running an FSDG distribution on given hardware and
reporting the results gives extremely useful insights. H-node is
based on that, and the knowledge of what works or doesn't also
spreads beyond it.

For instance, the fact that "ATI/AMD" GPUs have no 3D acceleration
with free software is well known among people gravitating toward FSDG
distributions.

And here I also speak from experience: as part of a volunteer job, I
often have to review the freedom of hardware I don't have, and
because I can't run an FSDG distribution on it (since I don't have
the hardware), I end up not really being able to do the job
correctly; instead I say that the hardware is probably fine, that it
can probably boot with free software, etc. Though when I find nonfree
software dependencies, I know for sure it's not fine.

And note that few people can do that kind of job, because you need to
read source code and know where to look. So if FSDG distributions
disappear, that knowledge (what works and what doesn't) will also
disappear over time, because almost nobody will read source code to
find things out, and reading source code is very error-prone anyway.
Without that insight, it would be really hard to understand where we
are with free software.

And we live in times where free and nonfree software are mixed
together more and more, and the long-term effect of that is the
watering down of free software until it becomes indistinguishable
from nonfree software. The FSDG sets up very clear criteria to combat
that, so it has strategic importance as well.

And so far, as I understand it, the FSDG hasn't been made ever more
lax to the point where it becomes meaningless, so at least that is
good.

But to keep that up, we also need to find ways, as the free software
movement as a whole, to invest resources in fixing issues like
nonfree firmware and third-party package managers.

And I think that, in the end, getting things fixed one way or another
is orders of magnitude more important than punishment or enforcement,
because what matters here is freedom, not punishing non-compliance
regardless of the cost to freedom.

> to wait until each or all TPPMs have been liberated, before expecting
> distros to do anything at all about them, is only to postpone the
> inevitable moment of truth - the situation will be no different when
> the time comes to convince distros to adopt the proposed liberated
> TPPMs - we would still need to convince distros that most TPPMs are
> unfit; and that would still need to be done with some authority, in
> order to be compelling - i dont see any value in postponing that
> event until after help is no longer needed, when the same could be
> done now while help is needed
We have everything we need to fix them one by one. Why not do that?
It has even already been done in practice when the fixes are good
enough.

> i would rather put that horse squarely before the cart now, rather
> than later - it is unreasonably optimistic to be building any new
> carts for that horse to pull, when it is so uncertain whether or not
> the horse can stand on its own legs, let alone deliver carts
> successfully to any destination
Free software would not exist if we had put the horse before the
cart, and anyway it would have been impossible to do. So putting the
horse before the cart is not always desirable, nor even possible.

Free software started with writing applications like Emacs, then with
what was needed to compile these applications, then an OS was made,
etc.

But the issue is that even if people had instead started by designing
hardware, they would have needed software for that, so we end up in a
circular-dependency situation from the start.

And we need to find contributors too, and these contributors are
usually a small subset of users, so we cannot magically get resources
without getting users too.

So the way to break the circle is very well described in a paper
called "Hacking as Transgressive Infrastructuring"[1], which for some
people might state the obvious but is fun to read nevertheless.

Basically, the idea is that you tackle the easiest things first (like
writing a free software application), and step by step you gain
insight into how the rest of the software or hardware stack works,
which then enables other people to reuse that knowledge to free other
parts of the stack.

For instance, if you write a free software application and learn some
(secret) API along the way, it's also possible to re-implement that
API. So step by step you can go lower and lower and free more and
more parts of the stack, one component at a time.

From time to time, it's even possible to revisit parts of the stack,
as with Guix, which is a very different kind of distribution from the
usual GNU/Linux distributions and fixes many of their shortcomings,
though pulling off something like that is not easy, as you need a
user base to get enough contributors to make the project sustainable.

References:
-----------
[1] http://mkorn.binaervarianz.de/pub/korn-cscw2016.pdf

Denis.
