emacs-tangents



From: Ihor Radchenko
Subject: Collaborative training of Libre LLMs (was: Is ChatGTP SaaSS? (was: [NonGNU ELPA] New package: llm))
Date: Sat, 09 Sep 2023 10:28:35 +0000

Richard Stallman <rms@gnu.org> writes:

> 1. In Wikipedia, a contributor voluntarily chooses to participate in editing.
> Editing participation is separate from consulting the encyclopedia.  This
> fits the word "collaborating".
>
> By contrast, when the developers of ChatGPT make it learn from the
> user, that "contribution" is neither voluntary nor active.  It is more
> "being taken advantage of" than "collaborating".

It is actually voluntary now: according to
https://techunwrapped.com/you-can-now-make-chatgpt-not-train-with-your-queries/,
users can enable or disable training on their queries.
It is enabled by default, though.

> 2. Wikipedia is a community project to develop a free/libre work.  (It
> is no coincidence that this resembles the GNU Project.)  Morally it
> deserves community support, despite some things it handles badly.
>
> By contrast, ChatGPT is neither a community project nor free/libre.
> That's perhaps why it arranges to manipulate people into "contributing"
> rather than letting them choose.

Indeed, they do hold coercive power, as people have no way to copy and
run the model independently.

However, I do not care much about OpenAI's corporate practices - they
are as bad as we are used to from other big-tech SaaSS companies. A more
interesting question to discuss is a genuine collaborative effort to
train a libre (not ChatGPT) model.

Currently, improving models is a rather sequential process. If there is
one publicly available model, anyone can download the weights, train
them locally, and share the results. However, if multiple people take
the _same_ version of the model and train it independently, the
results, AFAIK, cannot be combined.

As Andrew mentioned, the approach of "patching" a model is a quite
interesting idea - if such "patches" can be combined, the above concern
goes away, and collaborative _ethical_ development of models becomes
possible.
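To make the idea concrete, here is a minimal, hypothetical sketch of what "combinable patches" could mean. None of this reflects any real adapter format; real approaches (e.g. LoRA-style adapters) store low-rank per-layer deltas, but the key property is the same: patches expressed as deltas against the _same_ base weights can be averaged and then applied once.

```python
# Hypothetical sketch: each contributor publishes a "patch" (a delta
# against the shared base weights); patches are mixed, then applied.

def apply_patches(base, patches, weights=None):
    """Combine per-parameter deltas from several contributors.

    base    : list of floats (flattened base model weights)
    patches : list of delta lists, each the same length as base
    weights : optional mixing coefficients; defaults to a uniform average
    """
    if weights is None:
        weights = [1.0 / len(patches)] * len(patches)
    combined = list(base)
    for w, patch in zip(weights, patches):
        for i, delta in enumerate(patch):
            combined[i] += w * delta
    return combined

base = [0.5, -1.0, 2.0]
patch_a = [0.2, 0.0, -0.4]   # contributor A's fine-tuning delta
patch_b = [0.0, 0.6, -0.2]   # contributor B's fine-tuning delta

# Both deltas were computed against the same base, so a weighted
# average of them is well defined.
merged = apply_patches(base, [patch_a, patch_b])
```

Whether such naive averaging preserves the quality of each contribution is exactly the open question; the sketch only shows why patches against a shared base are mechanically combinable, while two fully retrained weight sets are not.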

However, if the "patching" technology can only support a single "patch"
on top of the main model, there is a problem: improving libre neural
networks will become difficult, unless people use a collaborative
server to continuously improve the model.

Such a collaborative server, similar to ChatGPT, would combine "editing"
(training) and "consulting" together. And, unlike in Wikipedia, these
activities are hard to separate.

This raises a moral question about practical ways to improve libre
neural networks without falling into SaaSS practices.

As a practical example, there is https://github.com/khoj-ai/khoj/, a
libre neural network interface in development (it features Emacs
support). They recently started https://khoj.dev/, a cloud service
aimed at people who cannot afford to run the models locally. This
discussion might be one of the ethical considerations of using such a
cloud.

I CCed khoj devs.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>


