emacs-tangents

Re: [NonGNU ELPA] New package: llm


From: Ihor Radchenko
Subject: Re: [NonGNU ELPA] New package: llm
Date: Fri, 01 Sep 2023 09:53:22 +0000

chad <yandros@gmail.com> writes:

> For large AI models specifically: there are many users for whom it is not
> practical to _actually_ recreate the model from scratch everywhere they
> might want to use it. It is important for computing freedom that such
> recreations be *possible*, but it would be very limiting to insist that
> everyone who wants to use such services actually do so, in a manner that
> seems to me very similar to insisting that every potential Emacs user
> compile their own. In this case there is the extra wrinkle that recreating
> the currently-most-interesting large language models involves both
> _gigantic_ amounts of resources and a fair amount of
> not-directly-reproducible randomness. It might be worth further
> consideration.

Let me refer to another message by RMS:

    >>   > While I certainly appreciate the effort people are making to produce
    >>   > LLMs that are more open than OpenAI (a low bar), I'm not sure if
    >>   > providing several gigabytes of model weights in binary format is
    >>   > really providing the *source*. It's true that you can still edit
    >>   > these models in a sense by fine-tuning them, but you could say the
    >>   > same thing about a project that only provided the generated output
    >>   > from GNU Bison, instead of the original input to Bison.
    >> 
    >> I don't think that is valid.
    >> Bison processing is very different from training a neural net.
    >> Incremental retraining of a trained neural net
    >> is the same kind of processing as the original training -- except
    >> that you use other data and it produces a neural net
    >> that is trained differently.
    >> 
    >> My conclusion is that the trained neural net is effectively a kind of
    >> source code.  So we don't need to demand the "original training data"
    >> as part of a package's source code.  That data does not have to be
    >> free, published, or available.
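
RMS's point that incremental retraining is the same kind of processing as
the original training is easy to see in code. Below is a minimal sketch of
that idea, assuming PyTorch; the toy model, the file name "weights.pt", and
the random stand-in data are placeholders of mine, not anything from the
thread. The "fine-tuning" loop is literally a training loop that starts
from saved weights instead of a random initialization.

    # A minimal sketch (assumed PyTorch; names are placeholders): the
    # "fine-tuning" below is the same gradient-descent loop one would use
    # for the original training, just started from saved weights.
    import torch
    import torch.nn as nn

    # Stand-in for a released model: the original training is elided; we
    # save a fresh net so the script is self-contained and runnable.
    model = nn.Linear(16, 1)
    torch.save(model.state_dict(), "weights.pt")   # the distributed weights

    # "Incremental retraining": load the weights and keep training.
    model.load_state_dict(torch.load("weights.pt"))
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(100):
        x = torch.randn(32, 16)      # placeholder fine-tuning data
        y = torch.randn(32, 1)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    # The result is "a neural net that is trained differently".
    torch.save(model.state_dict(), "retrained.pt")

The sketch only shows that retraining reuses the training machinery;
whether the saved weights therefore count as source is exactly the question
the thread is debating.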

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>


