emacs-devel

Re: [NonGNU ELPA] New package: llm


From: Richard Stallman
Subject: Re: [NonGNU ELPA] New package: llm
Date: Mon, 21 Aug 2023 21:06:14 -0400

[[[ To any NSA and FBI agents reading my email: please consider    ]]]
[[[ whether defending the US Constitution against all enemies,     ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]

  > While I certainly appreciate the effort people are making to produce 
  > LLMs that are more open than OpenAI (a low bar), I'm not sure if 
  > providing several gigabytes of model weights in binary format is really 
  > providing the *source*. It's true that you can still edit these models 
  > in a sense by fine-tuning them, but you could say the same thing about a 
  > project that only provided the generated output from GNU Bison, instead 
  > of the original input to Bison.

I don't think that is valid.
Bison processing is very different from training a neural net.
Incremental retraining of a trained neural net
is the same kind of processing as the original training -- except
that you use other data and it produces a neural net
that is trained differently.

My conclusion is that the trained neural net is effectively a kind of
source code.  So we don't need to demand the "original training data"
as part of a package's source code.  That data does not have to be
free, published, or available.

  > In practice though, I think if Emacs were to support communicating with 
  > LLMs, it would be good if - at minimum - we could direct users to an 
  > essay explaining the potential ethical/freedom issues with them.

I agree, in principle.  But it needs to be an article that
the GNU Project can endorse.




-- 
Dr Richard Stallman (https://stallman.org)
Chief GNUisance of the GNU Project (https://gnu.org)
Founder, Free Software Foundation (https://fsf.org)
Internet Hall-of-Famer (https://internethalloffame.org)




