announce: guile_llama_cpp 0.1 release
From: Andy Tai
Subject: announce: guile_llama_cpp 0.1 release
Date: Sun, 2 Jun 2024 21:45:28 -0700
# guile_llama_cpp
GNU Guile binding for llama.cpp
This is version 0.1, Copyright 2024 Li-Cheng (Andy) Tai, atai@atai.org
Available at
https://codeberg.org/atai/guile_llama_cpp/releases/download/0.1/guile_llama_cpp-0.1.tar.gz
Guile_llama_cpp wraps the llama.cpp API so that llama.cpp can be
accessed from Guile scripts and programs, much as llama-cpp-python
allows the use of llama.cpp in Python programs.
Currently a simple Guile script is provided to allow a simple "chat"
with an LLM in gguf format.
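As a rough illustration of what using the binding from Guile might
look like, here is a minimal sketch; note that the module name (llama)
and the procedures load-model and chat are hypothetical placeholders,
not the documented 0.1 API:

;; Hypothetical sketch: (llama), load-model and chat are illustrative
;; placeholders, not the actual guile_llama_cpp 0.1 API.
(use-modules (llama))

;; Load a gguf model from disk (assumed procedure).
(define model (load-model "Phi-3-mini-4k-instruct-q4.gguf"))

;; Send a prompt and print the model's reply (assumed procedure).
(display (chat model "What are the planets?"))
(newline)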
## setup and build
guile_llama_cpp is written in GNU Guile and C++ and requires SWIG 4.0
or later, GNU Guile 3.0, and llama.cpp (obviously) installed on your
system.
From source, guile_llama_cpp can be built via the usual GNU conventions:
export LLAMA_CFLAGS=-I<llama_install_dir>/include
export LLAMA_LIBS=-L<llama_install_dir>/lib -lllama
./configure --prefix=<install dir>
make
make install
Once llama.cpp provides pkg-config support in a future release, the
first two "export" lines can be omitted.
If you are running GNU Guix on your system, you can get a shell with
all the needed dependencies set up with
guix shell -D -f guix.scm
and then use the usual
configure && make && make install
commands to build.
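For reference, a guix.scm file of this kind typically evaluates to a
package definition, and "guix shell -D -f guix.scm" then opens a shell
containing that package's development inputs. The repository ships its
own guix.scm, so the sketch below is illustrative only; its fields
(inputs, source handling, and so on) are assumptions, not the shipped
file:

;; Illustrative sketch of a guix.scm; the repository's actual file may differ.
(use-modules (guix packages)
             (guix gexp)
             (guix build-system gnu)
             ((guix licenses) #:prefix license:)
             (gnu packages autotools)
             (gnu packages guile)
             (gnu packages machine-learning)
             (gnu packages swig))

(package
  (name "guile-llama-cpp")
  (version "0.1")
  ;; Build from the current checkout so "guix shell -D -f guix.scm" works.
  (source (local-file "." "guile-llama-cpp-checkout" #:recursive? #t))
  (build-system gnu-build-system)
  (native-inputs (list autoconf automake swig))
  (inputs (list guile-3.0 llama-cpp))
  (synopsis "GNU Guile binding for llama.cpp")
  (description "Binding to use llama.cpp from GNU Guile.")
  (home-page "https://codeberg.org/atai/guile_llama_cpp")
  (license license:lgpl3+))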
## run
To use guile_llama_cpp to chat with an LLM (Large Language Model), you
first need to download an LLM in gguf format.
See instructions on the web such as
https://stackoverflow.com/questions/67595500/how-to-download-a-model-from-huggingface
As an example, using a "smaller" LLM, "Phi-3-mini" from Microsoft, we
would first download the model in gguf format via wget:
wget https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/resolve/main/Phi-3-mini-4k-instruct-q4.gguf
Then you can chat with it, in the build directory:
./pre-inst-env simple.scm "What are the planets?" Phi-3-mini-4k-instruct-q4.gguf
The general form of a chat with a model is to invoke the script
scripts/simple.scm as
simple.scm prompt_text model_file_path
In the build directory, prepend the command with
./pre-inst-env
as it sets up the paths and environment variables needed for proper
Guile invocation.
Currently, the chat supported is limited; you will see the replies
from the LLM cut off after a sentence or so.
This output length issue will be addressed in future releases.
## roadmap
* support for continuous chat, with long replies
* support for exposing the LLM as a web endpoint, using a web server
built in Guile, so that chatting with the LLM can be done via a web
interface, including by remote users (see the sketch after this list)
* support for embedding LLMs in Guile programs, for scenarios like
LLM-driven software agents
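
As a rough illustration of the web endpoint item, Guile's built-in
(web server) module could serve such an endpoint. In the sketch below
only the (web ...) modules are real Guile APIs; llm-chat is a
placeholder standing in for the actual guile_llama_cpp call:

;; Sketch of a possible web endpoint using Guile's built-in web server.
;; llm-chat is a placeholder for the real guile_llama_cpp binding.
(use-modules (web server)
             (web request)
             (web uri))

;; Placeholder: take a prompt string, return the LLM's reply.
(define (llm-chat prompt)
  (string-append "reply to: " prompt))

;; Treat the query string as the prompt (URL decoding omitted for brevity).
(define (handler request body)
  (let ((prompt (or (uri-query (request-uri request)) "")))
    (values '((content-type . (text/plain)))
            (llm-chat prompt))))

;; Serve on http://localhost:8080/?your+prompt+here
(run-server handler 'http '(#:port 8080))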
## license
Copyright 2024 Li-Cheng (Andy) Tai
atai@atai.org
This program is licensed under the GNU Lesser General Public License, version 3
or later, as published by the Free Software Foundation. See the license
text in the file COPYING.
guile_llama_cpp is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General
Public License for more details.
Hopefully this program is useful.