This simple project puts the power of freely available Large Language Models on-mesh.
Just connect any Meshtastic hardware to your (decently-powered) PC and you will have a completely off-grid, on-mesh LLM chatbot that works without any Internet connection.
The default model prompt is written in Italian, because this project was made primarily to give my local mesh network a fun and useful tool for propagation testing and general experimentation. Feel free to change it as you wish, but please don't remove the instructions about keeping messages short. The mesh doesn't need to be recklessly flooded with AI slop!
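The core idea is simple: listen for incoming mesh text messages, forward each one to a local Ollama model, and send the reply back over the mesh. A minimal sketch of that loop (the `truncate_reply` helper, the 200-character limit, and the structure are illustrative assumptions; `juniper.py` in this repository is the actual implementation):

```python
MAX_LEN = 200  # assumed limit so a reply fits in one Meshtastic text message


def truncate_reply(text: str, limit: int = MAX_LEN) -> str:
    """Clamp the model's reply so it fits in a single mesh message."""
    return text if len(text) <= limit else text[: limit - 1] + "…"


def main():
    # Third-party imports are deferred so the helper above can be used
    # without the hardware libraries installed.
    import ollama
    import meshtastic.serial_interface
    from pubsub import pub

    def on_receive(packet, interface):
        """Called by the Meshtastic library for each incoming text message."""
        text = packet.get("decoded", {}).get("text")
        if not text:
            return
        response = ollama.chat(
            model="juniper",
            messages=[{"role": "user", "content": text}],
        )
        reply = truncate_reply(response["message"]["content"])
        # Answer the sender directly instead of broadcasting
        interface.sendText(reply, destinationId=packet["fromId"])

    iface = meshtastic.serial_interface.SerialInterface()
    pub.subscribe(on_receive, "meshtastic.receive.text")
    input("Bot running, press Enter to quit...\n")
    iface.close()
```

Run `main()` on a machine with a Meshtastic device attached over serial.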
Prepare a virtual environment and install the required packages:
```
python3 -m venv .venv
source .venv/bin/activate
pip install ollama meshtastic
```
Then, install Ollama. After installation, customize the system prompt in the Modelfile, then create the "juniper" model:
```
ollama create juniper -f Modelfile.juniper
```
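The Modelfile pairs a base model with the system prompt. A minimal sketch of what it might contain (the base model name and the prompt wording are assumptions; the repository's `Modelfile.juniper` is authoritative, and its real prompt is in Italian):

```
# Base model is an assumption; use whatever your hardware can run
FROM llama3.2
SYSTEM """
You are Juniper, a chatbot on a Meshtastic mesh network.
Always keep your replies very short: at most one text message.
"""
```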
juniper.service is a systemd unit template you can use to run the bot as a service; it also preloads the model to avoid a delay on the first message.
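For reference, a minimal sketch of such a unit (the paths, user, ordering on `ollama.service`, and the preload command are all assumptions; the bundled juniper.service is the authoritative version):

```
[Unit]
Description=Juniper on-mesh LLM chatbot
After=ollama.service
Requires=ollama.service

[Service]
User=juniper
WorkingDirectory=/opt/juniper
# Warm the model so the first mesh message is not delayed by a cold start
ExecStartPre=/usr/local/bin/ollama run juniper ""
ExecStart=/opt/juniper/.venv/bin/python juniper.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```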
If you want to add IP tunnelling to give your node an on-mesh IP address for other purposes:
```
pip install pytap2
```

and add `--tunnel` to the command line when you run juniper.py.
Have fun!