Sparky4567/Local_LLM--Chat-with-local-llm-using-python-

Link to install Ollama: https://ollama.com/download

Link to the Ollama model library: https://ollama.com/library

Link to install the uv package manager: https://docs.astral.sh/uv/getting-started/installation/#standalone-installer

My personal philosophy is simple: if it works on my machine, it should work on yours, too.

Follow the installation instructions for your operating system, then use the command below to run the example (uv only needs to be installed once):

uv run main.py
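When run inside the project directory, uv will typically create a virtual environment and install the project's declared dependencies automatically before executing main.py (assuming the repository declares its dependencies in a pyproject.toml).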

Pull the model you want to use (tinyllama is just an example):

ollama pull tinyllama:latest
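To confirm the model was pulled successfully, you can list the models available locally:

ollama list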

Change the DEFAULT_LLM_MODEL variable value in the config/settings.py file.

Example:

DEFAULT_LLM_MODEL = "tinyllama" or DEFAULT_LLM_MODEL = "tinyllama:latest" (use the explicit tag if you have more than one version of the model pulled)
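
For orientation, a minimal chat loop built on the official ollama Python package and the DEFAULT_LLM_MODEL setting could look roughly like the sketch below. This is only an illustration of the approach, not the repository's actual main.py; it assumes the ollama package is installed and that config/settings.py is importable as shown.

# Minimal sketch of a local chat loop (not the actual main.py).
# Assumes the ollama Python package is installed and that
# config/settings.py defines DEFAULT_LLM_MODEL as described above.
import ollama

from config.settings import DEFAULT_LLM_MODEL


def main():
    history = []  # keep the running conversation so the model has context
    print("Chatting with", DEFAULT_LLM_MODEL, "- type 'exit' to quit.")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in ("exit", "quit"):
            break
        history.append({"role": "user", "content": user_input})
        # Send the whole history to the locally running Ollama server
        response = ollama.chat(model=DEFAULT_LLM_MODEL, messages=history)
        reply = response["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        print("LLM:", reply)


if __name__ == "__main__":
    main()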

About

Chat with your local LLM using a simple Python script.
