This repository provides a streamlined setup to run Ollama's API locally with a user-friendly web UI. It leverages Docker to manage both the Ollama API service and the web interface, allowing for easy deployment and interaction with models like `llama3.2:1b`.
- Run Ollama's API locally for private use.
- Simple Docker setup for quick deployment.
- Supports models like `llama3.2:1b` and larger variants.
- Web UI for interacting with the models directly from your browser.
- Clone the repository.
- Build the Docker images and start the containers (a sketch of the expected `docker-compose.yml` is shown after this list):

  ```bash
  docker-compose up --build
  ```
- Download the desired model with a `curl` command:

  ```bash
  curl -X POST http://localhost:11434/api/pull -H "Content-Type: application/json" -d '{"model": "llama3.2:1b"}'
  ```

  Or pull a larger model:

  ```bash
  curl -X POST http://localhost:11434/api/pull -H "Content-Type: application/json" -d '{"model": "llama3.2"}'
  ```
- Open http://localhost:3001/webui in your browser to interact with the web UI.
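For orientation, a minimal `docker-compose.yml` along these lines would produce the setup described above. This is only a sketch: the service names, the `./webui` build context, the volume name, and the `OLLAMA_API_URL` variable are assumptions, and the file shipped in this repository may differ. Only the ports (11434 for the API, 3001 for the web UI) come from the commands and URL shown above.

```yaml
# Hypothetical compose layout; the repository's actual file may differ.
services:
  ollama:
    image: ollama/ollama            # official Ollama image serving the API
    ports:
      - "11434:11434"               # Ollama API, matches the curl commands above
    volumes:
      - ollama-data:/root/.ollama   # persist downloaded models across restarts

  webui:
    build: ./webui                  # assumed local Dockerfile for the web interface
    ports:
      - "3001:3001"                 # web UI, served at http://localhost:3001/webui
    depends_on:
      - ollama
    environment:
      - OLLAMA_API_URL=http://ollama:11434  # hypothetical variable pointing the UI at the API

volumes:
  ollama-data:
```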
For further details on the Ollama API, visit the official Ollama API documentation.
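Once a model has been pulled, you can also query it directly over the API instead of through the web UI. A minimal example against Ollama's standard `/api/generate` endpoint (the prompt text is just an illustration):

```bash
# Ask the pulled model a question via the Ollama API.
# "stream": false returns a single JSON object instead of streamed partial responses.
curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2:1b", "prompt": "Explain Docker in one sentence.", "stream": false}'
```

To check which models are available locally, you can list them with `curl http://localhost:11434/api/tags`.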
- Ollama API: The core API providing machine learning models locally.
- Llama3.2: AI model for text generation.
- Docker Setup: Simplified containerized deployment.
- Web UI for AI Models: Easy interaction through the web interface.
- Local AI Deployment: Run AI models privately on your system.
- Model Download: Commands for downloading and using different AI models.
- Machine Learning: Deploy state-of-the-art machine learning models locally.
- Artificial Intelligence: Use AI models like Llama3.2 efficiently.
- Quick Deployment: Get your API and UI up and running quickly.
