```text
 ▄████  ██░ ██  ▒█████    ██████ ▄▄▄█████▓
 ██▒ ▀█▒▓██░ ██▒▒██▒  ██▒▒██    ▒ ▓  ██▒ ▓▒
▒██░▄▄▄░▒██▀▀██░▒██░  ██▒░ ▓██▄   ▒ ▓██░ ▒░
░▓█  ██▓░▓█ ░██ ▒██   ██░  ▒   ██▒░ ▓██▓ ░
░▒▓███▀▒░▓█▒░██▓░ ████▓▒░▒██████▒▒  ▒██▒ ░
 ░▒   ▒  ▒ ░░▒░▒░ ▒░▒░▒░ ▒ ▒▓▒ ▒ ░  ▒ ░░
  ░   ░  ▒ ░▒░ ░  ░ ▒ ▒░ ░ ░▒  ░ ░    ░
░ ░   ░  ░  ░░ ░░ ░ ░ ▒  ░  ░  ░    ░
      ░  ░  ░  ░    ░ ░        ░
```
Your personal AI companion in the terminal. Always on. Always local. No corp surveillance.
Ghost is a command-line AI assistant powered by Ollama, bringing the spirit of cyberpunk AI companions from Shadowrun, Cyberpunk 2077, and The Matrix into your daily workflow.
Prerequisites:

- Ollama installed and running
- At least one model pulled (e.g., `ollama pull llama3`)
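Before the first query, it can help to confirm the Ollama daemon is actually reachable. A minimal sketch, assuming the default port 11434 and Ollama's `/api/tags` endpoint (which lists your pulled models):

```shell
# Probe the Ollama daemon on its default port.
# -s silences progress output; -f makes curl exit non-zero on
# connection errors or HTTP error responses.
if curl -sf http://localhost:11434/api/tags > /dev/null; then
  echo "ollama: up"
else
  echo "ollama: down"
fi
```

If this prints `ollama: down`, start the daemon (e.g., `ollama serve`) before running Ghost.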
Install:

```shell
go install github.com/theantichris/ghost@latest
```

Or grab prebuilt binaries from the releases page.
Run your first query:

```shell
ghost "Explain recursion in simple terms"
```

Features:

- Intelligence on demand: Ask questions, get explanations, analyze data
- Data stream analysis: Pipe logs, files, or any text directly into Ghost
- Visual recon: Feed images to vision models for analysis and description
- Format flexibility: Output as plain text, JSON, or styled Markdown
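Because Ghost reads from stdin, data stream analysis works with any shell pipeline, not just `cat`. A sketch (the `git diff` source here is just one illustrative feed; the fallback message fires when `ghost` isn't installed yet):

```shell
# Pipe a working-tree diff into Ghost for a quick review.
# The || branch prints a notice instead of failing silently
# when ghost is missing from PATH.
git diff HEAD | ghost "review this change for bugs" \
  || echo "pipeline failed (is ghost installed?)"
```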
Examples:

```shell
# Query the net for intel
ghost "how do I crack open an encrypted data stream?"

# Scan your logs for anomalies
cat system.log | ghost "what's lurking in here?"

# Extract structured data
ghost "give me a list of common netrunner tools" -f json | jq .

# Generate formatted dossiers
ghost "write a guide to bypassing corp firewalls" -f markdown > intel.md

# Visual recon (requires vision model)
ghost "analyze this security feed" -i camera-feed.png

# Compare surveillance data
ghost "what changed in the facility?" -i before-raid.png -i after-raid.png
```

Configure Ghost via command-line flags, environment variables, or a config file.
Flags:

- `-m, --model`: Model to use (e.g., `llama3`)
- `-V, --vision-model`: Vision model for images (defaults to the main model)
- `-i, --image`: Image file path (can be used multiple times)
- `-f, --format`: Output format: `text`, `json`, or `markdown`
- `-u, --url`: Ollama API URL (default: `http://localhost:11434/api`)
- `-c, --config`: Config file path (default: `~/.config/ghost/config.toml`)
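The flags compose freely. A hedged sketch combining model, format, and URL (the model name and `gpu-box.local` host are placeholders, not defaults; the guard skips gracefully when `ghost` isn't on your PATH yet):

```shell
# Ask a specific model on a remote Ollama host for JSON output.
if command -v ghost > /dev/null 2>&1; then
  ghost "list three classic port scanners" \
    -m llama3 \
    -f json \
    -u http://gpu-box.local:11434/api
else
  echo "ghost not found on PATH"
fi
```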
Environment variables:

```shell
export GHOST_MODEL=llama3
export GHOST_VISION_MODEL=llama3.2-vision
export GHOST_URL=http://localhost:11434/api
```

Or create ~/.config/ghost/config.toml:
```toml
model = "llama3"
url = "http://localhost:11434/api"

[vision]
model = "llama3.2-vision"
```

License: MIT