Terminal-based AI assistant powered by Gemini 2.5 with Copilot mode, file analysis, code generation/improvement, and persistent conversations.
Author: Mohan Sharma
License: MIT
Dependencies: bash, curl, jq
- Secure API key storage
- Chat with Gemini 2.5 (Flash model)
- Persistent conversation history
- Copilot mode: generate, explain, or improve code
- Analyze large files in chunks
- Configurable model, temperature, and max tokens
- Save/load instructions for system behavior
- Automatic summarization of long chat history
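Under the hood, the assistant talks to the Gemini REST API with `curl` and parses responses with `jq`. As a rough, hypothetical sketch of the kind of call involved (the request gmcli.sh actually builds may differ):

```bash
#!/usr/bin/env bash
# Illustrative only -- not taken from gmcli.sh itself.
API_KEY="YOUR_API_KEY_HERE"              # normally stored under ~/.smrtask_gemini/
MODEL="gemini-2.5-flash-preview-04-17"   # the default model (see settings below)
PROMPT="Explain what the jq tool does in one sentence."

# Build the JSON request with jq, send it with curl, and extract the reply text.
curl -s "https://generativelanguage.googleapis.com/v1beta/models/${MODEL}:generateContent?key=${API_KEY}" \
  -H 'Content-Type: application/json' \
  -d "$(jq -n --arg text "$PROMPT" '{contents: [{parts: [{text: $text}]}]}')" \
  | jq -r '.candidates[0].content.parts[0].text'
```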
Download the script and make it executable:

```bash
curl -o gmcli.sh https://raw.githubusercontent.com/mrajauriya/gmcli/main/gmcli.sh
chmod +x gmcli.sh
./gmcli.sh
```

Make sure you have `jq` and `curl` installed.
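You can check whether both tools are already available:

```bash
command -v jq curl   # prints a path for each tool that is found
```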
Install them on Termux:
```bash
pkg install jq curl
```

Or on Debian/Ubuntu:

```bash
sudo apt install jq curl
```

On first run, enter your Gemini API key. You can also edit the config later:
```bash
echo 'YOUR_API_KEY_HERE' > ~/.smrtask_gemini/config.sh
```

To change default system instructions:
```bash
echo 'You are a helpful assistant.' > ~/.smrtask_gemini/instructions.txt
```

Run the script:
```bash
./gmcli.sh
```

Then interact via the menu:
- Chat with Gemini
- View or reset conversation
- Copilot coder mode (generate/explain/improve code)
- Analyze files
- Configure model, tokens, temperature, and instructions
In Copilot coder mode:
- Generate Code → provide a description and filename
- Explain Code → analyze an existing file line by line
- Improve Code → give modification instructions for existing files
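As a rough idea of what the Explain Code action amounts to (a hypothetical sketch, not the script's own code; the helper name and variables are illustrative):

```bash
#!/usr/bin/env bash
# Hypothetical sketch: fold a file's contents into a prompt and send it
# through the same generateContent call shown earlier.
API_KEY="YOUR_API_KEY_HERE"
MODEL="gemini-2.5-flash-preview-04-17"

explain_file() {
  local prompt
  prompt="Explain the following code line by line:
$(cat "$1")"
  curl -s "https://generativelanguage.googleapis.com/v1beta/models/${MODEL}:generateContent?key=${API_KEY}" \
    -H 'Content-Type: application/json' \
    -d "$(jq -n --arg text "$prompt" '{contents: [{parts: [{text: $text}]}]}')" \
    | jq -r '.candidates[0].content.parts[0].text'
}

explain_file gmcli.sh
```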
All settings, history, and instructions are stored in:
```
~/.smrtask_gemini/
├── config.sh         # API key & model config
├── history.json      # Chat memory
└── instructions.txt  # Custom system instructions
```
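To inspect or back up what the script has stored, for example:

```bash
ls ~/.smrtask_gemini/                    # config.sh, history.json, instructions.txt
cat ~/.smrtask_gemini/instructions.txt   # current system instructions
jq . ~/.smrtask_gemini/history.json      # pretty-print the saved chat memory
```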
By default, the script uses:
- Model: `gemini-2.5-flash-preview-04-17`
- Max Tokens: `8192`
- Temperature: `0.7`
- Top-K: `1`
- Top-P: `0.95`
These can be adjusted in the script menu.
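These settings correspond to the fields of the Gemini API's `generationConfig` object; a request body using the defaults above looks roughly like this (built with `jq` here only to show the shape, the script may assemble it differently):

```bash
jq -n '{
  contents: [{parts: [{text: "Hello"}]}],
  generationConfig: {
    temperature: 0.7,
    topK: 1,
    topP: 0.95,
    maxOutputTokens: 8192
  }
}'
```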
Chat history persists across sessions and is summarized automatically if it grows too long; summarization keeps the assistant efficiently aware of previous context.
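How and when summarization kicks in is up to the script; a hypothetical trigger (the threshold, file layout, and `summarize_history` helper are illustrative assumptions, not gmcli.sh's actual logic) might look like:

```bash
HISTORY="$HOME/.smrtask_gemini/history.json"
MAX_TURNS=40   # assumed threshold

# Assumes history.json is a JSON array of turns; count them with jq.
if [ "$(jq 'length' "$HISTORY")" -gt "$MAX_TURNS" ]; then
  summarize_history "$HISTORY"   # e.g. ask Gemini for a summary, then replace the old turns with it
fi
```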
Delete the config directory:
```bash
rm -rf ~/.smrtask_gemini
```

Apache License © Mohan Sharma
Made with ❤️ in Terminal