
🤖 Gemini Client Console

A powerful, interactive command-line client for Google's Gemini AI API featuring Hyper-Contextual Grounding, multi-turn conversations, real-time streaming, and XDG-compliant logging.

🔑 Quick Start - API Key Required!

⚠️ IMPORTANT: You need a Google Gemini API key to use this application!

Getting Your API Key

  1. Get a FREE API key from Google AI Studio: https://aistudio.google.com/apikey
  2. Click "Get API Key" and follow the instructions.
  3. Copy your API key (starts with AIza...).

Setting Your API Key

The application supports multiple configuration methods (in priority order):

  1. User Secrets (Recommended for development):

     dotnet user-secrets set "GeminiSettings:ApiKey" "YOUR_API_KEY"

  2. Environment Variables:

     export GeminiSettings__ApiKey="YOUR_API_KEY"

  3. appsettings.json in the executable directory:

     {
       "GeminiSettings": {
         "ApiKey": "YOUR_API_KEY_HERE",
         "BaseUrl": "https://generativelanguage.googleapis.com/",
         "DefaultModel": "gemini-2.5-flash"
       }
     }
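A quick sanity check (assuming a POSIX shell) confirms the environment variable is visible before launching the client. The double underscore in `GeminiSettings__ApiKey` is how .NET maps environment variables onto the nested `GeminiSettings:ApiKey` configuration key:

```shell
# Set the key for the current shell session (the double underscore maps to
# the nested "GeminiSettings:ApiKey" configuration key in .NET).
export GeminiSettings__ApiKey="YOUR_API_KEY"

# Verify the variable is non-empty before launching the client.
if [ -n "$GeminiSettings__ApiKey" ]; then
  echo "API key is configured"
else
  echo "API key is missing" >&2
fi
```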

📥 Installation

Linux One-Liner Install

curl -fsSL https://raw.githubusercontent.com/kusl/GeminiClient/main/install-gemini-client.sh | bash

Download Pre-built Binaries

Download the latest release for your platform from the Releases page.

Platform   Download                          Architecture
Windows    gemini-client-win-x64.zip         64-bit Intel/AMD
Linux      gemini-client-linux-x64.tar.gz    64-bit Intel/AMD

🚀 Features

🧠 Hyper-Contextual Environmental Grounding (New!)

The client counters "temporal hallucinations" by injecting real-time system data into every request.

  • Temporal Awareness: The model knows the exact local date, time, and timezone.
  • OS Awareness: It knows it is running on Linux (Fedora), Windows, or macOS and adapts its answers (e.g., providing Bash vs. PowerShell commands).
  • Locale Context: Responses are formatted according to your system's region settings.
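The exact format the client injects is internal to its Environment Context Service; purely as an illustrative sketch, the same environmental signals can be gathered from a POSIX shell:

```shell
# Illustrative only: the kinds of environmental signals such grounding
# could inject into a prompt (the client's actual format is internal).
date "+%Y-%m-%d %H:%M:%S %Z"   # local date, time, and timezone
uname -s                       # operating system kernel name
echo "${LANG:-C}"              # active locale, defaulting to C if unset
```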

💬 Multi-Turn Conversations

Engage in stateful, context-aware conversations. The client remembers your previous exchanges within a session, allowing for natural follow-up questions. Use the reset command to start fresh.
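Under the hood, the Gemini generateContent REST API represents multi-turn history as an ordered `contents` array of alternating user/model turns. The exact payload this client builds is not shown here, but a minimal request body has this shape:

```json
{
  "contents": [
    { "role": "user",  "parts": [{ "text": "What is SSE?" }] },
    { "role": "model", "parts": [{ "text": "Server-Sent Events (SSE) is a one-way streaming protocol over HTTP." }] },
    { "role": "user",  "parts": [{ "text": "How is it different from a normal response?" }] }
  ]
}
```

Conceptually, the reset command corresponds to clearing this array and starting a new history.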

🌊 Real-time Streaming

  • SSE Support: True real-time communication with the Gemini API using Server-Sent Events.
  • Performance Optimizations: Configured with Server GC and Concurrent GC for high-throughput response handling.
  • Live Metrics: Monitor token speed (tokens/s) and first-response latency in real-time.
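Each SSE event from the API arrives on a line prefixed with `data: ` followed by a JSON chunk. As a minimal illustration (not the client's actual parser), the payloads can be extracted like this:

```shell
# Simulated SSE stream: each event is a "data: <json>" line followed by a
# blank line. The sed filter strips the prefix, leaving only JSON payloads.
printf 'data: {"text":"Hello"}\n\ndata: {"text":" world"}\n\n' |
  sed -n 's/^data: //p'
# Prints:
# {"text":"Hello"}
# {"text":" world"}
```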

🤖 Dynamic Model Selection

  • Live Discovery: Fetches available models directly from the Gemini API at startup.
  • Smart Fallbacks: Gracefully handles API errors with a curated fallback list.

📝 Conversation Logging

All prompts, responses, and session statistics are automatically logged to text files.

  • Linux: ~/.local/share/gemini-client/logs/ (XDG compliant)
  • macOS: ~/Library/Application Support/GeminiClient/logs/
  • Windows: %LOCALAPPDATA%\GeminiClient\logs\
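On Linux, the XDG Base Directory specification resolves the data directory to $XDG_DATA_HOME when set, falling back to ~/.local/share otherwise. The client's code is C#; this is just that resolution rule sketched in shell form:

```shell
# Resolve the log directory the XDG way: honor XDG_DATA_HOME if set,
# otherwise fall back to ~/.local/share.
data_home="${XDG_DATA_HOME:-$HOME/.local/share}"
log_dir="$data_home/gemini-client/logs"
echo "$log_dir"
```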

💻 Usage

Available Commands

Command   Description
exit      Quit the application and display session stats
reset     Clear conversation context and start fresh
model     Change the selected AI model
stats     View detailed session statistics
log       Open the log folder in your file manager
stream    Toggle streaming mode ON/OFF

Building from Source

Prerequisites: .NET 10.0 SDK

# Clone the repository
git clone https://github.com/kusl/GeminiClient.git
cd GeminiClient

# Build the project
dotnet build

# Run the console app
dotnet run --project GeminiClientConsole

🛠️ Project Structure

  • GeminiClient/: Core library with multi-turn API support, SSE streaming, and Environment Context Service.
  • GeminiClientConsole/: Interactive CLI with conversation state management and XDG-compliant logging.
  • Directory.Build.props: Centralized versioning and build optimizations.

📜 License

This project is licensed under the AGPL-3.0-or-later.


Made with ❤️ using .NET 10, Google Gemini AI, and Server-Sent Events

Star this repo if you find it useful!


🔄 Version History

  • v0.0.7 (Rolling) - Added Hyper-Contextual Grounding, multi-turn conversation support, reset command, and XDG-compliant logging.
  • v0.0.6 - Added real-time streaming support with SSE.
  • v0.0.5 - Improved terminal compatibility by removing destructive console clears.
  • v0.0.4 - Initial interactive console client with dynamic model discovery.

Notice: This project contains code generated by Large Language Models such as Claude and Gemini. All code is experimental whether explicitly stated or not. The streaming implementation uses Server-Sent Events (SSE) for real-time communication with the Gemini API.
