- Document-based AI assistant that answers user questions from a custom knowledge base
- RAG (Retrieval-Augmented Generation) architecture for accurate technical responses
- Hybrid Approach (Local Processing + Cloud-based Vector DB and LLM Service)
- AI: Groq SDK - Llama 3.1 8B Instant Model (Cloud-based)
- Database: Pinecone Vector Database (Cloud-based)
- Embeddings: Transformers.js with Xenova/all-MiniLM-L6-v2 Model (Local)
- Frontend: React, TypeScript, Tailwind CSS
- Backend: Electron, Node.js
- Build: Vite, Electron Builder
- Node.js (v16 or higher)
- npm or yarn
- API keys for services (see Configuration below)
- Clone the repository:

  ```bash
  git clone https://github.com/swetha-nbase2/chatbot.git
  ```
- Install dependencies:

  ```bash
  npm install
  ```
- Set up the API keys. Copy the example file:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and add your API keys:

  ```env
  # Groq API Configuration
  GROQ_API_KEY=your_groq_api_key_here

  # Pinecone Configuration
  PINECONE_API_KEY=your_pinecone_api_key_here
  PINECONE_ENVIRONMENT=your_pinecone_environment_here
  PINECONE_INDEX_NAME=your_pinecone_index_name_here

  # Chat Configuration
  MAX_CHAT_HISTORY=max_chat_history_here
  EMBEDDING_DIMENSION=embedding_dimension_here
  ```
  Create a Groq API key:
  - Visit the Groq Console
  - Create an account and generate an API key
  - Add it to `.env` as `GROQ_API_KEY`
  Create a Pinecone API key:
  - Visit Pinecone
  - Create an account and get your API key
  - Note your environment region
  - Add them to `.env` as `PINECONE_API_KEY` and `PINECONE_ENVIRONMENT`
  Chat configuration:
  - A maximum chat history of 20 messages is recommended
  - The embedding dimension of Xenova/all-MiniLM-L6-v2 is 384
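`MAX_CHAT_HISTORY` caps how many past messages are sent to the model on each turn. A minimal sketch of how such a cap can be applied (the `ChatMessage` shape and `trimHistory` helper are illustrative names, not the app's actual code):

```typescript
// Illustrative sketch: cap chat history at MAX_CHAT_HISTORY messages.
// ChatMessage and trimHistory are hypothetical names, not the app's real API.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Keep only the most recent `max` messages so the prompt stays small.
function trimHistory(history: ChatMessage[], max: number): ChatMessage[] {
  return history.length <= max ? history : history.slice(history.length - max);
}

const history: ChatMessage[] = Array.from({ length: 25 }, (_, i) => ({
  role: i % 2 === 0 ? "user" : "assistant",
  content: `message ${i}`,
}));

// With the recommended MAX_CHAT_HISTORY = 20, the 5 oldest messages drop off.
const trimmed = trimHistory(history, 20);
console.log(trimmed.length); // 20
```

Trimming from the front keeps the newest turns, which matter most for conversational context.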
- Process documents (one time). Create a folder called `documents` in the project root and place the documents/data sheets (supported formats: `.txt`, `.pdf`, `.md`) that will serve as the chatbot's knowledge base. Then run:

  ```bash
  npm run process-docs
  ```
- Start the app:

  ```bash
  npm run electron-dev
  ```
- One-time document processing: run `npm run process-docs` once to prepare your knowledge base
- Fast app startup: use `npm run electron-dev` for instant startup
- Update when needed: re-run document processing only when you add new documents
- Start the application
- Click the chatbot button (🤖) in the bottom-right corner
- Type your message and press Enter or click Send
- Enjoy AI-powered conversations with context memory!
- Upload the necessary documents and data sheets
- Documents are chunked into smaller text segments
- Each chunk is embedded locally
- Embeddings are stored in the vector database
- The knowledge base is ready for user queries
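The chunking step above can be sketched as a sliding window over the document text (the `chunkText` helper, chunk size, and overlap values are illustrative; the project's actual chunker may split on sentence or paragraph boundaries instead):

```typescript
// Illustrative sketch of document chunking with overlap.
// A fixed-size sliding window; overlapping chunks help preserve context
// that would otherwise be cut at chunk boundaries.
function chunkText(text: string, chunkSize = 500, overlap = 50): string[] {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks: string[] = [];
  const step = chunkSize - overlap; // advance by chunkSize minus overlap
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached
  }
  return chunks;
}

const doc = "a".repeat(1200);
const chunks = chunkText(doc, 500, 50);
console.log(chunks.length); // 3 chunks: [0,500), [450,950), [900,1200)
```

Each chunk would then be passed to the local embedding model before being upserted to the vector database.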
- The user submits a query through the chat interface
- The query is embedded into a vector
- The vector DB retrieves the most relevant context chunks
- The context chunks and user query are combined into a structured prompt
- The LLM generates a natural language response
- The response is returned to the user in the chat interface
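The retrieval and prompt-assembly steps can be sketched with an in-memory cosine-similarity search (in the real app Pinecone performs the search server-side and the embeddings are 384-dimensional; the 3-dimensional vectors and helper names below are purely illustrative):

```typescript
// Illustrative RAG retrieval sketch: rank stored chunks by cosine
// similarity to the query embedding, then build a structured prompt.
interface StoredChunk {
  text: string;
  embedding: number[];
}

// Cosine similarity: dot product normalized by vector magnitudes.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks most similar to the query embedding.
function topK(query: number[], chunks: StoredChunk[], k: number): StoredChunk[] {
  return [...chunks]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}

// Combine retrieved context and the question into one structured prompt.
function buildPrompt(context: StoredChunk[], question: string): string {
  const contextBlock = context.map((c, i) => `[${i + 1}] ${c.text}`).join("\n");
  return `Answer using only the context below.\n\nContext:\n${contextBlock}\n\nQuestion: ${question}`;
}

// Tiny 3-dimensional example (real embeddings are 384-dimensional):
const chunks: StoredChunk[] = [
  { text: "The device operates at 3.3V.", embedding: [1, 0, 0] },
  { text: "Maximum current draw is 500mA.", embedding: [0, 1, 0] },
  { text: "Operating temperature: -40 to 85C.", embedding: [0, 0, 1] },
];

const queryEmbedding = [0.9, 0.1, 0]; // closest to the first chunk
const prompt = buildPrompt(
  topK(queryEmbedding, chunks, 2),
  "What voltage does the device use?"
);
console.log(prompt.includes("3.3V")); // true
```

The assembled prompt is what gets sent to the Llama 3.1 model via the Groq SDK, grounding the response in the retrieved chunks.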
- Process the documents:

  ```bash
  npm run process-docs
  ```
- Clear the database (required before processing new documents):

  ```bash
  npm run clear-db
  ```
- Transpile (required after changing the Electron files):

  ```bash
  npm run transpile
  ```
- Start the React dev server:

  ```bash
  npm run dev:react
  ```
- Start the app:

  ```bash
  npm run electron-dev
  ```
This project is licensed under the MIT License.
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
Happy Chatting!