In this project I have built an app that can answer questions across multiple PDFs using Google's gemini-1.5-flash model.
Updated Sep 12, 2024 - Python
This repository provides a comprehensive guide on integrating LlamaIndex with Google Gemini to build an effective Retrieval-Augmented Generation (RAG) system for Question and Answer (Q&A) applications.
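The retrieval step at the heart of the RAG pipelines listed here can be sketched in plain Python: embed the query, score it against pre-embedded document chunks by cosine similarity, and pass the top matches to the LLM. This is a minimal stdlib-only sketch; the chunk IDs and 3-dimensional vectors are toy placeholders for what a real embedding model (e.g. Google's embedding API) would return.

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, top_k=2):
    # Rank document chunks by similarity to the query embedding
    # and return the identifiers of the top_k best matches.
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [chunk_id for chunk_id, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings"; a real system would store vectors
# from an embedding model in a vector database instead.
chunks = {
    "intro.pdf#p1":  [0.9, 0.1, 0.0],
    "method.pdf#p3": [0.2, 0.8, 0.1],
    "refs.pdf#p9":   [0.0, 0.1, 0.9],
}
query = [0.85, 0.2, 0.05]
print(retrieve(query, chunks))  # → ['intro.pdf#p1', 'method.pdf#p3']
```

In a full pipeline, the retrieved chunk texts are then stuffed into the LLM prompt as context, which is what grounds the model's answer in the source PDFs.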
An advanced AI-powered tool for parsing and analyzing logs to identify patterns and anomalies, providing actionable insights that simplify log analysis and speed up problem diagnosis and resolution.
An intelligent, domain-specific Medical Assistant Chatbot built with a full-stack Retrieval-Augmented Generation (RAG) pipeline. This application answers complex medical queries by retrieving and synthesizing information directly from trusted PDF documents, ensuring accurate and context-aware responses powered by state-of-the-art LLMs.
Chat with your PDFs using RAG-powered AI that answers questions from your documents built with LangChain, Pinecone, and Groq.