This is a personal knowledge base assistant built with LLMs. It is designed for efficient management and retrieval of large, complex collections of information, making it a practical tool for knowledge acquisition.
This project is based on this tutorial.
- CPU: Intel Core i5 or higher
- RAM: 4GB or higher
- OS: Windows, macOS, Linux
Clone the repo:
git clone https://github.com/Zoooooone/LLM_project.git
cd LLM_project
Create a Conda environment and install dependencies:
- Python version 3.9 or higher
conda create -n llm-project python=3.10.11
conda activate llm-project
pip install -r requirements.txt
Run the project:
./run.sh
or
python -m project.serve.main
This project is based on LangChain, a framework that simplifies building applications with LLMs. The architecture mainly consists of the following parts:
- LLM Layer: Encapsulates calls to the OpenAI API.
- Data Layer: Includes the source data for the personal knowledge base and the embedding API.
- Database Layer: A vector database built from the source data using Chroma.
- Application Layer: The top-level encapsulation of the core features; it wraps the retrieval Q&A chain provided by LangChain.
- Service Layer: Service access for this project, implemented as a Gradio demo.
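The layers above can be sketched as a minimal, self-contained retrieval flow. This is only an illustration, not the project's actual code: the toy bag-of-words `embed` function stands in for the real embedding API, the `VectorStore` class stands in for Chroma, and the `answer` function stands in for the retrieval Q&A chain (it formats the prompt that would be sent to the OpenAI API rather than calling it).

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts (stand-in for the embedding API).
    return Counter(re.findall(r"\w+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class VectorStore:
    # Stand-in for the Database Layer: a vector index over the source data.
    def __init__(self, docs):
        self.docs = docs
        self.vectors = [embed(d) for d in docs]

    def retrieve(self, query: str, k: int = 1):
        # Rank documents by similarity to the query and return the top k.
        q = embed(query)
        ranked = sorted(
            zip(self.docs, self.vectors),
            key=lambda pair: cosine(q, pair[1]),
            reverse=True,
        )
        return [doc for doc, _ in ranked[:k]]


def answer(store: VectorStore, question: str) -> str:
    # Stand-in for the Application Layer's retrieval Q&A chain:
    # retrieve context, then build the prompt an LLM would receive.
    context = "\n".join(store.retrieve(question, k=1))
    return f"Context:\n{context}\n\nQuestion: {question}"


store = VectorStore([
    "Chroma is a vector database for embeddings.",
    "Gradio builds quick web demos for ML models.",
])
print(answer(store, "What is a vector database?"))
```

In the real project, LangChain's retrieval chain plays the role of `answer`, with Chroma as the store and OpenAI models for both embedding and generation.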
The following diagram illustrates the interaction flow between the user and the assistant.


