This is the backend API for the RejuveBio Platform AI Assistant.
Before you begin, ensure you have the following installed:
- Python 3.8+
- Poetry (for managing dependencies)
First, clone the repository and navigate to the project folder:

```bash
git clone https://github.com/rejuve-bio/AI-Assistant.git
cd AI-Assistant
```

Install the required dependencies for the project:

```bash
poetry install
```

Activate the Poetry-managed virtual environment:

```bash
poetry shell
```

The application uses environment variables to set up its parameters.
Environment Variables
The .env file contains sensitive information like API keys, credentials, and configuration overrides. The .env.example file is provided as a template. You can copy it to a .env file and fill in your actual values.
```bash
cp .env.example .env
```

Ensure that the environment variables are set correctly in .env before running the application:
- LLM Model Configuration:
  - BASIC_LLM_PROVIDER: Provider for lighter tasks (openai or gemini).
  - BASIC_LLM_VERSION: Version for the basic model (gpt-3.5-turbo, gemini-lite, etc.).
  - ADVANCED_LLM_PROVIDER: Provider for advanced tasks (openai or gemini).
  - ADVANCED_LLM_VERSION: Version for the advanced model (gpt-4o, gemini-pro, etc.).
- API Keys:
  - OPENAI_API_KEY: Your OpenAI API key.
  - GEMINI_API_KEY: Your Gemini API key.
- Neo4j Configuration:
  - NEO4J_URI, NEO4J_USERNAME, NEO4J_PASSWORD: Connection details for the Neo4j database.
- Annotation Service Configuration:
  - ANNOTATION_AUTH_TOKEN: Authentication token for the annotation service.
  - ANNOTATION_SERVICE_URL: The URL of the annotation service, which processes queries.
- Flask Configuration:
  - FLASK_PORT: Port for the Flask server (default: 5002).
- Qdrant Configuration:
  - QDRANT_CLIENT: URL of the Qdrant instance (default: http://localhost:6333).
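Putting the variables above together, a filled-in .env might look like the following sketch. All values are placeholders, and the Neo4j and Qdrant addresses assume default local installs:

```env
# LLM model configuration
BASIC_LLM_PROVIDER=openai
BASIC_LLM_VERSION=gpt-3.5-turbo
ADVANCED_LLM_PROVIDER=openai
ADVANCED_LLM_VERSION=gpt-4o

# API keys (placeholders -- use your own)
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...

# Neo4j (assumes a local instance on the default Bolt port)
NEO4J_URI=bolt://localhost:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=your_password

# Annotation service (placeholders)
ANNOTATION_AUTH_TOKEN=...
ANNOTATION_SERVICE_URL=http://your-annotation-service

# Flask
FLASK_PORT=5002

# Qdrant (assumes a local instance on the default port)
QDRANT_CLIENT=http://localhost:6333
```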
Once your environment is configured, you can run the Flask server and use the AI Assistant API.
Make sure a local Qdrant instance is running:

```bash
docker run -d \
  -p 6333:6333 \
  -v qdrant_data:/qdrant/storage \
  qdrant/qdrant
```

First, generate and copy your authentication token:
```bash
python helper/access_token_generator.py
```

Use this token in your API requests:
- For Postman: add the header `Authorization: Bearer your_token_here`
- For cURL: add `-H "Authorization: Bearer your_token_here"`
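From Python, the same bearer token can be attached when calling the API. The sketch below uses only the standard library; the token value is a placeholder, and `build_query_request` is a helper name chosen here for illustration, not part of the project:

```python
import json
import urllib.request

def build_query_request(token: str, query: str,
                        url: str = "http://localhost:5002/query") -> urllib.request.Request:
    """Build an authenticated POST request for the /query endpoint."""
    payload = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# Once the Flask server is running, the request can be sent with
# urllib.request.urlopen(req) and the JSON response read from it.
req = build_query_request(
    "your_token_here",
    "What enhancers are involved in the formation of the protein p78504?",
)
```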
Run the Flask server with the following command:
```bash
python run.py
```

This will start the server at http://localhost:5002.
You can send a POST request to the /query endpoint to interact with the AI Assistant.
Example using curl:
```bash
curl -X POST http://localhost:5002/query \
  -H "Content-Type: application/json" \
  -d '{"query": "What enhancers are involved in the formation of the protein p78504?"}'
```

Request Body:
```json
{
  "query": "Your natural language query here"
}
```

Response:
A JSON object containing the processed results from the AI assistant, based on the model's analysis.
Acknowledgments

- OpenAI for providing the GPT models.
- Google for the Gemini models.
- Neo4j for the graph database technology.
- Flask for the lightweight web framework.
Docker Setup

Alternatively, you can run the application with Docker Compose. First, clone the repository and navigate to the project folder:

```bash
git clone https://github.com/rejuve-bio/AI-Assistant.git
cd AI-Assistant
```

Ensure that the environment variables are set correctly in .env before running the application:
Once your environment is configured, build and start the containers:

```bash
docker-compose up --build
```

Example using curl:

```bash
curl -X POST http://localhost:5002/query \
  -H "Content-Type: application/json" \
  -d '{"query": "your query here"}'
```

When you are done, stop the containers:

```bash
docker-compose down
```
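For orientation, a Compose file for a stack like this typically wires the API together with Neo4j and Qdrant. The sketch below is purely illustrative; the service names, images, and port mappings are assumptions, not the repository's actual docker-compose.yml:

```yaml
version: "3.8"
services:
  ai-assistant:          # hypothetical service name for the Flask API
    build: .
    env_file: .env
    ports:
      - "5002:5002"
    depends_on:
      - neo4j
      - qdrant
  neo4j:
    image: neo4j:5
    ports:
      - "7687:7687"
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
    volumes:
      - qdrant_data:/qdrant/storage
volumes:
  qdrant_data:
```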