Dark RL provides a high-level interface for interactive, online learning with large language models. The `OnlineLLM` interface performs training and inference efficiently within a single model, empowering LLMs to learn in real time from user feedback.
> [!WARNING]
> Dark RL is in alpha.
- 🧠 Interactive and Online Learning: Continuously fine-tune your models with new data using LoRA, allowing them to acquire new skills without full retraining.
- 🔌 Adapter-Based Skills: Manage different LoRA adapters as distinct "skills" that can be loaded and used for specific tasks.
- 🚀 Unified Architecture: A single model instance handles both training and inference concurrently, using CUDA streams to manage GPU workloads efficiently.
- 🚀 Advanced CUDA Kernels: Specialized CUDA kernels built for online learning workloads.
- 🎆 MCP Integration: Teach an agent to become proficient with any MCP server.
- 💡 Simple API: A clean and intuitive API that makes it easy to integrate online learning into your applications.
Interactive Learning is a human-in-the-loop training process where an AI model learns incrementally from real-time feedback. Instead of training on a static dataset, the model's understanding is refined through a continuous cycle of action, feedback, and correction.
In Dark RL, this is achieved by:
- Observing the model's output for a given prompt.
- Providing corrective examples via the `.learn()` method.
- Updating a LoRA adapter with this new knowledge.
This approach allows you to "teach" the model new skills, correct its mistakes, and adapt its behavior to specific tasks, much like teaching a human. Because LoRA adapters are small and efficient, this learning process can happen in real-time, making it possible to shape the model's capabilities interactively.
Here's a minimal example of how to use `OnlineLLM` to generate text and teach the model a new skill.

```python
from dark import OnlineLLM

llm = OnlineLLM("Qwen/Qwen2.5-VL-7B-Instruct")

# Generate with the base model.
prompt = "What is the capital of France?"
print(f"User: {prompt}")
response = llm.generate(prompt)
print(f"Assistant: {response}")
# Expected output: Paris

# Teach the model a new skill by training a LoRA adapter
# on a handful of corrective examples.
learning_examples = [
    {"prompt": "A greeting in Zoggian", "response": "zog"},
    {"prompt": "How to say 'hello' in Zoggian?", "response": "zog"},
]
print("\nLearning the Zoggian language...")
llm.learn(learning_examples, adapter="zoggian-language")

# Generate again, this time with the new adapter loaded.
prompt_with_skill = "Say 'hello' in Zoggian."
print(f"User: {prompt_with_skill}")
response_with_skill = llm.generate(prompt_with_skill, adapter="zoggian-language")
print(f"Assistant: {response_with_skill}")
# Expected output: zog
```

Install Dark RL with pip:

```bash
pip install dark-rl
```

> [!NOTE]
> A minimum of 48 GB of VRAM is required.
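Before installing, it can help to confirm your GPU actually meets the VRAM requirement. Here's a small, optional pre-flight check using plain PyTorch (not part of the Dark RL API):

```python
import torch

# Dark RL needs at least 48 GB of VRAM on a single GPU.
props = torch.cuda.get_device_properties(0)
total_gb = props.total_memory / 1024**3
print(f"{props.name}: {total_gb:.1f} GB of VRAM")
assert total_gb >= 48, "Dark RL requires a GPU with at least 48 GB of VRAM"
```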
Dark RL uses a single model instance to handle both training and inference tasks simultaneously. This is made possible through the use of CUDA streams, which allow for the concurrent execution of different GPU operations.
- Inference Stream: Generation tasks (`generate`, `stream`) run on a dedicated inference stream, ensuring they execute with high priority and low latency.
- Training Stream: LoRA fine-tuning tasks (`learn`) run on a separate stream.
This architecture allows the server to remain responsive to inference requests even while the model is being fine-tuned in the background. An asyncio lock is used to ensure that the model's LoRA weights are swapped safely between tasks, preventing race conditions.
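As an illustration, here's a minimal sketch of overlapping the two from client code. It assumes the synchronous `generate`/`learn` API from the quick start and uses `asyncio.to_thread` to drive both from one event loop; this is not necessarily how the Dark RL server schedules work internally:

```python
import asyncio

from dark import OnlineLLM


async def main():
    llm = OnlineLLM("Qwen/Qwen2.5-VL-7B-Instruct")

    examples = [{"prompt": "A greeting in Zoggian", "response": "zog"}]

    # Kick off LoRA fine-tuning in the background (training stream)...
    training = asyncio.create_task(
        asyncio.to_thread(llm.learn, examples, adapter="zoggian-language")
    )

    # ...while generation requests keep being served (inference stream).
    answer = await asyncio.to_thread(llm.generate, "What is the capital of France?")
    print(answer)

    # Wait for the adapter update to finish before using it.
    await training


asyncio.run(main())
```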
You can easily deploy a Dark RL server on a cloud GPU instance like RunPod. Here’s a basic guide for a machine with a 48GB VRAM card (e.g., an RTX A6000).
1. Choose a RunPod Template:
   - Start a new Pod and select the "RunPod Pytorch 2.6" template. This provides a clean environment with Python, PyTorch, and CUDA pre-installed.
   - Choose a GPU with at least 48 GB of VRAM.

2. Connect to the Pod and Start the Server:
   - Once the Pod is running, connect to it via SSH.
   - First, install `uv` if it's not already available:

     ```bash
     pip install uv
     ```

   - Clone the repository and start the server. `uv` will handle creating a virtual environment, installing dependencies, and running `websocket_server.py`:

     ```bash
     git clone https://github.com/agentsea/dark.rl.git
     cd dark.rl
     uv run python websocket_server.py
     ```

3. Expose the Port:
   - The websocket server runs on port 8000. In the RunPod dashboard for your Pod, expose this port to make the UI accessible over the internet.
Your Dark RL server is now running and ready for interactive learning.
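To verify the deployment, you can open a websocket connection from your own machine, for example with the `websockets` package. The payload below is purely illustrative; the actual message schema is defined by `websocket_server.py` in the repository:

```python
import asyncio
import json

import websockets  # pip install websockets


async def check():
    # Replace with your Pod's public hostname and the exposed port.
    uri = "ws://YOUR_POD_HOST:8000"
    async with websockets.connect(uri) as ws:
        # Hypothetical message shape; see websocket_server.py
        # for the protocol the server actually expects.
        await ws.send(json.dumps({"prompt": "What is the capital of France?"}))
        print(await ws.recv())


asyncio.run(check())
```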
- Darknet for the amazing style
- Nano-VLLM for their Qwen-3 CUDA kernels
