AI Copilot for Vim/NeoVim
Updated Feb 28, 2025 - Python
A high-performance API server providing OpenAI-compatible endpoints for MLX models. Built with Python on the FastAPI framework, it offers an efficient, scalable, and user-friendly way to run MLX-based vision and language models locally behind an OpenAI-compatible interface.
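Because the server speaks the OpenAI protocol, any standard client can target it by swapping the base URL. A minimal sketch of the request body such a server expects; the endpoint path, port, and model name below are illustrative assumptions, not taken from the project:

```python
import json

def build_chat_request(model: str, user_message: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style /v1/chat/completions request body.

    Field names follow the OpenAI chat API; the model identifier
    passed in is a placeholder, not from the project.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }

# A local OpenAI-compatible server is typically reached at a URL like
# this (host, port, and path are assumptions for illustration):
BASE_URL = "http://localhost:8000/v1/chat/completions"

payload = build_chat_request("mlx-community/some-model", "Hello!")
body = json.dumps(payload)
```

Posting `body` to `BASE_URL` with any HTTP client would then exercise the server exactly as an OpenAI SDK would.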
Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard.
Build an Autonomous Web3 AI Trading Agent (BASE + Uniswap V4 example)
Experimental: MLX model provider for Strands Agents - Build, train, and deploy AI agents on Apple Silicon.
Reinforcement learning for text generation on MLX (Apple Silicon)
Various LLM resources and experiments
Adds MLX support to Pydantic AI through LM Studio or mlx-lm, so MLX-compatible HF models can run on Apple Silicon.
Federated Fine-Tuning of LLMs on Apple Silicon with Flower.ai and MLX-LM
A comprehensive toolkit for end-to-end continued pre-training, fine-tuning, monitoring, testing and publishing of language models with MLX-LM
LLM model inference on Apple Silicon Mac using the Apple MLX Framework.
MLX inference service compatible with the OpenAI API, built on MLX-LM and MLX-VLM.
🤖 Run Strands Agents on Apple Silicon with ease—perform inference, fine-tune models, and leverage vision capabilities using Python and LoRA training.
This is part 2 of a GraphRAG system in which the user interacts with the data through two data structures: a vector database (Chroma DB) and a graph database (Neo4j). I combine the probabilistic nature of vector embeddings with the determinism of a knowledge graph to minimise hallucination and maximise explainability. The domain: the 'Electronic' music genre.
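The hybrid idea above can be sketched in plain Python: vector search proposes candidates probabilistically, and the knowledge graph filters and expands them deterministically so every answer is grounded in an explicit edge. All data and function names here are illustrative stand-ins for the project's Chroma DB and Neo4j calls:

```python
# Toy hybrid retrieval: vector search proposes, the graph verifies/expands.

# Pretend vector-search output: (doc_id, similarity) pairs - probabilistic.
vector_hits = [("detroit_techno", 0.91), ("ambient", 0.62), ("dub", 0.40)]

# Pretend knowledge graph: deterministic genre relations.
graph_edges = {
    "detroit_techno": ["techno", "electro"],
    "ambient": ["downtempo"],
}

def hybrid_retrieve(hits, graph, min_sim=0.5):
    """Keep vector hits above a similarity threshold, then expand each
    through the knowledge graph so every returned item is backed by an
    explicit, explainable relation."""
    results = []
    for doc_id, sim in hits:
        if sim < min_sim:
            continue  # prune low-confidence embedding matches
        results.append(doc_id)
        results.extend(graph.get(doc_id, []))  # deterministic expansion
    return results

grounded = hybrid_retrieve(vector_hits, graph_edges)
```

The threshold prunes embedding noise (a hallucination source), while the graph expansion only ever adds items that exist as edges, which is what makes the final result explainable.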
OFX File Creator is a compact Python library/CLI that converts CSV/Excel bank exports into valid OFX statements. It normalizes vendor columns, parses dates and amounts, infers TRNTYPE via configurable YAML/JSON rules (optional mlx-lm enrichment), and includes examples, tests, and GitHub Actions CI.
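Rule-driven TRNTYPE inference of the kind described could look something like the sketch below; the rule schema, matching order, and sign-based fallback are assumptions for illustration, not the library's actual format:

```python
# Hypothetical TRNTYPE rules, as they might be loaded from YAML/JSON:
# each rule maps a vendor substring to an OFX transaction type code.
RULES = [
    {"contains": "PAYROLL", "trntype": "CREDIT"},
    {"contains": "ATM", "trntype": "ATM"},
    {"contains": "CHECK", "trntype": "CHECK"},
]

def infer_trntype(vendor: str, amount: float, rules=RULES) -> str:
    """Return the first matching rule's TRNTYPE; otherwise fall back
    on the sign of the amount (positive -> CREDIT, negative -> DEBIT)."""
    name = vendor.upper()
    for rule in rules:
        if rule["contains"] in name:
            return rule["trntype"]
    return "CREDIT" if amount > 0 else "DEBIT"
```

First-match-wins keeps the rule file easy to reason about: more specific vendor patterns simply go earlier in the list.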
Prompt LLM Bench is a platform that discovers compatible Hugging Face models on-the-fly, runs reproducible multi-model evaluations, and recommends the optimal prompt–LLM pair based on accuracy, latency, and resource efficiency.
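Recommending an "optimal" prompt–LLM pair implies some scoring function over accuracy, latency, and resource use. One plausible shape is a weighted score like the sketch below; the weights, caps, and candidate names are assumptions, not the platform's actual values:

```python
def pair_score(accuracy, latency_s, mem_gb,
               w_acc=0.6, w_lat=0.25, w_mem=0.15,
               max_latency_s=10.0, max_mem_gb=32.0):
    """Combine accuracy (higher is better) with latency and memory
    (lower is better) into a single score in [0, 1]. Latency and
    memory are normalized against illustrative caps."""
    lat_term = max(0.0, 1.0 - latency_s / max_latency_s)
    mem_term = max(0.0, 1.0 - mem_gb / max_mem_gb)
    return w_acc * accuracy + w_lat * lat_term + w_mem * mem_term

# Hypothetical benchmark results for two prompt/model pairs:
candidates = {
    "prompt_a/model_x": pair_score(0.90, 2.0, 8.0),
    "prompt_b/model_y": pair_score(0.85, 0.5, 4.0),
}
best = max(candidates, key=candidates.get)
```

Here the slightly less accurate pair wins because its latency and memory terms dominate, which is exactly the trade-off such a recommender has to make explicit.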
Fine-tuning open-source LLMs for the Coreference Resolution task using mlx-lm