Site Engineer is a fully local full-stack web application that transforms a simple prompt into a complete, responsive website — all with the help of a large language model running on your own GPU. No cloud. No API keys. No limits.
## Features

- 🧠 **Powered by DeepSeek Coder 6.7B**, streamed in real time via `llama.cpp`.
- ⚙️ **Full-Stack Application**: combines a FastAPI backend with a vanilla JavaScript frontend for seamless interaction.
- 💻 **Runs Fully Local**: uses llama.cpp's `llama-cli.exe` to run LLM inference directly on your GPU. No internet, no cost.
- ⚡ **Real-Time Streaming Output**: experience website generation token by token as code is streamed into a live editor and preview.
- 🧠 **Deep Prompt Engineering**: a custom-tuned system prompt guides the LLM to produce high-quality, clean, and fully responsive HTML, CSS, and JS code.
- 🚀 **GPU-Accelerated Inference**: support for `--n-gpu-layers` ensures you get maximum performance out of your hardware (see the example invocation after this list).
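
For reference, a typical `llama-cli` invocation with GPU offloading looks something like the following. The layer count, context size, and prompt wording here are illustrative assumptions, not the app's actual call; run `llama-cli --help` on your build to confirm the flags:

```bash
# Illustrative llama-cli call; the app's real flags and prompt are assumptions.
llama-cli.exe \
  -m models/deepseek-coder-6.7b-instruct.Q4_K_M.gguf \
  --n-gpu-layers 35 \
  -c 4096 \
  -n 2048 \
  -p "You are a senior web developer. Generate a complete, responsive single-file website for: Portfolio site for a game developer with a dark theme."
```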
## Requirements

- Windows OS (currently tested only on Windows)
- Python 3.7+
- A consumer GPU with at least 6 GB of VRAM
- Internet access (only for the initial model download)
## Installation

Clone the repository and install the Python dependencies:

```bash
git clone https://github.com/YOUR_USERNAME/WebSite-Generator.git
cd WebSite-Generator
pip install -r requirements.txt
```

- Create a folder named `models`:

```bash
mkdir models
```

- Download the model file: `deepseek-coder-6.7b-instruct.Q4_K_M.gguf`
- Move it into the `models` folder.
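
If you have the Hugging Face CLI installed, one common way to fetch that file is from a community GGUF mirror. The repo shown below is one such mirror, not an official part of this project:

```bash
# Fetch the quantized model from a community mirror into the models folder.
pip install -U "huggingface_hub[cli]"
huggingface-cli download TheBloke/deepseek-coder-6.7B-instruct-GGUF \
  deepseek-coder-6.7b-instruct.Q4_K_M.gguf --local-dir models
```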
Download or build `llama-cli.exe` and place it in the root directory.

🛠️ Tip: You can compile it using `cmake` and `make`, or download precompiled binaries from the community.
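
For example, a typical GPU build of llama.cpp looks like the sketch below. Flag names have changed across llama.cpp versions (older releases used `-DLLAMA_CUBLAS=ON`), so treat this as a starting point and check the upstream build docs:

```bash
# Build llama.cpp with CUDA support; run from a separate directory.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
# llama-cli(.exe) ends up under build/bin (build/bin/Release on Windows).
```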
## Run the App

```bash
python main.py
```

Your default browser will open automatically to `http://127.0.0.1:11434`.
You’re now ready to generate fully functional websites using a single text prompt!
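
Under the hood, a launcher like this only needs a few lines. The sketch below is a plausible shape for `main.py`, not the repo's actual code; the app object, delay, and module layout are assumptions:

```python
# Hypothetical launcher sketch: start uvicorn, then pop the browser.
import threading
import webbrowser

import uvicorn
from fastapi import FastAPI

app = FastAPI()  # stand-in for the real application object

def open_browser() -> None:
    # Give the server a moment to bind the port before opening the page.
    webbrowser.open("http://127.0.0.1:11434")

if __name__ == "__main__":
    threading.Timer(1.0, open_browser).start()
    uvicorn.run(app, host="127.0.0.1", port=11434)
```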
## How It Works

- You enter a text prompt, like "Portfolio site for a game developer with a dark theme."
- The prompt is sent to the DeepSeek Coder 6.7B model running locally via `llama-cli`.
- The model streams code token by token through FastAPI to the browser (a sketch of this endpoint follows below).
- A live editor updates HTML/CSS/JS in real time, with an instant preview!
- ✅ No internet connection required after setup.
- ✅ No OpenAI, no HuggingFace API keys.
- ✅ 100% local. 100% free.
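
For the curious, here is a minimal sketch of what the SSE streaming endpoint mentioned above can look like. It is an illustration, not the repo's actual code: the route name, chunk size, and flags are assumptions, and a real endpoint would also need to trim llama-cli's prompt echo from the stream:

```python
# Hypothetical sketch of an SSE endpoint that streams llama-cli output.
import asyncio
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

LLAMA_CMD = [
    "llama-cli.exe",
    "-m", "models/deepseek-coder-6.7b-instruct.Q4_K_M.gguf",
    "--n-gpu-layers", "35",  # offload as many layers as your VRAM allows
]

@app.get("/generate")
async def generate(prompt: str):
    async def event_stream():
        # Launch llama-cli and forward its stdout to the browser as SSE events.
        proc = await asyncio.create_subprocess_exec(
            *LLAMA_CMD, "-p", prompt,
            stdout=asyncio.subprocess.PIPE,
        )
        while True:
            chunk = await proc.stdout.read(64)
            if not chunk:
                break
            # JSON-encode each chunk so raw newlines cannot break SSE framing.
            yield f"data: {json.dumps(chunk.decode(errors='ignore'))}\n\n"
        await proc.wait()

    return StreamingResponse(event_stream(), media_type="text/event-stream")
```

On the frontend, a vanilla JS `EventSource` pointed at such a route receives each `data:` event and appends the decoded chunk to the live editor.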
## Tech Stack

| Layer | Tech |
|---|---|
| LLM Backend | DeepSeek Coder 6.7B (GGUF) |
| Inference | llama.cpp (llama-cli.exe) |
| Server | FastAPI |
| Frontend | HTML, CSS, Vanilla JS |
| Streaming | Server-Sent Events (SSE) |
*Coming soon...*
Got a feature idea or bug report? Feel free to open an Issue or drop a Pull Request!
Created by FR34K — powered by passion, code, and caffeine.
