A trial attempt at a DeepSite V2-inspired tool that can be run entirely locally, without cloud GPU computation or any cloud-based service.

FR34KY-CODER/Site-Engineer

🚧 Site Engineer: AI-Powered Local Website Generator

Site Engineer is a fully local full-stack web application that transforms a simple prompt into a complete, responsive website — all with the help of a large language model running on your own GPU. No cloud. No API keys. No limits.

🧠 Powered by DeepSeek Coder 6.7B, streamed in real-time via llama.cpp.

*(Screenshot: the Site Engineer editor and live preview)*


✨ Key Features

  • ⚙️ Full-Stack Application: combines a FastAPI backend with a vanilla JavaScript frontend for seamless interaction.

  • 💻 Runs Fully Local: uses llama.cpp's llama-cli.exe to run LLM inference directly on your GPU, with no internet connection and no cost.

  • ⚡ Real-Time Streaming Output: watch website generation token by token as code is streamed into a live editor and preview.

  • 🧠 Deep Prompt Engineering: a custom-tuned system prompt guides the LLM to produce clean, high-quality, fully responsive HTML, CSS, and JS code.

  • 🚀 GPU-Accelerated Inference: support for --n-gpu-layers gets maximum performance out of your hardware.
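
The backend's call into llama.cpp might look roughly like the sketch below. The flags shown (`-m`, `-p`, `--n-gpu-layers`) are standard llama.cpp options, but the exact invocation and the layer count are assumptions for illustration, not the project's actual settings:

```python
import subprocess
from typing import Iterator, List


def build_llama_command(model_path: str, prompt: str, n_gpu_layers: int = 35) -> List[str]:
    """Assemble a llama-cli invocation for GPU-accelerated inference.

    The layer count is an illustrative default; tune it to your VRAM.
    """
    return [
        "llama-cli.exe",
        "-m", model_path,                     # path to the GGUF model file
        "-p", prompt,                         # the user's website prompt
        "--n-gpu-layers", str(n_gpu_layers),  # how many layers to offload to the GPU
    ]


def run_llama(model_path: str, prompt: str) -> Iterator[str]:
    """Stream llama-cli's stdout line by line as it is produced."""
    proc = subprocess.Popen(
        build_llama_command(model_path, prompt),
        stdout=subprocess.PIPE,
        text=True,
    )
    assert proc.stdout is not None
    for line in proc.stdout:
        yield line
```

Streaming from the subprocess pipe rather than waiting for completion is what makes token-by-token output in the browser possible.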


📦 Setup and Installation

🔧 Requirements

  • Windows OS (currently tested only on Windows)
  • Python 3.7+
  • A consumer GPU with at least 6 GB of VRAM
  • Internet access (only for the initial model download)

🧪 Step-by-Step Installation

1. 📁 Clone the Repository

```
git clone https://github.com/FR34KY-CODER/Site-Engineer.git
cd Site-Engineer
```

2. 📦 Install Python Dependencies

```
pip install -r requirements.txt
```

3. 📥 Download the Model

  • Create a folder named models and place a GGUF build of DeepSeek Coder 6.7B inside it:

```
mkdir models
```

4. ⚙️ Get llama-cli.exe

Download or build llama-cli.exe and place it in the root directory.

🛠️ Tip: You can compile it using cmake and make or download precompiled binaries from the community.
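
Before launching, it can help to verify the two manual setup steps above. This is a hypothetical helper, not part of the project; it only checks the layout this README describes (a .gguf model in models/ and llama-cli.exe in the repository root):

```python
from pathlib import Path
from typing import List


def preflight_check(root: str = ".") -> List[str]:
    """Return a list of setup problems; an empty list means ready to run."""
    base = Path(root)
    problems = []

    models_dir = base / "models"
    if not models_dir.is_dir() or not any(models_dir.glob("*.gguf")):
        problems.append("no .gguf model found in models/")

    if not (base / "llama-cli.exe").is_file():
        problems.append("llama-cli.exe not found in the repository root")

    return problems
```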


▶️ Run the Application

```
python main.py
```

Your default browser will open automatically to:

```
http://127.0.0.1:11434
```

You’re now ready to generate fully functional websites using a single text prompt!


🧠 How It Works

  1. You enter a text prompt, like "Portfolio site for a game developer with a dark theme."
  2. The prompt is sent to the DeepSeek Coder 6.7B model running locally via llama-cli.
  3. The model streams code token-by-token through FastAPI to the browser.
  4. A live editor updates HTML/CSS/JS in real-time — with an instant preview!
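
Step 3's streaming uses the Server-Sent Events wire format, where each event is a `data:` line followed by a blank line. A minimal sketch of that framing, assuming a `[DONE]` sentinel (the real project may signal completion differently):

```python
from typing import Iterable, Iterator


def sse_events(tokens: Iterable[str]) -> Iterator[str]:
    """Wrap each model token in an SSE event frame.

    Note: tokens containing newlines would need to be split across
    multiple data: lines; this sketch assumes single-line payloads.
    """
    for tok in tokens:
        yield f"data: {tok}\n\n"
    yield "data: [DONE]\n\n"  # hypothetical end-of-stream sentinel
```

On the frontend, the browser's EventSource API fires one message per frame, which is how the live editor can append code token by token.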

🛡️ Privacy & Cost

  • ✅ No internet connection required after setup.
  • ✅ No OpenAI, no HuggingFace API keys.
  • ✅ 100% local. 100% free.

📚 Tech Stack

| Layer | Tech |
| --- | --- |
| LLM Backend | DeepSeek Coder 6.7B (GGUF) |
| Inference | llama.cpp (llama-cli.exe) |
| Server | FastAPI |
| Frontend | HTML, CSS, Vanilla JS |
| Streaming | Server-Sent Events (SSE) |

📸 Demo and Screenshots

*Coming soon...*


💬 Contribute / Feedback

Got a feature idea or bug report? Feel free to open an Issue or drop a Pull Request!


🚀 Credits

Created by FR34K — powered by passion, code, and caffeine.
