A full-stack application for training custom LoRA models on Replicate and generating AI images with them. Built with FastAPI (backend) and React + TypeScript (frontend).
- 🎯 Easy Dataset Management: Drag-and-drop ZIP uploads with automatic validation (12-20 images)
- 🚀 One-Click Training: Train custom LoRA models on Replicate with real-time status updates
- 🎨 Advanced Generation: Multiple generation modes with fine-tuned controls
- 🖼️ Image-to-Image: Transform existing images using your trained models
- 🔗 Multi-LoRA: Combine multiple LoRA models for unique effects
- 📱 Modern UI: Clean, responsive interface with toast notifications and loading states
- 🔄 Smart Model Management: Automatic model selection and override capabilities
# Clone the repository
git clone <repository-url>
cd flux-personalizer
# Run the complete setup script
./setup.sh
See Deploy.md for platform-agnostic deployment instructions.
- Node.js 18+ - Download from nodejs.org or install via `brew install node`
- Python 3.10+ - Download from python.org
- Replicate Account - Sign up at replicate.com and get your API token
cd backend
# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Configure environment
cp .env.example .env
# Edit .env with your Replicate credentials
open .env
# Start the server
uvicorn app:app --host 0.0.0.0 --port 8000 --reload
cd frontend
# Install dependencies
npm install
# Configure environment (optional - defaults work)
cp .env.example .env
# Start development server
npm run dev
# Required - Get from https://replicate.com/account/api-tokens
REPLICATE_API_TOKEN=r8_your_token_here
# Required - Your Replicate username
OWNER=your_username
# Required - Base name for your trained models
MODEL_BASENAME=my-lora
# Required - Frontend origin for CORS (updated from CORS_ORIGIN)
ALLOWED_ORIGIN=http://localhost:5173
# Required - Backend API URL (updated from VITE_API_BASE_URL)
VITE_API_BASE=http://localhost:8000
Security Note: The `REPLICATE_API_TOKEN` is never exposed to the frontend. All API calls are made through the backend server. The application will show a configuration banner if environment variables are missing or invalid.
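The backend's exact startup checks aren't shown here, but a minimal sketch of how these variables could be loaded and validated looks like the following. The variable names come from the README; the use of `python-dotenv` and the error-raising behavior are assumptions.

```python
# check_env.py - hypothetical startup check; variable names match the README,
# but the loading approach (python-dotenv) is an assumption, not the app's code.
import os
from dotenv import load_dotenv  # pip install python-dotenv

REQUIRED = ["REPLICATE_API_TOKEN", "OWNER", "MODEL_BASENAME", "ALLOWED_ORIGIN"]

def load_settings() -> dict:
    load_dotenv()  # reads backend/.env into the process environment
    missing = [name for name in REQUIRED if not os.getenv(name)]
    if missing:
        # The real app surfaces missing variables as a configuration banner in the UI.
        raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED}

if __name__ == "__main__":
    print(load_settings()["OWNER"])
```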
- Collect 12-20 high-quality images of your subject
- Images should show different angles, lighting, and poses
- Compress them into a ZIP file
- Choose a unique trigger word (e.g., "bennycat", "mydog")
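Before uploading, you can sanity-check the archive locally. This is a small sketch using only the Python standard library; the 12-20 count mirrors the validation described above, while the accepted image extensions are an assumption.

```python
# check_dataset.py - quick local sanity check for a training ZIP (illustrative only).
import sys
import zipfile
from pathlib import PurePosixPath

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}  # assumed accepted formats

def count_images(zip_path: str) -> int:
    with zipfile.ZipFile(zip_path) as zf:
        names = [n for n in zf.namelist() if not n.endswith("/")]
        return sum(1 for n in names if PurePosixPath(n).suffix.lower() in IMAGE_EXTS)

if __name__ == "__main__":
    n = count_images(sys.argv[1])
    print(f"{n} image(s) found")
    if not 12 <= n <= 20:
        print("Warning: the app expects 12-20 images per dataset.")
```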
- Go to the Dataset page
- Drag and drop your ZIP file
- Enter your trigger word
- Navigate to Train page
- Reselect your ZIP file (browser security requirement)
- Adjust training steps if needed (default: 1000)
- Click "Start Training"
- Monitor progress via the provided Replicate URL
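The same step can also be scripted against the backend's `POST /train` endpoint. The sketch below assumes hypothetical multipart field names (`zip_file`, `trigger_word`, `steps`); check the backend's `app.py` for the actual names.

```python
# start_training.py - illustrative call to the backend; form field names are assumptions.
import requests

API_BASE = "http://localhost:8000"

with open("my-dataset.zip", "rb") as f:
    resp = requests.post(
        f"{API_BASE}/train",
        files={"zip_file": ("my-dataset.zip", f, "application/zip")},
        data={"trigger_word": "bennycat", "steps": 1000},  # 1000 is the documented default
        timeout=60,
    )
resp.raise_for_status()
print(resp.json())  # expected to include a training id and a Replicate URL
```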
- Once training completes, go to Generate page
- Enter prompts using your trigger word: "a photo of bennycat in a garden"
- Adjust generation parameters as needed
- Click "Generate"
- View results in the gallery (last 12 images persist)
- Img2img: Transform existing images using your model
- Multi-LoRA: Combine your model with other public LoRAs
- Model Manager: Switch between different trained models
The FastAPI backend provides these endpoints:
- `POST /train` - Start training with multipart form data
- `GET /train/{id}` - Check training status
- `POST /generate` - Generate images with a JSON payload
- `POST /img2img` - Image-to-image generation with multipart form data
- `POST /generate-with-extra-lora` - Multi-LoRA generation
- `GET /health` - Health check endpoint
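For reference, here is a minimal client-side sketch hitting two of these endpoints from Python. The `/generate` payload fields shown (`prompt`, `num_outputs`) are assumptions and may differ from the backend's actual schema.

```python
# api_example.py - illustrative calls; JSON field names are assumptions.
import requests

API_BASE = "http://localhost:8000"

# Health check - confirms the backend is reachable and configured.
print(requests.get(f"{API_BASE}/health", timeout=10).json())

# Text-to-image generation with the trained model.
resp = requests.post(
    f"{API_BASE}/generate",
    json={"prompt": "a photo of bennycat in a garden", "num_outputs": 1},
    timeout=300,
)
resp.raise_for_status()
print(resp.json())  # expected to contain generated image URLs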
flux-personalizer/
├── backend/ # FastAPI application
│ ├── app.py # Main application file
│ ├── requirements.txt
│ └── .env.example
├── frontend/ # React + TypeScript app
│ ├── src/
│ │ ├── pages/ # Page components
│ │ ├── lib/ # API client & utilities
│ │ └── main.tsx
│ ├── package.json
│ └── .env.example
└── README.md
- Streaming-safe multipart handling for large file uploads
- Comprehensive error handling with user-friendly messages
- Toast notifications for real-time feedback
- API health monitoring with connection status
- LocalStorage persistence for user preferences and gallery
- Responsive design with modern UI components
- Prompt library with 8+ example templates for different use cases
- Gallery management with download, copy URL, and clear functions
- Accessibility features with proper focus states and ARIA labels
Configuration Banner Appears:
- Copy `backend/.env.example` to `backend/.env` and fill in your values
- Copy `frontend/.env.example` to `frontend/.env`
- Restart both development servers after creating the `.env` files
"API Config Error" Status:
- Ensure `VITE_API_BASE` is set in `frontend/.env`
- Verify the URL format is correct (e.g., `http://localhost:8000`)
- Restart the frontend development server
Backend won't start:
- Check your Python version: `python3 --version`
- Ensure the virtual environment is activated
- Verify the `.env` file has correct Replicate credentials
- Check that all required environment variables are set
Frontend connection errors:
- Ensure backend is running on port 8000
- Check that `ALLOWED_ORIGIN` matches your frontend URL
- Verify `VITE_API_BASE` in `frontend/.env` (updated from `VITE_API_BASE_URL`)
Training fails:
- Verify ZIP contains 12-20 valid image files
- Check Replicate account has sufficient credits
- Ensure trigger word is unique and descriptive
Generation errors:
- Confirm a model is selected in Model Manager
- Check if model training completed successfully
- Verify prompt includes your trigger word
- Use WebP format for faster loading
- Keep training steps between 500 and 2000 for best results
- Optimize images before zipping (recommended: 512x512px)
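A small helper sketch for the last two tips, using Pillow (`pip install Pillow`). The 512x512 size and WebP output follow the recommendations above; everything else (file names, quality setting) is illustrative.

```python
# prepare_images.py - resize source photos and repackage them as a ZIP (illustrative).
import io
import zipfile
from pathlib import Path
from PIL import Image  # pip install Pillow

def build_dataset(src_dir: str, out_zip: str, size: int = 512) -> None:
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(Path(src_dir).glob("*")):
            if path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
                continue  # skip non-image files
            img = Image.open(path).convert("RGB").resize((size, size))
            buf = io.BytesIO()
            img.save(buf, "WEBP", quality=90)
            zf.writestr(path.stem + ".webp", buf.getvalue())

if __name__ == "__main__":
    build_dataset("photos/", "my-dataset.zip")
```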
- All input fields validated with proper ranges
- Sliders clamped to specified bounds (lora_scale: 0.0-1.2, guidance_scale: 1.0-8.0, etc.)
- Trigger word guidance with warnings when missing from prompts
- File type and size validation for uploads
- Real-time feedback for invalid inputs
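A sketch of how the documented bounds might be enforced server-side with Pydantic. The model, field names, and defaults here are illustrative rather than the app's actual schema; only the 0.0-1.2 and 1.0-8.0 ranges come from the list above.

```python
# generation_params.py - illustrative Pydantic model enforcing the documented ranges.
from pydantic import BaseModel, Field

class GenerateRequest(BaseModel):
    prompt: str = Field(..., min_length=1)
    lora_scale: float = Field(1.0, ge=0.0, le=1.2)      # documented bound; default is illustrative
    guidance_scale: float = Field(3.5, ge=1.0, le=8.0)  # documented bound; default is illustrative

# When used as a FastAPI request body, out-of-range values are rejected
# automatically with a structured 422 response.
```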
- Training poll handles all statuses: succeeded, failed, canceled, cancelled
- Surfaces Replicate error messages when available
- Automatic model manager updates on successful training
- 10-second polling with proper cleanup
- Comprehensive error logging and user feedback
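The frontend runs this polling itself, but the same loop can be reproduced in a few lines. The sketch assumes the status endpoint returns a JSON body with a `status` field, which is an assumption; the 10-second interval and terminal statuses come from the list above.

```python
# poll_training.py - illustrative 10-second polling loop against GET /train/{id}.
import time
import requests

API_BASE = "http://localhost:8000"
TERMINAL = {"succeeded", "failed", "canceled", "cancelled"}  # statuses listed above

def wait_for_training(training_id: str) -> dict:
    while True:
        data = requests.get(f"{API_BASE}/train/{training_id}", timeout=30).json()
        status = data.get("status", "unknown")  # field name is an assumption
        print(f"status: {status}")
        if status in TERMINAL:
            return data
        time.sleep(10)  # matches the app's 10-second polling interval
```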
- Image gallery with hover controls
- Download individual images with custom filenames
- Copy image URL to clipboard with fallback
- Remove individual images from gallery
- Clear all images with confirmation
- Persistent storage (last 12 images)
- Proper form labels and ARIA attributes
- Alt text for all images
- Focus states for interactive elements
- Keyboard navigation support
- Screen reader friendly structure
- 8 example prompts across 3 categories
- Identity lock examples (3 prompts)
- Img2img composition examples (2 prompts)
- Multi-LoRA combination examples (3 prompts)
- Placeholder LoRA references (roadmaus/*)
- Auto-apply settings with templates
- Structured error responses from backend
- Toast notifications for all operations
- Graceful fallbacks for network issues
- Configuration validation banners
- Loading states with progress indicators
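The pattern behind "structured error responses" can be sketched as a FastAPI exception handler that returns a consistent JSON shape and never echoes secrets. This is an illustration of the pattern, not the project's actual handler.

```python
# error_handling.py - illustrative FastAPI handler producing structured, sanitized errors.
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

@app.exception_handler(Exception)
async def unhandled_error(request: Request, exc: Exception) -> JSONResponse:
    # Log the full exception server-side; return only a safe, generic message to the client.
    return JSONResponse(
        status_code=500,
        content={"error": "internal_error", "detail": "Something went wrong. Check the backend logs."},
    )
```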
- API token never exposed to frontend
- Input sanitization and validation
- CORS properly configured
- Error messages sanitized (no token leakage)
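CORS in FastAPI is typically wired up with `CORSMiddleware`. Below is a minimal sketch using the `ALLOWED_ORIGIN` variable from the configuration section; the exact middleware options the project uses are an assumption.

```python
# cors_setup.py - illustrative CORS configuration driven by the ALLOWED_ORIGIN variable.
import os
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=[os.environ.get("ALLOWED_ORIGIN", "http://localhost:5173")],
    allow_methods=["*"],
    allow_headers=["*"],
)
```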
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is open source. Please check the license file for details.