A unified deepfake detection framework for both image and video analysis. This repository provides a TensorFlow model that uses an Xception backbone for shared feature extraction, combined with an LSTM for temporal video processing.
- Unified architecture for both image and video deepfake detection
- Based on Xception backbone with additional LSTM layers for temporal analysis
- Supports both single-frame and multi-frame inputs
- Comprehensive logging and model checkpointing
- Modular design for easy maintenance and extension
unified-deepfake-detection/
├── com/
│ └── mhire/
│ ├── data_processing/
│ │ └── data_processing.py
│ ├── training/
│ │ └── training.py
│ └── evaluation/
│ └── evaluation.py
├── main.py
├── requirements.txt
├── README.md
└── LICENSE
- data_processing.py: Handles data loading, preprocessing, and dataset splitting
- training.py: Contains the model architecture and training pipeline
- evaluation.py: Manages model evaluation and metrics calculation
- main.py: Entry point of the application, orchestrates the entire pipeline
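The flow these modules implement is roughly the following. This is an illustrative sketch only; the function names (`prepare_dataset`, `train_model`, `evaluate_model`) are assumptions, and the real APIs live in the files listed above.

```python
# Hypothetical orchestration sketch; the actual function names in
# data_processing.py, training.py, and evaluation.py may differ.
from pathlib import Path

from com.mhire.data_processing.data_processing import prepare_dataset  # assumed name
from com.mhire.training.training import train_model                    # assumed name
from com.mhire.evaluation.evaluation import evaluate_model             # assumed name


def run_pipeline(data_path: Path, processed_dir: Path, model_dir: Path) -> None:
    # 1. Load, preprocess, and split the raw frames into train/val/test sets
    splits = prepare_dataset(data_path, processed_dir)
    # 2. Build the unified Xception + LSTM model and train it
    model = train_model(splits, model_dir)
    # 3. Compute evaluation metrics on the held-out test split
    evaluate_model(model, splits["test"])


if __name__ == "__main__":
    base_dir = Path("your/base/directory")
    run_pipeline(
        base_dir / "Datasets/FaceForensics",
        base_dir / "ProcessedData/FaceForensics",
        base_dir / "Trained_models/Unified_DeepFake_Detection_Model",
    )
```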
The system uses a dual-branch architecture:
- Image Branch: Processes single frames using Xception backbone
- Video Branch: Processes sequences using Xception + LSTM for temporal features
- Unified Output: Combines both branches for comprehensive detection
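A minimal Keras sketch of this dual-branch design is shown below; the input sizes, LSTM width, and layer names are assumptions rather than the repository's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model


def build_unified_model(frame_shape=(299, 299, 3), seq_len=16):
    """Illustrative dual-branch detector; sizes and names are assumptions."""
    # Shared Xception backbone without its classification head
    backbone = tf.keras.applications.Xception(
        include_top=False, weights="imagenet", pooling="avg"
    )

    # Image branch: one frame -> Xception feature vector
    image_in = layers.Input(shape=frame_shape, name="image_input")
    image_feat = backbone(image_in)

    # Video branch: frame sequence -> per-frame Xception features -> LSTM
    video_in = layers.Input(shape=(seq_len, *frame_shape), name="video_input")
    frame_feats = layers.TimeDistributed(backbone)(video_in)
    temporal_feat = layers.LSTM(256)(frame_feats)

    # Unified output: merge both branches into one real/fake prediction
    merged = layers.Concatenate()([image_feat, temporal_feat])
    hidden = layers.Dense(128, activation="relu")(merged)
    output = layers.Dense(1, activation="sigmoid", name="fake_probability")(hidden)

    return Model(inputs=[image_in, video_in], outputs=output)
```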
- Clone the repository:
git clone https://github.com/yourusername/unified-deepfake-detection.git
cd unified-deepfake-detection
- Create a virtual environment:
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
- Install dependencies:
pip install -r requirements.txt
The system expects data in the following structure:
dataset/
├── real/
│ ├── video1/
│ │ ├── frame001.png
│ │ ├── frame002.png
│ │ └── ...
│ └── ...
└── fake/
├── video1/
│ ├── frame001.png
│ ├── frame002.png
│ └── ...
└── ...
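As an example of how this layout can be enumerated, the sketch below walks the real/ and fake/ folders and yields one frame list per video folder. It is illustrative only; the actual loader in data_processing.py may work differently, and the frame naming pattern is assumed from the tree above.

```python
from pathlib import Path


def list_frame_sequences(dataset_dir):
    """Yield (label, sorted frame paths) for every video folder.

    Assumes the real/ and fake/ layout shown above; the real loader in
    data_processing.py may differ.
    """
    for label in ("real", "fake"):
        label_dir = Path(dataset_dir) / label
        for video_dir in sorted(p for p in label_dir.iterdir() if p.is_dir()):
            frames = sorted(video_dir.glob("*.png"))
            if frames:
                yield label, frames


# Example: count video folders per class before training
# counts = {"real": 0, "fake": 0}
# for label, _frames in list_frame_sequences("dataset"):
#     counts[label] += 1
```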
- Update the paths in main.py:
base_dir = Path("your/base/directory")
data_path = base_dir / "Datasets/FaceForensics"
processed_data_dir = base_dir / "ProcessedData/FaceForensics"
model_dir = base_dir / "Trained_models/Unified_DeepFake_Detection_Model"
- Run the pipeline:
python main.py
The system will:
- Process and split the dataset (70% train, 15% validation, 15% test)
- Train the unified model
- Evaluate performance and generate metrics
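For reference, a video-level 70/15/15 split can be done along these lines; this is an illustrative sketch, and the repository's actual split logic lives in data_processing.py.

```python
import random


def split_videos(video_ids, seed=42):
    """Split video IDs into 70% train / 15% validation / 15% test.

    Splitting by whole videos (not individual frames) avoids leaking frames
    of one video across sets. Illustrative only.
    """
    ids = list(video_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(0.70 * n)
    n_val = int(0.15 * n)
    return {
        "train": ids[:n_train],
        "val": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }
```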
The system automatically detects and utilizes available GPU resources. For optimal performance:
- CUDA-compatible GPU recommended
- Minimum 8GB GPU memory
- Updated GPU drivers
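A quick way to confirm that TensorFlow sees the GPU, and to keep it from reserving all GPU memory up front, is to enable memory growth before any GPU work starts:

```python
import tensorflow as tf

# List visible GPUs and enable memory growth so TensorFlow allocates
# memory on demand instead of claiming the whole card at startup.
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
print(f"GPUs available: {len(gpus)}")
```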
Common issues and solutions:
- GPU Memory Error:
  - Reduce batch size in training.py
  - Decrease image size or sequence length
- Data Loading Error:
  - Verify dataset structure
  - Check file permissions
  - Ensure correct path configuration
- Training Instability:
  - Adjust learning rate
  - Modify batch size
  - Check for data imbalance
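For the adjustments above, typical starting points look something like the following. The names and defaults are illustrative assumptions; the real values live in training.py.

```python
# Hypothetical tuning knobs; actual names and defaults are in training.py.
BATCH_SIZE = 8           # lower this first for GPU out-of-memory errors
IMAGE_SIZE = (299, 299)  # smaller frames also reduce memory use
SEQ_LEN = 16             # shorter sequences shrink LSTM memory and compute
LEARNING_RATE = 1e-4     # drop to 1e-5 if the loss oscillates or diverges

# To counter class imbalance between real and fake samples, weight the
# minority class during training, e.g.:
# class_weight = {0: 1.0, 1: num_real / num_fake}
# model.fit(..., class_weight=class_weight)
```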
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.
If you use this code in your research, please cite:
@misc{unified_deepfake_detection,
  author = {Syeda Aunanya Mahmud},
  title  = {Unified DeepFake Detection System},
  year   = {2025},
  url    = {https://github.com/Aunanya875/Unified-DeepFake-Detector}
}