Facial emotion recognition (FER) refers to identifying expressions conveying basic emotions such as happiness, sadness, anger, disgust, fear, surprise, and neutrality. Accurate and robust FER is significant for human-computer interaction, clinical practices, marketing analysis, user experience research, and psychological studies.
This project provides an end-to-end pipeline for real-time facial expression recognition, using YOLOv11 for face detection and an EfficientNet-B0 trained on the RAF-DB dataset for emotion classification. Inference runs on ONNX Runtime, enabling efficient execution on both CPUs and GPUs for a smooth user experience.
- ⚡ Fast & lightweight: YOLOv11 + EfficientNet-B0
- 🧠 Real-time face detection and emotion classification
- 🌍 Web-based interface with optional emoji overlays
- 💾 ONNX + quantized model for low-power deployment
- ☁️ Live deployment on Google Cloud Run with CI/CD
This application is deployed live on Google Cloud Run using a CI/CD pipeline:
GitHub → Cloud Build → Cloud Run
👉 Try it now: 🌐 Live Demo
The application workflow follows:
Input Frame → YOLOv11 (Face Detection) → EfficientNet-B0 (Emotion Classification) → Result Overlay → Web Display
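Concretely, the loop looks roughly like the sketch below. This is a minimal illustration, not the repository's actual code: the weight filenames, the 224×224 input size, and the simple [0, 1] normalization are all assumptions.

```python
# Minimal sketch of the per-frame pipeline; weight paths, input size,
# and normalization are illustrative, not the repository's exact values.
import cv2
import numpy as np
import onnxruntime as ort
from ultralytics import YOLO

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

detector = YOLO("src/models/weights/yolov11-face.pt")              # face detector
session = ort.InferenceSession("src/models/weights/emotion.onnx")  # classifier

def annotate_frame(frame: np.ndarray) -> np.ndarray:
    """Detect faces, classify each crop, and draw the results on the frame."""
    boxes = detector(frame)[0].boxes.xyxy.cpu().numpy().astype(int)
    for x1, y1, x2, y2 in boxes:
        crop = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2RGB)
        crop = cv2.resize(crop, (224, 224)).astype(np.float32) / 255.0
        blob = crop.transpose(2, 0, 1)[None]  # HWC -> NCHW, batch of 1
        logits = session.run(None, {session.get_inputs()[0].name: blob})[0]
        label = EMOTIONS[int(np.argmax(logits))]
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    return frame
```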
The project uses the RAF-DB (Real-world Affective Faces Database) dataset, which contains around 15,000 facial images annotated with seven basic emotions:
- 0 - Angry 😠
- 1 - Disgust 😧
- 2 - Fear 😨
- 3 - Happy 😃
- 4 - Sad 😞
- 5 - Surprise 😮
- 6 - Neutral 😐
During training, images are augmented with random flips, rotations, and color jitter to improve the generalization and robustness of the trained model, as sketched below.
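A torchvision pipeline implementing these augmentations might look like this (the exact magnitudes are assumptions; the project's own values live in its config):

```python
# Illustrative augmentation pipeline; parameters are assumptions,
# not the values used in src/config/custom.yaml.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    # Standard ImageNet normalization, as is typical for EfficientNet backbones
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```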
- Clone the repository:

  ```bash
  git clone https://github.com/your-user/Facial_Expression_Recognition.git
  cd Facial_Expression_Recognition
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Obtain the pretrained model weights and place them in `src/models/weights`.
- Download and extract the RAF-DB dataset into `src/data/data`.
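A quick way to verify the weights are in place is to open the ONNX model and inspect its inputs (the filename here is an assumption; use whatever the release provides):

```python
# Sanity check that the downloaded ONNX weights load correctly
# (the filename is an assumption; adjust to the actual artifact).
import onnxruntime as ort

session = ort.InferenceSession("src/models/weights/emotion.onnx")
print([(i.name, i.shape) for i in session.get_inputs()])
```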
To train the model:
- Set `IS_TRAINING: true` in `src/config/custom.yaml`
- Run:

  ```bash
  python main.py --config src/config/custom.yaml
  ```

The trained model weights will be saved to `src/models/weights`.
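For context, fine-tuning EfficientNet-B0 for the seven RAF-DB classes amounts to swapping its classifier head; a minimal sketch (not the repository's exact training code):

```python
# Illustrative model definition: EfficientNet-B0 with its classifier
# head replaced for the 7 RAF-DB emotion classes.
import torch.nn as nn
from torchvision import models

def build_model(num_classes: int = 7) -> nn.Module:
    model = models.efficientnet_b0(weights="IMAGENET1K_V1")
    in_features = model.classifier[1].in_features  # 1280 for B0
    model.classifier[1] = nn.Linear(in_features, num_classes)
    return model
```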
Convert your trained PyTorch model to ONNX format for optimized and portable deployment:
```bash
python src/train/export_onnx.py
```
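Internally, a script like `export_onnx.py` typically boils down to a single `torch.onnx.export` call; a sketch under assumed checkpoint name, input size, and opset:

```python
# Rough sketch of an ONNX export; the checkpoint filename, input size,
# and opset are assumptions and may differ from the actual script.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0()
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 7)
model.load_state_dict(torch.load("src/models/weights/best.pt", map_location="cpu"))
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # one RGB face crop
torch.onnx.export(
    model, dummy, "src/models/weights/emotion.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=17,
)
```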
To run inference:

- Set `IS_TRAINING: false` in `src/config/custom.yaml`
- Run:

  ```bash
  python main.py --config src/config/custom.yaml
  ```

Alternatively, run the FastAPI service directly:
```bash
uvicorn src.serve.app:app --host 0.0.0.0 --port 8000
```
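Once the service is up, you can exercise it from any HTTP client. The route and payload below are hypothetical; the real endpoint is defined in `src/serve/app.py`:

```python
# Hypothetical client call; the actual route and payload format are
# defined in src/serve/app.py and may differ.
import requests

with open("face.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:8000/predict",
        files={"file": ("face.jpg", f, "image/jpeg")},
    )
print(resp.json())
```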
The trained model achieves:

- Accuracy: over 71% on the RAF-DB test set
From the confusion matrix, we can see that the model performs exceptionally well on Surprise, Happy, and Neutral, each showing strong diagonal dominance. However, Fear and Sad exhibit noticeable confusion with neighboring emotions like Angry and Disgust, suggesting that the model struggles to clearly separate these expressions, possibly due to subtle overlaps in facial cues.
Build and run the Docker container locally:
```bash
docker build -t fer-app .

# CPU-only execution
docker run -p 8000:8000 fer-app

# GPU-accelerated execution (requires the NVIDIA Container Toolkit)
docker run --gpus all -p 8000:8000 fer-app
```
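For reference, a minimal Dockerfile along these lines would support both commands above; the repository's actual file may pin versions or copy extra assets:

```dockerfile
# Illustrative Dockerfile; the repository's actual file may differ.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "src.serve.app:app", "--host", "0.0.0.0", "--port", "8000"]
```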
This project is deployed using Google Cloud Run with a fully automated CI/CD pipeline from GitHub. Follow these steps to deploy it yourself:
- Create a Project on Google Cloud Console.
- Enable the following APIs for the project:
- Cloud Build API
- Artifact Registry API
- Container Analysis API
- Navigate to Cloud Run > Create Service.
- Choose "Continuously deploy from a repository (source or function)".
- Connect your GitHub repository that contains the Dockerfile.
- Select the branch you want to auto-deploy from (e.g., `main`).
- Configure the service settings:
  - Set the container port your app listens on (e.g., `8080`)
  - Choose the appropriate CPU, memory, and instance limits
- Click Create — this will trigger a Cloud Build that builds and deploys the service.
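Alternatively, the same deployment can be triggered from the CLI in one command (the service name and region below are placeholders):

```bash
# Builds from source via Cloud Build and deploys to Cloud Run.
gcloud run deploy fer-app --source . --region us-central1 --allow-unauthenticated
```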
✅ Once deployed, your app will be live with a secure HTTPS endpoint.
🌐 Live Demo