This project compares traditional Recurrent Neural Networks (RNNs) with modern Transformer-based models for emotion detection in text, with a focus on social media content. The research explores how these architectures handle the unique challenges of informal language, including sarcasm, mixed sentiments, and linguistic noise.
- Comparative Analysis: Direct performance comparison between RNN (LSTM/GRU) and Transformer (BERT) architectures
- Social Media Focus: Specialized evaluation on noisy, user-generated text with informal language patterns
- Comprehensive Metrics: Evaluation across accuracy, F1-score, computational efficiency, and memory requirements
- Practical Insights: Identification of optimal use cases for each architecture based on deployment constraints
- Specialized text cleaning for social media content (handling emojis, slang, and typos)
- Emotion label normalization across datasets (a minimal sketch of both steps follows this list)
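A minimal sketch of the kind of cleaning and label-normalization pass described above, in Python. The slang map, label map, and use of the `emoji` package are illustrative assumptions, not the project's actual rules.

```python
import re

import emoji  # pip install emoji; demojize turns "😂" into "face_with_tears_of_joy"

# Illustrative mappings only; the real normalization tables are project-specific.
SLANG = {"u": "you", "gr8": "great", "idk": "i do not know", "tbh": "to be honest"}
LABEL_MAP = {"happiness": "joy", "happy": "joy", "furious": "anger", "scared": "fear"}

URL_RE = re.compile(r"https?://\S+|www\.\S+")
MENTION_RE = re.compile(r"@\w+")
ELONGATION_RE = re.compile(r"(.)\1{2,}")  # "soooo" -> "soo"


def clean_post(text: str) -> str:
    """Light-touch cleaning of a social-media post before tokenization."""
    text = emoji.demojize(text, delimiters=(" ", " "))  # keep emojis as emotion cues
    text = text.lower()
    text = URL_RE.sub(" ", text)
    text = MENTION_RE.sub(" ", text)
    text = ELONGATION_RE.sub(r"\1\1", text)  # cap elongated characters
    tokens = [SLANG.get(tok, tok) for tok in text.split()]
    return " ".join(tokens)


def normalize_label(label: str) -> str:
    """Map dataset-specific emotion names onto one shared label set."""
    return LABEL_MAP.get(label.lower(), label.lower())
```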
- RNN Baseline: Bidirectional LSTM with an attention mechanism
- Transformer Model: Fine-tuned BERT-base with an emotion classification head (illustrative sketches of both models follow this list)
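Minimal sketches of the two architectures, assuming PyTorch and the Hugging Face `transformers` library. The embedding size, hidden size, and six-emotion label count are placeholder assumptions, not the project's reported configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer


class BiLSTMAttention(nn.Module):
    """Bidirectional LSTM encoder with additive attention pooling over time steps."""

    def __init__(self, vocab_size, embed_dim=200, hidden_dim=128, num_emotions=6):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)            # scores each time step
        self.classifier = nn.Linear(2 * hidden_dim, num_emotions)

    def forward(self, token_ids, pad_mask):
        # token_ids: (batch, seq_len); pad_mask: (batch, seq_len), 1 for real tokens
        states, _ = self.lstm(self.embedding(token_ids))     # (batch, seq_len, 2*hidden)
        scores = self.attn(states).squeeze(-1)               # (batch, seq_len)
        scores = scores.masked_fill(pad_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        pooled = (weights * states).sum(dim=1)               # attention-weighted sum
        return self.classifier(pooled)


# Transformer baseline: BERT-base with a sequence-classification head on top.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=6  # head sized for 6 emotion classes (assumed)
)
batch = tokenizer(["i cannot believe this actually worked!!"],
                  padding=True, truncation=True, return_tensors="pt")
logits = bert(**batch).logits  # (batch, num_labels); fine-tune with cross-entropy as usual
```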
- Standard metrics (precision, recall, F1) across emotion categories
- Computational efficiency benchmarks (training time, inference speed); a minimal evaluation sketch follows this list
- Error analysis on challenging cases (sarcasm, ambiguous expressions)
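A sketch of how per-emotion metrics and a rough inference-speed figure might be collected, assuming scikit-learn and a dataloader yielding `(token_ids, pad_mask, labels)` batches that match the BiLSTM sketch above; the six-label emotion set is an assumption.

```python
import time

import torch
from sklearn.metrics import classification_report

EMOTIONS = ["anger", "fear", "joy", "love", "sadness", "surprise"]  # assumed label set


def evaluate(model, dataloader, device="cpu"):
    """Report per-emotion precision/recall/F1 and a rough inference-speed figure."""
    model.eval()
    model.to(device)
    y_true, y_pred = [], []
    start = time.perf_counter()
    with torch.no_grad():
        for token_ids, pad_mask, labels in dataloader:
            logits = model(token_ids.to(device), pad_mask.to(device))
            y_pred.extend(logits.argmax(dim=-1).cpu().tolist())
            y_true.extend(labels.tolist())
    elapsed = time.perf_counter() - start
    print(classification_report(y_true, y_pred,
                                labels=list(range(len(EMOTIONS))),
                                target_names=EMOTIONS, digits=3))
    print(f"inference speed: {len(y_true) / elapsed:.1f} samples/sec")
```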