# A Sound Detection System for the Deaf and Hard of Hearing using On-Device Machine Learning
This repository serves as the compiled package of our capstone project for Bulacan State University - Sarmiento Campus, entitled "AI-Driven Mobile Platform with IoT-Enabled Haptic Feedback for Real-Time Sound Recognition and Emergency Alerts for Deaf Individuals".
- 📖 About
- ✨ Features
- 📷 Screenshots
- ⚙️ How It Works
- 🧰 Tech Stack
- 📥 Installation
- 📋 Requirements
- 👥 Team
- 🙏 Acknowledgements
## 📖 About

Sonavi is a sound detection system that alerts Deaf and Hard of Hearing users to environmental sounds through customizable smartwatch vibrations. We harness the power of on-device machine learning to detect and classify sounds captured by a Wear OS smartwatch, process them on an Android mobile device, and deliver haptic feedback, all without requiring an internet connection.
We implemented the YAMNet audio classification model using LiteRT (formerly TensorFlow Lite) for real-time sound detection. The system allows users to detect pre-trained sounds as well as create custom sound profiles by recording or uploading their own audio samples, making it highly personalized and adaptable to individual needs.
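As a rough illustration of what this on-device pipeline looks like, the sketch below uses the TensorFlow Lite Task Audio library to run YAMNet against microphone input. The model filename (`yamnet.tflite`) and the 0.3 score threshold are assumptions for illustration, not values from the Sonavi codebase, and recording requires the `RECORD_AUDIO` permission.

```kotlin
import android.content.Context
import org.tensorflow.lite.task.audio.classifier.AudioClassifier

// Minimal sketch: load YAMNet from app assets and classify one audio window.
// Assumes the model ships as "yamnet.tflite" and RECORD_AUDIO is granted.
fun classifyOnce(context: Context) {
    val classifier = AudioClassifier.createFromFile(context, "yamnet.tflite")

    // The classifier exposes a tensor and an AudioRecord matching the
    // model's expected input format (16 kHz mono for YAMNet).
    val tensorAudio = classifier.createInputTensorAudio()
    val record = classifier.createAudioRecord()

    record.startRecording()
    tensorAudio.load(record)                  // fill the tensor from the mic
    val results = classifier.classify(tensorAudio)
    record.stop()

    // YAMNet scores 521 AudioSet classes; keep only confident labels.
    results.flatMap { it.categories }
        .filter { it.score > 0.3f }           // illustrative threshold
        .forEach { println("${it.label}: ${it.score}") }
}
```

In practice the app would run this continuously and match the surviving labels against the user's registered sounds before raising an alert.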
## ✨ Features

- 🎧 Real-time Sound Detection: Captures audio from your Wear OS smartwatch and processes it instantly on your Android phone
- 🤖 On-Device Machine Learning: Uses the YAMNet model via LiteRT (TensorFlow Lite) for accurate sound classification
- 📳 Customizable Vibration Patterns: Set unique vibration alerts for different sound types (see the sketch after this list)
- 🎵 Custom Sound Training: Create personalized sound profiles by recording or uploading 3+ samples of specific sounds you want to detect
- 🔒 Privacy-First: All processing happens on-device; no data leaves your phone
- ⚡ Low Latency: Optimized communication between watch and phone for quick notifications
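As a sketch of how per-sound vibration patterns can work on Android, assuming hypothetical sound labels and timings rather than Sonavi's actual profiles:

```kotlin
import android.content.Context
import android.os.VibrationEffect
import android.os.Vibrator

// Hypothetical label-to-pattern table; timings alternate off/on milliseconds.
val patterns = mapOf(
    "Doorbell" to longArrayOf(0, 200, 100, 200),              // two short pulses
    "Smoke alarm" to longArrayOf(0, 600, 200, 600, 200, 600)  // three long pulses
)

fun vibrateFor(context: Context, label: String) {
    val timings = patterns[label] ?: return
    val vibrator = context.getSystemService(Vibrator::class.java)
    // -1 means play the waveform once rather than repeating it.
    vibrator?.vibrate(VibrationEffect.createWaveform(timings, -1))
}
```

`VibrationEffect.createWaveform` requires API 26, so the app's Android 8.1 (API 27) floor comfortably covers it.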
## 📷 Screenshots

| Screenshot 1 | Screenshot 2 |
|---|---|
| ![]() | ![]() |
## ⚙️ How It Works

1. Capture: The Wear OS smartwatch continuously listens for ambient sounds
2. Transmit: Audio data is sent to the paired Android mobile device (see the sketch after this list)
3. Process: The mobile app uses the YAMNet ML model to classify the sound
4. Notify: If a registered sound is detected, a vibration pattern is sent back to the smartwatch
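One plausible way to implement the Transmit step is the Play Services `ChannelClient`, which is intended for streaming between a watch and a phone. The `/audio` path, the chunking, and the 16 kHz PCM format here are assumptions (YAMNet expects 16 kHz mono), not confirmed details of the app:

```kotlin
import android.content.Context
import com.google.android.gms.wearable.Wearable

// Sketch: open a channel to the paired phone and stream raw PCM chunks.
// nodeId would come from NodeClient; "/audio" is a hypothetical path.
fun streamAudio(context: Context, nodeId: String, pcmChunks: Sequence<ByteArray>) {
    val channelClient = Wearable.getChannelClient(context)
    channelClient.openChannel(nodeId, "/audio").addOnSuccessListener { channel ->
        channelClient.getOutputStream(channel).addOnSuccessListener { stream ->
            stream.use { out ->
                pcmChunks.forEach { chunk -> out.write(chunk) }
            }
        }
    }
}
```

On the phone side, a `ChannelClient.ChannelCallback` would receive the channel and feed the incoming stream to the classifier.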
## 🧰 Tech Stack

- Languages: Kotlin, Java
- Machine Learning: LiteRT (formerly TensorFlow Lite)
- ML Model: YAMNet for audio event classification
- Platform: Android 8.1+ (API 27), Wear OS 3+
- Architecture: MVVM pattern with an offline-first approach (illustrated below)
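To make the MVVM, offline-first description concrete, a minimal slice might look like the following, with hypothetical class names not taken from the codebase: the classification layer pushes results into a `ViewModel`, and the UI observes that state locally with no network dependency.

```kotlin
import androidx.lifecycle.ViewModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow

data class Detection(val label: String, val score: Float)

// Hypothetical ViewModel: holds the latest detection as observable state.
class DetectionViewModel : ViewModel() {
    private val _latest = MutableStateFlow<Detection?>(null)
    val latest: StateFlow<Detection?> = _latest   // collected by the UI layer

    // Called from the classification pipeline; everything stays on-device.
    fun onSoundDetected(label: String, score: Float) {
        _latest.value = Detection(label, score)
    }
}
```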
## 📥 Installation

- Download the latest release from the Releases page
- Install the mobile APK on your Android phone
- Install the Wear OS APK on your smartwatch
- Pair your devices if not already paired
Or build from source:

```bash
# Clone the repository
git clone https://github.com/xyugen/sonavi.git
cd sonavi

# Build the mobile app
./gradlew :mobile:assembleDebug

# Build the wear app
./gradlew :wear:assembleDebug

# Install to connected devices
./gradlew installDebug
```

## 📋 Requirements

- Mobile Device: Android 8.1 (Oreo, API 27) or higher
- Wearable Device: Wear OS 3 or higher
## 👥 Team

Capstone Project Team:

| Role | Name |
|---|---|
| Project Leader & Lead Developer | Renz Arias |
| Researcher | Ara Garong |
| Researcher | Angel Estonina |
| Quality Assurance | Jeric Gonzales |
| UI/UX Designer | Jomel Mislos |
## 🙏 Acknowledgements

| Name | Role |
|---|---|
| Dr. Mary Grace G. Hermogenes | Our ever-supportive Capstone Professor |
| Dr. Marlon D.P. Hernandez | Our ever-supportive Capstone Adviser |
This README was inspired by ScolioVis.


