A gesture-based communication system that interprets human gestures and produces relevant outputs, helping visually and speech-impaired individuals communicate effectively.
This project was developed as part of the Sciphit Hackathon and leverages state-of-the-art computer vision and deep learning tools for real-time gesture recognition.
## Features

- Real-time hand gesture recognition using MediaPipe Holistic pipelines (see the sketch after this list).
- Accurate classification with TensorFlow deep learning models.
- Computer vision powered by OpenCV for live video stream processing.
- Accessibility-focused: turns gestures into communication output for visually and speech-impaired users.
- Scalable design, ready for future integration with assistive devices.
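These features compose into a single real-time loop: OpenCV reads webcam frames, MediaPipe Holistic extracts hand landmarks from each frame, and a TensorFlow classifier maps the landmark vector to a gesture. Below is a minimal sketch of that loop; the feature layout, the `models/gesture_model.h5` path, and the commented-out prediction step are illustrative assumptions, not the project's actual code.

```python
import cv2
import mediapipe as mp
import numpy as np
# import tensorflow as tf  # uncomment once a trained model is available

mp_holistic = mp.solutions.holistic

def hand_to_vec(hand_landmarks):
    """Flatten one hand's 21 MediaPipe landmarks into a 63-dim vector."""
    if hand_landmarks is None:
        return np.zeros(21 * 3, dtype=np.float32)
    return np.array([[p.x, p.y, p.z] for p in hand_landmarks.landmark],
                    dtype=np.float32).flatten()

# Hypothetical: a Keras classifier trained on landmark vectors, stored under models/
# model = tf.keras.models.load_model("models/gesture_model.h5")

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        # Concatenate both hands into one feature vector for the classifier
        features = np.concatenate([hand_to_vec(results.left_hand_landmarks),
                                   hand_to_vec(results.right_hand_landmarks)])
        # probs = model.predict(features[np.newaxis, :], verbose=0)[0]
        # gesture = probs.argmax()  # index into the project's gesture labels
        cv2.imshow("Gesture recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```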
## Tech Stack

- Python 🐍
- TensorFlow 🔥
- MediaPipe (Holistic pipeline) 🎯
- OpenCV 👀
## Installation

- Clone this repository:

  ```bash
  git clone https://github.com/Gupta-4388/Sciphit-Hackathon-Project.git
  cd Sciphit-Hackathon-Project
  ```

- Create a virtual environment (optional but recommended):

  ```bash
  # On Linux / macOS
  python -m venv venv
  source venv/bin/activate

  # On Windows
  python -m venv venv
  venv\Scripts\activate
  ```
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Run the application:

  ```bash
  python main.py
  ```
## Project Structure

```
Sciphit-Hackathon-Project/
│── main.py           # Entry point for running the system
│── models/           # Trained ML/DL models
│── data/             # Dataset (if included or linked)
│── utils/            # Helper scripts
│── requirements.txt  # Dependencies
│── README.md         # Project documentation
```
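The exact contents and version pins live in the repository's `requirements.txt`; judging from the tech stack above, it would need at least the following packages (`numpy` is an assumption, included for array handling of landmark vectors):

```text
tensorflow
mediapipe
opencv-python
numpy
```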
## Use Cases

1. Helps visually and speech-impaired individuals communicate.
2. Can be extended to sign language recognition systems (see the model sketch after this list).
3. Useful in human-computer interaction (HCI) applications.
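On point 2: extending the system toward a sign-language vocabulary mostly means retraining the classification head on more classes. As a hedged illustration, a small Keras network over flattened MediaPipe hand landmarks could look like the following; the layer sizes, class count, and feature dimension are assumptions, not the hackathon model's actual architecture.

```python
import tensorflow as tf

NUM_CLASSES = 10           # assumed vocabulary size; grow this for sign language
FEATURE_DIM = 2 * 21 * 3   # two hands x 21 landmarks x (x, y, z)

# Illustrative dense classifier over landmark vectors; not the project's actual model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FEATURE_DIM,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```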
## Acknowledgments

This project was built as part of the Sciphit Hackathon, showcasing AI-powered assistive technology to make communication more inclusive.
## Contributing

Contributions are welcome! Feel free to fork this repo, open issues, and submit pull requests.