This project is a Flask web application that detects facial emotions in real time using a YOLOv8 model. The model can detect the following emotions:
- Disgusted
- Surprised
- Angry
- Sad
- Happy
- Scared
- Neutral
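Inside `app.py`, the model's integer class predictions presumably map onto these labels. A minimal sketch of such a mapping — the index order here is an assumption; the actual order is fixed by the label file of the dataset the model was trained on:

```python
# Hypothetical index-to-label mapping; the real ordering is determined by
# the training dataset, so verify it against your model's metadata.
EMOTIONS = [
    "Disgusted", "Surprised", "Angry", "Sad", "Happy", "Scared", "Neutral",
]

def label_for(class_id: int) -> str:
    """Translate a YOLO class index into a human-readable emotion label."""
    return EMOTIONS[class_id]
```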
- Clone the repository:

  ```bash
  git clone https://github.com/owaisahmed142002/facial-emotion-detection.git
  cd facial-emotion-detection
  ```
- Set up a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```
- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Run the Flask app:

  ```bash
  python app.py
  ```
- Open your web browser and navigate to `http://127.0.0.1:5000/`.
- Upload an image or start the webcam to detect emotions.
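Whether the input comes from an upload or the webcam, each frame's detections have to be reduced to one emotion to display. A hypothetical post-processing helper illustrating that step — this is not the app's actual code, and `detections` is assumed to be a list of `(class_id, confidence)` pairs parsed from the YOLOv8 results:

```python
def top_emotion(detections, labels):
    """Return (label, confidence) for the highest-confidence detection,
    or None when nothing was detected in the frame."""
    if not detections:
        return None
    class_id, confidence = max(detections, key=lambda d: d[1])
    return labels[class_id], confidence
```

For example, `top_emotion([(0, 0.41), (1, 0.87)], ["Angry", "Happy"])` returns `("Happy", 0.87)`.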
If you want to train your own model, follow these steps:
- Prepare your dataset: Ensure your dataset is labeled and formatted correctly for YOLOv8.
- Train the model:

  ```bash
  python train/train.py
  ```
After training, the model is saved in the `runs` directory. Copy the model's `.pt` file into the `models` directory.
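YOLOv8 training expects a dataset YAML that points to the images and lists the class names. A sketch of what such a file might look like for this project — the paths are placeholders to adjust to your own dataset layout, and the class order must match your annotation export:

```yaml
# data.yaml — placeholder paths; match these to your dataset layout
path: datasets/emotions   # dataset root
train: images/train       # training images, relative to `path`
val: images/val           # validation images, relative to `path`
nc: 7                     # number of classes
names: [Disgusted, Surprised, Angry, Sad, Happy, Scared, Neutral]
```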
- `app.py`: The main Flask application file.
- `requirements.txt`: List of dependencies required for the project.
- `templates/`: Directory containing HTML templates.
- `static/`: Directory for static files like uploaded images.
- `models/`: Directory containing the trained YOLOv8 model.
- `train/`: Directory containing the training script.
- `README.md`: This documentation file.
- Roboflow, for providing a platform for dataset annotation.
- Ultralytics, for their pre-trained YOLOv8 weights.
This project is licensed under the MIT License. See the LICENSE file for details.
Feel free to contribute to this project by opening issues or submitting pull requests. Happy coding!