ScreenTimeCalculator is a tool for calculating character screen time in videos using face detection and recognition techniques. It employs RetinaFace for accurate face detection and FaceNet for embedding extraction, enabling users to analyze video content efficiently. Ideal for filmmakers and researchers seeking to quantify character presence.
- Detects faces in videos using RetinaFace.
- Clusters faces to identify different characters.
- Calculates screen time for each character.
- User-friendly command-line interface for easy interaction.
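The clustering step above can be sketched with a simple greedy scheme: assign each FaceNet embedding to the first cluster whose centroid is similar enough, otherwise start a new cluster. This is an illustrative sketch only; the function names and the 0.7 similarity threshold are assumptions, not the project's actual implementation.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster_embeddings(embeddings, threshold=0.7):
    """Greedy clustering: assign each embedding to the most similar
    existing cluster centroid (if similarity >= threshold), otherwise
    open a new cluster. Returns one cluster label per embedding."""
    centroids = []  # running mean embedding per cluster
    counts = []     # number of embeddings per cluster
    labels = []
    for emb in embeddings:
        best, best_sim = None, threshold
        for i, c in enumerate(centroids):
            sim = cosine_similarity(emb, c)
            if sim >= best_sim:
                best, best_sim = i, sim
        if best is None:
            centroids.append(list(emb))
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            n = counts[best]
            # Update the running mean centroid with the new embedding.
            centroids[best] = [(c * n + e) / (n + 1)
                               for c, e in zip(centroids[best], emb)]
            counts[best] += 1
            labels.append(best)
    return labels
```

Each cluster label then stands in for one character; counting how many sampled frames carry each label gives the per-character frame totals.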
- Python
- OpenCV
- Keras with TensorFlow
- NumPy
- Matplotlib
- Clone the repository:
git clone https://github.com/SAHFEERULWASIHF/ScreenTimeCalculator.git
- Navigate to the project directory:
cd ScreenTimeCalculator
- Install the required dependencies:
pip install -r requirements.txt
- Place your video file in the `input` directory (create this directory if it doesn't exist).
- Run the main script:
python main.py
- Follow the prompts to analyze the video. The program will process the video, detect faces, and calculate screen time for each identified character.
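The screen-time calculation itself reduces to converting per-character frame counts to seconds using the video's frame rate. The sketch below shows that conversion and reproduces the report format shown in the sample output; `screen_time_report` and its inputs are illustrative names, not the project's actual API.

```python
def screen_time_report(frame_counts, fps):
    """frame_counts: dict mapping character name -> number of frames
    in which that character's face was detected.
    fps: frame rate of the source video.
    Returns formatted report lines, longest screen time first."""
    lines = []
    for name, frames in sorted(frame_counts.items(), key=lambda kv: -kv[1]):
        seconds = frames / fps  # frames-to-seconds conversion
        lines.append(
            f"Total screen time for {name}: {seconds:.2f} seconds "
            f"based on {frames} frames"
        )
    return lines
```

For example, at 24 fps a character detected in 240 frames gets 10.00 seconds of screen time.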
The program will output the detected characters along with their respective screen time. Below is an example of the expected output format:
Total screen time for women: 35.34 seconds based on 848 frames
Total screen time for kamal: 9.17 seconds based on 220 frames
Additionally, a visual representation (e.g., a plot or chart) may be displayed to illustrate the screen time distribution among characters.
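One way such a chart could be produced with Matplotlib (already listed in the stack) is a simple bar plot of seconds per character. The `plot_screen_time` helper and the output path are illustrative assumptions, not the project's actual plotting code.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is required
import matplotlib.pyplot as plt

def plot_screen_time(screen_times, path="screen_time.png"):
    """screen_times: dict mapping character name -> seconds on screen.
    Saves a bar chart of the screen-time distribution to `path`."""
    names = list(screen_times)
    seconds = [screen_times[n] for n in names]
    fig, ax = plt.subplots()
    ax.bar(names, seconds)
    ax.set_ylabel("Screen time (seconds)")
    ax.set_title("Screen time per character")
    fig.savefig(path)
    plt.close(fig)
```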
[Sample Input]
sample.input.mp4
[Sample Output]