
[benchmark] evaluate the detectors on the AutoShot dataset #486

Merged 2 commits into Breakthrough:main on Feb 21, 2025

Conversation

@awkrail (Collaborator) commented on Feb 19, 2025

Related to #484. In addition to BBC, I implemented benchmarking code for the AutoShot dataset. The original work uses 200 videos for the test set, but 36 of them are missing (link). Hence, I evaluated the detectors on the remaining 164 videos.
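For reference, here is a minimal sketch of how each detector can be run and timed per video with the PySceneDetect API. The video path and the cut-extraction step are assumptions for illustration, not the actual benchmark code in this PR:

```python
import time
from scenedetect import detect, AdaptiveDetector

# Hypothetical test video; the actual benchmark iterates over the AutoShot test set.
video_path = "autoshot/test/video_0001.mp4"

start = time.time()
# detect() runs the detector over the whole video and returns a list of
# (start, end) FrameTimecode pairs, one per detected shot.
scene_list = detect(video_path, AdaptiveDetector())
elapsed = time.time() - start

# Convert the shot list to cut positions: the frame where each new shot begins.
predicted_cuts = [start_tc.get_frames() for start_tc, _ in scene_list[1:]]
print(f"{len(predicted_cuts)} cuts detected in {elapsed:.2f}s")
```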

Results

The following table shows the results, indicating that AdaptiveDetector achieves the highest scores across all metrics, consistent with the BBC results.

| Detector | Recall | Precision | F1 | Elapsed time (s) |
|---|---:|---:|---:|---:|
| AdaptiveDetector | 70.77 | 77.65 | 74.05 | 1.23 |
| ContentDetector | 63.67 | 76.40 | 69.46 | 1.21 |
| HashDetector | 56.66 | 76.35 | 65.05 | 1.16 |
| HistogramDetector | 63.36 | 53.34 | 57.92 | 1.23 |
| ThresholdDetector | 0.75 | 38.64 | 1.47 | 1.24 |
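The PR does not spell out the matching criterion behind these numbers here, but a common scheme counts a predicted cut as a true positive if it lands within a small frame tolerance of an unmatched ground-truth cut. A sketch under that assumption (the tolerance value is illustrative):

```python
def evaluate_cuts(predicted, ground_truth, tolerance=2):
    """Precision/recall/F1 for cut detection. Each predicted cut frame is
    matched to at most one ground-truth cut within +/- `tolerance` frames.
    The tolerance and greedy matching are assumptions; the PR's code may differ."""
    gt_unmatched = set(ground_truth)
    tp = 0
    for cut in predicted:
        # Greedily match to the nearest still-unmatched ground-truth cut.
        candidates = [g for g in gt_unmatched if abs(g - cut) <= tolerance]
        if candidates:
            gt_unmatched.remove(min(candidates, key=lambda g: abs(g - cut)))
            tp += 1
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```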

@Breakthrough (Owner) commented
Awesome, thanks for the PR! Judging by the results for ThresholdDetector, this dataset probably includes a wider variety of transitions, such as dissolves and fades.

@awkrail (Collaborator, Author) commented on Feb 20, 2025

@Breakthrough Yes, and I think cut type annotations (e.g., abrupt, dissolve, fade-in/out) would be useful. I will attach them to the video cuts in the future for a more detailed evaluation.
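As a rough illustration of what such an evaluation could look like, per-type recall might be computed as below. The annotation format is purely hypothetical; no such labels ship with this PR:

```python
from collections import defaultdict

def recall_by_cut_type(predicted, annotated_cuts, tolerance=2):
    """Recall broken down by transition type. `annotated_cuts` is a
    hypothetical list of (frame, type) pairs, e.g. (1520, "dissolve")."""
    hits, totals = defaultdict(int), defaultdict(int)
    for frame, cut_type in annotated_cuts:
        totals[cut_type] += 1
        # A ground-truth cut counts as recalled if any predicted cut
        # falls within the frame tolerance.
        if any(abs(p - frame) <= tolerance for p in predicted):
            hits[cut_type] += 1
    return {t: hits[t] / totals[t] for t in totals}
```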

@Breakthrough merged commit f85e7cd into Breakthrough:main on Feb 21, 2025
1 check passed