The video task as described in the paper is: detection (YOLO) -> crop -> classification (ResNet). However, I couldn't find exactly what is being classified here and what dataset is being used (for training and inference). Kindly guide me in the right direction if I missed something.
Hi @satyamjay-iitd,
Sorry for my late reply. We used a sample image, https://github.com/reconfigurable-ml-pipeline/ipa/blob/main/pipelines/mlserver-centralized/video/seldon-core-version/input-sample.JPEG, for the experiments. As explained in Section 4.1 of the paper, we do not measure accuracies at runtime; instead, the prototype assumes each model's accuracy based on its static offline accuracy. Therefore, the sample image is only used to emulate the workload, not for online accuracy measurement. Please let me know if you need more clarification (please tag me for a faster response). Thanks for using IPA!
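For intuition, here is a minimal sketch of what the detection -> crop -> classification flow looks like with off-the-shelf pretrained weights. The specific model variants (`yolov5s`, `resnet18`), the `ultralytics/yolov5` hub entry point, and the preprocessing are assumptions for illustration; in IPA the stages actually run as separate MLServer/Seldon deployments rather than in one script:

```python
# Illustrative sketch only: detection -> crop -> classification on the sample image.
# Model choices (yolov5s, resnet18) are assumptions, not necessarily what IPA deploys.
import torch
from PIL import Image
from torchvision import models

detector = torch.hub.load("ultralytics/yolov5", "yolov5s")  # stage 1: object detection
weights = models.ResNet18_Weights.DEFAULT
classifier = models.resnet18(weights=weights).eval()        # stage 3: classification
preprocess = weights.transforms()

img = Image.open("input-sample.JPEG").convert("RGB")
detections = detector(img).xyxy[0]  # rows of [x1, y1, x2, y2, conf, class]

with torch.no_grad():
    for x1, y1, x2, y2, conf, cls in detections.tolist():
        crop = img.crop((x1, y1, x2, y2))                   # stage 2: crop each box
        logits = classifier(preprocess(crop).unsqueeze(0))
        label = logits.argmax(dim=1).item()                 # ImageNet class index
        print(f"box -> ImageNet class {label} (det conf {conf:.2f})")
```

So what gets classified is each object region that YOLO detects in the frame; with pretrained weights like the above, that would mean COCO-style detection followed by ImageNet-style classification, and since accuracy is taken from offline numbers, a single sample image suffices to drive the pipeline under load.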