Automatically generate bounding box annotations from pose data #102

@sfmig

Description

Is your feature request related to a problem? Please describe.
Not related to a problem, but it could be a useful feature for expanding the use of pose estimation datasets to other computer vision tasks (mainly detection).

Describe the solution you'd like
Given a set of pose estimation annotations, generate the corresponding bounding boxes, keeping the animal ID. This could be done using convex hull algorithms, or simply the min/max x, y coordinates (in pixel space).
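The min/max variant is only a few lines of NumPy. A minimal sketch (function and variable names here are hypothetical, not part of any existing API), assuming each animal's keypoints come as an (n_keypoints, 2) array of x, y pixel coordinates with NaNs for missing keypoints:

```python
import numpy as np

def bbox_from_keypoints(keypoints):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) enclosing an
    (n_keypoints, 2) array of x, y pixel coordinates, ignoring NaNs
    (i.e. missing or occluded keypoints)."""
    xy = np.asarray(keypoints, dtype=float)
    xmin, ymin = np.nanmin(xy, axis=0)
    xmax, ymax = np.nanmax(xy, axis=0)
    return float(xmin), float(ymin), float(xmax), float(ymax)

# Hypothetical pose annotations, keyed by animal ID so the ID carries
# over to the generated boxes unchanged.
poses = {
    "mouse_0": np.array([[10.0, 20.0], [30.0, 5.0], [np.nan, np.nan]]),
    "mouse_1": np.array([[100.0, 40.0], [120.0, 60.0], [110.0, 55.0]]),
}
bboxes = {animal_id: bbox_from_keypoints(kps) for animal_id, kps in poses.items()}
# bboxes["mouse_0"] -> (10.0, 5.0, 30.0, 20.0)
```

Note that such boxes are tight around the keypoints, while the animal's actual outline usually extends beyond them, so a detection dataset built this way may want a small padding margin.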

Describe alternatives you've considered
N/A

Additional context
Could this feature be relevant not only for expanding the use of pose estimation datasets, but also from an analysis point of view? For example, you may want to define a "region of influence" around an animal, to inspect when one animal comes close to another. Going forward, maybe we could explore deriving panoptic segmentation masks from pose data (using the keypoints as "initial points" for SAM, for example?)
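One way to sketch such a "region of influence" is the convex hull of the keypoints, dilated by a margin. A rough illustration using `scipy.spatial.ConvexHull` (the function name and the margin value are made up for this example); the plane-distance test used here is exact when the nearest hull point lies on an edge, and slightly permissive for points nearest a vertex:

```python
import numpy as np
from scipy.spatial import ConvexHull

def within_influence(hull_points, query_point, margin=10.0):
    """True if query_point lies within `margin` pixels of the convex hull
    of hull_points (all coordinates in pixel space)."""
    hull = ConvexHull(hull_points)
    # Each row of hull.equations is [a, b, offset] with unit normal (a, b),
    # so a*x + b*y + offset is the signed distance to that facet's plane
    # (<= 0 on the inside).
    signed = hull.equations[:, :-1] @ np.asarray(query_point, dtype=float)
    signed += hull.equations[:, -1]
    return float(signed.max()) <= margin

# Hypothetical keypoints of one animal, and another animal's nose keypoint:
pts = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])
close = within_influence(pts, [22.0, 2.0], margin=5.0)  # True: ~2.8 px away
```

This only needs SciPy; a polygon library (e.g. Shapely) would give an exact buffered hull if the vertex rounding matters.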
