Merge branch 'sp/282-plot-wrappers' of https://github.com/neuroinformatics-unit/movement into sp/282-plot-wrappers
stellaprins committed Feb 4, 2025
2 parents 02e1146 + cc506f9 commit 0087615
Showing 39 changed files with 2,218 additions and 1,663 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/test_and_deploy.yml
@@ -78,7 +78,7 @@ jobs:
with:
name: artifact
path: dist
- uses: pypa/gh-action-pypi-publish@v1.12.3
- uses: pypa/gh-action-pypi-publish@release/v1
with:
user: __token__
password: ${{ secrets.TWINE_API_KEY }}
5 changes: 3 additions & 2 deletions .pre-commit-config.yaml
@@ -20,6 +20,7 @@ repos:
args: [--fix=lf]
- id: name-tests-test
args: ["--pytest-test-first"]
exclude: ^tests/fixtures/
- id: requirements-txt-fixer
- id: trailing-whitespace
- repo: https://github.com/pre-commit/pygrep-hooks
@@ -29,7 +30,7 @@ repos:
- id: rst-directive-colons
- id: rst-inline-touching-normal
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.8.6
rev: v0.9.4
hooks:
- id: ruff
- id: ruff-format
@@ -52,7 +53,7 @@ repos:
additional_dependencies: [setuptools-scm, wheel]
- repo: https://github.com/codespell-project/codespell
# Configuration for codespell is in pyproject.toml
rev: v2.3.0
rev: v2.4.1
hooks:
- id: codespell
additional_dependencies:
2 changes: 1 addition & 1 deletion docs/source/blog/movement-v0_0_21.md
@@ -21,7 +21,7 @@ install the latest version or upgrade from an existing installation.
__Input/Output__

- We have added the {func}`movement.io.load_poses.from_multiview_files` function to support loading pose tracking data from multiple camera views.
- We have made several small improvements to reading bounding boxes tracks. See our new {ref}`example <sphx_glr_examples_load_and_upsample_bboxes.py>` to learn more about working with bounding boxes.
- We have made several small improvements to reading bounding box tracks. See our new {ref}`example <sphx_glr_examples_load_and_upsample_bboxes.py>` to learn more about working with bounding boxes.
- We have added a new {ref}`example <sphx_glr_examples_convert_file_formats.py>` on using `movement` to convert pose tracking data between different file formats.

__Kinematics__
4 changes: 2 additions & 2 deletions docs/source/community/mission-scope.md
@@ -27,8 +27,8 @@ Animal tracking frameworks such as [DeepLabCut](dlc:) or [SLEAP](sleap:) can
generate keypoint representations from video data by detecting body parts and
tracking them across frames. In the context of `movement`, we refer to these
trajectories as _tracks_: we use _pose tracks_ to refer to the trajectories
of a set of keypoints, _bounding boxes' tracks_ to refer to the trajectories
of bounding boxes' centroids, or _motion tracks_ in the more general case.
of a set of keypoints, _bounding box tracks_ to refer to the trajectories
of bounding box centroids, or _motion tracks_ in the more general case.

Our vision is to present a **consistent interface for representing motion
tracks** along with **modular and accessible analysis tools**. We aim to
20 changes: 10 additions & 10 deletions docs/source/user_guide/input_output.md
@@ -4,21 +4,21 @@
(target-formats)=
## Supported formats
(target-supported-formats)=
`movement` supports the analysis of trajectories of keypoints (_pose tracks_) and of bounding boxes' centroids (_bounding boxes' tracks_).
`movement` supports the analysis of trajectories of keypoints (_pose tracks_) and of bounding box centroids (_bounding box tracks_).

To analyse pose tracks, `movement` supports loading data from various frameworks:
- [DeepLabCut](dlc:) (DLC)
- [SLEAP](sleap:) (SLEAP)
- [LightningPose](lp:) (LP)
- [Anipose](anipose:) (Anipose)

To analyse bounding boxes' tracks, `movement` currently supports the [VGG Image Annotator](via:) (VIA) format for [tracks annotation](via:docs/face_track_annotation.html).
To analyse bounding box tracks, `movement` currently supports the [VGG Image Annotator](via:) (VIA) format for [tracks annotation](via:docs/face_track_annotation.html).

:::{note}
At the moment `movement` only deals with tracked data: either keypoints or bounding boxes whose identities are known from one frame to the next, for a consecutive set of frames. For the pose estimation case, this means it only deals with the predictions output by the software packages above. It currently does not support loading manually labelled data (since this is most often defined over a non-continuous set of frames).
:::

Below we explain how you can load pose tracks and bounding boxes' tracks into `movement`, and how you can export a `movement` poses dataset to different file formats. You can also try `movement` out on some [sample data](target-sample-data)
Below we explain how you can load pose tracks and bounding box tracks into `movement`, and how you can export a `movement` poses dataset to different file formats. You can also try `movement` out on some [sample data](target-sample-data)
included with the package.


@@ -129,15 +129,15 @@ For more information on the poses data structure, see the [movement poses datase


(target-loading-bbox-tracks)=
## Loading bounding boxes' tracks
To load bounding boxes' tracks into a [movement bounding boxes dataset](target-poses-and-bboxes-dataset), we need the functions from the
## Loading bounding box tracks
To load bounding box tracks into a [movement bounding boxes dataset](target-poses-and-bboxes-dataset), we need the functions from the
{mod}`movement.io.load_bboxes` module. This module can be imported as:

```python
from movement.io import load_bboxes
```

We currently support loading bounding boxes' tracks in the VGG Image Annotator (VIA) format only. However, like in the poses datasets, we additionally provide a `from_numpy()` method, with which we can build a [movement bounding boxes dataset](target-poses-and-bboxes-dataset) from a set of NumPy arrays.
We currently support loading bounding box tracks in the VGG Image Annotator (VIA) format only. However, like in the poses datasets, we additionally provide a `from_numpy()` method, with which we can build a [movement bounding boxes dataset](target-poses-and-bboxes-dataset) from a set of NumPy arrays.
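As a rough illustration of the `from_numpy()` route (the `position_array` and `shape_array` shapes follow the `from_numpy` docstring shown further down in this diff; the remaining keyword names are assumptions modelled on the poses loader):

```python
import numpy as np

from movement.io import load_bboxes

n_frames, n_space, n_individuals = 100, 2, 2

# Centroid positions and box sizes, shaped (n_frames, n_space, n_individuals)
position_array = np.random.rand(n_frames, n_space, n_individuals)
shape_array = np.random.rand(n_frames, n_space, n_individuals)

ds = load_bboxes.from_numpy(
    position_array=position_array,
    shape_array=shape_array,
    confidence_array=np.ones((n_frames, n_individuals)),  # assumed keyword
    individual_names=["id_0", "id_1"],  # assumed keyword
    fps=30,  # assumed keyword
)
```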

::::{tab-set}
:::{tab-item} VGG Image Annotator
@@ -247,9 +247,9 @@ save_poses.to_dlc_file(ds, "/path/to/file.csv", split_individuals=True)


(target-saving-bboxes-tracks)=
## Saving bounding boxes' tracks
## Saving bounding box tracks

We currently do not provide explicit methods to export a movement bounding boxes dataset in a specific format. However, you can easily save the bounding boxes' trajectories to a .csv file using the standard Python library `csv`.
We currently do not provide explicit methods to export a movement bounding boxes dataset in a specific format. However, you can easily save the bounding box tracks to a .csv file using the standard Python library `csv`.

Here is an example of how you can save a bounding boxes dataset to a .csv file:

@@ -273,14 +273,14 @@ with open(filepath, mode="w", newline="") as file:
writer.writerow([frame, individual, x, y, width, height, confidence])

```
Alternatively, we can convert the `movement` bounding boxes' dataset to a pandas DataFrame with the {meth}`xarray.DataArray.to_dataframe` method, wrangle the dataframe as required, and then apply the {meth}`pandas.DataFrame.to_csv` method to save the data as a .csv file.
Alternatively, we can convert the `movement` bounding boxes dataset to a pandas DataFrame with the {meth}`xarray.DataArray.to_dataframe` method, wrangle the dataframe as required, and then apply the {meth}`pandas.DataFrame.to_csv` method to save the data as a .csv file.
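A minimal sketch of that pandas route (the output layout is illustrative; `ds` is a bounding boxes dataset loaded as above):

```python
# Flatten the position DataArray to a long-format table and save it as .csv
df = ds.position.to_dataframe().reset_index()  # columns: time, space, individuals, position
df.to_csv("/path/to/file.csv", index=False)
```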


(target-sample-data)=
## Sample data

`movement` includes some sample data files that you can use to
try the package out. These files contain pose and bounding boxes' tracks from
try the package out. These files contain pose and bounding box tracks from
various [supported formats](target-supported-formats).

You can list the available sample data files using:
18 changes: 9 additions & 9 deletions docs/source/user_guide/movement_dataset.md
@@ -1,14 +1,14 @@
(target-poses-and-bboxes-dataset)=
# The movement datasets

In `movement`, poses or bounding boxes' tracks are represented
In `movement`, poses or bounding box tracks are represented
as an {class}`xarray.Dataset` object.

An {class}`xarray.Dataset` object is a container for multiple arrays. Each array is an {class}`xarray.DataArray` object holding different aspects of the collected data (position, time, confidence scores...). You can think of a {class}`xarray.DataArray` object as a multi-dimensional {class}`numpy.ndarray`
with pandas-style indexing and labelling.

So a `movement` dataset is simply an {class}`xarray.Dataset` with a specific
structure to represent pose tracks or bounding boxes' tracks. Because pose data and bounding boxes data are somewhat different, `movement` provides two types of datasets: `poses` datasets and `bboxes` datasets.
structure to represent pose tracks or bounding box tracks. Because pose data and bounding box data are somewhat different, `movement` provides two types of datasets: `poses` datasets and `bboxes` datasets.

To discuss the specifics of both types of `movement` datasets, it is useful to clarify some concepts such as **data variables**, **dimensions**,
**coordinates** and **attributes**. In the next section, we will describe these concepts and the `movement` datasets' structure in some detail.
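As a quick preview, these building blocks map onto standard xarray properties; a minimal sketch, assuming a dataset `ds` has already been loaded:

```python
print(ds.data_vars)  # data variables, e.g. position and confidence
print(ds.sizes)      # dimensions and their sizes, e.g. time, space, individuals
print(ds.coords)     # coordinates labelling each dimension
print(ds.attrs)      # metadata attributes, e.g. fps, source_software
```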
@@ -64,8 +64,8 @@ Attributes:

:::

:::{tab-item} Bounding boxes' dataset
To inspect a sample bounding boxes' dataset, we can run:
:::{tab-item} Bounding boxes dataset
To inspect a sample bounding boxes dataset, we can run:
```python
from movement import sample_data

@@ -119,7 +119,7 @@ A `movement` poses dataset has the following **dimensions**:
- `individuals`, with size equal to the number of tracked individuals/instances.
:::

:::{tab-item} Bounding boxes' dataset
:::{tab-item} Bounding boxes dataset
A `movement` bounding boxes dataset has the following **dimensions**:
- `time`, with size equal to the number of frames in the video.
- `space`, which is the number of spatial dimensions. Currently, we support only 2D bounding boxes data.
@@ -139,7 +139,7 @@ In both cases, appropriate **coordinates** are assigned to each **dimension**.
:icon: info
The above **dimensions** and **coordinates** are created
by default when loading a `movement` dataset from a single
file containing pose or bounding boxes tracks.
file containing pose or bounding box tracks.

In some cases, you may encounter or create datasets with extra
**dimensions**. For example, the
@@ -160,9 +160,9 @@ A `movement` poses dataset contains two **data variables**:
- `confidence`: the confidence scores associated with each predicted keypoint (as reported by the pose estimation model), with shape (`time`, `keypoints`, `individuals`).
:::

:::{tab-item} Bounding boxes' dataset
:::{tab-item} Bounding boxes dataset
A `movement` bounding boxes dataset contains three **data variables**:
- `position`: the 2D locations of the bounding boxes' centroids over time, with shape (`time`, `space`, `individuals`).
- `position`: the 2D locations of the bounding box centroids over time, with shape (`time`, `space`, `individuals`).
- `shape`: the width and height of the bounding boxes over time, with shape (`time`, `space`, `individuals`).
- `confidence`: the confidence scores associated with each predicted bounding box, with shape (`time`, `individuals`).
:::
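The data variables listed in the tabs above can be pulled out like any other xarray variables; a minimal sketch for the bounding boxes case, assuming a dataset `ds` loaded as in the input/output guide:

```python
centroids = ds["position"]  # dims: time, space, individuals
box_sizes = ds["shape"]     # dims: time, space, individuals
scores = ds["confidence"]   # dims: time, individuals

# e.g. the x-coordinate trajectory of the first individual
x_track = centroids.sel(space="x").isel(individuals=0)
```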
@@ -179,7 +179,7 @@ Both poses and bounding boxes datasets in `movement` have associated metadata. T
Right after loading a `movement` dataset, the following **attributes** are created:
- `fps`: the number of frames per second in the video. If not provided, it is set to `None`.
- `time_unit`: the unit of the `time` **coordinates** (either `frames` or `seconds`).
- `source_software`: the software that produced the pose or bounding boxes tracks.
- `source_software`: the software that produced the pose or bounding box tracks.
- `source_file`: the path to the file from which the data were loaded.
- `ds_type`: the type of dataset loaded (either `poses` or `bboxes`).
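A minimal sketch of reading these attributes (the keys follow the list above; the example values are illustrative):

```python
print(ds.attrs["source_software"])  # e.g. "DeepLabCut" or "VIA-tracks"
print(ds.attrs["fps"])              # None if no fps was provided
print(ds.attrs["ds_type"])          # "poses" or "bboxes"
```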

4 changes: 2 additions & 2 deletions examples/load_and_upsample_bboxes.py
@@ -1,7 +1,7 @@
"""Load and upsample bounding boxes tracks
"""Load and upsample bounding box tracks
==========================================
Load bounding boxes tracks and upsample them to match the video frame rate.
Load bounding box tracks and upsample them to match the video frame rate.
"""

# %%
12 changes: 6 additions & 6 deletions movement/io/load_bboxes.py
@@ -1,4 +1,4 @@
"""Load bounding boxes' tracking data into ``movement``."""
"""Load bounding boxes tracking data into ``movement``."""

import ast
import logging
@@ -37,7 +37,7 @@ def from_numpy(
----------
position_array : np.ndarray
Array of shape (n_frames, n_space, n_individuals)
containing the tracks of the bounding boxes' centroids.
containing the tracks of the bounding box centroids.
It will be converted to a :class:`xarray.DataArray` object
named "position".
shape_array : np.ndarray
@@ -277,7 +277,7 @@ def from_via_tracks_file(
Notes
-----
The bounding boxes' IDs specified in the "track" field of the VIA
The bounding boxes IDs specified in the "track" field of the VIA
tracks .csv file are mapped to the "individual_name" column of the
``movement`` dataset. The individual names follow the format ``id_<N>``,
with N being the bounding box ID.
@@ -377,7 +377,7 @@ def _numpy_arrays_from_via_tracks_file(
keys:
- position_array (n_frames, n_space, n_individuals):
contains the trajectories of the bounding boxes' centroids.
contains the trajectories of the bounding box centroids.
- shape_array (n_frames, n_space, n_individuals):
contains the shape of the bounding boxes (width and height).
- confidence_array (n_frames, n_individuals):
@@ -391,7 +391,7 @@ def _numpy_arrays_from_via_tracks_file(
Parameters
----------
file_path : pathlib.Path
Path to the VIA tracks .csv file containing the bounding boxes' tracks.
Path to the VIA tracks .csv file containing the bounding box tracks.
frame_regexp : str
Regular expression pattern to extract the frame number from the frame
@@ -402,7 +402,7 @@
Returns
-------
dict
The validated bounding boxes' arrays.
The validated bounding boxes arrays.
"""
# Extract 2D dataframe from input data
3 changes: 1 addition & 2 deletions movement/kinematics.py
@@ -742,8 +742,7 @@ def compute_pairwise_distances(
paired_elements = [
(elem1, elem2)
for elem1, elem2_list in pairs.items()
for elem2 in
(
for elem2 in (
# Ensure elem2_list is a list
[elem2_list] if isinstance(elem2_list, str) else elem2_list
)
2 changes: 2 additions & 0 deletions movement/roi/__init__.py
@@ -0,0 +1,2 @@
from movement.roi.line import LineOfInterest
from movement.roi.polygon import PolygonOfInterest