Commit: Merge remote-tracking branch 'origin/master'
sronilsson committed Feb 5, 2025
2 parents 558a322 + ef002a5 commit 6233846
Showing 4 changed files with 139 additions and 0 deletions.
35 changes: 35 additions & 0 deletions docs/Tutorial_tools.md
@@ -40,6 +40,7 @@ We have developed video and image processing tools and incorporated them into th
- [Extract ROI definitions to human-readable format](https://github.com/sgoldenlab/simba/blob/master/docs/Tutorial_tools.md#extract-roi-definitions-to-human-readable-format)
- [Temporally join videos](https://github.com/sgoldenlab/simba/blob/master/docs/Tutorial_tools.md#temporal-join-videos)
- [Extract project annotation counts](https://github.com/sgoldenlab/simba/blob/master/docs/Tutorial_tools.md#extract-project-annotation-counts)
- [Remove video backgrounds](https://github.com/sgoldenlab/simba/blob/master/docs/Tutorial_tools.md#remove-background-from-all-videos-in-a-directory)

## Shorten Videos
This is a tool used to trim video lengths. The tool contains two different methods:
@@ -647,6 +648,40 @@ iv) Click the `RUN` button. You can follow the progress in the main SimBA termin
v) Once complete, new videos will be stored in your selected output directory.


### REMOVE BACKGROUND FROM A SINGLE VIDEO

<p align="center">
<img src="https://github.com/sgoldenlab/simba/blob/master/images/bg_removal_single.webp" />
</p>

* In `VIDEO PATH`, click <kbd>BROWSE</kbd> and select the file you wish to remove the background from.
* OPTIONAL: In `BACKGROUND REFERENCE VIDEO PATH`, click <kbd>BROWSE</kbd> and select a video file that represents the background (a video with no animals visible, but otherwise a representative / near-identical copy of the video selected in `VIDEO PATH`).
> [!NOTE]
> `BACKGROUND REFERENCE VIDEO PATH` can often be left blank (do not select any video). If no video is selected, the background will be computed from the video in `VIDEO PATH`.
* `COMPUTE BACKGROUND FROM ENTIRE VIDEO`: The background can be computed from the entire video, or from a segment of the video. If you want to compute it from the entire video, check the `COMPUTE BACKGROUND FROM ENTIRE VIDEO` checkbox.
- `BACKGROUND VIDEO START (FRAME # OR TIME)`: If **un-checking** `COMPUTE BACKGROUND FROM ENTIRE VIDEO`, we need to select the segment of the background video from which to compute the background. Enter the **START** time (e.g., `00:00:00`) or start frame (e.g., `0`) of the segment here.
- `BACKGROUND VIDEO END (FRAME # OR TIME)`: If **un-checking** `COMPUTE BACKGROUND FROM ENTIRE VIDEO`, we need to select the segment of the background video from which to compute the background. Enter the **END** time (e.g., `00:00:20`) or end frame (e.g., `1200`) of the segment here.
* `MULTIPROCESS VIDEO (FASTER)`: To use multiprocessing, and process the frames of the video in parallel across your available CPU cores, check this box and select how many cores you want to use in the `CPU cores` dropdown.

Once the settings are complete, hit the <kbd>RUN</kbd> button. You can follow the progress in the main SimBA terminal window and the operating system terminal window. The background-removed video will be saved in the same directory as the `VIDEO PATH` video with the `_bg_subtracted` suffix.
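The START/END fields accept either a `HH:MM:SS` timestamp or a plain frame number. A minimal sketch of how such an input can be normalized to a frame index (the helper name is hypothetical, not part of SimBA's API; the doc's own examples `00:00:20` and `1200` imply a 60 fps video):

```python
def to_frame_index(value: str, fps: float) -> int:
    """Convert a `HH:MM:SS` timestamp or a plain frame number string
    into a frame index, given the video's frame rate."""
    if ":" in value:
        h, m, s = (int(x) for x in value.split(":"))
        return int((h * 3600 + m * 60 + s) * fps)
    return int(value)
```

For example, `to_frame_index("00:00:20", fps=60)` and `to_frame_index("1200", fps=60)` both identify the same frame.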

### REMOVE BACKGROUND FROM ALL VIDEOS IN A DIRECTORY

<p align="center">
<img src="https://github.com/sgoldenlab/simba/blob/master/images/bg_removal_directory.webp" />
</p>

* In `VIDEO DIRECTORY`, click <kbd>BROWSE</kbd> and select the directory containing the videos you want to remove the background from.
* OPTIONAL: In `BACKGROUND VIDEO DIRECTORY`, click <kbd>BROWSE</kbd> and select a directory containing videos mirroring the filenames of the videos in `VIDEO DIRECTORY`, with no animals visible but otherwise representative / near-identical copies of the videos in `VIDEO DIRECTORY`.
> [!NOTE]
> `BACKGROUND VIDEO DIRECTORY` can most often be left blank (do not select any directory). If no directory is selected, the backgrounds will be computed from the videos in `VIDEO DIRECTORY` themselves.
* `COMPUTE BACKGROUND FROM ENTIRE VIDEO`: The backgrounds can be computed from the entire videos, or from segments of the videos. If you want to compute them from the entire videos, check the `COMPUTE BACKGROUND FROM ENTIRE VIDEO` checkbox.
- `BACKGROUND VIDEO START (FRAME # OR TIME)`: If **un-checking** `COMPUTE BACKGROUND FROM ENTIRE VIDEO`, we need to select the segments of the background videos from which to compute the backgrounds. Enter the **START** time (e.g., `00:00:00`) or start frame (e.g., `0`) of the segments here.
- `BACKGROUND VIDEO END (FRAME # OR TIME)`: If **un-checking** `COMPUTE BACKGROUND FROM ENTIRE VIDEO`, we need to select the segments of the background videos from which to compute the backgrounds. Enter the **END** time (e.g., `00:00:20`) or end frame (e.g., `1200`) of the segments here.
* `MULTIPROCESS VIDEO (FASTER)`: To use multiprocessing, and process the frames of each video in parallel across your available CPU cores, check this box and select how many cores you want to use in the `CPU cores` dropdown.

Once the settings are complete, hit the <kbd>RUN</kbd> button. You can follow the progress in the main SimBA terminal window and the operating system terminal window. The background-removed videos will be saved in the same directory as the input videos with the `_bg_subtracted` suffix.
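In directory mode, each input video needs to be matched with its background reference by filename. A hypothetical sketch of that pairing logic (not SimBA's internal code): when no background directory is given, each video serves as its own background source, and a missing match falls back to the video itself.

```python
from pathlib import Path

VIDEO_SUFFIXES = {".mp4", ".avi", ".webm", ".mov"}

def pair_background_videos(video_dir, bg_dir=None):
    """Return a dict mapping each video in `video_dir` to its background
    reference video, matched by filename stem in `bg_dir`."""
    videos = sorted(p for p in Path(video_dir).iterdir()
                    if p.suffix.lower() in VIDEO_SUFFIXES)
    pairs = {}
    for video in videos:
        if bg_dir is None:
            pairs[video] = video  # compute background from the video itself
        else:
            matches = sorted(Path(bg_dir).glob(video.stem + ".*"))
            # Fall back to the video itself if no reference exists
            pairs[video] = matches[0] if matches else video
    return pairs
```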

##
Author [Simon N](https://github.com/sronilsson), [JJ Choong](https://github.com/inoejj)

104 changes: 104 additions & 0 deletions docs/mitra_clfs.md
@@ -0,0 +1,104 @@
# Steps to run grooming, rearing, and Straub tail classifiers.

## 1) PRE-PROCESSING: Egocentric alignment and background subtraction for Straub tail classification.

For accurately classifying Straub tail, we need features constructed from **images** of the animal's tail (not, as done by default, features constructed from the **pose-estimated locations** of the animal). To get standardized images of the animals'
tails, we need to create copies of the original videos and original pose-estimation data where the background has been subtracted and the videos have been egocentrically aligned.

For these steps, I have Jupyter notebook examples below, but please let me know if you prefer to go through these steps by clicking in a graphical user interface and I can make it available.

#### Background subtraction

As we are constructing features from images of the animal and its tail, we don't want the background (i.e., the arena floor) to be visible. If it were, pixel values around and under the tail would influence the feature values. To remove the background,
we compute the mean pixel values across frames and subtract this mean from each image, as below.

[docs__static_img_bg_removed_ex_2_clipped.webm](https://github.com/user-attachments/assets/261c36e0-c59d-4f57-9422-430277d3b78b)

To create copies of videos where the background is removed, you can use [THIS](https://simba-uw-tf-dev.readthedocs.io/en/latest/nb/bg_remove.html) notebook.
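The mean-subtraction idea above can be sketched in a few lines (a minimal illustration of the concept, not the notebook's actual code; in practice the frames would come from `cv2.VideoCapture`):

```python
import numpy as np

def mean_background(frames):
    """Per-pixel mean across a list of equally sized frames, as uint8."""
    return np.mean(np.stack(frames).astype(np.float64), axis=0).astype(np.uint8)

def subtract_background(frame, background):
    """Absolute difference from the background: static pixels go to ~0,
    moving objects (the animal and its tail) remain."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.astype(np.uint8)
```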

#### Egocentric alignment

After performing the background subtraction, we need to create [egocentrically aligned](https://simba-uw-tf-dev.readthedocs.io/en/latest/simba.data_processors.html#simba.data_processors.egocentric_aligner.EgocentricalAligner) copies of the background-subtracted videos.
This means that we rotate the videos so that the animal is always facing in the same direction (e.g., east), as in the example below. We do this to remove any variability associated with animal direction: i.e., the direction the animal is facing should not, in itself, change how the tail looks.

[EgocentricalAligner.webm](https://github.com/user-attachments/assets/7caf920b-0e86-49c2-bfde-2b606de6d6d8)

As we know the locations of the animal's snout and center through the pose-estimated locations from SLEAP/DLC, we can rotate the images so that the center body-part is always located near the center of the video,
and the nose always points at 0 degrees (axis-aligned) relative to the center. Note that, as we are rotating the images, we should also create copies of the original pose-estimation data where the same rotation factors have been applied. This ensures that we still know where the body-parts are
located in the rotated videos, and we can perform any calculations from the body-parts in the rotated data the same way as we could in the original, un-rotated, data.

To perform pose-estimation based video and data rotation, use [THIS](https://simba-uw-tf-dev.readthedocs.io/en/latest/nb/egocentric_align.html) notebook.
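The geometry behind the alignment can be sketched as follows (a minimal illustration under the assumptions above, not SimBA's `EgocentricalAligner` itself; the helper names are hypothetical). The same 2x3 matrix can be applied to the video frames (e.g., via `cv2.warpAffine`) and to the pose-estimation coordinates, keeping the two consistent:

```python
import numpy as np

def egocentric_rotation(center, nose, target_angle_deg=0.0):
    """Return a 2x3 affine matrix rotating the frame about `center` so the
    center-to-nose vector points at `target_angle_deg` (0 degrees = +x, east)."""
    dx, dy = nose[0] - center[0], nose[1] - center[1]
    theta = np.arctan2(dy, dx) - np.radians(target_angle_deg)  # rotation needed
    c, s = np.cos(theta), np.sin(theta)
    cx, cy = center
    # Rotation about (cx, cy): p' = R(p - c) + c
    return np.array([[c,  s, cx - c * cx - s * cy],
                     [-s, c, cy + s * cx - c * cy]])

def apply_to_points(m, pts):
    """Apply the 2x3 affine matrix to an (N, 2) array of body-part coordinates."""
    pts = np.asarray(pts, dtype=float)
    return pts @ m[:, :2].T + m[:, 2]
```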

> [!NOTE]
> Again, I can insert GUI options for doing these operations (background subtraction and egocentric alignment) if you prefer!

## 2) DATA IMPORT: Interpolation and smoothing.

Next, import the raw, original pose-estimation data into SimBA and perform `Body-part: Nearest` interpolation. This ensures that for any frame in which a body-part is missing, the body-part takes the location from when it was last visible.

Also perform Savitzky-Golay smoothing using a 400ms time-window to remove pose-estimation jitter.

Do not perform any outlier correction.
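The interpolation and smoothing steps above can be sketched as follows (a conceptual illustration, not SimBA's import pipeline; the function name is hypothetical, and the 400 ms window is converted to an odd frame count as `savgol_filter` requires):

```python
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

def interpolate_and_smooth(xy, fps, window_ms=400.0):
    """Nearest-neighbour interpolation of missing body-part coordinates,
    followed by Savitzky-Golay smoothing over a `window_ms` time window."""
    # Fill interior gaps with the nearest observed value; edge gaps with
    # backward/forward fill.
    filled = xy.interpolate(method="nearest").bfill().ffill()
    win = int(fps * window_ms / 1000)
    if win % 2 == 0:
        win += 1  # savgol_filter requires an odd window length
    return filled.apply(
        lambda col: savgol_filter(col.values, window_length=win, polyorder=2))
```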

## 3) CUSTOM FEATURE EXTRACTION.

We will compute a set of features that measure the animal's hull, and parts of the hull (e.g., lower body, upper body, head), and how their shapes and sizes vary in sliding time-windows. To do this, we will use the
custom feature extraction functionality in SimBA, as documented [HERE](https://github.com/sgoldenlab/simba/blob/master/docs/extractFeatures.md). The feature extraction script that we will be using is called `MitraFeatureExtractor`,
and is located [HERE](https://github.com/sgoldenlab/simba/blob/master/simba/sandbox/MitraFeatureExtractor.py).
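As a flavor of what such features look like, here is a hypothetical sketch of one of them: the animal's convex-hull area and its variability over a trailing window of frames (an illustration of the technique, not code from `MitraFeatureExtractor`). In 2-D, SciPy's `ConvexHull.volume` is the enclosed area.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_area(points):
    """Area of the convex hull enclosing an (N, 2) array of body-part coordinates."""
    return float(ConvexHull(points).volume)

def sliding_hull_stats(frames_xy, window):
    """Per-frame hull area, plus its mean and std over a trailing window.
    Frames with an incomplete window are NaN."""
    areas = np.array([hull_area(p) for p in frames_xy])
    means = np.full(len(areas), np.nan)
    stds = np.full(len(areas), np.nan)
    for i in range(window - 1, len(areas)):
        w = areas[i - window + 1 : i + 1]
        means[i], stds[i] = w.mean(), w.std()
    return areas, means, stds
```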

## 4) RUN REARING AND GROOMING CLASSIFIERS

After feature extraction, you're good to go and can run the `grooming` and `rearing` classifiers. Run the classifiers
in SimBA as documented [HERE](https://github.com/sgoldenlab/simba/blob/master/docs/Scenario2.md#part-3-run-the-classifier-on-new-data).

GROOMING THRESHOLD: 0.35

GROOMING MINIMUM BOUT LENGTH: 200ms

REARING THRESHOLD: 0.30

REARING MINIMUM BOUT LENGTH: 200ms
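What the threshold and minimum bout length settings do can be sketched as follows (a conceptual illustration, not SimBA's own post-processing routine): frames with a classification probability at or above the threshold are labelled positive, and any run of positive frames shorter than the minimum bout length is discarded.

```python
import numpy as np

def classify_bouts(probabilities, threshold, min_bout_ms, fps):
    """Turn per-frame classifier probabilities into 0/1 labels, dropping
    bouts shorter than `min_bout_ms`."""
    labels = (np.asarray(probabilities) >= threshold).astype(int)
    min_frames = int(round(min_bout_ms / 1000 * fps))
    i = 0
    while i < len(labels):
        if labels[i] == 1:
            j = i
            while j < len(labels) and labels[j] == 1:
                j += 1          # find the end of this bout
            if j - i < min_frames:
                labels[i:j] = 0  # bout too short: discard it
            i = j
        else:
            i += 1
    return labels
```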

## 5) COMPUTE ADDITIONAL STRAUB TAIL FEATURES

Now, in order to run the Straub tail classifier, we need to compute some additional features, and combine them with the
features computed in the prior step, to create the final feature set for the Straub tail classifier.

**Step i**: To compute these features, use [THIS](https://github.com/sgoldenlab/simba/blob/master/simba/sandbox/mitra_tail_analyzer.py)
script. This produces a set of files, one for each of the videos, inside your passed ``save_dir``.

**Step ii**: Next, we need to combine the features created in (i) with some of the features created in step `"3) CUSTOM FEATURE EXTRACTION"` to create the final
feature set for the tail classifier. To do this, run [THIS](https://github.com/sgoldenlab/simba/blob/master/simba/sandbox/mitra_appand_additional.py) script.
This produces a final set of files inside the passed ``SAVE_DIR``.

**Step iii**: Move the files outputted in step (ii) to the ``project_folder/csv/features_extracted`` directory of your SimBA project (move the files you have used for the grooming/rearing classifications first).
Alternatively, place these files inside the ``project_folder/csv/features_extracted`` directory of a new SimBA project designed only to classify Straub tail.

## 6) RUN STRAUB TAIL CLASSIFIER

Run the Straub tail classifier in SimBA as documented [HERE](https://github.com/sgoldenlab/simba/blob/master/docs/Scenario2.md#part-3-run-the-classifier-on-new-data).

STRAUB TAIL THRESHOLD: 0.30

STRAUB TAIL MINIMUM BOUT LENGTH: 200ms

Binary file added images/bg_removal_directory.webp
Binary file added images/bg_removal_single.webp
