adds pdf thumbnail
sinAshish committed Nov 29, 2023
1 parent 9c6cd76 commit be1212d
Showing 5 changed files with 87 additions and 71 deletions.
152 changes: 83 additions & 69 deletions README.md
# DermSynth3D
[![CircleCI](https://dl.circleci.com/status-badge/img/gh/sfu-mial/DermSynth3D/tree/main.svg?style=svg&circle-token=176de57353747d240e619bdf9aacf9f716e7d04f)](https://dl.circleci.com/status-badge/redirect/gh/sfu-mial/DermSynth3D/tree/main)
![GPLv3](https://img.shields.io/static/v1.svg?label=📃%20License&message=GPL%20v3.0&color=critical&style=flat-square)
[![arXiv](https://img.shields.io/static/v1.svg?label=📄%20arXiv&message=2305.12621&color=important&style=flat-square)](https://arxiv.org/abs/2305.12621)
[![DOI](https://img.shields.io/static/v1.svg?label=📄%20DOI&message=DOI&color=informational&style=flat-square)](https://doi.org/10.48550/arXiv.2305.12621)
[![request dataset](https://img.shields.io/static/v1.svg?label=Dataset&message=Request%20Dataset&style=flat-square&color=blueviolet)](https://cvi2.uni.lu/3dbodytexdermsynth/)
[![Video](https://img.shields.io/badge/Video-Youtube-ff69b4?style=flat-square)](https://www.youtube.com/watch?v=x3gDBJCI_3k)

:scroll: This is the official code repository for **DermSynth3D**.

<a href="docs/preprint.pdf">
  <img src="assets/thumbnail1.png" alt="PDF thumbnail" height="auto" width="100%">
</a>


:tv: Check out the video abstract for this work:
[![Video Thumbnail](assets/DERMSYNTH_YOUTUBE_THUMB.png)](http://www.youtube.com/watch?v=x3gDBJCI_3k)

## TL;DR

A data generation pipeline for creating photorealistic _in-the-wild_ synthetic dermatological data with rich annotations such as semantic segmentation masks, depth maps, and bounding boxes for various skin analysis tasks.

![main pipeline](assets/pipeline.png)
>_The figure shows the DermSynth3D computational pipeline where 2D segmented skin conditions are blended into the texture image of a 3D mesh on locations outside of the hair and clothing regions. After blending, 2D views of the mesh are rendered with a variety of camera viewpoints and lighting conditions and combined with background images to create a synthetic dermatology dataset._
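The blending step described in the caption above can be sketched in miniature: copy lesion pixels into the texture image only where the lesion mask is on *and* the target location is bare skin. The pure-Python toy below (stdlib lists only; the function and variable names are hypothetical, and the real pipeline optimizes a deep-blending loss rather than hard-pasting) illustrates just the location constraint:

```python
def paste_lesion(texture, lesion, mask, skin_ok, top, left):
    """Toy blend: write lesion pixels (where mask == 1) into a copy of
    texture, but only at locations flagged as bare skin (skin_ok == 1),
    i.e. outside hair and clothing regions.

    texture and skin_ok are 2D lists of the same shape; lesion and mask
    are smaller 2D lists placed with their top-left corner at (top, left).
    """
    out = [row[:] for row in texture]  # leave the input texture untouched
    for i, lesion_row in enumerate(lesion):
        for j, value in enumerate(lesion_row):
            r, c = top + i, left + j
            if mask[i][j] and skin_ok[r][c]:
                out[r][c] = value
    return out
```

Real texture maps are float images and the blend is seamless rather than a hard paste; this only shows *where* blending is permitted.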
## Motivation
## Repository layout

```
DermSynth3D/
┣ out/ # the checkpoints are saved here (auto created)
┣ data/ # directory to store the data
┃ ┣ ... # detailed instructions in the dataset.md
┣ dermsynth3d/
┃ ┣ datasets/ # class definitions for the datasets
┃ ┣ deepblend/ # code for deep blending
┃ ┣ losses/             # loss functions
┃ ┣ models/ # model definitions
┃ ┣ tools/ # wrappers for synthetic data generation
┃ ┗ utils/ # helper functions
```

## Table of Contents
- [DermSynth3D](#dermsynth3d)
- [TL;DR](#tldr)
- [Motivation](#motivation)
- [Repository layout](#repository-layout)
- [Table of Contents](#table-of-contents)
- [Installation](#installation)
- [using conda](#using-conda)
- [using Docker](#using-docker)
- [Datasets](#datasets)
- [The folder structure of data directory should be as follows:](#the-folder-structure-of-data-directory-should-be-as-follows)
- [Data for Blending](#data-for-blending)
- [Download 3DBodyTex.v1 meshes](#download-3dbodytexv1-meshes)
- [Download the 3DBodyTex.v1 annotations](#download-the-3dbodytexv1-annotations)
- [Download the Fitzpatrick17k dataset](#download-the-fitzpatrick17k-dataset)
- [Download the Background Scenes](#download-the-background-scenes)
- [Data For Training](#data-for-training)
- [Download the FUSeg dataset](#download-the-fuseg-dataset)
- [Download the Pratheepan dataset](#download-the-pratheepan-dataset)
- [Download the PH2 dataset](#download-the-ph2-dataset)
- [Download the DermoFit dataset](#download-the-dermofit-dataset)
- [Creating the Synthetic dataset](#creating-the-synthetic-dataset)
- [How to Use DermSynth3D](#how-to-use-dermsynth3d)
- [Generating Synthetic Dataset](#generating-synthetic-dataset)
- [Post-Process Renderings with Unity](#post-process-renderings-with-unity)
- [Click to see the a visual comparison of the renderings obtained from Pytorch3D and Unity.](#click-to-see-the-a-visual-comparison-of-the-renderings-obtained-from-pytorch3d-and-unity)
- [Preparing Dataset for Experiments](#preparing-dataset-for-experiments)
- [Cite](#cite)
- [Demo Notebooks for Dermatology Tasks](#demo-notebooks-for-dermatology-tasks)
- [Lesion Segmentation](#lesion-segmentation)
- [Multi-Task Prediction](#multi-task-prediction)
- [Lesion Detection](#lesion-detection)
- [Acknowledgements](#acknowledgements)

<a name="installation"></a>

## Installation
#### using conda

```bash
git clone --recurse-submodules https://github.com/sfu-mial/DermSynth3D.git
cd DermSynth3D
conda env create -f dermsynth3d.yml
conda activate dermsynth3d
```

#### using Docker

```bash
# Build the container in the root dir
docker build -t dermsynth3d --build-arg UID=$(id -u) --build-arg GID=$(id -g) -f Dockerfile .
# Run the container in interactive mode for using DermSynth3D
# See 3. How to use DermSynth3D
docker run --gpus all -it --rm -v /path/to/downloaded/data:/data dermsynth3d
```
We provide the [pre-built docker image](https://hub.docker.com/r/sinashish/dermsynth3d), which can be used as well.
If you face any issues installing pytorch3d, please refer to their installation guide.
## Datasets

Follow the instructions below to download the datasets for generating the synthetic data and training models for various tasks.
All the datasets should be downloaded and placed in the `data` directory.

<a name="tree"></a>

<!-- #### The folder structure of data directory should be as follows: -->
<details>

<a name="data_tree"></a>
<summary>

### The folder structure of data directory should be as follows:

</summary>

```
DermSynth3D/
┣ data/
┃ ┣ fitzpatrick17k/
┃ ┃ ┣ data/          # Fitzpatrick17k images
┃ ┃ ┗ annotations/   # annotations for Fitzpatrick17k lesions
┃ ┣ ph2/
┃ ┃ ┣ images/        # PH2 images
┃ ┃ ┗ labels/        # PH2 annotations
┃ ┣ dermofit/        # Dermofit dataset
┃ ┃ ┣ images/        # Dermofit images
┃ ┃ ┗ targets/       # Dermofit annotations
┃ ┣ FUSeg/
┃ ┃ ┣ train/         # training set with images/labels for FUSeg
┃ ┃ ┣ validation/    # val set with images/labels for FUSeg
```

</details>

The datasets used in this work can be broadly categorized into data required for blending and data for training.

<details>
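Since every download below lands under `data/`, a quick layout check before running the pipeline can save debugging time. A minimal sketch (directory names taken from the tree above; the helper itself is an illustration, not part of the repo):

```python
from pathlib import Path

# Sub-directories expected under data/ (from the tree above; trim this
# list to the datasets you actually downloaded).
EXPECTED = [
    "fitzpatrick17k/data",
    "fitzpatrick17k/annotations",
    "ph2/images",
    "ph2/labels",
    "dermofit/images",
    "dermofit/targets",
    "FUSeg/train",
    "FUSeg/validation",
]

def missing_dirs(data_root, expected=EXPECTED):
    """Return the expected sub-directories that do not exist yet."""
    root = Path(data_root)
    return [d for d in expected if not (root / d).is_dir()]
```

Running `missing_dirs("./data")` lists whatever still needs to be downloaded and unpacked.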

<a name="blend_data"></a>
<summary>

### Data for Blending

</summary> <blockquote>
<!-- list of blending datasets -->
<details>
<a name="mesh_data"></a>
<summary>

### Download 3DBodyTex.v1 meshes

</summary>

The `3DBodyTex.v1` dataset can be downloaded from [here](https://cvi2.uni.lu/3dbodytexv1/).

`3DBodyTex.v1` contains the meshes and texture images used in this work and can be downloaded from the external site linked above (after accepting a license agreement).

**NOTE**: These textured meshes are needed to run the code to generate the data.

We provide the non-skin texture map annotations for 2 meshes: `006-f-run` and `221-m-u`.
Hence, to generate the data, make sure to get the `.obj` files for these two meshes and place them in `data/3dbodytex-1.1-highres` before executing `scripts/gen_data.py`.

After accepting the licence, download and unzip the data in `./data/`.
</details>
<details>
<a name="mesh_annot_data"></a>
<summary>

### Download the 3DBodyTex.v1 annotations


</summary>

</details>

<details>
<summary>

### Download the Fitzpatrick17k dataset

</summary>
We used the skin conditions from [Fitzpatrick17k](https://github.com/mattgroh/fitzpatrick17k).
See their instructions to get access to the Fitzpatrick17k images.
We provide the raw images for the Fitzpatrick17k dataset [here](https://vault.sfu.ca/index.php/s/cMuxZNzk6UUHNmX).

After downloading the dataset, unzip the dataset:
```bash
unzip fitzpatrick17k.zip -d data/fitzpatrick17k/
```

We provide a few samples of the densely annotated lesion masks from the Fitzpatrick17k dataset within this repository under the `data` directory.

More such annotations can be downloaded from [here](https://vault.sfu.ca/index.php/s/gemdbCeoZXoCqlS).

</details>

<details>
<summary>

### Download the Background Scenes

</summary>

![bg_scenes](./assets/readme_bg.png)
>_A few examples of the background scenes used for rendering the synthetic data._
<!--
<!--
|||
|:-:|:-:|
|![scene1](./assets/50.jpg)|![scene2](./assets/700.jpg)| -->
</details>

</blockquote>
</details>

<details>
<a name="train_data"></a>
<summary>

### Data For Training

</summary> <blockquote>
<details>
<summary>

### Download the FUSeg dataset

</summary>
<!-- |||
|:-:|:-:|
|![scene1](./assets/0011.png)|![scene2](./assets/0011_m.png)| -->
The Foot Ulcer Segmentation Challenge (FUSeg) dataset is available to download from [their official repository](https://github.com/uwm-bigdata/wound-segmentation/tree/master/data/Foot%20Ulcer%20Segmentation%20Challenge).
Download and unpack the dataset at `data/FUSeg/`, maintaining the Folder Structure shown above.

For simplicity, we mirror the FUSeg dataset [here](https://vault.sfu.ca/index.php/s/2mb8kZg8wOltptT).
</details>

<details>
<summary>

### Download the Pratheepan dataset

</summary>
![prath](./assets/readme_prath.png)
>_A few examples from the Pratheepan dataset showing the images and their corresponding segmentation masks, in the top and bottom row respectively._
The Pratheepan dataset is available to download from [their official website](https://web.fsktm.um.edu.my/~cschan/downloads_skin_dataset.html).
The images and the corresponding ground truth masks are available in a ZIP file hosted on Google Drive. Download and unpack the dataset at `data/Pratheepan_Dataset/`.

</details>

<details>
<a name="ph2_data"></a>
<summary>

### Download the PH2 dataset

</summary>

![ph2](./assets/readme_ph2.png)
>_A few examples from the PH2 dataset showing a lesion and its corresponding segmentation mask, in the top and bottom row respectively._
The PH2 dataset can be downloaded from [the official ADDI Project website](https://www.fc.up.pt/addi/ph2%20database.html).
Download and unpack the dataset at `data/ph2/`, maintaining the folder structure shown above.

</details>
<details>
<summary>

### Download the DermoFit dataset

</summary>

</details>

</blockquote>
</details>

<details>
<summary>
### Creating the Synthetic dataset

</summary>

![synthetic data](./assets/fig_1-min.png)
>_Generated synthetic images of multiple subjects across a range of skin tones in various skin conditions, background scene, lighting, and viewpoints._
Expand All @@ -368,13 +382,13 @@ The datasets used in this work can be broadly categorized into data required for

If you want to train your models on a different split of the synthetic data, you can download a dataset generated by blending lesions on 26 3DBodyTex scans from [here](https://cvi2.uni.lu/3dbodytexdermsynth/).
To prepare the synthetic dataset for training, sample the `images` and `targets` from the path where you saved this dataset and then organise them into `train/val` splits.

**NOTE**: To download the synthetic 3DBodyTex.DermSynth dataset referred to in the links above, you need to request access by following the instructions at this [link](https://cvi2.uni.lu/3dbodytexdermsynth/).

Alternatively, you can use the provided script `scripts/prep_data.py` to create it.

Even better, you can generate your own dataset, by following the instructions [here](./README.md#generating-synthetic-dataset).
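The `train/val` organisation described above can be sketched as follows — a stdlib-only illustration, not the actual `scripts/prep_data.py`, and it assumes each image has a same-named target file:

```python
import random
import shutil
from pathlib import Path

def split_train_val(images_dir, targets_dir, out_dir, val_frac=0.2, seed=0):
    """Pair images with same-named targets and copy each pair into
    out_dir/{train,val}/{images,targets}; val_frac of the pairs go to val."""
    images = sorted(Path(images_dir).iterdir())
    rng = random.Random(seed)        # fixed seed -> reproducible split
    rng.shuffle(images)
    n_val = int(len(images) * val_frac)
    splits = {"val": images[:n_val], "train": images[n_val:]}
    for split, files in splits.items():
        for img in files:
            pairs = (("images", img), ("targets", Path(targets_dir) / img.name))
            for kind, src in pairs:
                dst = Path(out_dir) / split / kind
                dst.mkdir(parents=True, exist_ok=True)
                shutil.copy(src, dst / src.name)
    return {k: len(v) for k, v in splits.items()}
```

Fixing the seed keeps the split reproducible across runs, which matters when comparing trained models.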



</details>
## How to Use DermSynth3D

<a name='gen'></a>

### Generating Synthetic Dataset

![annots](./assets/AnnotationOverview.png)
> _A few examples of annotated data synthesized using DermSynth3D. The rows from top to bottom show respectively: the rendered images with blended skin conditions, bounding boxes around the lesions, GT semantic segmentation masks, grouped anatomical labels, and the monocular depth maps produced by the renderer._
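For instance, the lesion bounding boxes shown in the second row can be derived directly from the GT segmentation masks in the third. A minimal stdlib sketch of that derivation (the repo's actual implementation may differ):

```python
def mask_to_bbox(mask):
    """Tight (row_min, col_min, row_max, col_max) box around the non-zero
    pixels of a binary 2D mask (list of lists); None for an empty mask."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    if not rows:
        return None
    return (rows[0], cols[0], rows[-1], cols[-1])
```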
<!-- ```yaml
bodytex_dir: './data/3dbodytex-1.1-highres/'          # Path to the 3DBodyTex meshes
mesh_name: '006-f-run'                                # Name of the mesh to blend
fitz_dir: './data/fitzpatrick17k/data/finalfitz17k/'  # Path to FitzPatrick17k lesions
annot_dir: './data/annotations/'                      # Path to annotated Fitz17k lesions with masks
tex_dir: './data/lesions/'                            # Path to save the new texture maps
``` -->

Now, to *generate* the synthetic data with the default parameters, simply run the following command to generate 2000 views for a specified mesh:
```bash
python -u scripts/gen_data.py
```
To change the blending or synthesis parameters only, run using:
```bash
# Use python scripts/gen_data.py -h for full list of arguments
python -u scripts/gen_data.py --lr <learning rate> \
-m <mesh_name> \
-s <path to save the views> \
-ps <skin threshold> \
-i <blending iterations> \
-v <number of views> \
        -n <number of lesions per mesh>
```

Feel free to play around with the other `random` parameters in `configs/blend.yaml` to control lighting, materials, and viewpoints.

<a name="post_proc_data"></a>

### Post-Process Renderings with Unity

We use Pytorch3D as our differentiable renderer of choice to generate synthetic data.
However, Pytorch3D is not a physically based renderer (PBR), and hence its renderings may not look photorealistic.
To achieve photorealistic renderings, we use Unity to post-process the renderings obtained from Pytorch3D.

<details>
<summary>
#### Click to see the a visual comparison of the renderings obtained from Pytorch3D and Unity.

</summary>

</details>

Follow the detailed instructions outlined [here](./docs/unity.md) to create photorealistic renderings using Unity.
<a name='train_prep'></a>

## Preparing Dataset for Experiments
<!--
<!--
![synthetic data](./assets/fig_1-min.png)
>_Generated synthetic images of multiple subjects across a range of skin tones in various skin conditions, background scene, lighting, and viewpoints._ -->

You can look at `scripts/prep_data.py` for more details.

## Cite
If you find this work useful or use any part of the code in this repo, please cite our paper:
```bibtex
@misc{sinha2023dermsynth3d,
title={DermSynth3D: Synthesis of in-the-wild Annotated Dermatology Images},
author={Ashish Sinha and Jeremy Kawahara and Arezou Pakzad and Kumar Abhishek and Matthieu Ruthven and Enjie Ghorbel and Anis Kacem and Djamila Aouada and Ghassan Hamarneh},
year={2023},
eprint={2305.12621},
}
```
Binary file added assets/thumbnail.png
Binary file added assets/thumbnail1.png
6 changes: 4 additions & 2 deletions dermsynth3d.yml
@@ -2,6 +2,8 @@ name: dermsynth3d
channels:
- pytorch3d
- iopath
- bottler
- pytorch
- fvcore
- conda-forge
- defaults
@@ -56,6 +58,8 @@ dependencies:
- pysocks=1.7.1=py38h06a4308_0
- python=3.8.15=h7a1cb2a_2
- python_abi=3.8=2_cp38
- pytorch
- torchvision
- pytorch3d=0.7.2=py38_cu113_pyt1100
- pyyaml=6.0=py38h0a891b7_5
- readline=8.2=h5eee18b_0
@@ -164,8 +168,6 @@ dependencies:
- pywavelets==1.4.1
- pyzmq==24.0.1
- qudida==0.0.4
# - torch==1.10.0+cu111
# - torchvision==0.11.0+cu111
- regex==2022.10.31
- rfc3339-validator==0.1.4
- rfc3986-validator==0.1.1
Binary file added docs/preprint.pdf
