diff --git a/README.md b/README.md
index 25d90097..d484aa7a 100644
--- a/README.md
+++ b/README.md
@@ -1,20 +1,26 @@
# DermSynth3D
-[![CircleCI](https://dl.circleci.com/status-badge/img/gh/sfu-mial/DermSynth3D/tree/main.svg?style=svg&circle-token=176de57353747d240e619bdf9aacf9f716e7d04f)](https://dl.circleci.com/status-badge/redirect/gh/sfu-mial/DermSynth3D/tree/main)
+[![CircleCI](https://dl.circleci.com/status-badge/img/gh/sfu-mial/DermSynth3D/tree/main.svg?style=svg&circle-token=176de57353747d240e619bdf9aacf9f716e7d04f)](https://dl.circleci.com/status-badge/redirect/gh/sfu-mial/DermSynth3D/tree/main)
![GPLv3](https://img.shields.io/static/v1.svg?label=📃%20License&message=GPL%20v3.0&color=critical&style=flat-square)
-[![arXiv](https://img.shields.io/static/v1.svg?label=📄%20arXiv&message=2305.12621&color=important&style=flat-square)](https://arxiv.org/abs/2305.12621)
-[![DOI](https://img.shields.io/static/v1.svg?label=📄%20DOI&message=DOI&color=informational&style=flat-square)](https://doi.org/10.48550/arXiv.2305.12621)
+[![arXiv](https://img.shields.io/static/v1.svg?label=📄%20arXiv&message=2305.12621&color=important&style=flat-square)](https://arxiv.org/abs/2305.12621)
+[![DOI](https://img.shields.io/static/v1.svg?label=📄%20DOI&message=DOI&color=informational&style=flat-square)](https://doi.org/10.48550/arXiv.2305.12621)
[![request dataset](https://img.shields.io/static/v1.svg?label=Dataset&message=Request%20Dataset&style=flat-square&color=blueviolet)](https://cvi2.uni.lu/3dbodytexdermsynth/)
[![Video](https://img.shields.io/badge/Video-Youtube-ff69b4?style=flat-square)](https://www.youtube.com/watch?v=x3gDBJCI_3k)
-This is the official code repository following our work [DermSynth3D](https://arxiv.org/pdf/2305.12621.pdf).
+:scroll: This is the official code repository for **DermSynth3D**.
-[![Video Thumbnail](assets/DERMSYNTH_YOUTUBE_THUMB.png)](http://www.youtube.com/watch?v=x3gDBJCI_3k)
+
+
+
+
+
+:tv: Check out the video abstract for this work:
+[![Video Thumbnail](assets/DERMSYNTH_YOUTUBE_THUMB.png)](http://www.youtube.com/watch?v=x3gDBJCI_3k)
## TL;DR
A data generation pipeline for creating photorealistic _in-the-wild_ synthetic dermatological data with rich annotations such as semantic segmentation masks, depth maps, and bounding boxes for various skin analysis tasks.
-![main pipeline](assets/pipeline.png)
+![main pipeline](assets/pipeline.png)
>_The figure shows the DermSynth3D computational pipeline where 2D segmented skin conditions are blended into the texture image of a 3D mesh on locations outside of the hair and clothing regions. After blending, 2D views of the mesh are rendered with a variety of camera viewpoints and lighting conditions and combined with background images to create a synthetic dermatology dataset._
## Motivation
@@ -39,10 +45,10 @@ DermSynth3D/
┣ out/ # the checkpoints are saved here (auto created)
┣ data/ # directory to store the data
┃ ┣ ... # detailed instructions in the dataset.md
-┣ dermsynth3d/ #
+┣ dermsynth3d/ # core library code (datasets, blending, models, utils)
┃ ┣ datasets/ # class definitions for the datasets
┃ ┣ deepblend/ # code for deep blending
-┃ ┣ losses/ # loss functions
+┃ ┣ losses/ # loss functions
┃ ┣ models/ # model definitions
┃ ┣ tools/ # wrappers for synthetic data generation
┃ ┗ utils/ # helper functions
@@ -52,30 +58,38 @@ DermSynth3D/
```
## Table of Contents
-- [Installation](#installation)
- - [using conda](#using-conda)
- - [using Docker](#using-docker) **recommended**
-- [Datasets](#datasets)
- - [Data for Blending](#data-for-blending)
- - [3DBodyTex.v1 dataset](#download-3dbodytexv1-meshes)
- - [3DBodyTex.v1 annotations](#download-the-3dbodytexv1-annotations)
- - [Fitzpatrick17k dataset](#download-the-fitzpatrick17k-dataset)
- - [Background Scenes](#download-the-background-scenes)
- - [Data For Training](#data-for-training)
- - [FUSeg dataset](#download-the-fuseg-dataset)
- - [Pratheepan dataset](#download-the-pratheepan-dataset)
- - [PH2 dataset](#download-the-ph2-dataset)
- - [DermoFit dataset](#download-the-dermofit-dataset)
- - [Creating the Synthetic dataset](#creating-the-synthetic-dataset)
-- [How to Use DermSynth3D](#how-to-use-dermsynth3d)
- - [Generating Synthetic Dataset](#generating-synthetic-dataset)
- - [Post-Process Renderings with Unity](#post-process-renderings-with-unity)
-- [Preparing Dataset for Experiments](#preparing-dataset-for-experiments)
-- [Cite](#cite)
-- [Demo Notebooks for Dermatology Tasks](#demo-notebooks-for-dermatology-tasks)
- - [Lesion Segmentation](#lesion-segmentation)
- - [Multi-Task Prediction](#multi-task-prediction)
- - [Lesion Detection](#lesion-detection)
+- [DermSynth3D](#dermsynth3d)
+ - [TL;DR](#tldr)
+ - [Motivation](#motivation)
+ - [Repository layout](#repository-layout)
+ - [Table of Contents](#table-of-contents)
+ - [Installation](#installation)
+ - [using conda](#using-conda)
+ - [using Docker](#using-docker)
+ - [Datasets](#datasets)
+    - [The folder structure of the data directory should be as follows:](#the-folder-structure-of-the-data-directory-should-be-as-follows)
+ - [Data for Blending](#data-for-blending)
+ - [Download 3DBodyTex.v1 meshes](#download-3dbodytexv1-meshes)
+ - [Download the 3DBodyTex.v1 annotations](#download-the-3dbodytexv1-annotations)
+ - [Download the Fitzpatrick17k dataset](#download-the-fitzpatrick17k-dataset)
+ - [Download the Background Scenes](#download-the-background-scenes)
+ - [Data For Training](#data-for-training)
+ - [Download the FUSeg dataset](#download-the-fuseg-dataset)
+ - [Download the Pratheepan dataset](#download-the-pratheepan-dataset)
+ - [Download the PH2 dataset](#download-the-ph2-dataset)
+ - [Download the DermoFit dataset](#download-the-dermofit-dataset)
+ - [Creating the Synthetic dataset](#creating-the-synthetic-dataset)
+ - [How to Use DermSynth3D](#how-to-use-dermsynth3d)
+ - [Generating Synthetic Dataset](#generating-synthetic-dataset)
+ - [Post-Process Renderings with Unity](#post-process-renderings-with-unity)
+      - [Click to see a visual comparison of the renderings obtained from Pytorch3D and Unity.](#click-to-see-the-a-visual-comparison-of-the-renderings-obtained-from-pytorch3d-and-unity)
+ - [Preparing Dataset for Experiments](#preparing-dataset-for-experiments)
+ - [Cite](#cite)
+ - [Demo Notebooks for Dermatology Tasks](#demo-notebooks-for-dermatology-tasks)
+ - [Lesion Segmentation](#lesion-segmentation)
+ - [Multi-Task Prediction](#multi-task-prediction)
+ - [Lesion Detection](#lesion-detection)
+ - [Acknowledgements](#acknowledgements)
@@ -86,7 +100,7 @@ DermSynth3D/
#### using conda
```bash
-git clone --recurse-submodules https://github.com/sfu-mial/DermSynth3D.git
+git clone --recurse-submodules https://github.com/sfu-mial/DermSynth3D.git
cd DermSynth3D
conda env create -f dermsynth3d.yml
conda activate dermsynth3d
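+# Optional sanity check (a quick sketch): confirm that torch and PyTorch3D import inside the new environment
+python -c "import torch, pytorch3d; print(torch.__version__, pytorch3d.__version__)"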
@@ -100,7 +114,7 @@ conda activate dermsynth3d
# Build the container in the root dir
docker build -t dermsynth3d --build-arg UID=$(id -u) --build-arg GID=$(id -g) -f Dockerfile .
# Run the container in interactive mode for using DermSynth3D
-# See 3. How to use DermSynth3D
+# See 3. How to use DermSynth3D
docker run --gpus all -it --rm -v /path/to/downloaded/data:/data dermsynth3d
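+# (optional) mount a second volume if you want generated outputs to persist outside the container,
+# e.g. add another flag such as: -v $(pwd)/out:/dermsynth3d/out   (the container-side path here is an assumption)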
```
We provide the [pre-built docker image](https://hub.docker.com/r/sinashish/dermsynth3d), which can be used as well:
@@ -122,7 +136,7 @@ If you face any issues installing pytorch3d, please refer to their [installation
## Datasets
Follow the instructions below to download the datasets for generating the synthetic data and training models for various tasks.
-All the datasets should be downloaded and placed in the `data` directory.
+All the datasets should be downloaded and placed in the `data` directory.
@@ -130,9 +144,9 @@ All the datasets should be downloaded and placed in the `data` directory.
-
+
- ### The folder structure of data directory should be as follows:
+ ### The folder structure of the data directory should be as follows:
@@ -144,12 +158,12 @@ DermSynth3D/
┃ ┣ fitzpatrick17k/
┃ ┃ ┣ data/ # Fitzpatrick17k images
┃ ┃ ┗ annotations/ # annotations for Fitzpatrick17k lesions
-┃ ┣ ph2/
+┃ ┣ ph2/
┃ ┃ ┣ images/ # PH2 images
┃ ┃ ┗ labels/ # PH2 annotations
┃ ┣ dermofit/ # Dermofit dataset
-┃ ┃ ┣ images/ # Dermofit images
-┃ ┃ ┗ targets/ # Dermofit annotations
+┃ ┃ ┣ images/ # Dermofit images
+┃ ┃ ┗ targets/ # Dermofit annotations
┃ ┣ FUSeg/
┃ ┃ ┣ train/ # training set with images/labels for FUSeg
┃ ┃ ┣ validation/ # val set with images/labels for FUSeg
@@ -173,12 +187,12 @@ The datasets used in this work can be broadly categorized into data required for
-
+
### Data for Blending
-
+
@@ -193,11 +207,11 @@ The datasets used in this work can be broadly categorized into data required for
The `3DBodyTex.v1` dataset can be downloaded from [here](https://cvi2.uni.lu/3dbodytexv1/).
- `3DBodyTex.v1` contains the meshes and texture images used in this work and can be downloaded from the external site linked above (after accepting a license agreement).
+ `3DBodyTex.v1` contains the meshes and texture images used in this work and can be downloaded from the external site linked above (after accepting a license agreement).
**NOTE**: These textured meshes are needed to run the code to generate the data.
- We provide the non-skin texture maps annotations for 2 meshes: `006-f-run` and `221-m-u`.
+ We provide the non-skin texture map annotations for 2 meshes: `006-f-run` and `221-m-u`.
Hence, to generate the data, make sure to get the `.obj` files for these two meshes and place them in `data/3dbodytex-1.1-highres` before executing `scripts/gen_data.py`.
After accepting the licence, download and unzip the data in `./data/`.
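+
+A quick sanity check (just a sketch) to confirm the two annotated meshes were unpacked where `scripts/gen_data.py` expects them:
+```bash
+find data/3dbodytex-1.1-highres -name "*.obj" | grep -E "006-f-run|221-m-u"
+```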
@@ -207,7 +221,7 @@ The datasets used in this work can be broadly categorized into data required for
-
+
### Download the 3DBodyTex.v1 annotations
@@ -242,16 +256,16 @@ The datasets used in this work can be broadly categorized into data required for
We used the skin conditions from [Fitzpatrick17k](https://github.com/mattgroh/fitzpatrick17k).
See their instructions to get access to the Fitzpatrick17k images.
We provide the raw images for the Fitzpatrick17k dataset [here](https://vault.sfu.ca/index.php/s/cMuxZNzk6UUHNmX).
-
+
After downloading the dataset, unzip the dataset:
```bash
unzip fitzpatrick17k.zip -d data/fitzpatrick17k/
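+# after unzipping, the images are expected at data/fitzpatrick17k/data/finalfitz17k/ (the fitz_dir used by the blending config)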
```
-
+
We provide a few samples of the densely annotated lesion masks from the Fitzpatrick17k dataset within this repository under the `data` directory.
- More of such annotations can be downloaded from [here](https://vault.sfu.ca/index.php/s/gemdbCeoZXoCqlS).
-
+ More such annotations can be downloaded from [here](https://vault.sfu.ca/index.php/s/gemdbCeoZXoCqlS).
+
@@ -264,7 +278,7 @@ The datasets used in this work can be broadly categorized into data required for
![bg_scenes](./assets/readme_bg.png)
>_A few examples of the background scenes used for rendering the synthetic data._
-
@@ -280,7 +294,7 @@ The datasets used in this work can be broadly categorized into data required for
-### Data For Training
+### Data For Training
@@ -296,8 +310,8 @@ The datasets used in this work can be broadly categorized into data required for
-
- The Foot Ulcer Segmentation Challenge (FUSeg) dataset is available to download from [their official repository](https://github.com/uwm-bigdata/wound-segmentation/tree/master/data/Foot%20Ulcer%20Segmentation%20Challenge).
+
+ The Foot Ulcer Segmentation Challenge (FUSeg) dataset is available to download from [their official repository](https://github.com/uwm-bigdata/wound-segmentation/tree/master/data/Foot%20Ulcer%20Segmentation%20Challenge).
Download and unpack the dataset at `data/FUSeg/`, maintaining the folder structure shown above.
For simplicity, we mirror the FUSeg dataset [here](https://vault.sfu.ca/index.php/s/2mb8kZg8wOltptT).
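+
+A minimal unpacking sketch (the archive name is an assumption; adjust it to whatever the mirror gives you):
+```bash
+unzip fuseg.zip -d data/FUSeg/
+# expected result: data/FUSeg/{train,validation,test}/ — adjust the paths if the archive nests its contents differently
+```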
@@ -315,7 +329,7 @@ The datasets used in this work can be broadly categorized into data required for
![prath](./assets/readme_prath.png)
>_A few examples from the Pratheepan dataset showing the images and their corresponding segmentation masks, in the top and bottom rows respectively._
- The Pratheepan dataset is available to download from [their official website](https://web.fsktm.um.edu.my/~cschan/downloads_skin_dataset.html).
+ The Pratheepan dataset is available to download from [their official website](https://web.fsktm.um.edu.my/~cschan/downloads_skin_dataset.html).
The images and the corresponding ground truth masks are available in a ZIP file hosted on Google Drive. Download and unpack the dataset at `data/Pratheepan_Dataset/`.
@@ -323,15 +337,15 @@ The datasets used in this work can be broadly categorized into data required for
-
+
### Download the PH2 dataset
![ph2](./assets/readme_ph2.png)
>_A few examples from the PH2 dataset showing a lesion and its corresponding segmentation mask, in the top and bottom rows respectively._
-
- The PH2 dataset can be downloaded from [the official ADDI Project website](https://www.fc.up.pt/addi/ph2%20database.html).
+
+ The PH2 dataset can be downloaded from [the official ADDI Project website](https://www.fc.up.pt/addi/ph2%20database.html).
Download and unpack the dataset at `data/ph2/`, maintaining the Folder Structure shown below.
@@ -358,7 +372,7 @@ The datasets used in this work can be broadly categorized into data required for
### Creating the Synthetic dataset
-
+
![synthetic data](./assets/fig_1-min.png)
>_Generated synthetic images of multiple subjects across a range of skin tones in various skin conditions, background scene, lighting, and viewpoints._
@@ -368,13 +382,13 @@ The datasets used in this work can be broadly categorized into data required for
If you want to train your models on a different split of the synthetic data, you can download a dataset generated by blending lesions on 26 3DBodyTex scans from [here](https://cvi2.uni.lu/3dbodytexdermsynth/).
To prepare the synthetic dataset for training, sample the `images` and `targets` from the path where you saved this dataset and then organise them into `train/val`.
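+
+A minimal sketch of that step (hypothetical paths and an assumed 80/20 split; `scripts/prep_data.py` remains the supported route):
+```bash
+src=./data/synth_data        # hypothetical: where the sampled renderings live (images/ and targets/)
+dst=./data/processed_synth   # hypothetical: destination expected by your training setup
+mkdir -p $dst/{train,val}/{images,targets}
+i=0
+for img in "$src"/images/*; do
+  name=$(basename "$img")
+  if (( i % 5 == 0 )); then split=val; else split=train; fi   # every 5th sample goes to val
+  cp "$img" "$dst/$split/images/$name"
+  cp "$src/targets/$name" "$dst/$split/targets/$name"         # assumes targets share the image file names
+  i=$((i + 1))
+done
+```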
-
+
**NOTE**: To download the synthetic 3DBodyTex.DermSynth dataset referred to in the links above, you need to request access by following the instructions at this [link](https://cvi2.uni.lu/3dbodytexdermsynth/).
Alternatively, you can use the provided script `scripts/prep_data.py` to create it.
Even better, you can generate your own dataset by following the instructions [here](./README.md#generating-synthetic-dataset).
-
+
@@ -387,7 +401,7 @@ The datasets used in this work can be broadly categorized into data required for
-### Generating Synthetic Dataset
+### Generating Synthetic Dataset
![annots](./assets/AnnotationOverview.png)
> _A few examples of annotated data synthesized using DermSynth3D. The rows from top to bottom show respectively: the rendered images with blended skin conditions, bounding boxes around the lesions, GT semantic segmentation masks, grouped anatomical labels, and the monocular depth maps produced by the renderer._
@@ -400,7 +414,7 @@ bodytex_dir: './data/3dbodytex-1.1-highres/' # Name of the mesh to blend
mesh_name: '006-f-run' # Name of the mesh to blend
fitz_dir: './data/fitzpatrick17k/data/finalfitz17k/' # Path to FitzPatrick17k lesions
annot_dir: './data/annotations/' # Path to annotated Fitz17k lesions with masks
-tex_dir: './data/lesions/'
+tex_dir: './data/lesions/' # Path to save the new texture maps
``` -->
Now, to *generate* the synthetic data with the default parameters, simply run the following command to generate 2000 views for a specified mesh:
@@ -412,24 +426,24 @@ python -u scripts/gen_data.py
To change the blending or synthesis parameters only, run using:
```bash
# Use python scripts/gen_data.py -h for full list of arguments
-python -u scripts/gen_data.py --lr \
+python -u scripts/gen_data.py --lr \
-m \
-s \
-ps \
-i \
-v \
- -n
+ -n
```
Feel free to play around with the other `random` parameters in `configs/blend.yaml` to control lighting, materials, and viewpoints.
-### Post-Process Renderings with Unity
+### Post-Process Renderings with Unity
-We use Pytorch3D as our choice of differential renderer to generate synthetic data.
+We use PyTorch3D as our differentiable renderer to generate synthetic data.
However, PyTorch3D is not a physically based renderer (PBR), so the renderings may not look photorealistic.
-To achieve photorealistic renderings, we use Unity to post-process the renderings obtained from Pytorch3D.
+To achieve photorealistic renderings, we use Unity to post-process the renderings obtained from PyTorch3D.
@@ -452,7 +466,7 @@ Follow the detailed instructions outlined [here](./docs/unity.md) to create phot
## Preparing Dataset for Experiments
-
@@ -472,7 +486,7 @@ You can look at `scripts/prep_data.py` for more details.
If you find this work useful or use any part of the code in this repo, please cite our paper:
```bibtex
@misc{sinha2023dermsynth3d,
- title={DermSynth3D: Synthesis of in-the-wild Annotated Dermatology Images},
+ title={DermSynth3D: Synthesis of in-the-wild Annotated Dermatology Images},
author={Ashish Sinha and Jeremy Kawahara and Arezou Pakzad and Kumar Abhishek and Matthieu Ruthven and Enjie Ghorbel and Anis Kacem and Djamila Aouada and Ghassan Hamarneh},
year={2023},
eprint={2305.12621},
diff --git a/assets/thumbnail.png b/assets/thumbnail.png
new file mode 100644
index 00000000..1e349c6f
Binary files /dev/null and b/assets/thumbnail.png differ
diff --git a/assets/thumbnail1.png b/assets/thumbnail1.png
new file mode 100644
index 00000000..24293ece
Binary files /dev/null and b/assets/thumbnail1.png differ
diff --git a/dermsynth3d.yml b/dermsynth3d.yml
index b244271f..5148fb7c 100644
--- a/dermsynth3d.yml
+++ b/dermsynth3d.yml
@@ -2,6 +2,8 @@ name: dermsynth3d
channels:
- pytorch3d
- iopath
+ - bottler
+ - pytorch
- fvcore
- conda-forge
- defaults
@@ -56,6 +58,8 @@ dependencies:
- pysocks=1.7.1=py38h06a4308_0
- python=3.8.15=h7a1cb2a_2
- python_abi=3.8=2_cp38
+ - pytorch
+ - torchvision
- pytorch3d=0.7.2=py38_cu113_pyt1100
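+  # note: the unpinned pytorch/torchvision entries above are expected to resolve to versions compatible with this cu113/pyt1100 pytorch3d build (torch 1.10.x)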
- pyyaml=6.0=py38h0a891b7_5
- readline=8.2=h5eee18b_0
@@ -164,8 +168,6 @@ dependencies:
- pywavelets==1.4.1
- pyzmq==24.0.1
- qudida==0.0.4
- # - torch==1.10.0+cu111
- # - torchvision==0.11.0+cu111
- regex==2022.10.31
- rfc3339-validator==0.1.4
- rfc3986-validator==0.1.1
diff --git a/docs/preprint.pdf b/docs/preprint.pdf
new file mode 100644
index 00000000..f29c3eb6
Binary files /dev/null and b/docs/preprint.pdf differ