(1) Training in 7 different color spaces: RGB, HSV, HLS, YCbCr, YUV, LAB, and LUV (see the conversion sketch after this list).
cd Zero-DCE_code
python lowlight_train.py --channel <RGB|HSV|HLS|YCbCr|YUV|LAB|LUV>
(2) Pretrained weights (trained for 200 epochs) for each of the 7 color spaces.
./Zero-DCE_code/snapshots/<RGB|HSV|HLS|YCbCr|YUV|LAB|LUV>.pth
(3) Enhancement of videos as well as single images (the sketch after this list also shows a per-frame video loop).
cd Zero-DCE_code
python lowlight_test.py --mode <video|image> --channel <RGB|HSV|HLS|YCbCr|YUV|LAB|LUV>
(4) A TensorBoard log of the training loss (a logging sketch is given in the training section below).
tensorboard --logdir log/train_loss_<RGB|HSV|HLS|YCbCr|YUV|LAB|LUV>
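The sketch below is only an illustration, under assumptions, of how the --channel and --mode options are expected to behave: the image is converted to the selected color space with OpenCV before enhancement and converted back afterwards, and video is handled frame by frame. All names here (enhance_in_space, enhance_fn, enhance_video, the codec and paths) are placeholders; the actual logic lives in the Zero-DCE_code scripts.

# Illustrative sketch only -- the real --channel / --mode handling is
# implemented inside Zero-DCE_code; names and paths here are placeholders.
import cv2
import numpy as np

# Assumed mapping from --channel values to OpenCV conversion codes.
TO_SPACE = {
    "RGB": cv2.COLOR_BGR2RGB,   "HSV": cv2.COLOR_BGR2HSV,
    "HLS": cv2.COLOR_BGR2HLS,   "YCbCr": cv2.COLOR_BGR2YCrCb,
    "YUV": cv2.COLOR_BGR2YUV,   "LAB": cv2.COLOR_BGR2LAB,
    "LUV": cv2.COLOR_BGR2LUV,
}
FROM_SPACE = {
    "RGB": cv2.COLOR_RGB2BGR,   "HSV": cv2.COLOR_HSV2BGR,
    "HLS": cv2.COLOR_HLS2BGR,   "YCbCr": cv2.COLOR_YCrCb2BGR,
    "YUV": cv2.COLOR_YUV2BGR,   "LAB": cv2.COLOR_LAB2BGR,
    "LUV": cv2.COLOR_LUV2BGR,
}

def enhance_in_space(bgr_frame, channel, enhance_fn):
    """Convert to the chosen color space, enhance, and convert back to BGR.
    `enhance_fn` stands in for the DCE-Net forward pass."""
    converted = cv2.cvtColor(bgr_frame, TO_SPACE[channel]).astype(np.float32) / 255.0
    enhanced = np.clip(enhance_fn(converted) * 255.0, 0, 255).astype(np.uint8)
    return cv2.cvtColor(enhanced, FROM_SPACE[channel])

def enhance_video(in_path, out_path, channel, enhance_fn):
    """Frame-by-frame enhancement, roughly what --mode video implies."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(enhance_in_space(frame, channel, enhance_fn))
    cap.release()
    out.release()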
You can find more details here: https://li-chongyi.github.io/Proj_Zero-DCE.html. Have fun!
The implementation of Zero-DCE is for non-commercial use only.
We also provide a MindSpore version of our code: https://pan.baidu.com/s/1uyLBEBdbb1X4QVe2waog_g (passwords: of5l).
Pytorch implementation of Zero-DCE
- Python 3.7
- Pytorch 1.0.0
- opencv
- torchvision 0.2.1
- cuda 10.0
Zero-DCE does not need any special configuration; a basic environment is enough.
Or you can create a conda environment to run our code like this: conda create --name zerodce_env opencv pytorch==1.0.0 torchvision==0.2.1 cuda100 python=3.7 -c pytorch
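A quick way to confirm that the dependencies listed above are installed and that CUDA is visible:

# Sanity check for the environment described above.
import torch, torchvision, cv2
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("opencv:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available())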
Download the Zero-DCE_code first. The following shows the basic folder structure.
├── data
│ ├── test_data # testing data. You can make a new folder for your testing data, like LIME, MEF, and NPE.
│ │ ├── LIME
│ │ ├── MEF
│ │ └── NPE
│ └── train_data
├── lowlight_test.py # testing code
├── lowlight_train.py # training code
├── model.py # Zero-DCE network
├── dataloader.py
├── snapshots
│ ├── Epoch99.pth # A pre-trained snapshot (Epoch99.pth)
cd Zero-DCE_code
python lowlight_test.py
The script will process the images in the sub-folders of the "test_data" folder and create a new "result" folder inside "data". You can find the enhanced images in the "result" folder.
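For reference, the core of the test step amounts to loading a snapshot and running each image through the network. The sketch below is a simplified, assumed version of that loop: the class name enhance_net_nopool and the snapshot format follow the original Zero-DCE release and may differ in this fork, and the file paths are examples only.

# Simplified, assumed sketch of what lowlight_test.py does for one image.
import torch
import torchvision
import numpy as np
from PIL import Image
import model  # Zero-DCE_code/model.py

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
DCE_net = model.enhance_net_nopool().to(device)
DCE_net.load_state_dict(torch.load("snapshots/Epoch99.pth", map_location=device))
DCE_net.eval()

img = np.asarray(Image.open("data/test_data/LIME/example.bmp")) / 255.0  # any test image
img = torch.from_numpy(img).float().permute(2, 0, 1).unsqueeze(0).to(device)

with torch.no_grad():
    outputs = DCE_net(img)

# The original forward pass returns several tensors; the enhanced image is
# one of them (the second). Adjust the index if this fork differs.
enhanced = outputs[1] if isinstance(outputs, (tuple, list)) else outputs
torchvision.utils.save_image(enhanced, "data/result/example.png")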
- cd Zero-DCE_code
- download the training data from Google Drive or Baidu Cloud [password: 1234]
- unzip the archive and put the downloaded "train_data" folder into the "data" folder
- python lowlight_train.py
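Feature (4) above reads scalars from the log/train_loss_<channel> directory. The sketch below shows, under assumptions, how such scalars could be written during training with torch.utils.tensorboard (available from PyTorch 1.1; on PyTorch 1.0.0, tensorboardX.SummaryWriter offers the same calls). The exact logging in lowlight_train.py may differ.

# Minimal, assumed sketch of writing the training loss for TensorBoard.
from torch.utils.tensorboard import SummaryWriter

channel = "RGB"  # whichever --channel value was used for training
writer = SummaryWriter(log_dir="log/train_loss_" + channel)

# In lowlight_train.py the scalar would be the real Zero-DCE training loss;
# a dummy value stands in here so the snippet runs on its own.
for epoch in range(200):
    dummy_loss = 1.0 / (epoch + 1)
    writer.add_scalar("train_loss", dummy_loss, epoch)
writer.close()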
The code is made available for academic research purposes only, under the Attribution-NonCommercial 4.0 International License.
@inproceedings{Zero-DCE,
author = {Guo, Chunle and Li, Chongyi and Guo, Jichang and Loy, Chen Change and Hou, Junhui and Kwong, Sam and Cong, Runmin},
title = {Zero-reference deep curve estimation for low-light image enhancement},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
pages = {1780-1789},
month = {June},
year = {2020}
}
If you have any questions, please contact ICHEN LU at [email protected].