rsk97/diffdiffdepth
Differentiable Diffusion for Dense Depth Estimation from Multi-view Images

Numair Khan¹, Min H. Kim², James Tompkin¹
¹Brown University, ²KAIST
CVPR 2021

Citation

If you use this code in your work, please cite our paper:

@inproceedings{khan2021diffdiff,
  title={Differentiable Diffusion for Dense Depth Estimation from Multi-view Images},
  author={Khan, Numair and Kim, Min H. and Tompkin, James},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}

Code in pytorch_ssim is from https://github.com/Po-Hsun-Su/pytorch-ssim

Running the Code

Environment Setup

The code has been tested with Python 3.6 and PyTorch 1.5.1.

The provided setup file can be used to install all dependencies and create a conda environment diffdiffdepth:

$ conda env create -f environment.yml

$ conda activate diffdiffdepth

Multiview Stereo

To run the code on multi-view stereo images, you will first need to generate camera poses using COLMAP. Once you have these, run the optimization by calling run_mvs.py:

$ python run_mvs.py --input_dir=<COLMAP_project_directory> --src_img=<target_img> --output_dir=<output_directory>

where <target_img> is the name of the image you want to compute depth for. Run python run_mvs.py -h to view additional optional arguments.

Example usage:

$ python run_mvs.py --input_dir=colmap_dir --src_img=img0.png --output_dir=./results
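For the pose-generation prerequisite, a minimal COLMAP sparse-reconstruction sketch is shown below. This is not part of this repository; it assumes a standard COLMAP installation and an images/ subfolder inside the project directory (here colmap_dir, matching the example above) — adjust paths and matcher choice to your data:

```shell
# Extract SIFT features from all input views (writes to the project database)
colmap feature_extractor \
    --database_path colmap_dir/database.db \
    --image_path colmap_dir/images

# Match features between all image pairs (use sequential_matcher for video)
colmap exhaustive_matcher \
    --database_path colmap_dir/database.db

# Run incremental sparse reconstruction to recover camera poses
mkdir -p colmap_dir/sparse
colmap mapper \
    --database_path colmap_dir/database.db \
    --image_path colmap_dir/images \
    --output_path colmap_dir/sparse
```

After this completes, colmap_dir should contain the poses that run_mvs.py expects via --input_dir.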

Lightfield Images

Coming soon!

Troubleshooting

We will add to this section as issues arise.
