This is the implementation code for the paper "Restoring Snow-Degraded Single Images With Wavelet in Vision Transformer," published in IEEE Access, 2023.
Images corrupted by snowy adverse weather can impede the performance of critical high-level vision-based applications. Restoring snow-degraded images is therefore vital, but the task is ill-posed and very challenging due to the veiling effect, the stochastic distribution, and the multi-scale characteristics of snow in a scene. Many existing image denoising methods are less successful at snow removal: they tend to succeed on one snow dataset but underperform on others, which calls into question their robustness to real-world, complex snowfall scenarios. In this paper, we propose the wavelet in transformer (WiT) network to address the image desnowing inverse problem. Our model exploits the joint capabilities of the vision transformer and the discrete wavelet transform to achieve effective restoration of snow-degraded images. In our experiments, we evaluated the performance of our model on the popular SRRS, SNOW100K, and CSD datasets. The efficacy of our learning-based network is demonstrated by quantitative and qualitative results showing significant performance gains over desnowing benchmark models and other state-of-the-art methods in the literature.
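The official architecture is defined in the repository's model code; the sketch below is only a rough, hedged illustration of the core idea of combining a discrete wavelet transform with a vision-transformer block. The Haar DWT, the token shapes, and the `WaveletTransformerBlock` name are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch (not the official WiT architecture): a single-level 2D Haar
# DWT splits a feature map into four subbands, whose per-pixel features are
# then processed as tokens by a standard transformer encoder layer.
import torch
import torch.nn as nn


def haar_dwt2d(x):
    """Single-level 2D Haar DWT. x: (B, C, H, W) with even H and W.
    Returns the LL, LH, HL, HH subbands, each of shape (B, C, H/2, W/2)."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh


class WaveletTransformerBlock(nn.Module):
    """Hypothetical block: tokenize the concatenated wavelet subbands and
    run them through a standard transformer encoder layer."""

    def __init__(self, channels, dim=256, heads=8):
        super().__init__()
        self.embed = nn.Linear(4 * channels, dim)   # one token per spatial location
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=heads)
        self.proj = nn.Linear(dim, 4 * channels)

    def forward(self, x):
        b, c, h, w = x.shape
        subbands = torch.cat(haar_dwt2d(x), dim=1)      # (B, 4C, H/2, W/2)
        tokens = subbands.flatten(2).permute(2, 0, 1)   # (S, B, 4C), S = H/2 * W/2
        tokens = self.proj(self.encoder(self.embed(tokens)))
        return tokens.permute(1, 2, 0).reshape(b, 4 * c, h // 2, w // 2)


if __name__ == "__main__":
    block = WaveletTransformerBlock(channels=32)
    out = block(torch.randn(1, 32, 64, 64))
    print(out.shape)  # torch.Size([1, 128, 32, 32])
```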
Recommended Python versions are 3.7 to 3.8, and CUDA versions 10.2 to 11.5.
- Download or clone this repository:
git clone https://github.com/WINS-lab/WiT
cd WiT
- Install the package dependencies:
pip3 install -r requirements.txt
- A PyTorch environment with GPU support for Windows and Linux can be installed with Conda:
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.2 -c pytorch
- Download the test datasets to a data folder using the links given below.
- Download model_weights.zip here and the data text file samples here.
- For inference, edit and run test.py accordingly. For example, run python3 test.py -exp_name csd_weight for the CSD test dataset.
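For orientation only, the snippet below sketches what a generic single-image inference pass could look like; the file paths and the way the checkpoint is loaded are assumptions rather than this repository's actual API, so defer to test.py for the real flags and weight-loading logic.

```python
# Hedged sketch of a single-image inference pass; the checkpoint path, the
# sample filename, and loading the whole model object with torch.load are
# assumptions for illustration -- see test.py for the actual entry point.
import torch
from PIL import Image
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.load("model_weights/csd_weight.pth", map_location=device)  # hypothetical path
model.eval()

img = transforms.ToTensor()(
    Image.open("data/test/CSD/input/0001.png").convert("RGB"))  # hypothetical sample

with torch.no_grad():
    restored = model(img.unsqueeze(0).to(device)).clamp(0, 1)

transforms.ToPILImage()(restored.squeeze(0).cpu()).save("restored_0001.png")
```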
Create a data folder and arrange the 2000 input and ground-truth image samples of the test dataset as follows:
WiT
|
├── data
| |
| ├── test # Test-set
| | ├── <dataset_name>
| | | ├── input # degraded images
| | | └── gt # clean images
| | └── dataset_filename.txt
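As a hedged illustration of how the layout above might be consumed (this is not the repository's actual dataloader), the sketch below assumes the .txt file lists one image filename per line; the PairedSnowDataset name is hypothetical.

```python
# Hedged sketch of a paired (input, gt) dataset for the directory layout above.
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class PairedSnowDataset(Dataset):
    def __init__(self, root, dataset_name, list_file):
        self.input_dir = os.path.join(root, "test", dataset_name, "input")
        self.gt_dir = os.path.join(root, "test", dataset_name, "gt")
        # Assumption: the dataset .txt file lists one image filename per line.
        with open(os.path.join(root, "test", list_file)) as f:
            self.names = [line.strip() for line in f if line.strip()]
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        degraded = Image.open(os.path.join(self.input_dir, name)).convert("RGB")
        clean = Image.open(os.path.join(self.gt_dir, name)).convert("RGB")
        return self.to_tensor(degraded), self.to_tensor(clean)
```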
The following snow-image datasets were used to evaluate the WiT network:
- SNOW100K-L download link
- SRRS download link
- CSD download link
The PSNR and SSIM (quantitative) results comparison
The qualitative (visual) results comparison evaluated on the CSD dataset
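PSNR and SSIM follow their standard definitions; as a hedged reference (not necessarily the exact evaluation script used for the reported numbers), they could be computed per image pair as follows:

```python
# Generic PSNR/SSIM computation for one restored/ground-truth pair
# (illustrative only; the paper's exact evaluation protocol, e.g. color
# space or SSIM window settings, may differ).
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(restored, gt):
    """restored, gt: uint8 RGB arrays of shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(gt, restored, data_range=255)
    # channel_axis requires scikit-image >= 0.19; older versions use multichannel=True
    ssim = structural_similarity(gt, restored, data_range=255, channel_axis=-1)
    return psnr, ssim
```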
BibTeX:
@article{WiTNet2023,
  author={Obinna Agbodike and Jenhui Chen},
  journal={IEEE Access},
  title={Restoring Snow-Degraded Single Images With Wavelet in Vision Transformer},
  month=sep,
  year={2023},
  volume={11},
  pages={99470--99480}}
Useful blocks of code adapted in WiT are credited to the contributions of ImageNetModel, TransWeather, and ViT-PyTorch.