Implement GPU Wavelet transform #25
Comments
Here is a possible approach: https://lernapparat.de/2d-wavelet-transform-pytorch/
This is addressed by PR #32!
Thanks, Sid!
Hi everyone, I see this feature was initially added and then removed from the master branch. Would it be possible to know why? I would be really interested in getting this working: I have been working on some compressed-sensing-type recon with SigPy, keeping an eye on recon time, and the wavelet transform is definitely the bottleneck of the pipeline. Is there any plan to allow the wavelet transform to run on the GPU? Thanks a lot for this amazing tool!
Hi, I think @frankong was worried about the correctness of my implementation. I would like to revisit it, but do not have time right now. That being said, if you are able to resurrect the code and clarify that it's in "beta", I'll be happy to re-include it.
Hi Sid, I have been looking into the solution proposed here. I ran some quick tests on 2D data, comparing the forward transform of this implementation against the one in PyWavelets. Qualitatively they look reasonably similar, although the absolute values of the coefficients differ a bit. The output also has a different size; the zero padding is probably applied in a slightly different way, since the tiled coefficients appear a bit shifted.

Another thing I noticed: looking at the residual error of a forward+backward transform, this new implementation shows a slightly higher error than the forward+backward of PyWavelets (although the effect on the input image was not noticeable). These were just qualitative tests; I am happy to go into more detail with the comparison if needed. Do you remember by any chance what the issue was when you first incorporated this? I can try to focus on that.

Regarding performance, I had a hard time getting decent computation times. Looking forward to your feedback, Marco
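As an illustration of the kind of round-trip check described above, here is a minimal sketch that assumes PyWavelets is installed; `gpu_fwd` and `gpu_inv` are hypothetical placeholders for whatever candidate GPU implementation is being compared.

```python
# Round-trip sanity check against PyWavelets (reference CPU implementation).
# `gpu_fwd` / `gpu_inv` are hypothetical stand-ins for a candidate GPU
# transform and its inverse.
import numpy as np
import pywt

x = np.random.randn(256, 256).astype(np.float32)

# Reference: forward + backward transform with PyWavelets, zero padding.
coeffs = pywt.wavedec2(x, wavelet='db4', mode='zero', level=3)
x_ref = pywt.waverec2(coeffs, wavelet='db4', mode='zero')
print('PyWavelets round-trip max error:', np.max(np.abs(x - x_ref)))

# The same check would be run on the candidate implementation, e.g.:
#   y = gpu_fwd(x)
#   x_gpu = gpu_inv(y)
#   print('GPU round-trip max error:', np.max(np.abs(x - x_gpu)))
```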
Hi Marco, Thanks much for the tests! I will personally be swamped for a few months, but if you're willing to take the lead, I'd be happy to review any pull requests on this. The following is what I envision would need to be done before the GPU version can be implemented:
On the performance difference, I think we can revisit it after (2) is done! Once that's set as the baseline, we can look at optimizations. Thanks much for everything you've reported so far! Please let me know your thoughts on the above, and whether you're interested in pursuing this.
Hi Sid, Thanks
Is your feature request related to a problem? Please describe.
There is no wavelet transform on the GPU. Currently, SigPy moves arrays to the CPU and uses PyWavelets to perform the wavelet transform, which is the main bottleneck for compressed sensing MRI reconstructions.
Describe the solution you'd like
A GPU wavelet transform can leverage the GPU multi-channel convolution operations already wrapped in SigPy. Low pass and high pass filtering can be done in one operation using the output channels. Strides can be used to incorporate subsampling.
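A minimal sketch of this idea follows, assuming PyTorch and PyWavelets rather than SigPy's own convolution wrappers: the four 2D analysis filters (LL, LH, HL, HH) are stacked as output channels of a single convolution, and stride=2 performs the subsampling in the same call.

```python
# Sketch of a one-level 2D DWT as a strided multi-channel convolution.
# PyTorch's conv2d stands in here for the GPU convolution ops wrapped in
# SigPy; this is an illustration, not SigPy's API.
import pywt
import torch
import torch.nn.functional as F

wave = pywt.Wavelet('haar')
# conv2d computes a cross-correlation, so flip the 1D decomposition filters.
lo = torch.tensor(wave.dec_lo[::-1], dtype=torch.float32)
hi = torch.tensor(wave.dec_hi[::-1], dtype=torch.float32)

# Four separable 2D filters (LL, LH, HL, HH) as output channels.
filters = torch.stack([
    torch.outer(lo, lo), torch.outer(lo, hi),
    torch.outer(hi, lo), torch.outer(hi, hi),
]).unsqueeze(1)                             # shape (4, 1, k, k)

x = torch.randn(1, 1, 256, 256)             # (batch, channel, H, W); move to .cuda() for GPU
subbands = F.conv2d(x, filters, stride=2)   # (1, 4, 128, 128): all subbands in one call
```

Further decomposition levels would recurse on the LL subband, and boundary handling (zero vs. symmetric padding) would need to match PyWavelets for the comparisons discussed above to agree.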
Describe alternatives you've considered
Implement custom GPU kernels for wavelet transforms, but these would likely be less optimized than cuDNN convolutions.