[TorchToLinalg] Add `aten.fft_rfft` and lowering #3857
base: main
Conversation
Before reviewing in detail, let me see if I understand correctly what your cross-repository goal is.

- You include this torch-to-linalg conversion as a baseline conversion, but in IREE you intend to have a more performant lowering to `linalg_ext`? I suppose this pass exists before `torch-to-linalg` in the `torch-to-iree` pipeline?
- Would it make more sense to add this as a decomposition of the op at the torch-dialect level? That way, other backends like Tosa and StableHLO could benefit, and we can turn off the op via the `backend-legal-ops` option in the `torch-decompose-complex-ops` pass if we want to go a different route in IREE. I have plans to modify the decompose complex ops pass to be more specific in the `torch-to-iree` pipeline this week, so we can specify a `backend-legal-ops` set there.
Yeah, precisely. There are some limitations, however. Does the higher performance path for …

Ah, I see you already converted this to a decomposition. Perhaps we should just do both? StableHlo and Tosa would benefit from the decomposition, which we can turn off once you add the …

@zjgarvey The higher-performance path would apply when the input signal length is a power of 2; all other cases would need to be translated to this "naive" algorithm. Do you think it's possible to branch compilation based on the input dimension size?
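For illustration only, the branch being discussed can be sketched outside the compiler in plain Python: use a radix-2 FFT when the signal length is a power of two, and fall back to the O(n^2) direct DFT otherwise. All function names here are made up for the sketch; this is not code from the PR.

```python
import cmath

def is_power_of_two(n):
    # Bit trick: powers of two have exactly one set bit.
    return n > 0 and (n & (n - 1)) == 0

def fft_radix2(x):
    """Textbook recursive radix-2 Cooley-Tukey FFT (requires len(x) a power of 2)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def rfft_dispatch(x):
    """Sketch of the proposed branch: fast path for power-of-two lengths,
    naive O(n^2) DFT for everything else."""
    n = len(x)
    if is_power_of_two(n):
        full = fft_radix2(x)
    else:
        full = [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                    for j in range(n))
                for k in range(n)]
    # rfft keeps only the non-negative frequency bins.
    return full[: n // 2 + 1]
```

In the compiler this decision would have to be made statically (e.g. only when the last dimension size is known at compile time), which is what makes conditional legality attractive.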
It might be possible to mark the op as conditionally illegal for …

@zjgarvey Added conversion back.
- Adds `AtenFftRfftOp` to the Torch dialect.
- Lowers `AtenFftRfftOp` to Linalg, using a `linalg.matmul` per output component (real and imaginary). Computing the DFT is O(n^2).
- Decomposes `AtenFftRfftOp` into Torch-level ops (same paradigm as above).
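The matmul-based lowering can be mimicked in NumPy (an illustrative sketch, not the actual Linalg code): build cosine and sine coefficient matrices for the DFT and compute each output component with one matrix multiply, then check the result against `numpy.fft.rfft`.

```python
import numpy as np

def naive_rfft(x):
    """Naive O(n^2) real FFT: one matmul per output component,
    mirroring the two linalg.matmul ops in the lowering."""
    n = x.shape[-1]
    n_out = n // 2 + 1                # rfft keeps only non-negative frequencies
    k = np.arange(n_out)[:, None]     # output frequency index
    j = np.arange(n)[None, :]         # input sample index
    angle = -2.0 * np.pi * k * j / n
    real_mat = np.cos(angle)          # coefficients of the real component
    imag_mat = np.sin(angle)          # coefficients of the imaginary component
    real = x @ real_mat.T             # first "matmul"
    imag = x @ imag_mat.T             # second "matmul"
    return real + 1j * imag

# Works for any length, not just powers of two.
x = np.random.default_rng(0).standard_normal(6)
assert np.allclose(naive_rfft(x), np.fft.rfft(x))
```

Splitting the computation into two real-valued matmuls is what lets the lowering avoid complex-typed tensors until the final result is assembled.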