Build only needed shaders with --cpu #383
Conversation
A couple of minor notes and suggestions. I think this is overall OK for now, but I'd like you to be aware that there are a few aspects of the shaders that are in flux and will change quite a bit:

- As I noted inline, `src/shaders.rs` will get rewritten to use `crates/shaders`, which is our solution for exporting all the code for consumption in external/custom renderers.
- Currently there are a lot of type definitions for CPU shaders and CPU vs GPU-side resource management that live in `wgpu_engine.rs`. I would very much like to decouple initialization and management of CPU pipelines/resources from the engine that executes them. For example, we want to add different backend engines that don't use wgpu because we want to research and experiment with features that are not available in wgpu (I'm working on a `metal_engine.rs` right now). IWBN to avoid having to duplicate all the CPU-shader definitions for every engine that wants to have that capability (see the sketch below).

This shouldn't impact your PR since you've been working with what exists now. I just wanted to point out that a lot of this is in flux and will gradually evolve.
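As a rough illustration of that decoupling idea (all names below are hypothetical, not the project's actual API), the CPU shader entry points could live in an engine-agnostic table that every backend consumes:

```rust
// Rough illustration only: all names here are hypothetical, not the actual
// Vello API. The idea is that CPU shader entry points live in an
// engine-agnostic table, and each backend engine (wgpu, Metal, ...) consumes
// the same table instead of redefining the CPU shaders itself.
pub struct CpuBinding; // placeholder for the real CPU resource-binding type

/// A CPU implementation of one pipeline stage: (dispatch size, resources).
pub type CpuShaderFn = fn(u32, &[CpuBinding]);

/// Engine-agnostic registry: one entry per stage that has a CPU implementation.
pub struct CpuShaderTable {
    pub entries: Vec<(&'static str, CpuShaderFn)>,
}

impl CpuShaderTable {
    /// Look up the CPU implementation for a stage by name, if there is one.
    pub fn lookup(&self, stage: &str) -> Option<CpuShaderFn> {
        self.entries
            .iter()
            .find(|(name, _)| *name == stage)
            .map(|(_, f)| *f)
    }
}
```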
src/wgpu_engine.rs (Outdated)

```rust
pub enum CpuShaderType {
    Present(fn(u32, &[CpuBinding])),
    Missing,
    Ununsed,
}
```
Was this meant to be `Unused`?
Please fix this typo (this should be `Unused`, not `Ununsed`). Also, following the thread above, I think I prefer to call this `Skipped`.
```rust
pub enum CpuShaderType {
    Present(fn(u32, &[CpuBinding])),
    Missing,
    // ...
}
```
I'm wondering why we need two different ways (`Missing` and `Unused`) to disable a CPU shader. I think we could just have two modes instead: a CPU shader is either present or it's not. Then at execution time we either use the GPU version or we use the CPU one. I think for now, this can be represented by two variants rather than three. WDYT?
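A minimal sketch of that two-variant suggestion (illustrative only, not the code in this PR):

```rust
// Minimal sketch of the two-variant suggestion (illustrative, not the code in
// this PR): a stage either has a CPU implementation or it doesn't, and the
// CPU-vs-GPU choice is made separately, at execution time.
pub struct CpuBinding; // placeholder for the real binding type

pub enum CpuShaderType {
    /// A CPU implementation exists for this stage.
    Present(fn(u32, &[CpuBinding])),
    /// Only the GPU pipeline exists.
    Missing,
}
```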
`Unused` exists because neither the CPU nor the GPU shader is needed when passing `use-cpu`, so we can skip compiling the GPU shader even if the CPU shader is missing.
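In other words, only the `Missing` case still compiles a GPU pipeline in CPU mode. A hedged sketch of that decision (the helper name and exact shape are assumptions; `Unused` stands for the variant spelled `Ununsed` in the diff above):

```rust
// Sketch of the decision described above (helper name and exact shape are
// assumptions; `Unused` is the variant spelled `Ununsed` in the diff above).
// Only `Missing` still needs a GPU pipeline compiled in CPU mode.
pub struct CpuBinding; // placeholder for the real binding type

pub enum CpuShaderType {
    Present(fn(u32, &[CpuBinding])),
    Missing,
    Unused,
}

fn needs_gpu_pipeline(use_cpu: bool, cpu: &CpuShaderType) -> bool {
    match (use_cpu, cpu) {
        // Not in CPU mode: every stage needs its GPU pipeline.
        (false, _) => true,
        // CPU mode, but no CPU implementation exists: keep the GPU fallback.
        (true, CpuShaderType::Missing) => true,
        // CPU mode with a CPU implementation, or a stage that is skipped
        // entirely in CPU mode: no GPU compilation needed.
        (true, CpuShaderType::Present(_)) | (true, CpuShaderType::Unused) => false,
    }
}
```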
I see, `Unused` represents a stage that gets skipped when running in CPU mode. I think this only makes sense in the current global "run everything on the CPU" setting, but once we support a combination of CPU and GPU pipelines, we will likely want to have the pipelines available regardless.

I think what I would prefer is to have a way to defer pipeline creation for an unused pipeline rather than skip it using a global setting (similarly to what we have with buffer resources and "proxies" in our engine abstraction). But that should happen as future work.
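One possible shape for that deferral, sketched with hypothetical types (loosely analogous to the proxy pattern mentioned above, not the project's actual engine API): registering a shader only records a descriptor, and the GPU pipeline is built the first time a dispatch needs it.

```rust
// Hypothetical sketch of deferring pipeline creation instead of skipping it
// (types are illustrative, not the project's engine API): registering a shader
// only records a descriptor; the GPU pipeline is built lazily, the first time
// a dispatch actually needs it.
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct ShaderId(pub usize);

pub struct PipelineDesc {
    pub wgsl: &'static str, // whatever is needed to build the pipeline later
}

pub struct CompiledPipeline; // stands in for a real (e.g. wgpu) compute pipeline

#[derive(Default)]
pub struct PipelineCache {
    descs: HashMap<ShaderId, PipelineDesc>,
    built: HashMap<ShaderId, CompiledPipeline>,
}

impl PipelineCache {
    /// Registering a shader is cheap: no compilation happens here.
    pub fn register(&mut self, id: ShaderId, desc: PipelineDesc) {
        self.descs.insert(id, desc);
    }

    /// Compile on first use; later calls reuse the cached pipeline.
    pub fn get_or_build(&mut self, id: ShaderId) -> &CompiledPipeline {
        if !self.built.contains_key(&id) {
            let _desc = &self.descs[&id];
            // ... compile the GPU pipeline from `_desc` here ...
            self.built.insert(id, CompiledPipeline);
        }
        &self.built[&id]
    }
}
```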
```rust
let mut force_gpu = false;

let force_gpu_from: Option<&str> = None;
```
I think IWBN to decide this dynamically at render time. I think I'd like to move towards a mode where the presence of the CPU shaders and the decision on their execution are decoupled from each other.
A legitimate production case for this is small scenes: when rasterizing a handful of draw objects, we may want to process them quickly on the CPU to avoid the GPU upload and dispatch overhead and only run the coarse+fine raster stages on the GPU. We may want to make the CPU vs GPU decision dynamically at render time based on the size of the workload. In that world we may want to compile all the pipelines up front even though we'll sometimes use the CPU versions.
Ideally I'd like `FullShaders` to hold all available pipelines and have an option we give to the engine that tells it which stages should run on the CPU.
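A rough sketch of what that could look like (field and type names are hypothetical, not the actual Vello API): all pipelines live in `FullShaders`, and a separate per-stage selection, chosen at render time, tells the engine where each stage runs.

```rust
// Rough sketch of this direction (field and type names are hypothetical, not
// the actual Vello API): `FullShaders` holds every compiled pipeline, and a
// separate per-stage selection, chosen at render time, tells the engine which
// stages run on the CPU.
pub struct ShaderId(pub usize);

pub struct FullShaders {
    // All pipelines are created up front, regardless of where they will run.
    pub pathtag_reduce: ShaderId,
    pub coarse: ShaderId,
    pub fine: ShaderId,
    // ... one field per stage
}

/// Decided dynamically, e.g. from the size of the workload.
pub struct CpuStageSelection {
    pub pathtag_reduce: bool,
    pub coarse: bool,
    pub fine: bool,
    // ... one flag per stage
}

impl CpuStageSelection {
    /// Small scenes: run early stages on the CPU to avoid upload/dispatch
    /// overhead, but keep coarse+fine rasterization on the GPU.
    pub fn for_small_scene() -> Self {
        Self {
            pathtag_reduce: true,
            coarse: false,
            fine: false,
        }
    }
}
```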
Another thing I'd like you to be aware of is that we want to rewrite `src/shaders.rs` to be in terms of `crates/shaders`, which automatically preprocesses the WGSL and the pipeline layouts. The CPU shaders will need to integrate with that somehow and they may move to the shaders crate (though we haven't put a lot of thought into this yet).
What you have is fine for now but be aware that this is all in flux and will change eventually.
It would be nicer if we could run a set of shaders on the CPU, but we'll need readback hooked up on wasm first. This is a bit of a hack until then.
I think the most common case will involve CPU stages that are strictly ordered before any GPU stages (rather than a mix of them), which shouldn't require read-back. I don't know if/when we'd want to run them in combination except maybe for testing.
Approved with a couple of comments. Please address them before merging.
Thank you for the improvements to the CPU pipeline management!
This changes shaders to have an optional GPU and an optional CPU shader. This allows us to only compile shaders which will actually get used, helping startup times.

The separate `install_cpu_shaders` function is removed and the CPU shaders are passed with `add_shader`.
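A hedged sketch of the shape of that change (the parameter list and names below are assumptions, not the exact signature in the PR): the optional CPU implementation is handed to `add_shader` alongside the WGSL source, so the engine can decide per shader whether a GPU pipeline needs to be compiled at all.

```rust
// Hedged sketch only: names and the exact parameter list are assumptions, not
// the signature in the PR. The optional CPU implementation arrives together
// with the WGSL, replacing the separate `install_cpu_shaders` step.
pub struct CpuBinding; // placeholder for the real binding type
pub struct ShaderId(pub usize);

pub enum CpuShaderType {
    Present(fn(u32, &[CpuBinding])),
    Missing,
    Skipped, // the rename of `Ununsed` suggested in the review
}

pub struct Engine {
    use_cpu: bool, // set when the renderer is created with the CPU flag
}

impl Engine {
    pub fn add_shader(&mut self, _label: &'static str, _wgsl: &str, cpu: CpuShaderType) -> ShaderId {
        match (self.use_cpu, cpu) {
            // CPU implementation provided and CPU mode on: no WGSL compilation.
            (true, CpuShaderType::Present(_f)) => { /* register `_f` as the CPU shader */ }
            // Stage not needed at all in CPU mode: compile nothing.
            (true, CpuShaderType::Skipped) => {}
            // Otherwise (GPU mode, or no CPU version exists): compile the WGSL.
            _ => { /* create the GPU compute pipeline */ }
        }
        ShaderId(0) // placeholder id
    }
}
```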