Problem description
I am having trouble reconstructing a scene captured with a low-quality camera.
The problem I would like to address is the quality of the depth estimation performed by the DepthMap node.
While tuning my pipeline for a better result, I inspected the depth estimation produced by the DepthMap node. To my surprise, I found it extremely noisy, to the point that I wondered whether it was the cause of the bad reconstruction (image below).
Proposed solution
Having experience with neural-network methods for depth estimation, I computed the same depth estimate with MiDaS and, as expected, got a much better result (image below).
While I understand that integrating MiDaS or another ML-based depth estimation approach into Meshroom could introduce a set of problems and may not be desirable, I think adding an option for advanced users to outsource the depth estimation step to an external tool could greatly benefit the quality of the output.
This would also come in handy when depth information for the images is already available, e.g. from ToF sensors, which is a concern that has been raised in other issues as well.
Alternatives
Adding a choice of integrated DepthMap algorithms
Adding a node to import depth map information that the user is responsible for computing, given a folder
Adding the possibility of specifying a custom command that receives the input folder and the desired output folder for the depth maps (or single input/output files) as arguments
Splitting the DepthMap node so that advanced users can compute the depth maps autonomously and then compute the simMap / nmodMap using Meshroom
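To make the custom-command alternative concrete, here is a minimal sketch of how such a node could invoke an external tool. The {input}/{output} placeholder convention and the function name are hypothetical, made up for this sketch; Meshroom has no such option today:

```python
import shlex
import subprocess

def run_external_depth_estimator(command_template, input_dir, output_dir):
    """Run a user-supplied depth-estimation command.

    The {input} and {output} placeholders are a convention invented for
    this sketch: the node substitutes the actual folders before running
    the command, so the user can plug in any tool with any CLI layout.
    """
    cmd = [part.format(input=input_dir, output=output_dir)
           for part in shlex.split(command_template)]
    completed = subprocess.run(cmd, check=True)
    return completed.returncode

# Example: the user configures "mytool --in {input} --out {output}"
# and the node runs it against the undistorted images folder.
```

The rest of the pipeline would then pick up whatever depth files the external tool wrote to the output folder.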
Additional context
This issue is related to #1493; however, I could not find an actual implemented/documented solution even though it is marked as closed.
Here are some comparisons between the Meshroom output and the MiDaS output run on the same photos (I resized and re-encoded them to JPEG manually for this issue).
Sample 1
Meshroom:
MiDaS:
Sample 2
Meshroom:
MiDaS:
Sample 3
Meshroom:
MiDaS:
EDIT: this is the script I used to compute the depth maps