The transformation matrix between cameras in TAPVid-3D #115
Hey Runsen, Thanks for reaching out and the kind words. Yup -- the camera extrinsics matrices are something we were thinking of releasing as well, and I think we already have them for both of those splits, buried in the scripts. As you probably noted, for Panoptic the camera is fixed. It's very helpful to know that there is demand for this, so thanks for writing in. Unfortunately I can't promise a timeline for releasing an updated version with extrinsics -- the team is very busy with ICLR and other deadlines right now -- but it's something we ourselves would find useful, so it's been added high on our to-do list.
I'll leave this issue open until we get around to uploading those as well.
Hi Skanda, Thanks for your reply! Really looking forward to that! Best,
Dear Authors, Thank you for your exceptional work and this wonderful dataset! I have a similar question: based on my understanding, the released 3D trajectories of key points are in camera coordinate space. I was wondering whether it would be possible to release the camera extrinsic parameters, or the 3D trajectories in world coordinates. Any updates on this matter would be greatly appreciated. Thank you once again for your valuable time!
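Once per-frame extrinsics are available, converting camera-frame trajectories to world coordinates is a batched rigid transform. A minimal sketch, assuming hypothetical shapes of `(T, N, 3)` for trajectories and `(T, 4, 4)` camera-to-world matrices (not the dataset's confirmed format):

```python
import numpy as np

def camera_to_world(points_cam, T_world_cam):
    """Map per-frame 3D points from camera to world coordinates.

    points_cam:  (T, N, 3) trajectories in camera space (hypothetical layout).
    T_world_cam: (T, 4, 4) camera-to-world extrinsics, one per frame.
    Returns (T, N, 3) points in world coordinates.
    """
    T, N, _ = points_cam.shape
    # Lift to homogeneous coordinates: [x, y, z] -> [x, y, z, 1]
    homog = np.concatenate([points_cam, np.ones((T, N, 1))], axis=-1)
    # Per-frame batched matrix multiply: world = T_world_cam @ p_homog
    world = np.einsum('tij,tnj->tni', T_world_cam, homog)
    return world[..., :3]
```

With this convention, a point that is static in the world traces a moving path in camera coordinates exactly when the camera moves, which is the distinction the thread is after.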
Dear Authors, Me too. Still waiting for the release of the camera extrinsics. I hope to use them for my CVPR 2025 project. :) Thanks again for your wonderful work. Best,
Dear Authors,
Thank you so much for your great work! It's really a great contribution to the field.
I am writing to ask whether it would be possible to provide the transformation matrices (delta poses) of the cameras for the DriveTrack and ADT splits. With these transformation matrices, we could decouple camera motion from object motion, which would greatly benefit the 3D vision field.
Best,
Runsen
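The decoupling described above can be sketched as follows: given camera-to-world poses at two consecutive frames, the delta pose maps frame-t camera coordinates into frame-(t+1) camera coordinates, and any residual displacement of a tracked point is object motion. This is a generic SE(3) sketch under assumed conventions, not the dataset's confirmed API; the function names are hypothetical:

```python
import numpy as np

def delta_pose(T_world_cam_t, T_world_cam_t1):
    """Relative camera motion: maps frame-t camera coords to frame-(t+1)
    camera coords, given camera-to-world poses at both frames."""
    return np.linalg.inv(T_world_cam_t1) @ T_world_cam_t

def object_motion(p_cam_t, p_cam_t1, T_delta):
    """Residual motion after removing camera motion.

    p_cam_t, p_cam_t1: (3,) observed point in camera coords at frames t, t+1.
    T_delta: (4, 4) delta pose from delta_pose().
    Returns the (3,) residual, which is zero for a world-static point.
    """
    # Predict where a static point would appear in frame t+1's camera.
    p_pred = (T_delta @ np.append(p_cam_t, 1.0))[:3]
    return p_cam_t1 - p_pred
```

For a point fixed in the world, the prediction matches the observation and the residual vanishes; a nonzero residual is the point's own motion expressed in the frame-(t+1) camera.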