Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos https://github.com/tensorflow/models/tree/master/research/struct2depth

DeepTAM: Deep Tracking and Mapping https://github.com/lmb-freiburg/deeptam

FADNet: A Fast and Accurate Network for Disparity Estimation https://github.com/HKBU-HPML/FADNet

Code for Mesh R-CNN (ICCV 2019) https://github.com/facebookresearch/meshrcnn

TensorFlow C++ implementation of monocular image depth estimation https://github.com/yan99033/monodepth-cpp

Powered by AI: Turning any 2D photo into 3D using convolutional neural nets https://ai.facebook.com/blog/powered-by-ai-turning-any-2d-photo-into-3d-using-convolutional-neural-nets/

A Pytorch implementation of Pyramid Stereo Matching Network https://github.com/KinglittleQ/PSMNet

A PyTorch Library for Accelerating 3D Deep Learning Research https://github.com/NVIDIAGameWorks/kaolin

A pytorch implementation of "D4LCN: Learning Depth-Guided Convolutions for Monocular 3D Object Detection" https://github.com/dingmyu/D4LCN

Implementation referencing the paper GANerated Hands for Real-Time 3D Hand Tracking from Monocular RGB https://github.com/Ninebell/GaneratedHandsForReal_TIME

Hierarchical Deep Stereo Matching on High Resolution Images, CVPR 2019. https://github.com/gengshan-y/high-res-stereo

PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization (single-image 3D digitization of clothed humans) https://github.com/shunsukesaito/PIFu

Learning View Priors for Single-view 3D Reconstruction https://github.com/hiroharu-kato/view_prior_learning

The official implementation of the ICCV 2019 paper "GraphX-convolution for point cloud deformation in 2D-to-3D conversion". https://github.com/justanhduc/graphx-conv

Official pytorch implementation of "Indoor Depth Completion with Boundary Consistency and Self-Attention. Huang et al. RLQ@ICCV 2019." https://arxiv.org/abs/1908.08344 https://github.com/patrickwu2/Depth-Completion

MVSNet: Depth Inference for Unstructured Multi-view Stereo. https://github.com/xy-guo/MVSNet_pytorch

Neural network code for Deep Blending for Free-Viewpoint Image-Based Rendering (SIGGRAPH Asia 2018) https://github.com/Phog/DeepBlending

Extreme View Synthesis https://github.com/NVlabs/extreme-view-synth

Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer (NeurIPS 2019) https://github.com/nv-tlabs/DIB-R

CNN-SVO: Improving the Mapping in Semi-Direct Visual Odometry Using Single-Image Depth Prediction https://github.com/yan99033/CNN-SVO

TriDepth: Triangular Patch-based Deep Depth Prediction [Kaneko+, ICCVW2019(oral)] https://github.com/syinari0123/tridepth

From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation https://github.com/cogaplex-bts/bts

Geometry meets semantics for semi-supervised monocular depth estimation - ACCV 2018 https://github.com/CVLAB-Unibo/Semantic-Mono-Depth

PyTorch implementation for LayoutNet v2 in the paper: "3D Manhattan Room Layout Reconstruction from a Single 360 Image" https://github.com/zouchuhang/LayoutNetv2

Real-Time 3D Semantic Reconstruction from 2D data https://github.com/MIT-SPARK/Kimera-Semantics

This repo includes the source code of the fully convolutional depth denoising model presented in https://arxiv.org/pdf/1909.01193.pdf https://github.com/VCL3D/DeepDepthDenoising

This is the project page of the paper "Flow-Motion and Depth Network for Monocular Stereo and Beyond'' https://github.com/HKUST-Aerial-Robotics/Flow-Motion-Depth

A curated list of resources for 3D reconstruction with neural networks https://github.com/natowi/3D-Reconstruction-with-Neural-Network

Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video (NeurIPS 2019) https://github.com/JiawangBian/SC-SfMLearner-Release

A research guide to depth estimation with deep learning https://pan.baidu.com/s/1RhORsmInOk1ZEmOKuUeybw

《Do As I Do: Transferring Human Motion and Appearance between Monocular Videos with Spatial and Temporal Constraints》 https://www.arxiv-vanity.com/papers/2001.02606/

(PyTorch) Synthetic-to-Realistic domain translation for depth estimation https://github.com/lyndonzheng/Synthetic2Realistic

Assembling IKEA furniture with reinforcement learning: an IKEA furniture assembly simulation environment for benchmarking long-horizon manipulation tasks, with 80+ furniture models, multiple robots, multiple camera viewpoints, and OpenAI Gym support (a minimal usage sketch follows) https://github.com/clvrai/furniture
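
A minimal sketch of how a Gym-style furniture-assembly environment like this one might be driven with random actions; the environment ID below is a placeholder assumption, not one confirmed from the clvrai/furniture repository, so check its README for the actual registered IDs and installation steps.

```python
# Hypothetical usage sketch for a Gym-registered furniture-assembly environment.
# "IKEASawyerDense-v0" is a made-up ID used only for illustration.
import gym

env = gym.make("IKEASawyerDense-v0")            # assumed env ID, see the repo README
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()          # random policy as a stand-in for RL
    obs, reward, done, info = env.step(action)  # classic Gym 4-tuple step API
    if done:
        obs = env.reset()
env.close()
```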

A list of papers on 3D deep learning https://github.com/pointcloudlearning/3D-Deep-Learning-Paper-List

'Holistic 3D Reconstruction - A list of papers and resources for holistic 3D reconstruction' https://github.com/holistic-3d/awesome-holistic-3d

'Facebook360 Depth Estimation Pipeline (facebook360_dep) - a computational imaging software pipeline supporting on-line marker-less calibration, high-quality reconstruction, and real-time streaming and rendering of 6DoF content.' https://github.com/facebook/facebook360_dep

ICCV 3D Vision Tutorial https://holistic-3d.github.io/iccv19/ Computer vision actually began as the study of 3D reconstruction: nearly all of the field's founders worked on recovering 3D structure from 2D points, lines, and planes, with those geometric primitives motivated by human perception and cognition, so recognition and reconstruction were originally unified within the 3D reconstruction problem. Later the two communities drifted apart. The 3D side downplayed recognition, assuming local features were enough, which is why almost all current 3D reconstruction systems (SfM, vSLAM) are fragile and non-robust; the recognition side downplayed 3D, focusing on 2D image recognition and assuming 3D could simply be learned from data without geometry. That split has lasted nearly twenty years, and now both sides realize they need each other, otherwise truly deploying computer vision in the real world remains a dream.

Unsupervised monocular depth estimation in PyTorch https://github.com/ClubAI/MonoDepth-PyTorch

A collection of segmentation methods working on depth images https://github.com/ethz-asl/depth_segmentation

Orthographic feature transform for monocular 3D object detection https://github.com/tom-roddick/oft

Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE

SharpNet: Fast and Accurate Recovery of Occluding Contours in Monocular Depth Estimation https://github.com/MichaelRamamonjisoa/SharpNet

What Do Single-view 3D Reconstruction Networks Learn? https://github.com/lmb-freiburg/what3d

UprightNet: Geometry-Aware Camera Orientation Estimation from Single Images https://arxiv.org/abs/1908.07070

A customized implementation of the paper "StereoNet: guided hierarchical refinement for real-time edge-aware depth prediction" https://github.com/zhixuanli/StereoNet

Tensorflow implementation of Semi-Supervised Monocular Depth Estimation with Left-Right Consistency Using Deep Neural Network https://github.com/a-jahani/semiDepth

Code for the CVPR 2019 paper "Learning Single-Image Depth from Videos using Quality Assessment Networks" https://github.com/princeton-vl/YouTube3D

A PyTorch implementation of 《Deeper Depth Prediction with Fully Convolutional Residual Networks》 https://github.com/XPFly1989/FCRN

Single Image Depth Estimation Trained via Depth from Defocus Cues https://github.com/shirgur/UnsupervisedDepthFromFocus

SceneGraphNet: Neural Message Passing for 3D Indoor Scene Augmentation https://github.com/yzhou359/3DIndoor-SceneGraphNet

MultiDepth: Single-Image Depth Estimation via Multi-Task Regression and Classification https://github.com/lukasliebel/MultiDepth https://arxiv.org/abs/1907.11111

Neural RGB→D Sensing: Per-pixel depth and its uncertainty estimation from a monocular RGB video https://github.com/NVlabs/neuralrgbd

A list of depth estimation papers https://github.com/scott89/awesome-depth

Tensorflow implementation of DeepV2D: Video to Depth with Differentiable Structure from Motion. https://github.com/princeton-vl/DeepV2D

How do neural networks see depth in single images? https://arxiv.org/abs/1905.07005

Turning 2D images into 3D effects based on estimated depth maps https://github.com/ialhashim/DenseDepth/

CVPR 2019 Translate-to-Recognize Networks for RGB-D Scene Recognition https://github.com/ownstyledu/Translate-to-Recognize-Networks

DORN: Deep Ordinal Regression Network for Monocular Depth Estimation https://github.com/hufu6371/DORN

Moving Camera, Moving People: A Deep Learning Approach to Depth Prediction https://arxiv.org/abs/1904.11111 https://ai.googleblog.com/2019/05/moving-camera-moving-people-deep.html

Code for "PackNet-SfM: 3D Packing for Self-Supervised Monocular Depth Estimation" https://github.com/ToyotaResearchInstitute/packnet-sfm

A C++ tool for 3D reconstruction from parallel 2D sections https://github.com/paulknysh/shaper

Transformable Bottleneck Networks https://github.com/kyleolsz/TB-Networks

Recovering 3D Planes from a Single Image via Convolutional Neural Networks https://github.com/fuy34/planerecover

Geometry-Aware Symmetric Domain Adaptation for Monocular Depth Estimation, CVPR 2019 https://github.com/sshan-zhao/GASDA

Direct Sparse Odometry with CNN Depth Prediction https://github.com/muskie82/CNN-DSO

Annotation webapp (javascript) used in the research project Scan2CAD: Learning CAD Model Alignment in RGB-D Scans https://www.scan2cad.org https://github.com/skanti/Scan2CAD-Annotation-Webapp https://github.com/skanti/Scan2CAD

Monocular depth estimation from a single image https://github.com/nianticlabs/monodepth2

ICRA 2019 "FastDepth: Fast Monocular Depth Estimation on Embedded Systems" https://github.com/dwofk/fast-depth

Torch implementation for CVPR 18 paper: "LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image" https://github.com/zouchuhang/LayoutNet

Stereo R-CNN based 3D Object Detection for Autonomous Driving https://github.com/HKUST-Aerial-Robotics/Stereo-RCNN

Single-image depth estimation based on a Feature Pyramid Network https://github.com/xanderchf/MonoDepth-FPN-PyTorch

Learning blind video temporal consistency https://github.com/phoenix104104/fast_blind_video_consistency https://arxiv.org/abs/1808.00449

Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image https://github.com/fangchangma/sparse-to-dense.pytorch

Fast scene understanding (segmentation / instance segmentation / single-image depth estimation), Torch7/Lua https://github.com/DavyNeven/fastSceneUnderstanding

A resource list for depth image quality enhancement https://github.com/mdcnn/Depth-Image-Quality-Enhancement

Code for RenderNet: A deep convolutional network for differentiable rendering from 3D shapes https://github.com/thunguyenphuoc/RenderNet

LabelFusion: A Pipeline for Generating Ground Truth Labels for Real RGBD Data of Cluttered Scenes http://labelfusion.csail.mit.edu https://github.com/RobotLocomotion/LabelFusion

[ECCV'18] 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation https://github.com/angeladai/3DMV https://arxiv.org/abs/1803.10409

Code Repo for "Single View Stereo Matching" https://github.com/lawy623/SVS

DeepMVS: Learning Multi-View Stereopsis https://github.com/phuang17/DeepMVS

Layer-structured 3D Scene Inference via View Synthesis https://github.com/google/layered-scene-inference

Deep Depth Completion of a Single RGB-D Image https://github.com/yindaz/DeepCompletionRelease

Scene Understanding and Modeling challenge (360° RGB-D holistic indoor 3D scene understanding) https://github.com/facebookresearch/sumo-challenge

Implementation of ICRA 2019 paper: Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation https://github.com/hlzz/DeepMatchVO

Single-Image Piece-wise Planar 3D Reconstruction via Associative Embedding https://github.com/svip-lab/PlanarReconstruction

High Quality Monocular Depth Estimation via Transfer Learning https://arxiv.org/abs/1812.11941 https://github.com/ialhashim/DenseDepth

Real-Time Joint Semantic Segmentation and Depth Estimation Using Asymmetric Annotations https://arxiv.org/abs/1809.04766

[ECCV 2018] DF-Net: Unsupervised Joint Learning of Depth and Flow using Cross-Task Consistency https://github.com/vt-vl-lab/DF-Net

Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving https://github.com/mileyan/pseudo_lidar
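
To clarify the pseudo-LiDAR idea named in the title above (this is not the paper's code): an estimated depth map is back-projected through the pinhole camera model into a 3D point cloud that LiDAR-based detectors can then consume. The intrinsics and depth values below are made-up example numbers.

```python
# Minimal pseudo-LiDAR sketch: depth map -> 3D point cloud in camera coordinates.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) metric depth map. Returns an (H*W, 3) array of XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx                       # pinhole back-projection
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Dummy example: a flat scene 10 m away, with illustrative (not dataset-specific) intrinsics.
depth = np.full((375, 1242), 10.0, dtype=np.float32)
points = depth_to_point_cloud(depth, fx=721.5, fy=721.5, cx=621.0, cy=187.5)
print(points.shape)  # (465750, 3)
```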

Neural RGB->D Sensing: Depth and Uncertainty from a Video Camera https://arxiv.org/abs/1901.02571

This repository provides official models from the paper Real-Time Joint Semantic Segmentation and Depth Estimation Using Asymmetric Annotations https://github.com/DrSleep/multi-task-refinenet

PyTorch implementation of Deep Ordinal Regression Network for Monocular Depth Estimation https://github.com/dontLoveBugs/DORN_pytorch

The release code and dataset of CNN-MonoFusion for ismar2018 https://github.com/NetEaseAI-CVLab/CNN-MonoFusion

No 3D modeling needed: trained only on static photos, the scene is represented as a continuous 5D field by a (non-convolutional) deep network and rendered via ray marching (a minimal sketch of the idea follows below). Paper: 《NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis》 https://github.com/yenchenlin/nerf-pytorch https://github.com/krrish94/nerf-pytorch
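
A rough, self-contained sketch of the NeRF recipe described above (positional encoding + MLP + ray marching); it is not the code of either linked repository, and the network size, sample counts, and ray bounds are illustrative assumptions.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map each coordinate to [x, sin(2^k x), cos(2^k x)] features."""
    feats = [x]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * x))
        feats.append(torch.cos((2.0 ** k) * x))
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    """A deliberately small MLP standing in for the full NeRF network."""
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        self.num_freqs = num_freqs
        enc_dim = 3 * (1 + 2 * num_freqs)              # encoded xyz / view-direction size
        self.trunk = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)         # volume density
        self.rgb_head = nn.Sequential(
            nn.Linear(hidden + enc_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),   # color in [0, 1]
        )

    def forward(self, xyz, viewdir):
        h = self.trunk(positional_encoding(xyz, self.num_freqs))
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.rgb_head(torch.cat([h, positional_encoding(viewdir, self.num_freqs)], dim=-1))
        return rgb, sigma

def render_rays(model, origins, dirs, near=2.0, far=6.0, n_samples=64):
    """Ray marching: sample points along each ray and alpha-composite their colors."""
    t = torch.linspace(near, far, n_samples)                           # (S,)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]    # (R, S, 3)
    rgb, sigma = model(pts, dirs[:, None, :].expand_as(pts))           # (R, S, 3), (R, S, 1)
    delta = torch.cat([t[1:] - t[:-1], t[-1:] - t[-2:-1]])[None, :, None]
    alpha = 1.0 - torch.exp(-sigma * delta)                            # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alpha * trans                                            # contribution of each sample
    return (weights * rgb).sum(dim=1)                                  # composited pixel colors (R, 3)

# Usage on a dummy batch of rays, just to show the shapes involved.
model = TinyNeRF()
origins = torch.zeros(4, 3)
dirs = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
colors = render_rays(model, origins, dirs)   # (4, 3) RGB values in [0, 1]
```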

3D effects even for old photos: context-aware layered depth inpainting for 3D photography https://github.com/vt-vl-lab/3d-photo-inpainting

A Chrome extension that applies a 3D photo effect to images on Instagram https://github.com/cyrildiagne/instagram-3d-photo

Monocular 360° depth estimation. Paper: 《BiFuse: Monocular 360° Depth Estimation via Bi-Projection Fusion》 https://github.com/Yeh-yu-hsuan/BiFuse

A collection of depth estimation papers and code https://github.com/sxfduter/monocular-depth-estimation

A complete implementation of the PatchMatch Stereo binocular stereo matching algorithm https://github.com/ethan-li-coding/PatchMatchStereo

Code to extract stereo frame pairs from 3D videos, as used in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, arXiv:1907.01341" https://github.com/lasinger/3DVideos2Stereo

DDAD - Dense Depth for Autonomous Driving https://github.com/TRI-ML/DDAD

Towards Better Generalization: Joint Depth-Pose Learning without PoseNet https://github.com/B1ueber2y/TrianFlow

TRI-ML Monocular Depth Estimation Repository https://github.com/TRI-ML/packnet-sfm

SynSin: End-to-end View Synthesis from a Single Image https://github.com/facebookresearch/synsin

Task-Aware Monocular Depth Estimation for 3D Object Detection, AAAI2020 https://github.com/WXinlong/ForeSeE

Learning a Neural 3D Texture Space from 2D Exemplars https://github.com/henzler/neuraltexture

Code for the CVPR paper "CAM-Convs: Camera-Aware Multi-Scale Convolutions for Single-View Depth" https://github.com/jmfacil/camconvs

Towards Good Practice for CNN Based Monocular Depth Estimation https://github.com/zenithfang/supervised_dispnet

Addressing the difficulty of training unsupervised monocular depth estimation in complex indoor scenes https://jwbian.net/unsupervised-indoor-depth-cn

Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields https://github.com/dulucas/Displacement_Field

[CVPR 2020] Normal Assisted Stereo Depth Estimation https://github.com/udaykusupati/Normal-Assisted-Stereo

Deformable Kernel Network for Joint Image Filtering https://github.com/jun0kim/DKN

Distilled semantics for comprehensive scene understanding from videos https://github.com/CVLAB-Unibo/omeganet

Monocular depth estimation from a single image https://github.com/minghanz/DepthC3D

[ECCV'20] Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation https://github.com/svip-lab/Indoor-SfMLearner

Structure-Guided Ranking Loss for Single Image Depth Prediction https://github.com/KexianHust/Structure-Guided-Ranking-Loss

Dense, flicker-free, geometrically consistent depth from monocular video, e.g. hand-held cell-phone video https://github.com/facebookresearch/consistent_depth

Semantic view synthesis: generating photorealistic images from semantic label maps, with support for rendering novel viewpoints. The core idea is to first build the visible surface, then infer a 3D scene representation, and finally complete the full 3D scene model. https://hhsinping.github.io/svs/link/paper.pdf https://hhsinping.github.io/svs/index.html

[ECCV 2020] Single image depth prediction allows us to rectify planar surfaces in images and extract view-invariant local features for better feature matching https://github.com/nianticlabs/rectified-features

Learning Stereo from Single Images https://github.com/nianticlabs/stereo-from-mono/

Generative View Synthesis: From Single-view Semantics to Novel-view Images https://arxiv.org/abs/2008.09106

The latest update of AI Habitat, Facebook's flexible, high-performance 3D simulation platform for embodied-AI research: interactive objects, realistic physics, improved rendering, seamless transfer from virtual to physical environments, a more flexible user interface, and support for running simulations in the browser https://github.com/facebookresearch/habitat-lab

Unsupervised Monocular Depth Learning in Dynamic Scenes https://github.com/google-research/google-research/tree/master/depth_and_motion_learning

AD-Census - an implementation of the AD-Census stereo matching algorithm: efficient, high-quality, and well suited to hardware acceleration; the stereo algorithm used in the Intel RealSense D400 module https://github.com/ethan-li-coding/AD-Census

Learning to Predict the 3D Layout of a Scene https://arxiv.org/abs/2011.09977

Another work that turns phone-captured video into free-viewpoint 3D video. Paper: 《Deformable Neural Radiance Fields》 https://nerfies.github.io/

HR-Depth: High Resolution Self-Supervised Monocular Depth Estimation https://arxiv.org/abs/2012.07356

3D Bird Reconstruction: a Dataset, Model, and Shape Recovery from a Single View https://github.com/marcbadger/avian-mesh

3D Scene Reconstruction from a Single Viewport https://github.com/DLR-RM/SingleViewReconstruction

inference Code for DELTAS: Depth Estimation by Learning Triangulation And densification of Sparse point https://github.com/magicleap/DELTAS

Pytorch implementation of the ECCV 2020 paper: AtlantaNet: Inferring the 3D Indoor Layout from a Single 360 Image beyond the Manhattan World Assumption https://github.com/crs4/AtlantaNet

Improving Monocular Depth Estimation by Leveraging Structural Awareness and Complementary Datasets https://github.com/ansj11/SANet

Self-supervised Single-view 3D Reconstruction https://github.com/NVlabs/UMR

Non-Local Spatial Propagation Network for Depth Completion https://github.com/zzangjinsun/NLSPN_ECCV20

[AAAI 2021] HR-Depth : High Resolution Self-Supervised Depth Estimation https://github.com/shawLyu/HR-Depth

[ECCV 2020] Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance https://github.com/ifnspaml/SGDepth

Open source repository for the code accompanying the paper 'Non-Rigid Neural Radiance Fields Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video'. https://github.com/facebookresearch/nonrigid_nerf

Code for iccv2019 paper "A Neural Network for Detailed Human Depth Estimation from a Single Image" https://github.com/sfu-gruvi-3dv/deep_human

This repository contains the code for the paper "Deep Mesh Reconstruction from Single RGB Images ". https://github.com/jnypan/TMNet

Code for 3D Reconstruction of Novel Object Shapes from Single Images paper https://github.com/rehg-lab/3DShapeGen

Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency (AAAI 2021) https://github.com/SeokjuLee/Insta-DM

3D reconstruction from a single RGB image https://github.com/nihalsid/single-view-3d-reconstruction

NeRF-W: synthesizing 3D models from tourists' photos found online, fusing fragments of people's memories into a single clean view, with accuracy further improved over the version originally published in 2020 https://nerf-w.github.io/

The technology behind the 3D "Cinematic Photos" feature in Google Photos https://ai.googleblog.com/2021/02/the-technology-behind-cinematic-photos.html

《Depth from Camera Motion and Object Detection》(CVPR 2021) github.com/griffbr/ODMD

《Transformers Solve the Limited Receptive Field for Monocular Depth Prediction》 github.com/ygjwd12345/TransDepth

《NeX: Real-time View Synthesis with Neural Basis Expansion》(CVPR 2021) github.com/nex-mpi/nex-code

《Where am I looking at? Joint Location and Orientation Estimation by Cross-View Matching》(CVPR 2020) github.com/shiyujiao/cross_view_localization_DSM

《Attention-Aware Feature Aggregation for Real-time Stereo Matching on Edge Devices》(2020) github.com/JiaRenChang/RealtimeStereo

AdelaiDepth: an open-source toolbox for monocular depth prediction github.com/aim-uofa/AdelaiDepth

Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction (2021) github.com/uzh-rpg/rpg_ramnet

The Temporal Opportunist: Self-Supervised Multi-Frame Monocular Depth (CVPR 2021) github.com/nianticlabs/manydepth

《Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation》 github.com/qinenergy/corda

《M4Depth: A motion-based approach for monocular depth estimation on video sequences》(2021) github.com/michael-fonder/M4Depth

《Video Depth Estimation by Fusing Flow-to-Depth Proposals》(2020) github.com/jiaxinxie97/Video-depth-estimation

《Boosting Monocular Depth Estimation Models to High-Resolution via Content-Adaptive Multi-Resolution Merging》(CVPR 2021) github.com/compphoto/BoostingMonocularDepth

Single Image Depth Estimation using Wavelet Decomposition https://www.arxiv-vanity.com/papers/2106.02022

《S2R-DepthNet: Learning a Generalizable Depth-specific Structural Representation》(CVPR 2021) github.com/microsoft/S2R-DepthNet

《Single Image Depth Prediction with Wavelet Decomposition》(CVPR 2021) github.com/nianticlabs/wavelet-monodepth

By drawing on characteristics of the human visual system, Microsoft Research Asia investigates what monocular depth estimation networks actually learn and endows them with strong generalization ability for depth estimation. https://weibo.com/ttarticle/p/show?id=2309404644426801873164

Aerial Single-View Depth Completion with Image-Guided Uncertainty Estimation github.com/VIS4ROB-lab/aerial-depth-completion

《Monocular Depth Estimation Using Laplacian Pyramid-Based Depth Residuals》(2021) github.com/tjqansthd/LapDepth-release

Consistent Depth of Moving Objects in Video https://arxiv.org/abs/2108.01166

MonoDepth to ManyDepth: Self-Supervised Depth Estimation on Monocular Sequences github.com/sally20921/MonoDepth-to-ManyDepth

《StructDepth: Leveraging the structural regularities for self-supervised indoor depth estimation》 github.com/SJTU-ViSYS/StructDepth

Keras example: monocular depth estimation https://keras.io/examples/vision/depth_estimation/

MobileStereoNet: lightweight stereo matching networks based on MobileNetV1 and MobileNetV2 github.com/cogsys-tuebingen/mobilestereonet

Excavating the Potential Capacity of Self-Supervised Monocular Depth Estimation github.com/prstrive/EPCDepth

Estimating Image Depth in the Comics Domain https://arxiv.org/abs/2110.03575

ONNX msg_chn_wacv20 depth completion: depth completion with ONNX in Python github.com/ibaiGorordo/ONNX-msg_chn_wacv20-depth-completion

In 2020, Google and the University of California released NeRF, an AI model that fuses multiple 2D photos of a scene into a 3D model.

Previously, training a NeRF model took at least about five minutes; NVIDIA has now released instant-ngp, which cuts training time down to roughly five seconds while the resulting renderings remain strikingly good. GitHub: github.com/NVlabs/instant-ngp. The codebase also ships with an interactive testbed that helps developers exercise additional training features, tweak the input parameters of individual neurons, and export visualization samples.

Satellite Structure from Motion - A library for solving the satellite structure from motion problem github.com/Kai-46/SatelliteSfM

A large list of papers on monocular 3D detection github.com/BigTeacher-777/Awesome-Monocular-3D-detection

calibration_kit - a collection of common sensor calibration algorithms, covering mono/stereo camera calibration, camera-LiDAR, LiDAR-LiDAR, and LiDAR-IMU calibration github.com/calibtoolkit/calibration_kit

Monocular-Depth-Estimation-Toolbox: a monocular depth estimation toolbox based on MMSegmentation github.com/zhyever/Monocular-Depth-Estimation-Toolbox

A collection of deep-learning 3D reconstruction projects github.com/natowi/3D-Reconstruction-with-Deep-Learning-Methods

camviz: a visualization library for monocular depth estimation results github.com/TRI-ML/camviz

stereodemo: a Python tool for comparing and visualizing the output of various stereo depth estimation algorithms github.com/nburrus/stereodemo

Layered Depth Refinement with Mask Guidance https://arxiv.org/abs/2206.03048

'Trending-in-3D-Vision - An on-going paper list on new trends in 3D vision with deep learning' by Xiaolong GitHub: github.com/dragonlong/Trending-in-3D-Vision

'SL Sensor: an open-source, ROS-based, structured light sensor for high-accuracy 3D scanning' by ETHZ ASL GitHub: github.com/ethz-asl/sl_sensor

[CV]《GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images》J Gao, T Shen, Z Wang, W Chen, K Yin, D Li, O Litany, Z Gojcic, S Fidler [NVIDIA] (2022) https://arxiv.org/abs/2209.11163

'Depth Maps for Stable Diffusion WebUI' by thygate GitHub: github.com/thygate/stable-diffusion-webui-depthmap-script

'ONNX SCDepth Monocular Depth Estimation' by Ibai Gorordo GitHub: github.com/ibaiGorordo/ONNX-SCDepth-Monocular-Depth-Estimation

[CV]《SSDNeRF: Semantic Soft Decomposition of Neural Radiance Fields》S Ranade, C Lassner, K Li, C Haene, S Chen, J Bazin, S Bouaziz [Meta & University of Utah] (2022) https://arxiv.org/abs/2212.03406

'MVTorch - a Pytorch library for multi-view 3D understanding and generation' by Abdullah Hamdi GitHub: github.com/ajhamdi/mvtorch

'ONNX-FastACVNet-Stereo-Depth-Estimation - Python scripts performing stereo depth estimation using the Fast-ACVNet model in ONNX.' Ibai Gorordo GitHub: github.com/ibaiGorordo/ONNX-FastACVNet-Depth-Estimation

'PaddleDepth - a lightweight, easy-to-extend, easy-to-learn, high-performance, and for-fair-comparison toolkit based on PaddlePaddle for depth information augmentation' by PaddlePaddle GitHub: github.com/PaddlePaddle/PaddleDepth