Results for "depth-estimation"
67 matches found.
Intel/zoedepth-nyu-kitti
ZoeDepth adapts DPT, a model for relative depth estimation, for so-called metric (also called absolute) depth estimation. This means that th...
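The ZoeDepth entry highlights the split between relative and metric (absolute) depth. A common way to compare a relative prediction against metric ground truth is to fit a per-image scale and shift by least squares before computing error metrics. A minimal stdlib sketch, with hypothetical helper and variable names (not from any of the listed model cards):

```python
def fit_scale_shift(pred, gt):
    """Least-squares fit of gt ~ s * pred + t.

    Closed-form solution of the 2x2 normal equations.
    pred and gt are equal-length lists of depth values.
    """
    n = len(pred)
    sx = sum(pred)
    sy = sum(gt)
    sxx = sum(p * p for p in pred)
    sxy = sum(p * g for p, g in zip(pred, gt))
    denom = n * sxx - sx * sx
    s = (n * sxy - sx * sy) / denom
    t = (sy - s * sx) / n
    return s, t

# Relative depths and metric values that happen to satisfy gt = 2 * pred + 1.
pred = [0.1, 0.4, 0.7, 1.0]
gt = [2 * p + 1 for p in pred]
s, t = fit_scale_shift(pred, gt)
print(s, t)  # recovers s close to 2.0 and t close to 1.0
```

Metric models such as ZoeDepth aim to make this alignment step unnecessary by predicting real-world scale directly.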
depth-anything/Depth-Anything-V2-Small-hf
Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...
LiheYoung/depth-anything-large-hf
Depth Anything leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~62 million images, obtaining state-of-the-art...
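Relative-depth models like Depth Anything return maps that are only meaningful up to scale, so a typical post-processing step is min-max normalization to an 8-bit grayscale image for visualization. A minimal stdlib sketch, assuming the depth map has been flattened to a list of floats (the function name is illustrative, not part of any listed model's API):

```python
def depth_to_uint8(depth):
    """Min-max normalize a relative depth map (flat list) to 0..255 ints."""
    lo, hi = min(depth), max(depth)
    if hi == lo:  # constant map: avoid division by zero
        return [0 for _ in depth]
    return [round(255 * (d - lo) / (hi - lo)) for d in depth]

print(depth_to_uint8([0.0, 5.0, 10.0]))  # → [0, 128, 255]
```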
Intel/dpt-hybrid-midas
Dense Prediction Transformer (DPT) model trained on 1.4 million images for monocular depth estimation. It was introduced in the paper Vision...
depth-anything/DA3METRIC-LARGE
DA3 Metric Large model specialized for metric depth estimation in monocular settings, ideal for applications requiring real-world scale. Can...
depth-anything/DA3-GIANT-1.1
DA3 Giant model for multi-view depth estimation, camera pose estimation, and 3D Gaussian estimation. This is the flagship foundation model w...
depth-anything/DA3-GIANT
DA3 Giant model for multi-view depth estimation, camera pose estimation, and 3D Gaussian estimation. This is the flagship foundation model w...
depth-anything/Depth-Anything-V2-Large-hf
Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...
depth-anything/DA3NESTED-GIANT-LARGE-1.1
DA3 Nested model combining the any-view Giant model with the metric Large model for metric-scale visual geometry reconstruction. This is our...
depth-anything/Depth-Anything-V2-Large
No description available.
depth-anything/DA3NESTED-GIANT-LARGE
DA3 Nested model combining the any-view Giant model with the metric Large model for metric-scale visual geometry reconstruction. This is our...
Intel/dpt-large
Dense Prediction Transformer (DPT) model trained on 1.4 million images for monocular depth estimation. It was introduced in the paper Vision...
prs-eth/marigold-depth-v1-0
Developed by: Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, Konrad Schindler. - Model type: Generative latent...
prs-eth/rollingdepth-v1-0
No description available.
depth-anything/Depth-Anything-V2-Base-hf
Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...
LiheYoung/depth-anything-small-hf
Depth Anything leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~62 million images, obtaining state-of-the-art...
depth-anything/DA3-LARGE-1.1
DA3 Large model for multi-view depth estimation and camera pose estimation. Foundation model with unified depth-ray representation. | Proper...
depth-anything/DA3-SMALL
DA3 Small model for multi-view depth estimation and camera pose estimation. Efficient foundation model with unified depth-ray representation...
LiheYoung/depth_anything_vitl14
No description available.
LiheYoung/depth-anything-base-hf
Depth Anything leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~62 million images, obtaining state-of-the-art...
tencent/DepthCrafter
No description available.
depth-anything/DA3-LARGE
DA3 Large model for multi-view depth estimation and camera pose estimation. Foundation model with unified depth-ray representation. | Proper...
depth-anything/DA3MONO-LARGE
DA3 Monocular Large model for high-quality relative monocular depth estimation. Unlike disparity-based models (e.g., Depth Anything 2), it d...
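Several entries contrast disparity-based and depth-based models. For a calibrated stereo rig the two quantities are related by an inverse: depth = focal_length × baseline / disparity. A stdlib sketch with hypothetical rig parameters (the numbers below are made up for illustration):

```python
def disparity_to_depth(disparity, focal_px, baseline_m):
    """Stereo relation: depth = focal * baseline / disparity.

    disparity in pixels, focal length in pixels, baseline in meters;
    returns depth in meters, or None where disparity is zero.
    """
    return [
        focal_px * baseline_m / d if d > 0 else None
        for d in disparity
    ]

# Hypothetical rig: 400 px focal length, 0.25 m baseline.
print(disparity_to_depth([100.0, 50.0, 0.0], 400.0, 0.25))  # → [1.0, 2.0, None]
```

Monocular relative-depth models that predict disparity inherit this inverse relationship, which is why near objects dominate their output range; depth-space models like DA3 Mono avoid that bias.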
apple/DepthPro-hf
DepthPro is a foundation model for zero-shot metric monocular depth estimation, designed to generate high-resolution depth maps with remarka...
prs-eth/marigold-depth-v1-1
Developed by: Bingxin Ke, Kevin Qu, Tianfu Wang, Nando Metzger, Shengyu Huang, Bo Li, Anton Obukhov, Konrad Schindler. - Model type: Generat...
depth-anything/Depth-Anything-V2-Metric-Indoor-Large-hf
Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...
depth-anything/Depth-Anything-V2-Metric-Outdoor-Large-hf
Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...
depth-anything/Depth-Anything-V2-Base
No description available.
depth-anything/Depth-Anything-V2-Small
No description available.
jingheya/lotus-depth-g-v1-0
No description available.
depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf
Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...
jingheya/lotus-depth-g-v2-1-disparity
No description available.
Xenova/depth-anything-small-hf
No description available.
depth-anything/Depth-Anything-V2-Metric-Outdoor-Small-hf
Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...
depth-anything/Depth-Anything-V2-Metric-Indoor-Base-hf
Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...
vinvino02/glpn-nyu
GLPN uses SegFormer as backbone and adds a lightweight head on top for depth estimation.
depth-anything/prompt-depth-anything-vits-hf
Prompt Depth Anything is a high-resolution and accurate metric depth estimation method, with the following highlights: - using prompting to ...
LiheYoung/depth_anything_vits14
No description available.
Intel/dpt-swinv2-large-384
This MiDaS 3.1 DPT model uses the SwinV2 model as backbone and takes a different approach to vision than BEiT, in that the Swin backbone...
onnx-community/depth-anything-v2-small
No description available.
prs-eth/marigold-depth-lcm-v1-0
Developed by: Bingxin Ke, Kevin Qu, Tianfu Wang, Nando Metzger, Shengyu Huang, Bo Li, Anton Obukhov, Konrad Schindler. - Model type: Generat...
apple/DepthPro
No description available.
vinvino02/glpn-kitti
GLPN uses SegFormer as backbone and adds a lightweight head on top for depth estimation.
Intel/zoedepth-nyu
ZoeDepth adapts DPT, a model for relative depth estimation, for so-called metric (also called absolute) depth estimation. This means that th...
LiheYoung/depth_anything_vitb14
No description available.
xingyang1/Distill-Any-Depth-Large-hf
No description available.
xingyang1/Distill-Any-Depth-Small-hf
No description available.
Intel/dpt-swinv2-tiny-256
This MiDaS 3.1 DPT model uses the SwinV2 model as backbone and takes a different approach to vision than BEiT, in that the Swin backbone...
depth-anything/Depth-Anything-V2-Metric-Outdoor-Base-hf
Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...
apple/coreml-depth-anything-v2-small
Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...
qualcomm/Midas-V2
Model type: depth estimation. Model stats: - Model checkpoint: MiDaS-small - Input resolution: 256x256 - Number of parameters: 16...
Intel/zoedepth-kitti
ZoeDepth adapts DPT, a model for relative depth estimation, for so-called metric (also called absolute) depth estimation. This means that th...
DarthReca/depth-any-canopy-base
The model is Depth-Anything-Base finetuned for canopy height estimation on a filtered set of EarthView. - License: Apache 2.0 - Finetuned fr...
Intel/dpt-beit-base-384
This DPT model uses the BEiT model as backbone and adds a neck + head on top for monocular depth estimation.
facebook/sapiens-depth-0.6b-torchscript
Sapiens is a family of vision transformers pretrained on 300 million human images at 1024 x 1024 image resolution. The pretrained models, wh...
Intel/dpt-beit-large-512
This DPT model uses the BEiT model as backbone and adds a neck + head on top for monocular depth estimation. The previous relea...
facebook/dpt-dinov2-small-kitti
DPT (Dense Prediction Transformer) model with DINOv2 backbone as proposed in DINOv2: Learning Robust Visual Features without Supervision by ...
apple/coreml-depth-anything-small
Depth Anything leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~62 million images, obtaining state-of-the-art...
jingheya/lotus-depth-d-v1-0
No description available.
DarthReca/depth-any-canopy-small
The model is Depth-Anything-Small finetuned for canopy height estimation on a filtered set of EarthView. - License: Apache 2.0 - Finetuned f...
facebook/sapiens-depth-2b-torchscript
Sapiens is a family of vision transformers pretrained on 300 million human images at 1024 x 1024 image resolution. The pretrained models, wh...
hf-tiny-model-private/tiny-random-GLPNForDepthEstimation
No description available.
Acly/Depth-Anything-V2-GGUF
Depth-Anything is a model for monocular depth estimation. The weights in this repository are converted for lightweight inference on consumer...
onnx-community/depth-anything-v2-base
No description available.
jingheya/lotus-depth-g-v2-0-disparity
No description available.
GonzaloMG/marigold-e2e-ft-depth
No description available.
Xenova/dpt-hybrid-midas
No description available.