Results for "depth-estimation"

67 matches found. Each result lists the owner, the model ID, a truncated description, and the pipeline tag followed by the download count.

Intel

Intel/zoedepth-nyu-kitti

ZoeDepth adapts DPT, a model for relative depth estimation, for so-called metric (also called absolute) depth estimation. This means that th...

📏 depth-estimation 1,950,250
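
The transformers-compatible checkpoints in this list plug straight into the "depth-estimation" pipeline. A minimal sketch, assuming a transformers release with ZoeDepth support; the COCO URL is only a stand-in for your own image:

```python
# Minimal sketch: ZoeDepth via the transformers "depth-estimation" pipeline.
# The sample URL is a placeholder; any RGB image works.
import requests
from PIL import Image
from transformers import pipeline

pipe = pipeline("depth-estimation", model="Intel/zoedepth-nyu-kitti")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

result = pipe(image)
result["depth"].save("depth_vis.png")  # PIL visualization of the depth map
depth = result["predicted_depth"]      # raw tensor; metric scale for ZoeDepth
```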
depth-anything

depth-anything/Depth-Anything-V2-Small-hf

Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...

📏 depth-estimation 1,345,520
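
For finer control than the pipeline gives, the "-hf" checkpoints also load through the Auto classes. A sketch under the same transformers assumption; "example.jpg" is a hypothetical local path:

```python
# Sketch of the lower-level Auto-class API for the "-hf" checkpoints.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForDepthEstimation

ckpt = "depth-anything/Depth-Anything-V2-Small-hf"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForDepthEstimation.from_pretrained(ckpt)

image = Image.open("example.jpg")  # hypothetical input path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Resize the predicted relative depth map back to the input resolution.
depth = torch.nn.functional.interpolate(
    outputs.predicted_depth.unsqueeze(1),
    size=image.size[::-1],  # PIL size is (w, h); interpolate wants (h, w)
    mode="bicubic",
    align_corners=False,
).squeeze()
```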
LiheYoung

LiheYoung/depth-anything-large-hf

Depth Anything leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~62 million images, obtaining state-of-the-art...

📏 depth-estimation 533,713
Intel

Intel/dpt-hybrid-midas

Dense Prediction Transformer (DPT) model trained on 1.4 million images for monocular depth estimation. It was introduced in the paper Vision...

📏 depth-estimation 527,229
depth-anything

depth-anything/DA3METRIC-LARGE

DA3 Metric Large model specialized for metric depth estimation in monocular settings, ideal for applications requiring real-world scale. Can...

📏 depth-estimation 474,573
depth-anything

depth-anything/DA3-GIANT-1.1

DA3 Giant model for multi-view depth estimation, camera pose estimation, and 3D Gaussian estimation. This is the flagship foundation model w...

📏 depth-estimation 327,936
depth-anything

depth-anything/DA3-GIANT

DA3 Giant model for multi-view depth estimation, camera pose estimation, and 3D Gaussian estimation. This is the flagship foundation model w...

📏 depth-estimation 191,135
depth-anything

depth-anything/Depth-Anything-V2-Large-hf

Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...

📏 depth-estimation 145,958
depth-anything

depth-anything/DA3NESTED-GIANT-LARGE-1.1

DA3 Nested model combining the any-view Giant model with the metric Large model for metric-scale visual geometry reconstruction. This is our...

📏 depth-estimation 137,418
depth-anything

depth-anything/Depth-Anything-V2-Large

No description available.

📏 depth-estimation 113,372
depth-anything

depth-anything/DA3NESTED-GIANT-LARGE

DA3 Nested model combining the any-view Giant model with the metric Large model for metric-scale visual geometry reconstruction. This is our...

📏 depth-estimation 103,396
Intel

Intel/dpt-large

Dense Prediction Transformer (DPT) model trained on 1.4 million images for monocular depth estimation. It was introduced in the paper Vision...

📏 depth-estimation 96,762
prs-eth

prs-eth/marigold-depth-v1-0

Developed by: Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, Konrad Schindler. Model type: Generative latent...

📏 depth-estimation 89,022
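
Marigold is a diffusion-based estimator served through diffusers rather than transformers. A minimal sketch, assuming a diffusers release that includes the Marigold pipelines; the fp16 variant and CUDA device are optional, and "scene.jpg" is a hypothetical path:

```python
# Sketch: Marigold depth via diffusers' MarigoldDepthPipeline.
import torch
import diffusers

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-v1-0", variant="fp16", torch_dtype=torch.float16
).to("cuda")

image = diffusers.utils.load_image("scene.jpg")  # hypothetical input path
out = pipe(image)

# Colorize the affine-invariant depth prediction for inspection.
vis = pipe.image_processor.visualize_depth(out.prediction)
vis[0].save("depth_colored.png")
```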
prs-eth

prs-eth/rollingdepth-v1-0

No description available.

📏 depth-estimation 78,642
depth-anything

depth-anything/Depth-Anything-V2-Base-hf

Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...

📏 depth-estimation 68,551
LiheYoung

LiheYoung/depth-anything-small-hf

Depth Anything leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~62 million images, obtaining state-of-the-art...

📏 depth-estimation 60,793
depth-anything

depth-anything/DA3-LARGE-1.1

DA3 Large model for multi-view depth estimation and camera pose estimation. Foundation model with unified depth-ray representation.

📏 depth-estimation 55,432
depth-anything

depth-anything/DA3-SMALL

DA3 Small model for multi-view depth estimation and camera pose estimation. Efficient foundation model with unified depth-ray representation...

📏 depth-estimation 44,146
LiheYoung

LiheYoung/depth_anything_vitl14

No description available.

📏 depth-estimation 43,517
LiheYoung

LiheYoung/depth-anything-base-hf

Depth Anything leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~62 million images, obtaining state-of-the-art...

📏 depth-estimation 43,055
tencent

tencent/DepthCrafter

No description available.

📏 depth-estimation 41,787
depth-anything

depth-anything/DA3-LARGE

DA3 Large model for multi-view depth estimation and camera pose estimation. Foundation model with unified depth-ray representation.

📏 depth-estimation 25,101
depth-anything

depth-anything/DA3MONO-LARGE

DA3 Monocular Large model for high-quality relative monocular depth estimation. Unlike disparity-based models (e.g., Depth Anything 2), it d...

📏 depth-estimation 23,998
apple

apple/DepthPro-hf

DepthPro is a foundation model for zero-shot metric monocular depth estimation, designed to generate high-resolution depth maps with remarka...

📏 depth-estimation 21,892
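
DepthPro is reachable through the same pipeline task once the installed transformers version has DepthPro support. A sketch; "room.jpg" is a hypothetical filename and the fp16/GPU settings are optional:

```python
# Sketch: zero-shot metric depth with DepthPro through the pipeline API.
import torch
from transformers import pipeline

pipe = pipeline(
    "depth-estimation",
    model="apple/DepthPro-hf",
    torch_dtype=torch.float16,  # optional; fp16 assumes a GPU
    device=0,
)
out = pipe("room.jpg")  # hypothetical input file
depth_m = out["predicted_depth"]  # metric depth tensor (meters)
```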
prs-eth

prs-eth/marigold-depth-v1-1

Developed by: Bingxin Ke, Kevin Qu, Tianfu Wang, Nando Metzger, Shengyu Huang, Bo Li, Anton Obukhov, Konrad Schindler. Model type: Generat...

📏 depth-estimation 17,659
depth-anything

depth-anything/Depth-Anything-V2-Metric-Indoor-Large-hf

Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...

📏 depth-estimation 10,949
depth-anything

depth-anything/Depth-Anything-V2-Metric-Outdoor-Large-hf

Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...

📏 depth-estimation 10,914
depth-anything

depth-anything/Depth-Anything-V2-Base

No description available.

📏 depth-estimation 10,615
depth-anything

depth-anything/Depth-Anything-V2-Small

No description available.

📏 depth-estimation 9,939
jingheya

jingheya/lotus-depth-g-v1-0

No description available.

📏 depth-estimation 9,031
depth-anything

depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf

Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...

📏 depth-estimation 4,481
jingheya

jingheya/lotus-depth-g-v2-1-disparity

No description available.

📏 depth-estimation 4,444
Xenova

Xenova/depth-anything-small-hf

No description available.

📏 depth-estimation 3,999
depth-anything

depth-anything/Depth-Anything-V2-Metric-Outdoor-Small-hf

Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...

📏 depth-estimation 3,328
depth-anything

depth-anything/Depth-Anything-V2-Metric-Indoor-Base-hf

Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...

📏 depth-estimation 2,515
vinvino02

vinvino02/glpn-nyu

GLPN uses SegFormer as backbone and adds a lightweight head on top for depth estimation.

📏 depth-estimation 2,426
depth-anything

depth-anything/prompt-depth-anything-vits-hf

Prompt Depth Anything is a high-resolution and accurate metric depth estimation method, with the following highlights: using prompting to ...

📏 depth-estimation 2,329
LiheYoung

LiheYoung/depth_anything_vits14

No description available.

📏 depth-estimation 2,321
Intel

Intel/dpt-swinv2-large-384

This MiDaS 3.1 DPT model uses SwinV2 as backbone, taking a different approach to vision than BEiT: the Swin backbone...

📏 depth-estimation 2,216
onnx-community

onnx-community/depth-anything-v2-small

No description available.

📏 depth-estimation 2,206
prs-eth

prs-eth/marigold-depth-lcm-v1-0

Developed by: Bingxin Ke, Kevin Qu, Tianfu Wang, Nando Metzger, Shengyu Huang, Bo Li, Anton Obukhov, Konrad Schindler. Model type: Generat...

📏 depth-estimation 2,146
apple

apple/DepthPro

No description available.

📏 depth-estimation 1,696
vinvino02

vinvino02/glpn-kitti

GLPN uses SegFormer as backbone and adds a lightweight head on top for depth estimation.

📏 depth-estimation 1,683
Intel

Intel/zoedepth-nyu

ZoeDepth adapts DPT, a model for relative depth estimation, for so-called metric (also called absolute) depth estimation. This means that th...

📏 depth-estimation 1,605
LiheYoung

LiheYoung/depth_anything_vitb14

No description available.

📏 depth-estimation 1,563
xingyang1

xingyang1/Distill-Any-Depth-Large-hf

No description available.

📏 depth-estimation 1,274
xingyang1

xingyang1/Distill-Any-Depth-Small-hf

No description available.

📏 depth-estimation 1,233
Intel

Intel/dpt-swinv2-tiny-256

This MiDaS 3.1 DPT model uses SwinV2 as backbone, taking a different approach to vision than BEiT: the Swin backbone...

📏 depth-estimation 960
depth-anything

depth-anything/Depth-Anything-V2-Metric-Outdoor-Base-hf

Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...

📏 depth-estimation 685
apple

apple/coreml-depth-anything-v2-small

Depth Anything V2 leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~600K synthetic labeled images and ~62 mill...

📏 depth-estimation 589
qualcomm

qualcomm/Midas-V2

Model type: depth estimation. Model stats: model checkpoint: MiDaS-small; input resolution: 256x256; number of parameters: 16...

📏 depth-estimation 495
Intel

Intel/zoedepth-kitti

ZoeDepth adapts DPT, a model for relative depth estimation, for so-called metric (also called absolute) depth estimation. This means that th...

📏 depth-estimation 463
DarthReca

DarthReca/depth-any-canopy-base

The model is Depth-Anything-Base finetuned for canopy height estimation on a filtered set of EarthView. License: Apache 2.0. Finetuned fr...

📏 depth-estimation 449
Intel

Intel/dpt-beit-base-384

This DPT model uses the BEiT model as backbone and adds a neck + head on top for monocular depth estimation.

📏 depth-estimation 443
facebook

facebook/sapiens-depth-0.6b-torchscript

Sapiens is a family of vision transformers pretrained on 300 million human images at 1024 x 1024 image resolution. The pretrained models, wh...

📏 depth-estimation 435
Intel

Intel/dpt-beit-large-512

This DPT model uses the BEiT model as backbone and adds a neck + head on top for monocular depth estimation. The previous relea...

📏 depth-estimation 395
facebook

facebook/dpt-dinov2-small-kitti

DPT (Dense Prediction Transformer) model with DINOv2 backbone as proposed in DINOv2: Learning Robust Visual Features without Supervision by ...

📏 depth-estimation 392
apple

apple/coreml-depth-anything-small

Depth Anything leverages the DPT architecture with a DINOv2 backbone. The model is trained on ~62 million images, obtaining state-of-the-art...

📏 depth-estimation 307
jingheya

jingheya/lotus-depth-d-v1-0

No description available.

📏 depth-estimation 254
DarthReca

DarthReca/depth-any-canopy-small

The model is Depth-Anything-Small finetuned for canopy height estimation on a filtered set of EarthView. License: Apache 2.0. Finetuned f...

📏 depth-estimation 232
facebook

facebook/sapiens-depth-2b-torchscript

Sapiens is a family of vision transformers pretrained on 300 million human images at 1024 x 1024 image resolution. The pretrained models, wh...

📏 depth-estimation 227
hf-tiny-model-private

hf-tiny-model-private/tiny-random-GLPNForDepthEstimation

No description available.

📏 depth-estimation 192
Acly

Acly/Depth-Anything-V2-GGUF

Depth-Anything is a model for monocular depth estimation. The weights in this repository are converted for lightweight inference on consumer...

📏 depth-estimation 192
onnx-community

onnx-community/depth-anything-v2-base

No description available.

📏 depth-estimation 185
jingheya

jingheya/lotus-depth-g-v2-0-disparity

No description available.

📏 depth-estimation 159
GonzaloMG

GonzaloMG/marigold-e2e-ft-depth

No description available.

📏 depth-estimation 153
Xenova

Xenova/dpt-hybrid-midas

No description available.

📏 depth-estimation 146