Discover the Best AI Models
Search, analyze, and download from our global directory of 55,000+ open-source models.
Model Index (55,939 total)
sentence-transformers/all-MiniLM-L6-v2
A sentence-transformers model that maps sentences and short paragraphs to a 384-dimensional dense vector space, suitable for tasks such as semantic search and clustering.
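Sentence-embedding models such as all-MiniLM-L6-v2 produce dense vectors that are compared with cosine similarity (a general fact about the sentence-transformers family, not taken from the listing above). A pure-Python sketch, with toy 4-dimensional vectors standing in for the model's real output:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up embeddings; a real sentence-transformers model would produce these
# from two input sentences.
emb_a = [0.1, 0.8, 0.3, 0.0]
emb_b = [0.2, 0.7, 0.4, 0.1]
score = cosine_similarity(emb_a, emb_b)  # near 1.0 for semantically close inputs
```

The same comparison is what powers the semantic-search and clustering use cases these models advertise.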
google-bert/bert-base-uncased
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the ...
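The self-supervised objective behind BERT is masked language modeling: hide a fraction of the input tokens and train the model to recover them. A simplified pure-Python sketch of the masking step (real BERT masks about 15% of tokens and also sometimes substitutes random tokens or keeps the original, which is omitted here; the sentence and seed are illustrative):

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=1):
    """Replace ~mask_prob of tokens with [MASK]; return masked seq and targets."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(MASK)
            targets[i] = tok  # the model must predict the original token here
        else:
            masked.append(tok)
    return masked, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, targets = mask_tokens(tokens)
```

Because the labels come from the text itself, no human annotation is needed, which is what "self-supervised" refers to in the blurb above.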
google/electra-base-discriminator
ELECTRA's discriminator, pretrained with the replaced-token-detection objective: it learns to distinguish real input tokens from plausible fakes produced by a small generator.
Falconsai/nsfw_image_detection
The Fine-Tuned Vision Transformer (ViT) is a variant of the transformer encoder architecture, similar to BERT, that has been adapted for ima...
sentence-transformers/all-mpnet-base-v2
A sentence-transformers model that maps sentences and paragraphs to a 768-dimensional dense vector space, suitable for semantic search and clustering.
timm/mobilenetv3_small_100.lamb_in1k
Model type: image classification / feature backbone. Model stats: params 2.5M, GMACs 0.1, activations 1.4M, image size 224 x...
FacebookAI/roberta-large
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on t...
sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
A multilingual sentence-transformers model that maps sentences to a 384-dimensional dense vector space for paraphrase-style similarity across many languages.
Qwen/Qwen2.5-7B-Instruct
An instruction-tuned 7B-parameter causal language model from the Qwen2.5 family.
laion/clap-htsat-fused
No description available.
openai/clip-vit-base-patch32
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was ...
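CLIP scores how well an image matches each candidate caption by taking dot products of L2-normalized image and text embeddings, scaling them, and applying a softmax over the captions. A pure-Python sketch with toy 3-dimensional embeddings (real CLIP embeddings are 512-dimensional and the logit scale is learned; the vectors and the 100.0 here are illustrative):

```python
import math

def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy embeddings standing in for CLIP's image and text encoder outputs.
image_emb = l2_normalize([0.9, 0.1, 0.0])
text_embs = [l2_normalize(v) for v in ([1.0, 0.0, 0.0],    # caption A
                                       [0.0, 1.0, 0.0])]   # caption B

logits = [sum(i * t for i, t in zip(image_emb, te)) for te in text_embs]
probs = softmax([100.0 * lg for lg in logits])  # caption A wins here
```

This zero-shot matching over arbitrary caption sets is what makes CLIP usable as a classifier without task-specific training.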
FacebookAI/xlm-roberta-base
XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. RoBERTa ...
BAAI/bge-m3
A multilingual embedding model supporting dense, sparse (lexical), and multi-vector retrieval in a single model.
amazon/chronos-2
Tags: time series, forecasting, foundation models, pretrained models, safetensors. Paper: https://arxiv.org/abs/2510.15821. Dataset: autogluon/chronosdata...
Bingsu/adetailer
No description available.
pyannote/wespeaker-voxceleb-resnet34-LM
No description available.
pyannote/segmentation-3.0
No description available.
pyannote/speaker-diarization-3.1
A speaker-diarization pipeline from pyannote.audio that segments a recording and answers "who spoke when".
cross-encoder/ms-marco-MiniLM-L6-v2
A cross-encoder trained on the MS MARCO passage-ranking task; it scores query-passage pairs and is typically used to rerank retrieval results.
Qwen/Qwen3-VL-2B-Instruct
No description available.
alana89/TabSTAR
No description available.
colbert-ir/colbertv2.0
ColBERTv2, a late-interaction retrieval model that represents queries and documents as token-level embedding matrices and scores them with a MaxSim operator.
Xenova/paraphrase-multilingual-MiniLM-L12-v2
No description available.
Qwen/Qwen3-0.6B
Qwen3-0.6B has the following features: type: causal language model; training stage: pretraining & post-training; number of parameters: ...
omni-research/Tarsier2-Recap-7b
Base model: Qwen2-VL-7B-Instruct. Training data: Tarsier2-Recap-585K. Model date: Tarsier2-Recap-7b was trained in December 2024. Paper or r...
FacebookAI/roberta-base
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on t...
openai-community/gpt2
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained o...
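GPT-2's self-supervised objective is next-token prediction: at each position the model is trained, via cross-entropy, to assign high probability to the token that actually follows. A pure-Python sketch with a toy 3-token vocabulary (the token ids and logits are made-up stand-ins for model output):

```python
import math

def cross_entropy(logits, target):
    """Negative log-probability of `target` under softmax(logits)."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

tokens = [0, 2, 1]                 # toy token ids for a 3-token sequence
logits_per_position = [
    [0.0, 0.5, 2.0],               # position 0 should predict tokens[1] == 2
    [0.2, 3.0, 0.1],               # position 1 should predict tokens[2] == 1
]
# Causal LM loss: average cross-entropy against the shifted-by-one targets.
loss = sum(cross_entropy(lg, t)
           for lg, t in zip(logits_per_position, tokens[1:])) / len(logits_per_position)
```

As with BERT, the targets come from the raw text itself, so pretraining needs no labels; the difference is that the context is strictly left-to-right.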
Comfy-Org/Wan_2.2_ComfyUI_Repackaged
No description available.
hexgrad/Kokoro-82M
Kokoro is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to large...
Kijai/WanVideo_comfy
Combined and quantized models for WanVideo, originating from here:...
autogluon/chronos-bolt-small
No description available.
distilbert/distilbert-base-uncased
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion, usin...
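The knowledge distillation behind DistilBERT trains the smaller student to match the larger teacher's output distribution, commonly via a temperature-softened KL divergence. A pure-Python sketch with made-up logits (the temperature value and the logits are illustrative; DistilBERT's full objective also combines other loss terms):

```python
import math

def softmax(xs, temperature=1.0):
    m = max(xs)
    exps = [math.exp((x - m) / temperature) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [2.0, 1.0, 0.1]   # made-up teacher (BERT) output
student_logits = [1.5, 1.2, 0.3]   # made-up student (DistilBERT) output
T = 2.0                            # illustrative softening temperature

# Softening with T > 1 exposes the teacher's small probabilities ("dark
# knowledge"); the T*T factor rescales gradients as in Hinton et al.
distill_loss = T * T * kl_divergence(softmax(teacher_logits, T),
                                     softmax(student_logits, T))
```

Minimizing this loss pushes the student's full predicted distribution, not just its top choice, toward the teacher's.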
coqui/XTTS-v2
XTTS-v2, a multilingual text-to-speech model from Coqui that can clone a voice from a short reference audio clip.
Qwen/Qwen3-VL-8B-Instruct
No description available.
facebook/contriever
This model has been trained without supervision following the approach described in Towards Unsupervised Dense Information Retrieval with Co...
Qwen/Qwen2.5-1.5B-Instruct
No description available.
openai/gpt-oss-20b
An open-weight mixture-of-experts language model released by OpenAI.
dima806/fairface_age_image_detection
Detects a person's age group from an image with about 59% accuracy.
meta-llama/Llama-3.1-8B-Instruct
Languages: en, de, fr, it, pt, hi, es, th. Base model: meta-llama/Meta-Llama-3.1-8B. Pipeline tag: text-generation. Tags: facebook, meta, pytorch, l...
apple/DFN5B-CLIP-ViT-H-14-378
Model type: contrastive image-text, zero-shot image classification. Dataset: DFN-5b. Paper: Data Filtering Networks, https://arxiv.org...
Qwen/Qwen2.5-0.5B-Instruct
No description available.
timm/convnextv2_nano.fcmae_ft_in22k_in1k
Model type: image classification / feature backbone. Model stats: params 15.6M, GMACs 2.5, activations 8.4M, image size: trai...
openai/clip-vit-large-patch14
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was ...
Qwen/Qwen2.5-3B-Instruct
No description available.
Qwen/Qwen3-1.7B
Qwen3-1.7B has the following features: type: causal language model; training stage: pretraining & post-training; number of parameters: ...
BAAI/bge-small-en-v1.5
Tags: sentence-transformers, feature-extraction, sentence-similarity, transformers, mteb. Name: bge-small-en-v1.5. Results: task type: Cl...
FacebookAI/xlm-roberta-large
XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. RoBERTa ...
Qwen/Qwen3-8B
Qwen3-8B has the following features: type: causal language model; training stage: pretraining & post-training; number of parameters: 8....
sentence-transformers/paraphrase-multilingual-mpnet-base-v2
No description available.
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5
No description available.