
Model card for mobilenetv3_small_100.lamb_in1k



A MobileNet-v3 image classification model. Trained on ImageNet-1k in timm using the recipe template described below.

Recipe details:

  • A LAMB optimizer based recipe that is similar to ResNet Strikes Back A2 but 50% longer with EMA weight averaging, no CutMix
  • Step (exponential decay w/ staircase) LR schedule with warmup
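The checkpoint was trained with timm's training script, but the core ingredients of such a recipe can be assembled from timm's own factories. A minimal sketch, assuming a recent timm version; the hyper-parameter values below are illustrative placeholders, not the settings used for this checkpoint:

```python
# Illustrative sketch of the recipe ingredients (LAMB + step LR w/ warmup + EMA).
# Hyper-parameter values are placeholders, not the ones used for this checkpoint.
import timm
from timm.optim import create_optimizer_v2
from timm.scheduler import StepLRScheduler
from timm.utils import ModelEmaV2

model = timm.create_model('mobilenetv3_small_100', pretrained=False)

# LAMB optimizer with weight decay
optimizer = create_optimizer_v2(model, opt='lamb', lr=8e-3, weight_decay=0.02)

# step (staircase exponential decay) LR schedule with warmup epochs
scheduler = StepLRScheduler(
    optimizer,
    decay_t=3,           # decay every 3 epochs (placeholder)
    decay_rate=0.97,     # multiply LR by 0.97 at each step (placeholder)
    warmup_t=5,          # warmup epochs (placeholder)
    warmup_lr_init=1e-5,
)

# EMA weight averaging of model parameters
model_ema = ModelEmaV2(model, decay=0.9998)
```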

Model Details

  • Model Type: Image classification / feature backbone
  • Model Stats:
      • Params (M): 2.5
      • GMACs: 0.1
      • Activations (M): 1.4
      • Image size: 224 x 224
  • Papers:
      • Searching for MobileNetV3: https://arxiv.org/abs/1905.02244
  • Dataset: ImageNet-1k
  • Original: https://github.com/huggingface/pytorch-image-models
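The parameter count listed above is easy to verify from the instantiated model; a minimal sketch:

```python
# Count parameters of the instantiated model (should come out around 2.5M)
import timm

model = timm.create_model('mobilenetv3_small_100.lamb_in1k', pretrained=False)
n_params = sum(p.numel() for p in model.parameters())
print(f'{n_params / 1e6:.2f}M parameters')
```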


Model Usage

Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('mobilenetv3_small_100.lamb_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
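Continuing from the block above, the top-5 indices and percentages can be printed directly; mapping indices to human-readable ImageNet labels is omitted here since it needs a separate label file:

```python
# print the top-5 predicted class indices and their probabilities (in %),
# reusing the variables from the block above
for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f'class index {idx.item():>4d}: {prob.item():5.2f}%')
```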


    Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilenetv3_small_100.lamb_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output, e.g.:
    #  torch.Size([1, 16, 112, 112])
    #  torch.Size([1, 16, 56, 56])
    #  torch.Size([1, 24, 28, 28])
    #  torch.Size([1, 48, 14, 14])
    #  torch.Size([1, 576, 7, 7])
    print(o.shape)
```
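When `features_only=True`, timm also exposes metadata about the returned feature maps; a minimal sketch, reusing the `model` created above (the printed values correspond to the shapes listed in the loop comments):

```python
# channel counts and reduction factors of each returned feature map
print(model.feature_info.channels())   # e.g. [16, 16, 24, 48, 576]
print(model.feature_info.reduction())  # e.g. [2, 4, 8, 16, 32]
```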


    Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilenetv3_small_100.lamb_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))
# output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 576, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
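The unpooled feature maps can also be reduced to a single vector by averaging over the spatial dimensions; a minimal sketch, reusing the `model` and `transforms` from above. Note this plain mean pool is not the same vector as the `pre_logits` output, which additionally passes through the model's head layers:

```python
import torch

# simple alternative embedding: average the unpooled feature maps over
# their spatial dimensions -> a (1, 576) tensor
feats = model.forward_features(transforms(img).unsqueeze(0))
embedding = feats.mean(dim=(2, 3))
print(embedding.shape)  # torch.Size([1, 576])
```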



    Model Comparison

    Explore the dataset and runtime metrics of this model in timm model results.

    Citation

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```

```bibtex
@inproceedings{howard2019searching,
  title={Searching for mobilenetv3},
  author={Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and others},
  booktitle={Proceedings of the IEEE/CVF international conference on computer vision},
  pages={1314--1324},
  year={2019}
}
```

    Files & Weights

| Filename | Size |
|---|---|
| model.safetensors | 0.01 GB |
| pytorch_model.bin | 0.01 GB |