
Model card for vit_large_patch14_clip_336.laion2b_ft_augreg_inat21. Part of a series of timm fine-tune experiments…
300 · pytorch · image-classification
Model card for convnextv2_nano.fcmae. A ConvNeXt-V2 self-supervised feature representation model. Pretrained…
340 · pytorch · image-feature-extraction
Model card for eva_giant_patch14_224.clip_ft_in1k. An EVA-CLIP image classification model. Pretrained on…
240 · pytorch · image-classification
Model card for convnextv2_pico.fcmae. A ConvNeXt-V2 self-supervised feature representation model. Pretrained…
350 · pytorch · image-feature-extraction
Model card for convnextv2_atto.fcmae. A ConvNeXt-V2 self-supervised feature representation model. Pretrained…
270 · pytorch · image-feature-extraction
Model card for eva_giant_patch14_336.clip_ft_in1k. An EVA-CLIP image classification model. Pretrained on…
290 · pytorch · image-classification
Model card for convnextv2_huge.fcmae. A ConvNeXt-V2 self-supervised feature representation model. Pretrained…
350 · pytorch · image-feature-extraction
CLIP (OpenAI model for timm). Model Details: The CLIP model was developed by researchers at OpenAI to…
360 · pytorch · timm
Model card for convnextv2_tiny.fcmae. A ConvNeXt-V2 self-supervised feature representation model. Pretrained…
410 · pytorch · image-feature-extraction
Model card for convnextv2_large.fcmae. A ConvNeXt-V2 self-supervised feature representation model. Pretrained…
290 · pytorch · image-feature-extraction
Model card for vit_small_patch16_224.dino. A Vision Transformer (ViT) image feature model. Trained with…
270 · pytorch · image-feature-extraction
Model card for samvit_large_patch16.sa1b. A Segment-Anything Vision Transformer (SAM ViT) image feature model…
240 · pytorch · image-feature-extraction
Model card for vit_small_patch8_224.dino. A Vision Transformer (ViT) image feature model. Trained with…
250 · pytorch · image-feature-extraction
Model card for convnextv2_base.fcmae. A ConvNeXt-V2 self-supervised feature representation model. Pretrained…
300 · pytorch · image-feature-extraction
Model card for eva02_large_patch14_clip_336.merged2b_ft_inat21. Part of a series of timm fine-tune experiments…
320 · pytorch · image-classification
CLIP (OpenAI model for timm). Model Details: The CLIP model was developed by researchers at OpenAI to…
330 · pytorch · timm
Model card for vit_base_patch14_dinov2.lvd142m. A Vision Transformer (ViT) image feature model. Pretrained…
240 · pytorch · image-feature-extraction
Model card for vit_small_patch14_reg4_dinov2.lvd142m. A Vision Transformer (ViT) image feature model with…
260 · pytorch · image-feature-extraction
Introduction: Models in this repo are from https://www.modelscope.cn/models/crazyant/speechparaformer …
340
Model card for vit_giant_patch14_reg4_dinov2.lvd142m. A Vision Transformer (ViT) image feature model with…
310 · pytorch · image-feature-extraction
5,187 items in total.