This model is trained on 180G of data; we recommend using this one over the original version. Example code: from modelscope.pipelines import pipeline; from modelscope.utils.constant import Tasks; pipeline…
250 · pytorch
Please use ElectraForPreTraining for the discriminator and ElectraForMaskedLM for the generator if you are re-training these models.
220 · pytorch
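The note above names the two loading classes for these ELECTRA checkpoints. Below is a minimal sketch of how they pair with discriminator and generator weights, assuming the Hugging Face transformers library; the model IDs are illustrative assumptions, not taken from this listing:

    # Minimal sketch: loading ELECTRA discriminator/generator checkpoints.
    # The model IDs below are assumptions; substitute this entry's actual IDs.
    from transformers import ElectraForPreTraining, ElectraForMaskedLM, ElectraTokenizer

    disc_id = "hfl/chinese-electra-180g-base-discriminator"  # assumed ID
    gen_id = "hfl/chinese-electra-180g-base-generator"       # assumed ID

    tokenizer = ElectraTokenizer.from_pretrained(disc_id)

    # Discriminator weights carry the replaced-token-detection head.
    discriminator = ElectraForPreTraining.from_pretrained(disc_id)

    # Generator weights carry the masked-LM head instead.
    generator = ElectraForMaskedLM.from_pretrained(gen_id)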
Example code: from modelscope.pipelines import pipeline; from modelscope.utils.constant import Tasks; pipeline…
260 · pytorch · bert
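The example-code snippet above (repeated in several entries below) is cut off after the imports. A sketch of how the pipeline call it begins typically continues, assuming a fill-mask task; the model ID is a placeholder, not taken from this listing:

    # Minimal sketch completing the truncated ModelScope example above.
    from modelscope.pipelines import pipeline
    from modelscope.utils.constant import Tasks

    # Task and model ID are assumptions; use the entry's actual model ID.
    pipeline_ins = pipeline(task=Tasks.fill_mask, model='your-org/your-chinese-bert')
    print(pipeline_ins('生活的真谛是[MASK]。'))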
Example code: from modelscope.pipelines import pipeline; from modelscope.utils.constant import Tasks; pipeline…
240 · pytorch · bert
Please use ElectraForPreTraining for the discriminator and ElectraForMaskedLM for the generator if you are re-training these models.
250 · pytorch
Chinese Pre-Trained XLNet. This project provides an XLNet pre-training model for Chinese, which aims to enrich Chinese natural language processing resources…
300 · pytorch
Example code: from modelscope.pipelines import pipeline; from modelscope.utils.constant import Tasks; pipeline…
240 · pytorch
Example code: from modelscope.pipelines import pipeline; from modelscope.utils.constant import Tasks; pipeline…
270 · pytorch · bert
This model is trained on 180G of data; we recommend using this one over the original version. Chinese ELECTRA…
200 · pytorch
Chinese Pre-Trained XLNet. This project provides an XLNet pre-training model for Chinese, which aims to enrich Chinese natural language processing resources…
220 · pytorch
Example code: from modelscope.pipelines import pipeline; from modelscope.utils.constant import Tasks; pipeline…
260 · pytorch
This is a re-trained 6-layer RoBERTa-wwm-ext model. Chinese BERT with Whole Word Masking. For further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking.
250 · pytorch · bert
This is a re-trained 3-layer RoBERTa-wwm-ext model. Chinese BERT with Whole Word Masking. For further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking.
250 · pytorch · bert
Example code: from modelscope.pipelines import pipeline; from modelscope.utils.constant import Tasks; pipeline…
220 · pytorch
This is a re-trained 3-layer RoBERTa-wwm-ext-large model. Chinese BERT with Whole Word Masking. For further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking.
240 · pytorch · bert
Example code: from modelscope.pipelines import pipeline; from modelscope.utils.constant import Tasks; pipeline…
250 · pytorch
Chinese BERT with Whole Word Masking. For further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking.
450 · pytorch
This model is trained on 180G of data; we recommend using this one over the original version. Chinese ELECTRA…
210 · pytorch
This model is trained on 180G of data; we recommend using this one over the original version. Chinese ELECTRA…
230 · pytorch
Please use the 'Bert'-related functions to load this model! Chinese BERT with Whole Word Masking. For further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking.
260 · pytorch · bert
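Since this entry instructs loading through the plain 'Bert' classes even though the checkpoint is a whole-word-masking variant, here is a minimal sketch with the transformers library; the model ID is an assumed placeholder, not taken from this listing:

    # Minimal sketch: WWM checkpoints load through the standard BERT classes.
    from transformers import BertTokenizer, BertModel

    model_id = "hfl/chinese-bert-wwm-ext"  # assumed ID; use this entry's ID
    tokenizer = BertTokenizer.from_pretrained(model_id)
    model = BertModel.from_pretrained(model_id)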
161,641 projects in total.