
Technical information

Source URL
https://modelscope.cn/models/owl123/bart-base
License
apache-2.0

Model details

BART (base-sized model)

BART model pre-trained on English language. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in this repository.

Disclaimer: The team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team.

Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.

BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
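For a concrete sense of the generation side, below is a minimal sketch that uses a BART checkpoint already fine-tuned for summarization. Note that facebook/bart-large-cnn is a separate checkpoint from bart-base and is used here purely for illustration; the input article is invented.

from transformers import pipeline

# Load a BART checkpoint fine-tuned on CNN/DailyMail for summarization
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "BART is a transformer encoder-decoder model pre-trained by corrupting "
    "text with a noising function and learning to reconstruct the original "
    "text. It performs well when fine-tuned for generation tasks such as "
    "summarization and translation."
)

# Generate a short summary of the article
print(summarizer(article, max_length=40, min_length=10, do_sample=False))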

Intended uses & limitations

You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the model hub to look for fine-tuned versions on a task that interests you.
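As a rough sketch of text infilling with the raw checkpoint, the example below masks part of a sentence with the <mask> token and lets the model reconstruct it. The prompt is an illustrative example; BartForConditionalGeneration is the generation head used here.

from transformers import BartTokenizer, BartForConditionalGeneration

# Load the base checkpoint together with its generation head
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')

# Sentence with a masked span for the model to fill in
text = "UN Chief says there is no <mask> in Syria"
inputs = tokenizer(text, return_tensors="pt")

# Reconstruct the sentence, filling the masked span
generated_ids = model.generate(inputs["input_ids"], max_length=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))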

How to use

Here is how to use this model in PyTorch:

from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartModel.from_pretrained('facebook/bart-base')

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state
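The resulting last_hidden_states tensor has shape (batch_size, sequence_length, hidden_size); for bart-base the hidden size is 768.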

BibTeX entry and citation info

@article{DBLP:journals/corr/abs-1910-13461,
  author    = {Mike Lewis and
               Yinhan Liu and
               Naman Goyal and
               Marjan Ghazvininejad and
               Abdelrahman Mohamed and
               Omer Levy and
               Veselin Stoyanov and
               Luke Zettlemoyer},
  title     = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
               Generation, Translation, and Comprehension},
  journal   = {CoRR},
  volume    = {abs/1910.13461},
  year      = {2019},
  url       = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint    = {1910.13461},
  timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
