WizardLM-7B-V1.0

Anonymous user · July 31, 2024

Technical Information

Open-source repository
https://modelscope.cn/models/AI-ModelScope/WizardLM-7B-V1.0

Details

The WizardLM delta weights.
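Because these are delta weights rather than full model weights, each tensor must be added element-wise to the matching tensor of the original LLaMA-7B base model before the model can be used. A minimal sketch of that idea (plain Python dicts and lists stand in for real state dicts and tensors, and the helper name `apply_delta` is illustrative, not the official tooling):

```python
def apply_delta(base_weights, delta_weights):
    """Recover fine-tuned weights by adding each delta tensor
    to the matching base tensor, element-wise."""
    merged = {}
    for name, base in base_weights.items():
        delta = delta_weights[name]
        merged[name] = [b + d for b, d in zip(base, delta)]
    return merged

# Toy example: tiny "state dicts" with one flattened parameter each.
base = {"layer0.weight": [0.5, -1.0, 2.0]}
delta = {"layer0.weight": [0.25, 0.5, -0.5]}
print(apply_delta(base, delta))  # {'layer0.weight': [0.75, -0.5, 1.5]}
```

In practice the same addition is done over real `torch` state dicts loaded from the base and delta checkpoints.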

WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions

HF Repo • Twitter • [WizardLM] • [WizardCoder] • [WizardMath]

Join our Discord

| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| --- | --- | --- | --- | --- | --- | --- |
| WizardCoder-Python-34B-V1.0 | HF Link | [WizardCoder] | 73.2 | 61.2 | Demo | Llama2 |
| WizardCoder-15B-V1.0 | HF Link | [WizardCoder] | 59.8 | 50.6 | -- | OpenRAIL-M |
| WizardCoder-Python-13B-V1.0 | HF Link | [WizardCoder] | 64.0 | 55.6 | -- | Llama2 |
| WizardCoder-3B-V1.0 | HF Link | [WizardCoder] | 34.8 | 37.4 | Demo | OpenRAIL-M |
| WizardCoder-1B-V1.0 | HF Link | [WizardCoder] | 23.8 | 28.6 | -- | OpenRAIL-M |
| Model | Checkpoint | Paper | GSM8k | MATH | Online Demo | License |
| --- | --- | --- | --- | --- | --- | --- |
| WizardMath-70B-V1.0 | HF Link | [WizardMath] | 81.6 | 22.7 | Demo | Llama 2 |
| WizardMath-13B-V1.0 | HF Link | [WizardMath] | 63.9 | 14.0 | Demo | Llama 2 |
| WizardMath-7B-V1.0 | HF Link | [WizardMath] | 54.9 | 10.7 | Demo | Llama 2 |

| Model | Checkpoint | Paper | MT-Bench | AlpacaEval | WizardEval | HumanEval | License |
| --- | --- | --- | --- | --- | --- | --- | --- |
| WizardLM-13B-V1.2 | HF Link | -- | 7.06 | 89.17% | 101.4% | 36.6 pass@1 | Llama 2 License |
| WizardLM-13B-V1.1 | HF Link | -- | 6.76 | 86.32% | 99.3% | 25.0 pass@1 | Non-commercial |
| WizardLM-30B-V1.0 | HF Link | -- | 7.01 | -- | 97.8% | 37.8 pass@1 | Non-commercial |
| WizardLM-13B-V1.0 | HF Link | -- | 6.35 | 75.31% | 89.1% | 24.0 pass@1 | Non-commercial |
| WizardLM-7B-V1.0 | HF Link | [WizardLM] | -- | -- | 78.0% | 19.1 pass@1 | Non-commercial |

Example code

```python
import torch
from modelscope import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from ModelScope.
model = AutoModelForCausalLM.from_pretrained(
    "AI-ModelScope/WizardLM-7B-V1.0",
    revision='v1.0.1',
    device_map='auto',
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("AI-ModelScope/WizardLM-7B-V1.0", revision='v1.0.1')

prompt = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Who are you? ASSISTANT: """
inputs = tokenizer(prompt, padding=False, add_special_tokens=False, return_tensors="pt")

# Generate
generate_ids = model.generate(
    inputs.input_ids.to(model.device),
    attention_mask=inputs['attention_mask'].to(model.device),
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
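Note that `generate` returns the prompt tokens followed by the newly generated ones, so the decoded string includes the prompt text. To keep only the assistant's reply, slice the output at the prompt length before decoding; a minimal sketch with made-up token ids (the ids below are illustrative, not real vocabulary entries):

```python
# Hypothetical token ids for the prompt and the model's full output.
prompt_ids = [101, 2054, 2024]             # illustrative ids only
output_ids = [101, 2054, 2024, 7592, 102]  # prompt echoed back + new tokens

# Slice off the prompt portion to isolate the completion.
completion_ids = output_ids[len(prompt_ids):]
print(completion_ids)  # [7592, 102]
```

In the example above, the same slice can be applied to the generated id tensor (using the prompt's token count) before calling `batch_decode`.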

