WizardLM-13B-V1.2

Categories: ai, llama, pytorch
Open-source address: https://modelscope.cn/models/AI-ModelScope/WizardLM-13B-V1.2
License: llama2

Project details

This is the full-weight WizardLM-13B V1.2 model, trained from Llama-2 13B.

WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions

HF Repo • Github Repo • Twitter • [WizardLM] • [WizardCoder] • [WizardMath]

Join our Discord

News

  • [2023/08/26] We released WizardCoder-Python-34B-V1.0, which achieves 73.2 pass@1 and surpasses GPT-4 (2023/03/15), ChatGPT-3.5, and Claude2 on the HumanEval benchmark. For more details, please refer to WizardCoder.
  • [2023/06/16] We released WizardCoder-15B-V1.0, which surpasses Claude-Plus (+6.8), Bard (+15.3), and InstructCodeT5+ (+22.3) on the HumanEval benchmark. For more details, please refer to WizardCoder.
| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
|---|---|---|---|---|---|---|
| WizardCoder-Python-34B-V1.0 | HF Link | [WizardCoder] | 73.2 | 61.2 | Demo | Llama2 |
| WizardCoder-15B-V1.0 | HF Link | [WizardCoder] | 59.8 | 50.6 | -- | OpenRAIL-M |
| WizardCoder-Python-13B-V1.0 | HF Link | [WizardCoder] | 64.0 | 55.6 | -- | Llama2 |
| WizardCoder-3B-V1.0 | HF Link | [WizardCoder] | 34.8 | 37.4 | Demo | OpenRAIL-M |
| WizardCoder-1B-V1.0 | HF Link | [WizardCoder] | 23.8 | 28.6 | -- | OpenRAIL-M |
  • [08/11/2023] We released the WizardMath models.
  • Our WizardMath-70B-V1.0 model slightly outperforms some closed-source LLMs on GSM8K, including ChatGPT-3.5, Claude Instant-1, and PaLM-2 540B.
  • Our WizardMath-70B-V1.0 model achieves 81.6 pass@1 on the GSM8k benchmark, which is 24.8 points higher than the SOTA open-source LLM.
  • Our WizardMath-70B-V1.0 model achieves 22.7 pass@1 on the MATH benchmark, which is 9.2 points higher than the SOTA open-source LLM.
| Model | Checkpoint | Paper | GSM8k | MATH | Online Demo | License |
|---|---|---|---|---|---|---|
| WizardMath-70B-V1.0 | HF Link | [WizardMath] | 81.6 | 22.7 | Demo | Llama 2 |
| WizardMath-13B-V1.0 | HF Link | [WizardMath] | 63.9 | 14.0 | Demo | Llama 2 |
| WizardMath-7B-V1.0 | HF Link | [WizardMath] | 54.9 | 10.7 | Demo | Llama 2 |

| Model | Checkpoint | Paper | MT-Bench | AlpacaEval | WizardEval | HumanEval | License |
|---|---|---|---|---|---|---|---|
| WizardLM-13B-V1.2 | HF Link | -- | 7.06 | 89.17% | 101.4% | 36.6 pass@1 | Llama 2 License |
| WizardLM-13B-V1.1 | HF Link | -- | 6.76 | 86.32% | 99.3% | 25.0 pass@1 | Non-commercial |
| WizardLM-30B-V1.0 | HF Link | -- | 7.01 | -- | 97.8% | 37.8 pass@1 | Non-commercial |
| WizardLM-13B-V1.0 | HF Link | -- | 6.35 | 75.31% | 89.1% | 24.0 pass@1 | Non-commercial |
| WizardLM-7B-V1.0 | HF Link | [WizardLM] | -- | -- | 78.0% | 19.1 pass@1 | Non-commercial |

Repository: https://github.com/nlpxucan/WizardLM

Example code

```python
import torch
from modelscope import AutoModelForCausalLM, AutoTokenizer

# Load the model in half precision and spread it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "AI-ModelScope/WizardLM-13B-V1.2",
    revision='v1.0.0',
    device_map='auto',
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("AI-ModelScope/WizardLM-13B-V1.2", revision='v1.0.0')

# Single-turn prompt in the Vicuna format the model was trained on.
prompt = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Who are you? ASSISTANT: """
inputs = tokenizer(prompt, padding=False, add_special_tokens=False, return_tensors="pt")

# Generate
generate_ids = model.generate(
    inputs.input_ids.to(model.device),
    attention_mask=inputs['attention_mask'].to(model.device),
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
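Note that `generate` returns the prompt tokens followed by the completion, so the `batch_decode` call above echoes the prompt. A minimal sketch, building on the variables defined above, for printing only the newly generated text (this convenience is not part of the original example):

```python
# Slice off the prompt tokens and decode just the model's continuation.
new_tokens = generate_ids[0, inputs.input_ids.shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```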

❗**Note for model system prompts usage:**

**WizardLM** adopts the prompt format from **Vicuna** and supports **multi-turn** conversation. The prompt should be as follows (a helper that assembles this format is sketched after the example):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
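In this format, each completed assistant reply is terminated with the `</s>` end-of-sequence token before the next `USER:` turn. A minimal sketch of a prompt-assembly helper under that assumption (the function name and structure are illustrative, not part of the WizardLM release):

```python
# Vicuna-style prompt assembly for WizardLM (illustrative helper, not from the
# original model card).
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")

def build_prompt(turns):
    """turns: list of (user_message, assistant_reply) pairs; pass None as the
    last assistant_reply to ask the model for the next answer."""
    prompt = SYSTEM + " "
    for user_msg, assistant_msg in turns:
        prompt += f"USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            # Completed assistant turns end with the </s> EOS token.
            prompt += f" {assistant_msg}</s>"
    return prompt

# Reproduces the multi-turn example above, ready for the model to continue.
print(build_prompt([("Hi", "Hello."), ("Who are you?", None)]))
```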

To address common concerns about the dataset:

Recently, there have been clear changes in our organization's open-source policy and regulations covering code, data, and models.

Despite this, we have still worked hard to release the model weights first; the data involves stricter auditing and is under review by our legal team.

Our researchers have no authority to release it publicly without authorization.

Thank you for your understanding.
