WizardMath-7B-V1.0

Posted by an anonymous user on 2024-07-31
Categories: ai, llama, pytorch
Open-source address: https://modelscope.cn/models/AI-ModelScope/WizardMath-7B-V1.0
License: llama2

Details

WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF)

HF Repo • Github Repo • Twitter • [WizardLM] • [WizardCoder] • [WizardMath]

Join our Discord

| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- | ---------- | ----- | --------- | ---- | ---- | ------- |
| WizardCoder-Python-34B-V1.0 | HF Link | [WizardCoder] | 73.2 | 61.2 | Demo | Llama2 |
| WizardCoder-15B-V1.0 | HF Link | [WizardCoder] | 59.8 | 50.6 | -- | OpenRAIL-M |
| WizardCoder-Python-13B-V1.0 | HF Link | [WizardCoder] | 64.0 | 55.6 | -- | Llama2 |
| WizardCoder-3B-V1.0 | HF Link | [WizardCoder] | 34.8 | 37.4 | Demo | OpenRAIL-M |
| WizardCoder-1B-V1.0 | HF Link | [WizardCoder] | 23.8 | 28.6 | -- | OpenRAIL-M |

| Model | Checkpoint | Paper | GSM8k | MATH | Online Demo | License |
| ----- | ---------- | ----- | ----- | ---- | ----------- | ------- |
| WizardMath-70B-V1.0 | HF Link | [WizardMath] | 81.6 | 22.7 | Demo | Llama 2 |
| WizardMath-13B-V1.0 | HF Link | [WizardMath] | 63.9 | 14.0 | Demo | Llama 2 |
| WizardMath-7B-V1.0 | HF Link | [WizardMath] | 54.9 | 10.7 | Demo | Llama 2 |

| Model | Checkpoint | Paper | MT-Bench | AlpacaEval | GSM8k | HumanEval | License |
| ----- | ---------- | ----- | -------- | ---------- | ----- | --------- | ------- |
| WizardLM-70B-V1.0 | HF Link | Coming Soon | 7.78 | 92.91% | 77.6% | 50.6 pass@1 | Llama 2 License |
| WizardLM-13B-V1.2 | HF Link |  | 7.06 | 89.17% | 55.3% | 36.6 pass@1 | Llama 2 License |
| WizardLM-13B-V1.1 | HF Link |  | 6.76 | 86.32% | -- | 25.0 pass@1 | Non-commercial |
| WizardLM-30B-V1.0 | HF Link |  | 7.01 | -- | -- | 37.8 pass@1 | Non-commercial |
| WizardLM-13B-V1.0 | HF Link |  | 6.35 | 75.31% | -- | 24.0 pass@1 | Non-commercial |
| WizardLM-7B-V1.0 | HF Link | [WizardLM] | -- | -- | -- | 19.1 pass@1 | Non-commercial |

Github Repo: https://github.com/nlpxucan/WizardLM/tree/main/WizardMath

Twitter: https://twitter.com/WizardLM_AI/status/1689998428200112128

Discord: https://discord.gg/VZjjHtWrKs

Note on system prompt usage:

Please strictly use the same system prompts as we do; we do not guarantee the accuracy of quantized versions.

Default version:

"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

CoT version: (❗For simple math questions, we do NOT recommend using the CoT prompt.)

"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
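The two templates above differ only in the trailing suffix appended after "### Response:". A minimal sketch of a helper that fills either template (the `build_prompt` function is hypothetical, not part of the official repo):

```python
# Hypothetical helper that fills the WizardMath prompt templates shown above.
DEFAULT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def build_prompt(instruction: str, cot: bool = False) -> str:
    """Return the default prompt, or the CoT variant when cot=True."""
    prompt = DEFAULT_TEMPLATE.format(instruction=instruction)
    if cot:
        # The CoT template only appends this suffix after "### Response:".
        prompt += " Let's think step by step."
    return prompt

print(build_prompt("What is 2 + 3?"))
print(build_prompt("Prove that sqrt(2) is irrational.", cot=True))
```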

Example code

import torch
from modelscope import AutoModelForCausalLM, AutoTokenizer


model = AutoModelForCausalLM.from_pretrained("AI-ModelScope/WizardMath-7B-V1.0", revision='v1.0.0', device_map='auto', torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("AI-ModelScope/WizardMath-7B-V1.0", revision='v1.0.0')

prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nJames decides to run 3 sprints 3 times a week.  He runs 60 meters each sprint.  How many total meters does he run a week?\n\n### Response:"""
inputs = tokenizer(prompt, padding=False, add_special_tokens=False, return_tensors="pt")

# Generate
generate_ids = model.generate(
    inputs.input_ids.to(model.device), 
    attention_mask=inputs['attention_mask'].to(model.device), 
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
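WizardMath generations are commonly reported to end with a final line of the form "The answer is: X"; assuming that format, a small post-processing sketch (the `extract_answer` helper is hypothetical, not part of the official repo) can pull the numeric answer out of the decoded text:

```python
import re

def extract_answer(generation: str):
    """Return the last number after an 'answer is' marker, falling back to
    the last number anywhere in the text; None if no number is found."""
    marker = re.search(r"[Tt]he answer is:?\s*(.+)", generation)
    candidate = marker.group(1) if marker else generation
    numbers = re.findall(r"-?\d+(?:\.\d+)?", candidate.replace(",", ""))
    return numbers[-1] if numbers else None

print(extract_answer("He runs 3*3*60 = 540 meters. The answer is: 540."))  # -> '540'
```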

On the common concern about the dataset:

Recently, there have been clear changes in our organization's open-source policies and regulations regarding code, data, and models. Despite this, we have worked hard to release the model weights first; the data, however, requires stricter auditing and is under review with our legal team. Our researchers have no authority to release it publicly without authorization. Thank you for your understanding.
