| Model | Checkpoint | Paper | MT-Bench | AlpacaEval | GSM8k | HumanEval | License |
| --- | --- | --- | --- | --- | --- | --- | --- |
| WizardLM-70B-V1.0 | HF Link | Coming Soon | 7.78 | 92.91% | 77.6% | 50.6 | Llama 2 License |
| WizardLM-13B-V1.2 | HF Link | | 7.06 | 89.17% | 55.3% | 36.6 | Llama 2 License |
| WizardLM-13B-V1.1 | HF Link | | 6.76 | 86.32% | | 25.0 | Non-commercial |
| WizardLM-30B-V1.0 | HF Link | | 7.01 | | | 37.8 | Non-commercial |
| WizardLM-13B-V1.0 | HF Link | | 6.35 | 75.31% | | 24.0 | Non-commercial |
| WizardLM-7B-V1.0 | HF Link | [WizardLM] | | | | 19.1 | Non-commercial |
|
|
|
|
|
|
|
|
Comparing WizardCoder-Python-34B-V1.0 with Other LLMs.
The following figure shows that our WizardCoder-Python-34B-V1.0 attains the second position in this benchmark, surpassing GPT4 (2023/03/15, 73.2 vs. 67.0), ChatGPT-3.5 (73.2 vs. 72.5) and Claude2 (73.2 vs. 71.2).
Prompt Format
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
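To make the template concrete, here is a minimal sketch of filling it with an instruction before tokenization (the `build_prompt` helper name is our own, not part of the repo):

```python
# Minimal sketch: filling the WizardCoder prompt template with an instruction.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def build_prompt(instruction: str) -> str:
    """Return the full prompt string for a single instruction."""
    return PROMPT_TEMPLATE.format(instruction=instruction)

print(build_prompt("Write a Java code to sum 1 to 10."))
```

The resulting string is what gets passed to the tokenizer in the example below.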
Example code
```python
import torch
from modelscope import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("AI-ModelScope/WizardCoder-1B-V1.0", revision='v1.0.0', device_map='auto', torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("AI-ModelScope/WizardCoder-1B-V1.0", revision='v1.0.0')

prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Write a Java code to sum 1 to 10.

### Response:"""
inputs = tokenizer(prompt, padding=False, add_special_tokens=False, return_tensors="pt")
# Generate
generate_ids = model.generate(
    inputs.input_ids.to(model.device),
    attention_mask=inputs['attention_mask'].to(model.device),
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
Inference Demo Script
We provide the inference demo code here.
Note: This script supports WizardLM/WizardCoder-Python-34B/13B/7B-V1.0. If you want to run inference with WizardLM/WizardCoder-15B/3B/1B-V1.0, please change stop_tokens = ['</s>'] to stop_tokens = ['<|endoftext|>'] in the script.
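The demo script itself is not reproduced here, but the role of `stop_tokens` can be sketched as follows: generated text is truncated at the first occurrence of any stop token. The `trim_at_stop_tokens` helper below is a hypothetical illustration, not code from the script.

```python
# Hypothetical sketch of how a stop-token list truncates generated text.
def trim_at_stop_tokens(text: str, stop_tokens: list) -> str:
    """Cut `text` at the earliest occurrence of any stop token."""
    for tok in stop_tokens:
        idx = text.find(tok)
        if idx != -1:
            text = text[:idx]
    return text

# Llama-based models (WizardCoder-Python-34B/13B/7B-V1.0):
stop_tokens = ['</s>']
# StarCoder-based models (WizardCoder-15B/3B/1B-V1.0) would use instead:
# stop_tokens = ['<|endoftext|>']
print(trim_at_stop_tokens("print(sum(range(1, 11)))</s>leftover", stop_tokens))
```

Using the wrong stop token for a model family leaves the end-of-sequence marker (and anything after it) in the decoded output, which is why the script must be edited when switching families.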
Citation
Please cite the repo if you use the data, method or code in this repo.
@misc{luo2023wizardcoder,
      title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
      author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang},
      year={2023},
}