The WizardLM delta weights.
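Delta weights are not directly usable: the full model is recovered by adding each delta tensor to the matching tensor of the base LLaMA checkpoint. A minimal sketch of that tensor-wise merge, assuming state dicts with matching keys (the `apply_delta` function and the toy tensors below are illustrative, not the official conversion script):

```python
import torch

def apply_delta(base_state_dict, delta_state_dict):
    """Recover full weights by adding each delta tensor to its base tensor."""
    merged = {}
    for name, delta in delta_state_dict.items():
        # Keys in the delta checkpoint are assumed to mirror the base model's keys
        merged[name] = base_state_dict[name] + delta
    return merged

# Toy tensors standing in for real checkpoints
base = {"w": torch.tensor([1.0, 2.0])}
delta = {"w": torch.tensor([0.5, -0.5])}
merged = apply_delta(base, delta)  # merged["w"] is tensor([1.5, 1.5])
```

The official release scripts perform the same element-wise addition, plus tokenizer and config handling that this sketch omits.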
WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions
HF Repo • Twitter • [WizardLM] • [WizardCoder] • [WizardMath]
Join our Discord
| Model | Checkpoint | Paper | MT-Bench | AlpacaEval | WizardEval | HumanEval | License |
| --- | --- | --- | --- | --- | --- | --- | --- |
| WizardLM-13B-V1.2 | HF Link | | 7.06 | 89.17% | 101.4% | 36.6 pass@1 | Llama 2 License |
| WizardLM-13B-V1.1 | HF Link | | 6.76 | 86.32% | 99.3% | 25.0 pass@1 | Non-commercial |
| WizardLM-30B-V1.0 | HF Link | | 7.01 | | 97.8% | 37.8 pass@1 | Non-commercial |
| WizardLM-13B-V1.0 | HF Link | | 6.35 | 75.31% | 89.1% | 24.0 pass@1 | Non-commercial |
| WizardLM-7B-V1.0 | HF Link | [WizardLM] | | | 78.0% | 19.1 pass@1 | Non-commercial |
|
|
|
|
|
|
|
|
Example code
```python
import torch
from modelscope import AutoModelForCausalLM, AutoTokenizer

# Load the model in half precision, sharded automatically across available devices
model = AutoModelForCausalLM.from_pretrained(
    "AI-ModelScope/WizardLM-7B-V1.0",
    revision='v1.0.1',
    device_map='auto',
    torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(
    "AI-ModelScope/WizardLM-7B-V1.0", revision='v1.0.1')

# WizardLM expects a Vicuna-style conversation prompt
prompt = """A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Who are you?
ASSISTANT:
"""
inputs = tokenizer(prompt, padding=False, add_special_tokens=False, return_tensors="pt")

# Generate
generate_ids = model.generate(
    inputs.input_ids.to(model.device),
    attention_mask=inputs['attention_mask'].to(model.device),
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```