speechless-code-mistral-orca-7b-v1.0

Categories: ai, mistral, pytorch, code, llama2
Repository: https://modelscope.cn/models/keepitsimple/speechless-code-mistral-orca-7b-v1.0
License: other

Model Details


The following datasets were used to fine-tune Open-Orca/Mistral-7B-OpenOrca to improve the model's reasoning and planning abilities.

201,981 samples in total:

  • jondurbin/airoboros-2.2: filtered to the coding-, reasoning-, and planning-related categories. 23,462 samples.
  • Open-Orca/OpenOrca: filtered to the 'cot' category of the 1M-GPT4 split. 74,440 samples.
  • garage-bAInd/Open-Platypus: used in full. 24,926 samples.
  • WizardLM/WizardLM_evol_instruct_V2_196k: the coding conversation subset. 30,185 samples.
  • TokenBender/python_eval_instruct_51k: samples with "python" in the output. 40,309 samples.
  • Spider: 8,659 samples.
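
As a rough illustration of this filtering (not the author's actual preprocessing scripts), the same selections can be expressed with the Hugging Face datasets library; the column names and category labels below are assumptions about each dataset's schema.

```python
# Illustrative only: category/keyword filtering in the spirit described above.
# Column names ("category", "output") and label values are assumptions.
from datasets import load_dataset

# airoboros-2.2: keep coding/reasoning/planning-flavoured categories.
airoboros = load_dataset("jondurbin/airoboros-2.2", split="train")
airoboros = airoboros.filter(
    lambda ex: ex["category"] in {"coding", "cot", "plan"}  # assumed labels
)

# python_eval_instruct_51k: keep samples whose output mentions "python".
python_eval = load_dataset("TokenBender/python_eval_instruct_51k", split="train")
python_eval = python_eval.filter(lambda ex: "python" in ex["output"].lower())

print(len(airoboros), len(python_eval))
```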

HumanEval

Metric            Value
humaneval-python  47.561
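
The humaneval-python score can be reproduced in the usual way: sample one completion per HumanEval problem and score pass@1 with OpenAI's human-eval package. A minimal sketch, assuming greedy decoding and a Hugging Face mirror of the model (the exact generation settings behind the reported 47.561 are not stated here):

```python
# Sketch: generate HumanEval completions, then score with human-eval.
# The model id and decoding settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from human_eval.data import read_problems, write_jsonl

model_id = "uukuguy/speechless-code-mistral-orca-7b-v1.0"  # assumed HF mirror
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

samples = []
for task_id, problem in read_problems().items():
    inputs = tok(problem["prompt"], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=384, do_sample=False)
    completion = tok.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    samples.append({"task_id": task_id, "completion": completion})

write_jsonl("samples.jsonl", samples)
# Then score pass@1 with: evaluate_functional_correctness samples.jsonl
```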

Big Code Models Leaderboard

For comparison, CodeLlama scores from the same leaderboard:

Model                   Score
CodeLlama-34B-Python    53.29
CodeLlama-34B-Instruct  50.79
CodeLlama-13B-Instruct  50.6
CodeLlama-34B           45.11
CodeLlama-13B-Python    42.89
CodeLlama-13B           35.07

lm-evaluation-harness

Open LLM Leaderboard

Metric      Value
ARC         59.64
HellaSwag   82.25
MMLU        61.33
TruthfulQA  48.45
Average     62.92
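
These four metrics and their average follow the Open LLM Leaderboard convention. A sketch of rerunning them, assuming lm-evaluation-harness ≥ 0.4 (whose Python entry point is lm_eval.simple_evaluate); the model id and batch size are assumptions, and per-task few-shot counts are left at the harness defaults rather than the leaderboard's exact 25/10/5/0 settings:

```python
# Sketch: Open LLM Leaderboard-style evaluation via lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=uukuguy/speechless-code-mistral-orca-7b-v1.0,dtype=bfloat16",
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2"],
    batch_size=8,  # assumed
)
print(results["results"])
```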

Parameters

lr                           2e-4
lr_scheduler_type            cosine
weight_decay                 0.0
optim                        paged_adamw_8bit
flash_attention              True
rerope                       False
max_new_tokens               4096
num_train_epochs             2
bits                         4
lora_r                       64
lora_alpha                   16
lora_dropout                 0.05
double_quant                 True
quant_type                   nf4
dataset_format               airoboros
mini_batch_size              2
gradient_accumulation_steps  32
bf16                         True

Trained on 4 × A100-40G GPUs.
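
These hyperparameters correspond to a standard 4-bit QLoRA run. A minimal sketch of how they map onto the transformers + peft + bitsandbytes stack (an assumption about the training code, not the author's exact scripts; dataset wiring and the Trainer loop are omitted):

```python
# Sketch: wiring the hyperparameters above into a QLoRA setup.
import torch
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    TrainingArguments,
)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # bits = 4
    bnb_4bit_quant_type="nf4",              # quant_type = nf4
    bnb_4bit_use_double_quant=True,         # double_quant = True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bf16 = True
)

model = AutoModelForCausalLM.from_pretrained(
    "Open-Orca/Mistral-7B-OpenOrca",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # flash_attention = True
    device_map="auto",
)

peft_config = LoraConfig(
    r=64,               # lora_r
    lora_alpha=16,      # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)

args = TrainingArguments(
    output_dir="speechless-code-mistral-orca-7b",
    learning_rate=2e-4,             # lr
    lr_scheduler_type="cosine",
    weight_decay=0.0,
    optim="paged_adamw_8bit",
    num_train_epochs=2,
    per_device_train_batch_size=2,  # mini_batch_size
    gradient_accumulation_steps=32,
    bf16=True,
)
```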

epoch                     2.0
train_loss                0.4708
train_runtime             12:12:53.64
train_samples_per_second  9.002
train_steps_per_second    0.07
eval_loss                 0.4851
eval_runtime              0:00:10.31
eval_samples_per_second   19.385
eval_steps_per_second     4.846