yingji_7b_pt_2epoch

Technology: qwen2
Categories: ai, generated_from_trainer, full, llama-factory
Open-source address: https://modelscope.cn/models/Enderfga/yingji_7b_pt_2epoch
License: other

Details


This model is a fine-tuned version of /apdcephfs_qy3/share_301372554/share_info/public_models/Qwen2-7B on the filtered_pretrain_data_building_llamafactory_0, filtered_pretrain_data_building_llamafactory_1, filtered_pretrain_data_building_llamafactory_2, filtered_pretrain_data_fire_llamafactory, pretrain_source_Linly-AI, crawl_pretrain_data_building_llamafactory, and crawl_pretrain_data_fire_llama_factory datasets. It achieves the following results on the evaluation set:

  • Loss: 1.5537
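
The card ships no usage example; the following is a minimal, unofficial sketch of loading the published checkpoint from ModelScope with transformers. It assumes the repository hosts standard Hugging Face-format Qwen2 weights, and the prompt is only a placeholder (the "yingji" in the model name suggests an emergency-management domain).

```python
# Unofficial usage sketch (not from the original card). Assumes standard
# Hugging Face-format Qwen2 weights; requires `modelscope`, `transformers`,
# and `accelerate` (for device_map="auto").
from modelscope import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = snapshot_download("Enderfga/yingji_7b_pt_2epoch")

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # place layers on available GPUs
)

# This is a continued-pretraining ("pt") checkpoint, so use plain text
# completion rather than a chat template.
prompt = "应急预案的编制步骤包括"  # placeholder prompt; substitute your own
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```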

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how the effective batch sizes are derived):

  • learning_rate: 1e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 32
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 256
  • total_eval_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 2.0
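
The totals above follow standard Hugging Face Trainer semantics: effective train batch size is per-device batch size times number of devices times gradient-accumulation steps. A quick sanity-check sketch (not from the original card):

```python
# Sketch: deriving the reported total batch sizes from the per-device
# settings above (standard Hugging Face Trainer semantics).
train_batch_size = 1             # per device
num_devices = 32
gradient_accumulation_steps = 8

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 256  # matches the value reported above

eval_batch_size = 1              # per device; no accumulation during eval
total_eval_batch_size = eval_batch_size * num_devices
assert total_eval_batch_size == 32    # matches the value reported above
```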

Training results

Training Loss   Epoch    Step   Validation Loss
1.8743          0.1404   50     1.8325
1.7854          0.2809   100    1.7169
1.6816          0.4213   150    1.6719
1.609           0.5618   200    1.6429
1.6005          0.7022   250    1.6212
1.6172          0.8427   300    1.6040
1.597           0.9831   350    1.5896
1.5527          1.1236   400    1.5786
1.5035          1.2640   450    1.5695
1.534           1.4045   500    1.5627
1.6003          1.5449   550    1.5581
1.5466          1.6854   600    1.5552
1.5062          1.8258   650    1.5540
1.5292          1.9663   700    1.5537
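
Not part of the original card, but a convenient reading of these numbers: if the validation loss is a mean token-level cross-entropy in nats (the usual convention for causal-LM training), it converts to perplexity via exp(loss):

```python
# Sketch: converting the final validation loss to perplexity.
# Assumes the loss is mean token-level cross-entropy in nats.
import math

final_eval_loss = 1.5537
print(f"perplexity ≈ {math.exp(final_eval_loss):.2f}")  # ≈ 4.73
```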

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.20.0
  • Tokenizers 0.19.1
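
For reproduction it may help to match these versions. A small, unofficial check using each package's version attribute:

```python
# Sketch: compare the local environment against the versions listed above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.41.2",
    "torch": "2.0.1+cu117",
    "datasets": "2.20.0",
    "tokenizers": "0.19.1",
}
actual = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    status = "OK" if actual[name] == want else "MISMATCH"
    print(f"{name}: installed {actual[name]}, card lists {want} [{status}]")
```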