Ziya-LLaMA-13B-v1.1

Open-source address: https://modelscope.cn/models/Fengshenbang/Ziya-LLaMA-13B-1v1
License: GPL-3.0


(Due to license restrictions on the LLaMA weights, we cannot directly release the complete model weights; users need to refer to the usage instructions to merge the released weights with the original LLaMA weights.)
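
As a rough illustration, such a merge typically amounts to adding released delta weights to the original LLaMA-13B weights parameter by parameter. The sketch below assumes that convention; all paths are placeholders, it ignores details such as tokenizer or vocabulary changes, and the official script referenced in the usage instructions should be treated as authoritative.

import torch
from transformers import LlamaForCausalLM

# Hypothetical sketch: recover the full weights by adding delta weights to
# the original LLaMA-13B weights. The delta-addition convention and all
# paths are assumptions, not the official procedure.
base = LlamaForCausalLM.from_pretrained(
    "path/to/original-llama-13b", torch_dtype=torch.float16
)
delta = LlamaForCausalLM.from_pretrained(
    "path/to/Ziya-LLaMA-13B-v1.1-delta", torch_dtype=torch.float16
)

base_sd = base.state_dict()
delta_sd = delta.state_dict()
with torch.no_grad():
    for name, tensor in base_sd.items():
        tensor += delta_sd[name]  # in-place add restores the merged weights

base.save_pretrained("path/to/Ziya-LLaMA-13B-v1.1-merged")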

Ziya (姜子牙) Series Models

Brief Introduction

We have further optimized the Ziya-LLaMA-13B-v1 model and released the open-source version Ziya-LLaMA-13B-v1.1. By adjusting the proportion of fine-tuning data and adopting a better reinforcement-learning strategy, this version achieves clear improvements in question-answering accuracy, mathematical ability, and safety; a detailed capability analysis is shown in the accompanying figure.

Software Dependencies

pip install torch==1.12.1 tokenizers==0.13.3 git+https://github.com/huggingface/transformers

Usage

Please refer to the usage instructions for Ziya-LLaMA-13B-v1.
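
For reference, here is a minimal inference sketch modeled on the Ziya-LLaMA-13B-v1 usage example; the <human>:/<bot>: dialogue format follows the v1 instructions, and the checkpoint path is a placeholder for your locally merged weights.

import torch
from transformers import AutoTokenizer, LlamaForCausalLM

ckpt = "path/to/Ziya-LLaMA-13B-v1.1-merged"  # placeholder: locally merged weights
model = LlamaForCausalLM.from_pretrained(
    ckpt, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(ckpt, use_fast=False)

query = "帮我写一份去西安的旅游计划"  # "Write me a travel plan for Xi'an"
prompt = "<human>:" + query.strip() + "\n<bot>:"  # dialogue format from the v1 usage
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

generate_ids = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    top_p=0.85,
    temperature=1.0,
    repetition_penalty=1.0,
    eos_token_id=2,
    bos_token_id=1,
    pad_token_id=0,
)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0])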

Citation

If you use our model in your work, please cite our paper:

@article{fengshenbang,
  author    = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
  title     = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal   = {CoRR},
  volume    = {abs/2209.02970},
  year      = {2022}
}

You can also cite our website:

@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}