LLaMA2-7B-ZH-Chat-52k

Categories: ai, llama, PyTorch, arxiv:2309.02033, data-juicer
Repository: https://modelscope.cn/models/Data-Juicer/LLaMA2-7B-ZH-Chat-52k
License: Apache License 2.0

Model Details

News

Our first data-centric LLM competition begins! Please visit the competition's official websites, FT-Data Ranker (1B Track, 7B Track), for more information.

Introduction

This is a reference LLM from Data-Juicer.

The model architecture is LLaMA2-7B, and we built it on a pre-trained Chinese checkpoint from FlagAlpha. The model was fine-tuned on 52k Chinese chat samples from Data-Juicer's refined Alpaca-CoT data, and it outperforms LLaMA2-7B fine-tuned on 543k Belle samples in GPT-4 evaluation.

For more details, please refer to our paper.

[Figure: exp_llama (GPT-4 evaluation results)]

Usage

from modelscope import (
    AutoModelForCausalLM, AutoTokenizer, GenerationConfig, snapshot_download
)

# Download the model from ModelScope and get its local path.
model_dir = snapshot_download('Data-Juicer/LLaMA2-7B-ZH-Chat-52k')

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir).eval()

# Greedy decoding up to 128 tokens.
inputs = tokenizer('How are you?', return_tensors='pt').to(model.device)
response = model.generate(inputs.input_ids, max_length=128)
print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
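
The snippet above uses greedy decoding. The GenerationConfig class that is already imported can be used to switch to sampling-based generation. Below is a minimal sketch that reuses model and tokenizer from the snippet above; the decoding parameters (temperature, top_p, and so on) are illustrative assumptions, not values recommended by this model card.

# Reuses `model` and `tokenizer` from the snippet above.
# All decoding parameters here are illustrative, not tuned for this model.
gen_config = GenerationConfig(
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
)

# Example Chinese prompt: "Hello, please introduce yourself."
inputs = tokenizer('你好，请介绍一下你自己。', return_tensors='pt').to(model.device)
response = model.generate(inputs.input_ids, generation_config=gen_config)
print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))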

Reference

If you find our work useful for your research or development, please kindly cite the following paper.

@misc{chen2023datajuicer,
    title={Data-Juicer: A One-Stop Data Processing System for Large Language Models},
    author={Daoyuan Chen and Yilun Huang and Zhijian Ma and Hesen Chen and Xuchen Pan and Ce Ge and Dawei Gao and Yuexiang Xie and Zhaoyang Liu and Jinyang Gao and Yaliang Li and Bolin Ding and Jingren Zhou},
    year={2023},
    eprint={2309.02033},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Clone with HTTP

 git clone https://www.modelscope.cn/Data-Juicer/LLaMA2-7B-ZH-Chat-52k.git