LLaMA-1B-dj-refine-100B

Model page: https://modelscope.cn/models/Data-Juicer/LLaMA-1B-dj-refine-100B
License: Apache License 2.0


News

Our first data-centric LLM competition begins! Please visit the competition's official websites, FT-Data Ranker (1B Track, 7B Track), for more information.

Introduction

This is a reference LLM from Data-Juicer.

The model follows the LLaMA-1.3B architecture and adopts the OpenLLaMA implementation. It is pre-trained on 100B tokens of Data-Juicer's refined RedPajama and Pile data, and achieves an average score of 33.07 over 16 HELM tasks, beating LLMs trained on the original RedPajama and Pile datasets.

For more details, please refer to our paper.

(Figure: exp_llama — evaluation results)

Usage

from modelscope import AutoModelForCausalLM, AutoTokenizer

model_dir = 'Data-Juicer/LLaMA-1B-dj-refine-100B'

# Load the tokenizer and model from ModelScope and put the model in eval mode
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir).eval()

# Encode a prompt and generate a continuation of at most 128 tokens
inputs = tokenizer('How are you?', return_tensors='pt').to(model.device)
response = model.generate(inputs.input_ids, max_length=128)
print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
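
The snippet above relies on default greedy decoding. As a minimal sketch, if sampled outputs are preferred, generation parameters can be passed via a GenerationConfig; the temperature, top_p, and max_new_tokens values below are illustrative choices, not settings recommended by the model authors.

from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_dir = 'Data-Juicer/LLaMA-1B-dj-refine-100B'
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir).eval()

# Illustrative sampling settings; adjust them for your own use case
gen_config = GenerationConfig(
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

inputs = tokenizer('How are you?', return_tensors='pt').to(model.device)
response = model.generate(inputs.input_ids, generation_config=gen_config)
print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))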

Reference

If you find our work useful for your research or development, please kindly cite the following paper.

@misc{chen2023datajuicer,
  title={Data-Juicer: A One-Stop Data Processing System for Large Language Models},
  author={Daoyuan Chen and Yilun Huang and Zhijian Ma and Hesen Chen and Xuchen Pan and Ce Ge and Dawei Gao and Yuexiang Xie and Zhaoyang Liu and Jinyang Gao and Yaliang Li and Bolin Ding and Jingren Zhou},
  year={2023},
  eprint={2309.02033},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

Clone with HTTP

 git clone https://www.modelscope.cn/Data-Juicer/LLaMA-1B-dj-refine-100B.git
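
After cloning, the local checkout can be passed to from_pretrained in place of the model ID. This is a minimal sketch: the path below assumes the repository was cloned into the current working directory, and the weight files typically require git-lfs to be fetched.

from modelscope import AutoModelForCausalLM, AutoTokenizer

# Assumed location of the cloned repository; adjust to your own path
local_dir = './LLaMA-1B-dj-refine-100B'

tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(local_dir).eval()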