News
Our first data-centric LLM competition begins! Please visit the competition's official websites, FT-Data Ranker (1B Track, 7B Track), for more information.
Introduction
This is a reference LLM from Data-Juicer.
The model is pre-trained on 150B tokens of Data-Juicer's refined RedPajama and Pile, plus 4.7B tokens of Data-Juicer-refined instruction data. It achieves an average score of 36.76 over 16 HELM tasks, improving over OpenLLaMA-DJ-150B by 2.55 points.
For more details, please refer to our paper.
Usage
from modelscope import (
    AutoModelForCausalLM, AutoTokenizer, GenerationConfig, snapshot_download
)

# Load the tokenizer and model from ModelScope and switch to inference mode.
model_dir = 'Data-Juicer/LLaMA-1B-dj-refine-150B-instruct-4.7B'
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir).eval()

# Tokenize the prompt, generate up to 128 tokens, and decode the result.
inputs = tokenizer('How are you?', return_tensors='pt').to(model.device)
response = model.generate(inputs.input_ids, max_length=128)
print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
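The snippet above imports GenerationConfig and snapshot_download without using them. Below is a minimal sketch of how they are typically used with ModelScope models; the sampling parameters shown are illustrative assumptions, not settings recommended by this model card.

from modelscope import GenerationConfig, snapshot_download

# Optionally pre-fetch the model files to a local cache; the returned path
# can be passed to from_pretrained in place of the model id.
local_dir = snapshot_download('Data-Juicer/LLaMA-1B-dj-refine-150B-instruct-4.7B')

# Decoding settings can be bundled in a GenerationConfig and passed to
# model.generate(inputs.input_ids, generation_config=gen_config).
gen_config = GenerationConfig(max_length=128, do_sample=True, temperature=0.7, top_p=0.9)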
Reference
If you find our work useful for your research or development, please cite the following paper.
@misc{chen2023datajuicer,
  title={Data-Juicer: A One-Stop Data Processing System for Large Language Models},
  author={Daoyuan Chen and Yilun Huang and Zhijian Ma and Hesen Chen and Xuchen Pan and Ce Ge and Dawei Gao and Yuexiang Xie and Zhaoyang Liu and Jinyang Gao and Yaliang Li and Bolin Ding and Jingren Zhou},
  year={2023},
  eprint={2309.02033},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
Clone with HTTP
git clone https://www.modelscope.cn/Data-Juicer/LLaMA-1B-dj-refine-150B-instruct-4.7B.git