Introduction
This is a 7B decoder-only LLM with two early exits, trained with EE-LLM on 150B tokens from the Data-Juicer refined dataset.
Since ModelScope does not currently support early-exit LLMs, this repository only provides the model checkpoint in EE-LLM format. Please use EE-LLM to experiment with the inference acceleration that early exits provide.
This model is used in the experiments section of our paper; please refer to the paper for further details.
Usage
To use this checkpoint, please refer to EE-LLM.
Reference
@misc{chen2023eellm,
      title={EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism},
      author={Yanxi Chen and Xuchen Pan and Yaliang Li and Bolin Ding and Jingren Zhou},
      year={2023},
      eprint={2312.04916},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Clone with HTTP
git clone https://www.modelscope.cn/Data-Juicer/EE-LLM-7B-dj-refine-150B.git
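Since the checkpoint must be loaded through EE-LLM, a minimal setup might look like the following sketch. The checkpoint contains large files, so git-lfs is assumed to be required; the EE-LLM repository URL below is an assumption based on its public GitHub location, and the exact inference commands are documented in the EE-LLM README rather than here.

```shell
# Install git-lfs support so large checkpoint files are fetched correctly
git lfs install

# Clone the model checkpoint from ModelScope
git clone https://www.modelscope.cn/Data-Juicer/EE-LLM-7B-dj-refine-150B.git

# Clone the EE-LLM framework (URL assumed; see the EE-LLM project page)
git clone https://github.com/pan-x-c/EE-LLM.git

# Follow EE-LLM's documentation to load the checkpoint
# and run inference with early exits enabled
```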