Llama-3-Chinese-8B-LoRA
This repository contains Llama-3-Chinese-8B-LoRA, a LoRA adapter further pre-trained from Meta-Llama-3-8B on 120 GB of Chinese text corpora.
Note: The LoRA adapter must be merged with the original Meta-Llama-3-8B to obtain the full model weights.
For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
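Merging works because a LoRA adapter stores only a low-rank update that can be folded into the frozen base weights once, ahead of time: W' = W + (alpha/r)·B·A. The toy sketch below (dimensions, seed, and the `alpha` scaling value are illustrative, not this model's actual configuration) shows that inference with the merged weights is numerically identical to running the base path plus the adapter path:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2   # toy sizes; the real model's layers are far larger
alpha = 4                  # illustrative LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))   # frozen base weight matrix
A = rng.normal(size=(r, d_in))       # LoRA down-projection
B = rng.normal(size=(d_out, r))      # LoRA up-projection

x = rng.normal(size=d_in)

# Inference with an unmerged adapter: base path + scaled low-rank path
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))

# Merging folds the low-rank update into the base weights once
W_merged = W + (alpha / r) * (B @ A)
y_merged = W_merged @ x

assert np.allclose(y_adapter, y_merged)
```

In practice this fold is applied to every adapted layer at once; tools such as PEFT's `merge_and_unload()` or the merging scripts in the GitHub project above produce the full-weight model this way.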
Others
For the full model, please see: https://modelscope.cn/models/ChineseAlpacaGroup/llama-3-chinese-8b
For the GGUF model (llama.cpp compatible), please see: https://modelscope.cn/models/ChineseAlpacaGroup/llama-3-chinese-8b-gguf
If you have questions/issues regarding this model, please submit an issue through https://github.com/ymcui/Chinese-LLaMA-Alpaca-3