Llama-3-Chinese-8B
This repository contains Llama-3-Chinese-8B, which was further pre-trained from Meta-Llama-3-8B on 120 GB of Chinese text corpora.
Note: this is a foundation model, which is not suitable for conversation, QA, or other instruction-following tasks.
For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
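As a quick orientation, the sketch below shows plain text completion with the base model using Hugging Face transformers. The model id (ChineseAlpacaGroup/llama-3-chinese-8b), dtype, and generation settings are assumptions for illustration only; consult the GitHub project page above for the officially documented usage.

```python
# Minimal sketch: plain text completion with the base model via transformers.
# The model id below is an assumption; replace it with your local download path if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChineseAlpacaGroup/llama-3-chinese-8b"  # assumed id / local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; use float16/float32 as your hardware allows
    device_map="auto",
)

# Base model: use a plain completion prompt, not a chat template.
prompt = "人工智能的未来发展方向是"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```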
Others
For the LoRA-only model, please see: https://modelscope.cn/models/ChineseAlpacaGroup/llama-3-chinese-8b-lora
For the GGUF models (llama.cpp compatible), please see: https://modelscope.cn/models/ChineseAlpacaGroup/llama-3-chinese-8b-gguf (a minimal loading sketch is shown below)
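The following is a minimal sketch of loading one of the GGUF files with llama-cpp-python, a Python binding for llama.cpp. The file name and sampling parameters are assumptions; use whichever quantized .gguf file you download from the repository above, and note again that the base model expects plain completion prompts rather than a chat template.

```python
# Minimal sketch: run a quantized GGUF build with llama-cpp-python.
# The .gguf file name below is an assumption; point it at the file you actually downloaded.
from llama_cpp import Llama

llm = Llama(model_path="llama-3-chinese-8b.Q4_K_M.gguf", n_ctx=4096)

# Base model: plain completion, no chat template.
out = llm("中国的四大发明包括", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```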
If you have questions or issues regarding this model, please submit an issue at https://github.com/ymcui/Chinese-LLaMA-Alpaca-3