llama-3-chinese-8b-instruct-gguf

Repository: https://modelscope.cn/models/ChineseAlpacaGroup/llama-3-chinese-8b-instruct-gguf
License: Apache License 2.0


The v3 instruction model has been released and is recommended for use: [HF version] [GGUF version]

Llama-3-Chinese-8B-Instruct-GGUF

Reminder: the GGUF files have been regenerated. Since llama.cpp may still change the format, it is recommended to also keep the HF version of the model.

This repository contains Llama-3-Chinese-8B-Instruct-GGUF (compatible with llama.cpp, Ollama, etc.), the quantized version of the Llama-3-Chinese-8B-Instruct model.

Note: this is an instruction (chat) model and can be used directly for conversation, QA, and similar tasks.

For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
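
As a minimal sketch of what llama.cpp compatibility means in practice, the snippet below loads one of these GGUF files through the llama-cpp-python binding and runs a single chat turn. The binding, the quant level, and the local file name are assumptions for illustration only; any llama.cpp-based runtime (including Ollama) can consume the same files.

```python
# Minimal sketch, assuming the llama-cpp-python binding and a locally
# downloaded Q4_K file (hypothetical file name).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-chinese-8b-instruct-q4_k.gguf",  # placeholder path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU when one is available
)

# One chat turn; recent llama-cpp-python versions pick up the chat
# template from the GGUF metadata when it is present.
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "用中文简单介绍一下你自己。"},
    ],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```

Q4_K is used here only because the table below shows it as a reasonable size/PPL trade-off; any of the listed quants can be substituted.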

Quantization Performance

Metric: PPL (perplexity); lower is better

| Quant | Size | PPL (old model) | PPL (new model) |
|-------|------|-----------------|-----------------|
| Q2_K | 2.96 GB | 10.3918 +/- 0.13288 | 9.1168 +/- 0.10711 |
| Q3_K | 3.74 GB | 6.3018 +/- 0.07849 | 5.4082 +/- 0.05955 |
| Q4_0 | 4.34 GB | 6.0628 +/- 0.07501 | 5.2048 +/- 0.05725 |
| Q4_K | 4.58 GB | 5.9066 +/- 0.07419 | 5.0189 +/- 0.05520 |
| Q5_0 | 5.21 GB | 5.8562 +/- 0.07355 | 4.9803 +/- 0.05493 |
| Q5_K | 5.34 GB | 5.8062 +/- 0.07331 | 4.9195 +/- 0.05436 |
| Q6_K | 6.14 GB | 5.7757 +/- 0.07298 | 4.8966 +/- 0.05413 |
| Q8_0 | 7.95 GB | 5.7626 +/- 0.07272 | 4.8822 +/- 0.05396 |
| F16 | 14.97 GB | 5.7628 +/- 0.07275 | 4.8802 +/- 0.05392 |
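
For reference, PPL here is ordinary token-level perplexity; the formula below is the generic definition (not a detail specific to this evaluation), shown only as a reminder of what the metric measures:

```latex
\mathrm{PPL} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N}\log p_{\theta}\left(x_i \mid x_{<i}\right)\right)
```

Lower is better: Q8_0 is essentially indistinguishable from the F16 baseline, while Q2_K shows a clear degradation.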

Others

  • Full model: https://modelscope.cn/models/ChineseAlpacaGroup/llama-3-chinese-8b-instruct (also on Hugging Face: https://huggingface.co/hfl/llama-3-chinese-8b-instruct)

  • LoRA model: https://modelscope.cn/models/ChineseAlpacaGroup/llama-3-chinese-8b-instruct-lora (also on Hugging Face: https://huggingface.co/hfl/llama-3-chinese-8b-instruct-lora)

  • For questions about this model, please submit an issue at https://github.com/ymcui/Chinese-LLaMA-Alpaca-3


