vicuna-7b-v1.5-16k

Open-source address: https://modelscope.cn/models/MaybeOK/vicuna-7b-v1.5-16k
License: llama2

Details

Vicuna Model Card

This model is mirrored from the corresponding repository at https://huggingface.co/lmsys/vicuna-7b-v1.5-16k

Model Details

Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.

  • Developed by: LMSYS
  • Model type: An auto-regressive language model based on the transformer architecture
  • License: Llama 2 Community License Agreement
  • Finetuned from model: Llama 2

Model Sources

  • Repository: https://github.com/lm-sys/FastChat
  • Blog: https://lmsys.org/blog/2023-03-30-vicuna/
  • Paper: https://arxiv.org/abs/2306.05685
  • Demo: https://chat.lmsys.org/

Uses

The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

How to Get Started with the Model

  • Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
  • APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api
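Beyond the CLI and APIs above, the model can also be prompted directly. As a rough illustration only (not taken from this card), the sketch below builds a single-turn prompt in the conversation format FastChat uses for Vicuna v1.5; `build_vicuna_prompt` is a hypothetical helper, and the exact template should be verified against FastChat's `conversation.py`.

```python
# Assumed system prompt for Vicuna v1.5; check FastChat's
# conversation.py (the vicuna_v1.1 template) for the canonical text.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence "
    "assistant. The assistant gives helpful, detailed, and polite "
    "answers to the user's questions."
)

def build_vicuna_prompt(user_message: str) -> str:
    """Hypothetical helper: format one user turn for generation.

    The model is expected to continue the text after "ASSISTANT:".
    """
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

prompt = build_vicuna_prompt("What is linear RoPE scaling?")
```

The resulting string would then be passed to whichever backend serves the weights (FastChat's CLI, an OpenAI-compatible endpoint, or a local Hugging Face pipeline).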

Training Details

Vicuna v1.5 (16k) is fine-tuned from Llama 2 with supervised instruction fine-tuning and linear RoPE scaling. The training data is around 125K conversations collected from ShareGPT.com. These conversations are packed into sequences that contain 16K tokens each. See more details in the "Training Details of Vicuna Models" section in the appendix of this paper.
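The linear RoPE scaling mentioned above (also called position interpolation) can be sketched in a few lines. This is an illustrative simplification, not the training code: it divides each position index by a constant factor so that a 16K-token window maps back into the 4K positional range Llama 2 was pretrained on (16384 / 4096 = 4).

```python
def rope_angles(position, dim, base=10000.0, scaling_factor=1.0):
    """Rotary-embedding angles for a single position.

    Linear RoPE scaling divides the position index by a constant
    factor, compressing long contexts into the pretrained range.
    """
    pos = position / scaling_factor
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# 16K context over a 4K-pretrained base implies a factor of 4.
factor = 16384 / 4096

# With scaling, position 8192 yields the same angles as
# unscaled position 2048 (8192 / 4 = 2048).
scaled = rope_angles(8192, 128, scaling_factor=factor)
unscaled = rope_angles(2048, 128)
```

In practice this corresponds to the `rope_scaling` linear mode in Hugging Face `transformers`, but the snippet above only demonstrates the idea.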

Evaluation

Evaluation Results

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this paper and leaderboard.

Differences between Vicuna versions

See vicuna_weights_version.md
