baichuan2_multirounds

Anonymous user, 2024-07-31
Category: AI, Baichuan
Open-source repository: https://modelscope.cn/models/asatala/baichuan2_multirounds

Model details

Training procedure

The following bitsandbytes quantization config was used during training:

  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: float16
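The settings above map directly onto a `BitsAndBytesConfig` in the Hugging Face `transformers` stack. A minimal sketch of reconstructing the same 4-bit NF4 config (the card does not include code, so this is an illustration, not the author's training script):

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruct the quantization config listed above: 4-bit NF4 weights,
# double quantization enabled, float16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```

This config would typically be passed as `quantization_config=` to `AutoModelForCausalLM.from_pretrained` before attaching LoRA adapters (the standard QLoRA recipe).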


Framework versions

  • PEFT 0.5.0

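Since the repository ships a PEFT adapter rather than full weights, it must be loaded on top of its Baichuan2 base model. A hedged sketch with PEFT 0.5.0 (the base-model id below is an assumption; this card does not state which Baichuan2 variant was fine-tuned):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "baichuan-inc/Baichuan2-7B-Chat"      # assumed base model, not stated in the card
adapter_id = "asatala/baichuan2_multirounds"    # this repository's adapter

# Quantize the base model the same way it was quantized during training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, trust_remote_code=True
)

# Attach the LoRA adapter to the quantized base model.
model = PeftModel.from_pretrained(base, adapter_id)
```

Running this requires downloading the base weights; on ModelScope, `modelscope.snapshot_download` can be used to fetch the adapter locally first.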
