This model has been quantized using GPTQModel.
- bits: 4
- group_size: 128
- desc_act: false
- static_groups: false
- sym: true
- lm_head: false
- damp_percent: 0.005
- true_sequential: true
- quant_method: "gptq"
- checkpoint_format: "gptq"
- meta:
  - quantizer: "gptqmodel:0.9.0"
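
For reference, the settings above correspond to a quantization config along these lines (a sketch reconstructed from the list; the exact file name and field layout may vary between GPTQModel versions):

```json
{
  "bits": 4,
  "group_size": 128,
  "desc_act": false,
  "static_groups": false,
  "sym": true,
  "lm_head": false,
  "damp_percent": 0.005,
  "true_sequential": true,
  "quant_method": "gptq",
  "checkpoint_format": "gptq",
  "meta": {
    "quantizer": "gptqmodel:0.9.0"
  }
}
```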