This model has been quantized using GPTQModel with the configuration below (a loading sketch follows the list).
- bits: 4
- group_size: 128
- desc_act: true
- static_groups: false
- sym: true
- lm_head: false
- damp_percent: 0.01
- true_sequential: true
- model_name_or_path:
- model_file_base_name: model
- quant_method: gptq
- checkpoint_format: gptq
- meta:
- quantizer: gptqmodel:0.9.2
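
For reference, here is a minimal sketch of how this configuration maps onto the GPTQModel Python API. The `QuantizeConfig` fields mirror the list above; the `from_quantized` loader call, the `device` argument, and the repo ID placeholder are assumptions based on the 0.9.x series and may differ in other releases.

```python
# A minimal sketch, assuming the GPTQModel 0.9.x Python API;
# names and signatures may differ in newer releases.
from gptqmodel import GPTQModel, QuantizeConfig

# The settings listed above, expressed as a QuantizeConfig.
# This is what would be passed when quantizing a base model.
quant_config = QuantizeConfig(
    bits=4,                # 4-bit weights
    group_size=128,        # one scale/zero-point per group of 128 weights
    desc_act=True,         # quantize columns in activation order
    static_groups=False,
    sym=True,              # symmetric quantization
    lm_head=False,         # lm_head is left unquantized
    damp_percent=0.01,     # Hessian dampening factor
    true_sequential=True,  # quantize layers sequentially within each block
)

# Loading this already-quantized checkpoint for inference;
# "<this-repo-id>" is a placeholder for the actual model path or repo ID.
model = GPTQModel.from_quantized("<this-repo-id>", device="cuda:0")
```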