# C4AI-Command-R-v01-GGUF
## Original Model

[CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01)
## Run with LlamaEdge

- LlamaEdge version: coming soon
- Context size: `8192`
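Until the LlamaEdge release with Command-R support lands, the commands below are a minimal sketch of how GGUF models are typically run with LlamaEdge's `llama-chat` app. The `--prompt-template` value is a placeholder, not a real template name: Command-R uses its own chat format, and the matching template had not been published at the time of writing.

```bash
# Fetch the generic LlamaEdge chat app (standard release asset URL).
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm

# Run one of the quantized models from the table below.
# <command-r-template> is a placeholder; replace it with the template
# name announced in the LlamaEdge release that adds Command-R support.
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:c4ai-command-r-v01-Q4_K_M.gguf \
  llama-chat.wasm \
  --prompt-template <command-r-template> \
  --ctx-size 8192
```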
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| c4ai-command-r-v01-Q2_K.gguf | Q2_K | 2 | 13.8 GB | smallest, significant quality loss - not recommended for most purposes |
| c4ai-command-r-v01-Q3_K_L.gguf | Q3_K_L | 3 | 19.1 GB | small, substantial quality loss |
| c4ai-command-r-v01-Q3_K_M.gguf | Q3_K_M | 3 | 17.6 GB | very small, high quality loss |
| c4ai-command-r-v01-Q3_K_S.gguf | Q3_K_S | 3 | 15.9 GB | very small, high quality loss |
| c4ai-command-r-v01-Q4_0.gguf | Q4_0 | 4 | 20.2 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| c4ai-command-r-v01-Q4_K_M.gguf | Q4_K_M | 4 | 21.5 GB | medium, balanced quality - recommended |
| c4ai-command-r-v01-Q4_K_S.gguf | Q4_K_S | 4 | 20.4 GB | small, greater quality loss |
| c4ai-command-r-v01-Q5_0.gguf | Q5_0 | 5 | 24.3 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| c4ai-command-r-v01-Q5_K_M.gguf | Q5_K_M | 5 | 25.0 GB | large, very low quality loss - recommended |
| c4ai-command-r-v01-Q5_K_S.gguf | Q5_K_S | 5 | 24.3 GB | large, low quality loss - recommended |
| c4ai-command-r-v01-Q6_K.gguf | Q6_K | 6 | 28.7 GB | very large, extremely low quality loss |
| c4ai-command-r-v01-Q8_0.gguf | Q8_0 | 8 | 37.2 GB | very large, extremely low quality loss - not recommended |
Quantized with llama.cpp b2450
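To fetch a single quantization from this repo, one option is the `huggingface-cli` tool from the `huggingface_hub` package; the Q4_K_M file below is just an example pick.

```bash
# Install the Hugging Face CLI if it is not already available.
pip install -U huggingface_hub

# Download one quantization (Q4_K_M shown as an example) into the
# current directory rather than the shared Hugging Face cache.
huggingface-cli download second-state/C4AI-Command-R-v01-GGUF \
  c4ai-command-r-v01-Q4_K_M.gguf \
  --local-dir .
```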