How to use
Cloning the repo may be inefficient. Instead, you can manually download just the GGUF file you need, or fetch it with modelscope (`pip install modelscope`) as shown below:
```python
from modelscope.hub.file_download import model_file_download

model_dir = model_file_download(
    model_id='X-D-Lab/MindChat-Qwen2-4B_GGUF',
    file_path='mindchat_qwen2_4b.gguf',
    revision='master',
    cache_dir='path/to/local/dir',
)
```
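If you prefer to run the GGUF file directly from Python rather than the llama.cpp CLI, a minimal sketch with the third-party llama-cpp-python package (`pip install llama-cpp-python`) might look like the following. This is not part of the official instructions: the context length, chat format, and prompt are illustrative assumptions, and `model_dir` is assumed to hold the local file path returned by `model_file_download` above.

```python
from llama_cpp import Llama

# model_dir is assumed to be the local path to mindchat_qwen2_4b.gguf
llm = Llama(
    model_path=model_dir,
    n_ctx=4096,            # illustrative context length
    chat_format="chatml",  # assumption: Qwen2-style ChatML template
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "你好，最近我压力很大。"}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```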
We demonstrate how to use llama.cpp to run MindChat-Qwen2-4B:
```bash
./main -m mindchat_qwen2_4b.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt
```
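Note that more recent llama.cpp releases have renamed the `main` example binary to `llama-cli`; if your build no longer contains `main`, substitute `llama-cli` in the command above and check its `--help` output, as some flags such as `-cml` have changed.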
Alternatively, to clone the whole repo over HTTP:

```bash
git clone https://www.modelscope.cn/X-D-Lab/MindChat-Qwen2-4B_GGUF.git
```