Llama-2-7b-Chat-GGUF
This repo contains GGUF format model files for Llama-2-7b-Chat.
About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
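GGUF files are easy to recognize after download: every GGUF file begins with the 4-byte magic `GGUF`, as defined by the format specification. A minimal sanity-check sketch (the helper name `is_gguf` is our own, not part of any library):

```python
GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file, per the spec

def is_gguf(path: str) -> bool:
    """Return True if the file at `path` starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC
```

This only checks the header, not the full metadata layout, but it is enough to catch a truncated or mislabeled download.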
Supported quantization methods:
- Q4_K_M
More quantization methods will be added in the future; contact us if you need support for other quantizations.
Example code
Install packages
pip install "xinference[ggml]>=0.4.3"
To run with GPU acceleration, refer to the installation documentation.
Start a local instance of Xinference
xinference -p 9997
Launch and inference
from xinference.client import Client

client = Client("http://localhost:9997")
model_uid = client.launch_model(
    model_name="llama-2-chat",
    model_format="ggufv2",
    model_size_in_billions=7,
    quantization="Q4_K_M",
)
model = client.get_model(model_uid)

chat_history = []
prompt = "What is the largest animal?"
model.chat(
    prompt,
    chat_history=chat_history,
    generate_config={"max_tokens": 1024},
)
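The `chat_history` list carries prior turns so the model keeps conversational context across calls. Entries are OpenAI-style role/content dicts; a small sketch of how a multi-turn history can be maintained between calls (`append_turn` is a hypothetical helper of ours, not part of the Xinference client API):

```python
def append_turn(chat_history: list, prompt: str, reply: str) -> list:
    """Record one user/assistant exchange as OpenAI-style message dicts."""
    chat_history.append({"role": "user", "content": prompt})
    chat_history.append({"role": "assistant", "content": reply})
    return chat_history

history = []
append_turn(history, "What is the largest animal?", "The blue whale.")
# `history` can now be passed back via chat_history= on the next model.chat call
```

Keeping the history on the client side lets you trim or summarize old turns before they exceed the model's context window.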
More information
Xinference: Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you can run inference with any open-source language, speech recognition, or multimodal model, whether in the cloud, on-premises, or on your laptop.