Qwen2-7B-Instruct-FP8
# Download the model via the ModelScope SDK
from modelscope import snapshot_download
model_dir = snapshot_download('HowieJun/Qwen2-7B-Instruct-FP8')
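`snapshot_download` returns the local path of the checkpoint in `model_dir`, which can be passed straight to vLLM. Below is a minimal offline-generation sketch, assuming the installed vLLM build can load this FP8 checkpoint (as the vLLM 0.4.2 serving command below demonstrates); the prompt and sampling parameters are illustrative only:

```python
from modelscope import snapshot_download
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Reuses the cached download from the snippet above.
model_dir = snapshot_download('HowieJun/Qwen2-7B-Instruct-FP8')

# Build the chat prompt with the model's own chat template.
tokenizer = AutoTokenizer.from_pretrained(model_dir)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Briefly explain FP8 quantization."},  # illustrative prompt
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Load the FP8 checkpoint and generate.
llm = LLM(model=model_dir, gpu_memory_utilization=0.9)
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```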
Inference with vLLM (0.4.2)
OpenAI API Server
python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8001 --gpu-memory-utilization 0.9 --served-model-name Qwen2-7B-Instruct-FP8 --model /home/harvey/PM/Qwen2-7B-Instruct-FP8
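Once the server is up, it exposes an OpenAI-compatible endpoint on port 8001. A minimal client sketch using the `openai` Python package (>= 1.0), assuming the server runs on the same machine; the prompt is illustrative, and the placeholder API key is accepted because the command above does not set `--api-key`:

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8001/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen2-7B-Instruct-FP8",  # must match --served-model-name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give a one-sentence summary of FP8 inference."},
    ],
    temperature=0.7,
    max_tokens=256,
)
print(response.choices[0].message.content)
```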
Benchmark
Qwen2-7B-FP16
============ Serving Benchmark Result ============
**Successful requests: 1000**
**Benchmark duration (s): 114.53**
**Total input tokens: 217393**
**Total generated tokens: 181935**
**Request throughput (req/s): 8.73**
**Input token throughput (tok/s): 1898.17**
**Output token throughput (tok/s): 1588.57**
---------------Time to First Token----------------
Qwen2-7B-FP8
============ Serving Benchmark Result ============
**Successful requests: 1000**
**Benchmark duration (s): 110.69**
**Total input tokens: 217393**
**Total generated tokens: 180916**
**Request throughput (req/s): 9.03**
**Input token throughput (tok/s): 1964.01**
**Output token throughput (tok/s): 1634.46**
---------------Time to First Token----------------
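Under the same 1000-request load on identical input prompts, the FP8 checkpoint delivers modestly higher throughput than the FP16 baseline (9.03 vs. 8.73 req/s and 1634.46 vs. 1588.57 output tok/s, roughly a 3% gain). The numbers above follow the output format of vLLM's `benchmarks/benchmark_serving.py` script; the invocation below is a hedged sketch against the server launched above, where the ShareGPT dataset path is an assumption and exact flag names can differ between vLLM versions:

```bash
python benchmarks/benchmark_serving.py \
    --backend vllm \
    --host 0.0.0.0 --port 8001 \
    --model Qwen2-7B-Instruct-FP8 \
    --tokenizer /home/harvey/PM/Qwen2-7B-Instruct-FP8 \
    --dataset-name sharegpt \
    --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json \
    --num-prompts 1000
```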