Introduction
cvx-coder improves an LLM's ability to write CVX code and answer CVX questions. It is a fine-tuned version of phi-3, trained on CVX documentation, synthetic code, and forum conversation data.
Getting Started
First, download the model with Git:
# Download the model via Git
git clone https://www.modelscope.cn/tommy1235/cvx-coder.git
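Alternatively, if the modelscope SDK is installed (pip install modelscope), the same snapshot can be fetched programmatically. A minimal sketch, assuming the model id tommy1235/cvx-coder taken from the repository URL above:
# Alternative: download through the ModelScope SDK instead of git
from modelscope import snapshot_download

m_path = snapshot_download('tommy1235/cvx-coder')  # returns the local cache directory
print(m_path)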
Then run the example code below:
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Path to the model directory cloned above
m_path = "your_path/cvx-coder"

model = AutoModelForCausalLM.from_pretrained(
    m_path,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(m_path)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 2000,
    "return_full_text": False,
    "temperature": 0,  # ignored under greedy decoding (do_sample=False)
    "do_sample": False,
}

content = '''my problem is not convex, can i use cvx? if not, what should i do, be specific.'''
messages = [
    {"role": "user", "content": content},
]

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
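The same pipeline also handles code-generation requests. A minimal sketch with a hypothetical CVX prompt (the prompt text is illustrative, not from the model card), reusing pipe and generation_args from above:
# Hypothetical prompt asking the model for CVX code
content = '''write CVX code for the box-constrained least-squares problem:
minimize ||A*x - b||_2 subject to 0 <= x <= 1'''
messages = [
    {"role": "user", "content": content},
]
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])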
To chat with the model in an interactive UI, run the code below:
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Path to the model directory cloned above
m_path = "your_path/cvx-coder"

model = AutoModelForCausalLM.from_pretrained(
    m_path,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(m_path)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 2000,
    "return_full_text": False,
    "temperature": 0,  # ignored under greedy decoding (do_sample=False)
    "do_sample": False,
}

def assistant_talk(message, history):
    # history arrives as [user, assistant] pairs (gradio's default format);
    # rebuild it as the chat-message list the pipeline expects.
    messages = []
    for user_msg, assistant_msg in history:
        messages += [
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": assistant_msg},
        ]
    messages.append({"role": "user", "content": message})
    output = pipe(messages, **generation_args)
    return output[0]['generated_text']

gr.ChatInterface(assistant_talk).launch()
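launch() serves the UI locally and prints the address (typically http://127.0.0.1:7860). If a temporary public link is needed, gradio's share flag can be used; a one-line variant:
gr.ChatInterface(assistant_talk).launch(share=True)  # creates a temporary public *.gradio.live URL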