Yuanfang Agent
Introduction
Model task: given a candidate set of APIs, design and optimize the LLM's planning capability and plugin mechanism, completing a specific task in a given scenario through LLM SFT, prompt optimization, and related techniques.
Training method: LoRA fine-tuning on top of the bloom-7b1 base model.
Training dataset: https://modelscope.cn/datasets/modelscope/mshackathon23agenttrain_dev/files
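To make the plugin mechanism concrete: at inference time the agent parses the LLM's planning output into a call against one of the candidate APIs. The sketch below is only an illustration of that step; the `Action:` / `Action Input:` format and the `weather_api` name are hypothetical, not the actual protocol used by this model.

```python
import json
import re

def parse_api_call(llm_output: str):
    """Extract an API call from the model's planning output.

    Expects hypothetical 'Action: <api_name>' and 'Action Input: <json>'
    lines; returns (api_name, args), or None if no call is present.
    """
    action = re.search(r"Action:\s*(\S+)", llm_output)
    action_input = re.search(r"Action Input:\s*(\{.*\})", llm_output, re.DOTALL)
    if not action or not action_input:
        return None
    return action.group(1), json.loads(action_input.group(1))

output = "Thought: the user wants the weather\nAction: weather_api\nAction Input: {\"city\": \"Beijing\"}"
parse_api_call(output)  # ("weather_api", {"city": "Beijing"})
```

A real agent loop would dispatch the parsed call to the matching API from the candidate set and feed the result back into the prompt for the next planning round.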
Online Demo Platform
See the following ModelScope Studio:
https://modelscope.cn/studios/jiapingW/YuanFang/summary
Usage
1. Download model_state_dict.pt and configuration.json from the model files.
2. Run the following code:
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Folder containing the downloaded model_state_dict.pt and configuration.json
peft_model_id = "/path/to/local/folder"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the bloom-7b1 base model in 8-bit, and its tokenizer
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")

# Apply the fine-tuned LoRA weights on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)
3. Model loading 2.0 (alternative method)
from transformers import AutoModel, AutoTokenizer, AutoModelForCausalLM
from peft import get_peft_model, LoraConfig, PeftModel, PeftConfig
# Load the PEFT config
# Download adapter_config.json and adapter_model.bin to a local outputs folder
peft_model_id = "outputs/"
peft_config = PeftConfig.from_pretrained(peft_model_id)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
# Load the base model and combine it with the fine-tuned weights
model = AutoModelForCausalLM.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)
model = model.to('cuda')
model.eval()
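Once a model and tokenizer are loaded (via either step 2 or step 3), inference goes through the standard transformers `generate` API. The helper below is a minimal sketch, not part of the original code; the prompt template the fine-tuned model expects is not documented here, so the call in the comment is only an assumption.

```python
import torch

def generate_reply(model, tokenizer, prompt, max_new_tokens=256):
    """Greedy generation helper; `model` and `tokenizer` come from the loading code above."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Hypothetical usage once the model is loaded:
# reply = generate_reply(model, tokenizer, "...")
```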