WidsoMenter-8B

Anonymous user, July 31, 2024

Technical Information

Open-source address
https://modelscope.cn/models/linjh1118/WidsoMenter-8B

Project Details

Model Introduction

  • WidsoMentor, developed by JiMengZhiChuang, is an AI-assisted education large model built from scratch and trained on 2.5 million high-quality research papers from arXiv in the field of artificial intelligence.
  • It incorporates several instruction-generation methods, such as Bonito Instruct, Self Instruct, and Involve Instruct, achieving an organic fusion of these approaches through a gating mechanism.
  • It embeds RAG (Retrieval-Augmented Generation) technology to ensure the accuracy and timeliness of WidsoMentor's responses.
  • It adopts an Agent approach to integrate high-quality reference webpages into its answers, providing additional knowledge details beyond the responses themselves.
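The gated fusion of instruction sources mentioned above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the generator labels and the scalar gate scores are assumptions for demonstration.

```python
import math

def softmax(scores):
    """Normalize raw gate scores into mixing weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def gate_instructions(candidates, gate_scores):
    """Rank instruction candidates produced by several generators
    (e.g. Bonito-, Self-, and Involve-Instruct style pipelines)
    by their softmax gate weight."""
    weights = softmax(gate_scores)
    return sorted(zip(candidates, weights), key=lambda cw: cw[1], reverse=True)

candidates = [
    "bonito: summarize the paper's abstract",
    "self-instruct: list the paper's contributions",
    "involve-instruct: compare the method with prior work",
]
# Hypothetical gate scores; in practice a learned gate would produce these.
ranked = gate_instructions(candidates, [0.2, 1.5, 0.7])
print(ranked[0][0])
```

In a real pipeline the gate would be a small learned model scoring each candidate against the source paper, but the ranking-by-weight mechanics are the same.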

Performance on Benchmarks

We conducted tests on WidsoMentor using authoritative datasets in various domains, including General and Mathematics.

General Domain

We evaluated WidsoMentor on three authoritative datasets in the general domain: C-Eval, MMLU, and CMMLU. These datasets cover comprehensive evaluations of Chinese and English base models, as well as comprehension and reasoning abilities in Chinese contexts.
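As a rough illustration of the 5-shot protocol these benchmarks use, a prompt is assembled from five solved multiple-choice exemplars followed by the unanswered test item. This is a generic sketch; the exact prompt templates of C-Eval, MMLU, and CMMLU differ in wording and language.

```python
def format_item(question, choices):
    """Render one multiple-choice item with lettered options."""
    lettered = "\n".join(f"{letter}. {c}" for letter, c in zip("ABCD", choices))
    return f"Question: {question}\n{lettered}\nAnswer:"

def build_few_shot_prompt(exemplars, test_q, test_choices, k=5):
    """Concatenate k solved exemplars with the test question."""
    parts = [format_item(q, choices) + f" {answer}"
             for q, choices, answer in exemplars[:k]]
    parts.append(format_item(test_q, test_choices))  # left unanswered
    return "\n\n".join(parts)

# Dummy exemplars purely for demonstration.
exemplars = [(f"example question {i}", ["w", "x", "y", "z"], "A") for i in range(5)]
prompt = build_few_shot_prompt(exemplars, "test question", ["a", "b", "c", "d"])
print(prompt)
```

The model's next-token prediction after the final "Answer:" is then compared against the gold letter.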

Performance of WisdoMentor-8B

Model            C-Eval (5-shot)   MMLU (5-shot)   CMMLU (5-shot)
GPT-4            68.40             83.93           70.33
GPT-3.5 Turbo    51.10             68.54           54.06
LLaMA-7B         27.10             35.10           26.75
LLaMA2-7B        28.90             45.73           31.38
MPT-7B           27.15             27.93           26.00
Falcon-7B        24.23             26.03           25.66
ChatGLM2-6B      50.20             45.90           49.00
WisdoMentor-8B   -                 -               -

Math Ability

Code Ability


Inference and Deployment

Next, we will demonstrate inference using FastChat, Transformers, ModelScope, and a Web demo. The dialogue model adopts the chatml format to support general dialogue and agent applications. For best usability, please install the dependencies as instructed below before running inference with Transformers or ModelScope.
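The chatml format mentioned above wraps each conversation turn in role-tagged delimiters. A minimal prompt builder might look like this; it is a sketch of the general chatml convention, so consult the model's tokenizer configuration for the exact template it expects.

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {role, content} dicts in chatml form,
    optionally appending an open assistant turn for generation."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
             for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful research assistant."},
    {"role": "user", "content": "What is the difference between BERT and GPT?"},
])
print(prompt)
```

The model continues the text after the final `<|im_start|>assistant` marker and emits `<|im_end|>` when the turn is complete.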

Install Dependencies


git clone https://www.modelscope.cn/linjh1118/WisdoMentor-8b
conda create --name WisdoMentor python=3.11.8
conda activate WisdoMentor
pip install -r requirements.txt

Deploying Inference with FastChat

git clone https://www.modelscope.cn/linjh1118/WisdoMentor-8b path_to_local_WisdoMentor-8b
cd path_to_local_WisdoMentor-8b
python -m fastchat.serve.cli --model-path path_to_local_WisdoMentor-8b
Q: Explain the differences between BERT and GPT.
A: BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer) are both pre-trained natural language processing models, but they differ in their pre-training tasks and application scenarios. BERT is pre-trained with a bidirectional encoder, which lets it capture the full context of a sentence and gives it strong language-understanding and semantic-representation abilities; its pre-training also includes predicting the relationship between two sentence segments. BERT performs well on tasks such as text classification, sentiment analysis, named entity recognition, text matching, question answering, and text summarization. GPT, unlike BERT, uses a unidirectional architecture and is pre-trained to predict the next token in a text. GPT also performs well on tasks such as text generation, dialogue systems, machine translation, and question answering. Although their pre-training tasks differ, both are pre-trained language models that can be fine-tuned for specific tasks to achieve better performance; the choice between BERT and GPT depends on the requirements and goals of the task at hand.

Deploying Inference with ModelScope

Modify the code below to load the WisdoMentor-8b model from ModelScope, taking your local computational resources into account. You can replace the model name with different sizes of WisdoMentor.

import torch
from modelscope import snapshot_download, AutoTokenizer, AutoModelForCausalLM

# Download the model weights locally, then load tokenizer and model.
model_dir = snapshot_download('linjh1118/WisdoMentor-8b')
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, torch_dtype=torch.float16)
model = model.eval()
response, history = model.chat(tokenizer, "请介绍下Bert和GPT的区别", history=[])
print(response)
response, history = model.chat(tokenizer, "请介绍下Self-Attention机制", history=history)
print(response)

Deploying Inference with Hugging Face

Modify the code below to load the WisdoMentor-8b model from Hugging Face, taking your local computational resources into account. You can replace the model name with different sizes of WisdoMentor.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("linjh1118/WisdoMentor-8b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("linjh1118/WisdoMentor-8b", device_map="auto", trust_remote_code=True, torch_dtype=torch.float16)
# Quantized loading (first: pip install -U bitsandbytes)
# 8-bit: model = AutoModelForCausalLM.from_pretrained("linjh1118/WisdoMentor-8b", device_map="auto", trust_remote_code=True, load_in_8bit=True)
# 4-bit: model = AutoModelForCausalLM.from_pretrained("linjh1118/WisdoMentor-8b", device_map="auto", trust_remote_code=True, load_in_4bit=True)
model = model.eval()
response, history = model.chat(tokenizer, "请介绍下Bert和GPT的区别", history=[])
print(response)
response, history = model.chat(tokenizer, "请介绍下Self-Attention机制", history=history)
print(response)
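The commented 8-bit and 4-bit options above can also be expressed through `BitsAndBytesConfig`, the Transformers configuration object for bitsandbytes quantization. This is a configuration sketch, not an officially documented recipe for this model; verify the options against your installed Transformers version.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with fp16 compute; requires the bitsandbytes package.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "linjh1118/WisdoMentor-8b",
    device_map="auto",
    trust_remote_code=True,
    quantization_config=bnb_config,
)
```

Passing a single `quantization_config` keeps the precision settings in one place instead of spreading `load_in_*` flags across calls.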

Declaration and Agreement

Declaration

We hereby declare that our development team has not developed any applications based on the WisdoMentor model, whether on iOS, Android, web, or any other platform. We strongly urge all users not to utilize the WisdoMentor model for any activities that may jeopardize national or social security or violate the law. Furthermore, we request that users not deploy the WisdoMentor model in internet services without proper security review and registration. We hope that all users will abide by these principles to ensure that technological advancement occurs in a regulated and lawful environment.

