Nexusflow HF - Nexusflow Discord - NexusRaven-V2 blog post - Prompting Notebook CoLab - Leaderboard - Real-World Demo - NexusRaven-V2-13B Github
NexusRaven is an open-source and commercially viable function calling LLM that surpasses the state of the art in function calling capabilities.

Please check out the links above!

NexusRaven-V2 accepts a list of Python functions. These Python functions can do anything (including sending GET/POST requests to external APIs!). The two requirements are the Python function signature and an appropriate docstring to generate the function call.

NexusRaven-V2 is capable of generating deeply nested function calls, parallel function calls, and simple single calls. It can also justify the function calls it generates. If you would like to generate the call only, please set a stop criterion of "\<bot_end>". Otherwise, please allow NexusRaven-V2 to run until its stop token (i.e. "\</s>").

Please refer to our notebook, How-To-Prompt.ipynb, for more advanced tutorials on using NexusRaven-V2!

When handling irrelevant user queries, users have noticed that specifying a "no-op" function with arguments works best. For example, something like this might work. Please ensure you provide an argument to this function, as Raven works best on functions with arguments.

You can run the model on a GPU using the following code. If you would like to prevent generation of the explanation of the function call (for example, to save on inference tokens), please set a stopping criterion of \<bot_end>.

Please follow this prompting template to maximize the performance of RavenV2. For a deeper dive into the results, please see our Github README.

This model was trained on commercially viable data and is licensed under the Nexusflow community license. We thank the CodeLlama team for their amazing models! Please join our Discord Channel to reach out with any issues and comments!
Introducing NexusRaven-V2-13B
NexusRaven-V2 model usage
NexusRaven-V2's Capabilities
Quick Start Prompting Guide
`func(dummy_arg)` is preferred over `func()`, as this can help accuracy.

```python
def no_relevant_function(user_query: str):
    """
    Call this when no other provided function can be called to answer the user query.

    Args:
        user_query: The user_query that cannot be answered by any other function calls.
    """
```
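To illustrate how a list of candidate functions can be assembled into a prompt, the sketch below serializes each function's source (signature plus docstring) into `Function:` blocks followed by the user query. The exact template, including markers such as `<human_end>`, is defined in How-To-Prompt.ipynb; the layout used here is an illustrative assumption, not the official one.

```python
# Source of the no-op function, kept as a string so it can be pasted into a prompt.
NO_RELEVANT_FUNCTION = '''
def no_relevant_function(user_query: str):
    """
    Call this when no other provided function can be called to answer the user query.

    Args:
        user_query: The user_query that cannot be answered by any other function calls.
    """
'''.strip()


def build_prompt(function_sources, user_query):
    """Assemble candidate function sources and a user query into one prompt string.

    NOTE: the "Function:" block layout and the trailing "<human_end>" marker are
    assumptions for illustration; see How-To-Prompt.ipynb for the exact template.
    """
    blocks = [f"Function:\n{src}\n" for src in function_sources]
    return "\n".join(blocks) + f"\nUser Query: {user_query}<human_end>"


prompt = build_prompt([NO_RELEVANT_FUNCTION], "Tell me a joke")
```

Including the no-op function alongside your real functions gives Raven a safe call to emit when no other function fits the query.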
Quickstart
````python
from modelscope import AutoTokenizer, Model
from modelscope import snapshot_download
import torch
from typing import List, Tuple

BOS_TOKEN = '<s>'
EOS_TOKEN = '</s>'
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
SYSTEM_PROMPT = """You are a friendly chatbot"""

def chat_multiturn_seq_format(
    message: str,
    history: List[Tuple[str, str]] = [],
):
    """
    ```
    <bos>[INST] B_SYS SystemPrompt E_SYS Prompt [/INST] Answer <eos>
    <bos>[INST] Prompt [/INST] Answer <eos>
    <bos>[INST] Prompt [/INST]
    ```
    As the format auto-adds <bos>, please turn off add_special_tokens with `tokenizer.add_special_tokens = False`
    Inputs:
        message: the current prompt
        history: list of tuples of previous conversation turns: [(message1, response1), (message2, response2)]
    Outputs:
        full_prompt: the prompt that should go into the chat model
    e.g.:
        full_prompt = chat_multiturn_seq_format("Hello world")
        output = model.generate(tokenizer.encode(full_prompt, add_special_tokens=False), ...)
    """
    text = ''
    for i, (prompt, res) in enumerate(history):
        if i == 0:
            text += f"{BOS_TOKEN}{B_INST} {B_SYS} {SYSTEM_PROMPT} {E_SYS} {prompt} {E_INST}"
        else:
            text += f"{BOS_TOKEN}{B_INST} {prompt} {E_INST}"
        if res is not None:
            text += f" {res} {EOS_TOKEN} "
    if len(history) == 0 or text.strip() == '':
        text = f"{BOS_TOKEN}{B_INST} {B_SYS} {SYSTEM_PROMPT} {E_SYS} {message} {E_INST}"
    else:
        text += f"{BOS_TOKEN}{B_INST} {message} {E_INST}"
    return text
````
```python
local_dir = snapshot_download("AI-ModelScope/NexusRaven-V2-13B", revision='master')
model = Model.from_pretrained(local_dir, revision='master', device_map='auto', torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(local_dir, revision='master')

full_prompt = chat_multiturn_seq_format("What's the weather like in Seattle right now?")
inputs = tokenizer(full_prompt, add_special_tokens=False, return_tensors="pt")

# Generate
generate_ids = model.generate(inputs.input_ids.to(model.device), max_length=512, do_sample=False, temperature=0.001, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
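If you let the model run to its stop token instead of stopping at `"<bot_end>"`, the decoded text contains both the function call and the model's justification. A minimal post-processing sketch for separating the two is shown below; the sample decoded string is hypothetical and only illustrates the split around the `<bot_end>` marker.

```python
def split_raven_output(decoded: str):
    """Separate the generated function call from the trailing explanation.

    Everything before the first "<bot_end>" marker is treated as the call;
    anything after it is the justification (empty if generation was stopped
    early with a "<bot_end>" stopping criterion).
    """
    call, _sep, explanation = decoded.partition("<bot_end>")
    return call.strip(), explanation.strip()


# Hypothetical decoded output, for illustration only:
decoded = 'Call: get_weather(city="Seattle")<bot_end>Thought: The user asked about current weather.'
call, explanation = split_raven_output(decoded)
```

Stopping at `"<bot_end>"` during generation saves the inference tokens the explanation would consume; post-hoc splitting, as above, keeps both parts available.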
Using with OpenAI FC Schematics
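One way to bridge OpenAI's function-calling schema and Raven's input format is to render an OpenAI-style JSON function definition into the Python-signature-plus-docstring form that Raven consumes. The sketch below is an assumption about how such a conversion might look, not an official utility; `openai_schema_to_python` and its output layout are hypothetical.

```python
def openai_schema_to_python(schema: dict) -> str:
    """Render an OpenAI function-calling schema as a Python function stub.

    Produces a signature plus docstring, the two things NexusRaven-V2 needs
    to generate a call. Parameter types are omitted for brevity.
    """
    props = schema.get("parameters", {}).get("properties", {})
    args = ", ".join(props)
    lines = [f'def {schema["name"]}({args}):']
    lines.append('    """')
    lines.append(f'    {schema.get("description", "")}')
    lines.append('')
    lines.append('    Args:')
    for name, spec in props.items():
        lines.append(f'        {name}: {spec.get("description", "")}')
    lines.append('    """')
    return "\n".join(lines)


# Canonical OpenAI-style weather example, converted to a Raven-ready stub.
weather_schema = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City and state, e.g. San Francisco, CA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"], "description": "Temperature unit"},
        },
        "required": ["location"],
    },
}

stub = openai_schema_to_python(weather_schema)
```

The resulting stub can then be placed in the prompt just like any hand-written function definition.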
Evaluation
Limitations
License
References
@misc{rozière2023code,
      title={Code Llama: Open Foundation Models for Code},
      author={Baptiste Rozière and Jonas Gehring and Fabian Gloeckle and Sten Sootla and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jérémy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Défossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve},
      year={2023},
      eprint={2308.12950},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
Citation
@misc{nexusraven,
      title={NexusRaven-V2: Surpassing GPT-4 for Zero-shot Function Calling},
      author={Nexusflow.ai team},
      year={2023},
      url={https://nexusflow.ai/blogs/ravenv2}
}
Contact