gte-Qwen2-7B-instruct

Categories: ai, qwen2, Pytorch, sentence-similarity, Qwen2, transformers, sentence-transformer, mteb
Repository: https://modelscope.cn/models/buaadreamer/gte-Qwen2-7B-instruct
License: apache-2.0


gte-Qwen2-7B-instruct

gte-Qwen2-7B-instruct is the latest model in the gte (General Text Embedding) model family, and it ranks No. 1 in both the English and Chinese evaluations on the Massive Text Embedding Benchmark (MTEB) as of June 16, 2024.

Recently, the Qwen team released the Qwen2 series models, and we have trained the gte-Qwen2-7B-instruct model based on the Qwen2-7B LLM. Compared to the gte-Qwen1.5-7B-instruct model, gte-Qwen2-7B-instruct uses the same training data and training strategies during the fine-tuning stage; the only difference is that the base model is upgraded to Qwen2-7B. Given the improvements of the Qwen2 series over the Qwen1.5 series, we can also expect consistent performance enhancements in the embedding models.

The model incorporates several key advancements:

  • Integration of bidirectional attention mechanisms, enriching its contextual understanding.
  • Instruction tuning, applied solely on the query side for streamlined efficiency.
  • Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks.

Model Information

  • Model Size: 7B
  • Embedding Dimension: 3584
  • Max Input Tokens: 32k

Requirements

transformers>=4.39.2
flash_attn>=2.5.6

Usage

Sentence Transformers

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)
# In case you want to reduce the maximum length:
model.max_seq_length = 8192

queries = [
    "how much protein should a female eat",
    "summit define",
]
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1  the highest point of a mountain : the top of a mountain. : 2  the highest level. : 3  a meeting or series of meetings between the leaders of two or more governments.",
]

query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)

scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
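
With sentence-transformers 3.0 and later, the library's built-in similarity helper can be used instead of the manual dot product. A minimal sketch reusing the embeddings computed above (the helper applies the model's configured similarity function, cosine by default):

# Requires sentence-transformers >= 3.0; uses the model's similarity function.
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)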

Observe config_sentence_transformers.json to see all pre-built prompt names. Otherwise, you can use model.encode(queries, prompt="Instruct: ...\nQuery: ") to use a custom prompt of your choice.
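
Continuing the Sentence Transformers example above, a minimal sketch of both options (assuming the prompts ship in config_sentence_transformers.json, as is standard for Sentence Transformers models; the custom task description below is only illustrative):

# Prompt names from config_sentence_transformers.json are exposed as model.prompts.
print(model.prompts)

# Alternatively, pass an instruction-style prompt of your own.
custom_prompt = (
    "Instruct: Given a web search query, retrieve relevant passages that answer the query\n"
    "Query: "
)
custom_query_embeddings = model.encode(queries, prompt=custom_prompt)
print(custom_query_embeddings.shape)  # (2, 3584)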

Transformers

import torch
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def last_token_pool(last_hidden_states: Tensor,
                    attention_mask: Tensor) -> Tensor:
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'


# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'how much protein should a female eat'),
    get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1  the highest point of a mountain : the top of a mountain. : 2  the highest level. : 3  a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents

tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)

max_length = 8192

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
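
Loading the 7B model in full fp32 precision needs roughly 26 GB of memory, so it is common to load it in half precision on GPU instead. Below is a minimal sketch, assuming a CUDA GPU and the accelerate package (for device_map="auto"); it reuses input_texts and last_token_pool from the example above:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
# Half precision roughly halves memory compared to fp32; device_map='auto'
# (provided by accelerate) places the weights on the available GPU(s).
model = AutoModel.from_pretrained(
    'Alibaba-NLP/gte-Qwen2-7B-instruct',
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map='auto',
)
model.eval()

batch_dict = tokenizer(input_texts, max_length=8192, padding=True,
                       truncation=True, return_tensors='pt').to(model.device)
with torch.no_grad():  # inference only, no gradients needed
    outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])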

Evaluation

MTEB & C-MTEB

You can use scripts/eval_mteb.py to reproduce the following results of gte-Qwen2-7B-instruct on MTEB (English) / C-MTEB (Chinese):
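
If scripts/eval_mteb.py is not at hand, individual tasks can also be run with the open-source mteb package. The following is a minimal smoke-test sketch for a single English task (the task name and output folder are illustrative), not the exact script used to produce the table below:

from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)

# Evaluate one small classification task; the full MTEB(56) and C-MTEB(35)
# suites cover many more tasks and take far longer.
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model, output_folder="results/gte-Qwen2-7B-instruct")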

| Model Name | MTEB(56) | C-MTEB(35) | MTEB-fr(26) | MTEB-pl(26) |
|---|---|---|---|---|
| bge-base-en-1.5 | 64.23 | - | - | - |
| bge-large-en-1.5 | 63.55 | - | - | - |
| gte-large-en-v1.5 | 65.39 | - | - | - |
| gte-base-en-v1.5 | 64.11 | - | - | - |
| mxbai-embed-large-v1 | 64.68 | - | - | - |
| acge_text_embedding | - | 69.07 | - | - |
| stella-mrl-large-zh-v3.5-1792d | - | 68.55 | - | - |
| gte-large-zh | - | 66.72 | - | - |
| multilingual-e5-base | 59.45 | 56.21 | - | - |
| multilingual-e5-large | 61.50 | 58.81 | - | - |
| e5-mistral-7b-instruct | 66.63 | 60.81 | - | - |
| gte-Qwen1.5-7B-instruct | 67.34 | 69.52 | - | - |
| NV-Embed-v1 | 69.32 | - | - | - |
| gte-Qwen2-7B-instruct | 70.24 | 72.05 | 68.25 | 67.86 |
| [gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | 67.16 | 67.65 | 66.60 | 64.04 |

GTE Models

The gte series models have consistently released two types of models: encoder-only models (based on the BERT architecture) and decoder-only models (based on the LLM architecture).

| Models | Language | Max Sequence Length | Dimension | Model Size (Memory Usage, fp32) |
|---|---|---|---|---|
| GTE-large-zh | Chinese | 512 | 1024 | 1.25GB |
| GTE-base-zh | Chinese | 512 | 512 | 0.41GB |
| GTE-small-zh | Chinese | 512 | 512 | 0.12GB |
| GTE-large | English | 512 | 1024 | 1.25GB |
| GTE-base | English | 512 | 512 | 0.21GB |
| GTE-small | English | 512 | 384 | 0.10GB |
| GTE-large-en-v1.5 | English | 8192 | 1024 | 1.74GB |
| GTE-base-en-v1.5 | English | 8192 | 768 | 0.51GB |
| GTE-Qwen1.5-7B-instruct | Multilingual | 32000 | 4096 | 26.45GB |
| GTE-Qwen2-7B-instruct | Multilingual | 32000 | 3584 | 26.45GB |
| GTE-Qwen2-1.5B-instruct | Multilingual | 32000 | 1536 | 6.62GB |
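
As a rough sanity check on the memory column above, fp32 memory is approximately the parameter count times 4 bytes (reading the table's GB figures as GiB); the parameter counts below are approximate assumptions, not official figures:

# Rough fp32 memory estimate: parameters * 4 bytes, reported in GiB.
def fp32_gib(n_params: float) -> float:
    return n_params * 4 / 2**30

print(f"{fp32_gib(7.1e9):.2f} GiB")   # ~26.45, in line with the 7B models above
print(f"{fp32_gib(1.78e9):.2f} GiB")  # ~6.63, close to the 1.5B model above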

Citation

If you find our paper or models helpful, please consider citing:

@article{li2023towards,
  title={Towards general text embeddings with multi-stage contrastive learning},
  author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
  journal={arXiv preprint arXiv:2308.03281},
  year={2023}
}