gemma-2B-10M

Anonymous user, 2024-07-31

Technical Information

Open-source repository
https://modelscope.cn/models/LLM-Research/gemma-2B-10M
License
MIT

Project Details

Gemma 2B - 10M Context

Gemma 2B with recurrent local attention and a context length of up to 10M tokens. Our implementation uses <32GB of memory!

Graphic of our implementation's context mechanism

Features:

  • 10M sequence length on Gemma 2B.
  • Runs on less than 32GB of memory.
  • Native inference optimized for CUDA.
  • Recurrent local attention for O(N) memory.

Quick Start

Note: This is a very early checkpoint of the model. Only 200 steps. We plan on training for a lot more tokens!

Install the model from Hugging Face - Huggingface Model.
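
Alternatively, since the page above links the ModelScope mirror, the weights can be pulled with ModelScope's snapshot_download. This is a minimal sketch that assumes the repo ID from the URL above and that the modelscope package is installed:

# Minimal download sketch (assumes `pip install modelscope`); the repo ID is
# taken from the ModelScope URL above.
from modelscope import snapshot_download

model_dir = snapshot_download("LLM-Research/gemma-2B-10M")
print(model_dir)  # point model_path in main.py at this directory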

python main.py

Change the main.py inference code to the specific prompt you desire.

import torch
from transformers import AutoTokenizer

# NOTE: GemmaForCausalLM and generate are assumed to come from this
# repository's own code (main.py) rather than being imported here.
model_path = "./models/gemma-2b-10m"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = GemmaForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16
)

prompt_text = "Summarize this harry potter book..."

with torch.no_grad():
    generated_text = generate(
        model, tokenizer, prompt_text, max_length=512, temperature=0.8
    )

    print(generated_text)
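
Since prompt_text is just a Python string, swapping in your own input (as suggested above) can be as simple as reading it from disk. A hypothetical variation, where the file name book.txt is only an example:

# Hypothetical variation: load the prompt from a local file instead of the
# hard-coded string above ("book.txt" is only an example file name).
with open("book.txt", "r", encoding="utf-8") as f:
    prompt_text = f.read()

# Optional sanity check on input size, using the tokenizer loaded above.
print(f"Prompt length: {len(tokenizer(prompt_text)['input_ids'])} tokens")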

How does this work?

The largest bottleneck (in terms of memory) for LLMs is the KV cache: it grows with sequence length, and the attention computation over it grows quadratically in vanilla multi-head attention, which limits the practical sequence length.
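
For a rough sense of scale, the sketch below estimates the KV-cache footprint at 10M tokens. The configuration values (18 layers, multi-query attention with a single KV head of dimension 256, bfloat16) are assumptions based on the public Gemma 2B config, not values read from this checkpoint:

# Back-of-the-envelope KV-cache estimate; the config values below are assumed
# from the public Gemma 2B configuration, not read from this repository.
layers, kv_heads, head_dim = 18, 1, 256   # Gemma 2B uses multi-query attention
bytes_per_value = 2                       # bfloat16

def kv_cache_gib(tokens: int) -> float:
    """K and V caches across all layers, in GiB."""
    return 2 * layers * kv_heads * head_dim * tokens * bytes_per_value / 2**30

print(f"Full 10M-token KV cache: ~{kv_cache_gib(10_000_000):.0f} GiB")   # ~172 GiB
print(f"2048-token local window: ~{kv_cache_gib(2_048) * 1024:.0f} MiB")  # ~36 MiB

Under a recurrent local-attention scheme only the local window's cache (plus the model weights, roughly 5 GB in bfloat16) has to stay resident, which is how the <32GB figure above becomes plausible.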

Our approach splits the attention into local attention blocks, as outlined by InfiniAttention. We then apply recurrence across those local attention blocks to arrive at the final result: 10M-context global attention.
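
A minimal PyTorch sketch of the general idea follows; it is not this repository's implementation. The sequence is processed one fixed-size segment at a time, and each segment attends to its own keys/values plus a small recurrent summary of earlier segments. The mean-pooled summary used here, the segment size, and the omission of causal masking are purely illustrative assumptions:

import torch
import torch.nn.functional as F

def recurrent_local_attention(q, k, v, segment_len=2048):
    """Illustrative sketch of segment-wise attention with a recurrent summary.

    q, k, v: (seq_len, dim) tensors for a single attention head.
    Peak memory scales with segment_len, not seq_len, because each segment
    only attends to itself plus one carried-over summary key/value pair.
    (Causal masking inside a segment is omitted for brevity.)
    """
    seq_len, dim = q.shape
    outputs = []
    mem_k = torch.zeros(1, dim)   # recurrent summary key
    mem_v = torch.zeros(1, dim)   # recurrent summary value

    for start in range(0, seq_len, segment_len):
        q_seg = q[start:start + segment_len]
        k_seg = k[start:start + segment_len]
        v_seg = v[start:start + segment_len]

        # Local attention over this segment plus the recurrent summary.
        k_all = torch.cat([mem_k, k_seg], dim=0)
        v_all = torch.cat([mem_v, v_seg], dim=0)
        scores = q_seg @ k_all.T / dim ** 0.5          # (seg, seg + 1)
        outputs.append(F.softmax(scores, dim=-1) @ v_all)

        # Carry a compressed memory forward (a simple running mean here; the
        # real method compresses past segments differently).
        mem_k = (mem_k + k_seg.mean(dim=0, keepdim=True)) / 2
        mem_v = (mem_v + v_seg.mean(dim=0, keepdim=True)) / 2

    return torch.cat(outputs, dim=0)                   # (seq_len, dim)

# Example: 16,384 tokens processed with 2,048-token segments.
q, k, v = (torch.randn(16_384, 64) for _ in range(3))
print(recurrent_local_attention(q, k, v).shape)        # torch.Size([16384, 64])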

A lot of the inspiration for our ideas comes from the Transformer-XL paper.

Credits

This was built by:

