neo_scalinglaw_980M

Repository: https://modelscope.cn/models/m-a-p/neo_scalinglaw_980M
License: apache-2.0

Details

NEO

Neo-Models | Neo-Datasets | GitHub

NEO is a fully open-source large language model: the code, all model weights, the datasets used for training, and the training details are all publicly released.

Model

| Model | Description | Download |
| --- | --- | --- |
| neo_7b | Base model of neo_7b | Hugging Face |
| neo_7b_intermediate | Intermediate checkpoints from normal pre-training; a total of 3.7T tokens were learned in this phase | Hugging Face |
| neo_7b_decay | Intermediate checkpoints from the decay phase; a total of 720B tokens were learned in this phase | Hugging Face |
| neo_scalinglaw_980M | Checkpoints from the scaling-law experiments | Hugging Face |
| neo_scalinglaw_460M | Checkpoints from the scaling-law experiments | Hugging Face |
| neo_scalinglaw_250M | Checkpoints from the scaling-law experiments | Hugging Face |
| neo_2b_general | Checkpoints of the 2B model trained on common-domain knowledge | Hugging Face |
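
Intermediate checkpoints on the Hugging Face Hub are often exposed as separate branches or tags of a single repository. Assuming that layout (an assumption, not confirmed by this card), a specific checkpoint can be selected with the standard revision argument of from_pretrained; the branch name below is a hypothetical placeholder:

from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "m-a-p/neo_7b_intermediate"       # repo id from the table above
revision = "<checkpoint-branch-or-tag>"  # hypothetical placeholder for the desired checkpoint

# revision is a standard from_pretrained argument selecting a branch, tag, or commit hash.
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False, trust_remote_code=True, revision=revision)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    revision=revision,
    device_map="auto",
    torch_dtype="auto",
).eval()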

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = '<your-hf-model-path-with-tokenizer>'

# use_fast=False selects the slow tokenizer; trust_remote_code allows any custom
# tokenizer/model code shipped in the repo to be loaded.
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",    # place layers across the available devices
    torch_dtype='auto'    # use the dtype stored in the checkpoint
).eval()

input_text = "A long, long time ago,"

# Plain-text prompt for a base model; add_generation_prompt belongs to
# apply_chat_template, not to the tokenizer call, so it is dropped here.
inputs = tokenizer(input_text, return_tensors='pt').to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(response)
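
The snippet above decodes greedily (the generate default). For more varied continuations, generate also accepts the usual sampling parameters; the values below are illustrative, not tuned for NEO:

# Nucleus sampling instead of greedy decoding; values are illustrative.
output_ids = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,    # sample from the distribution instead of taking the argmax
    temperature=0.8,   # <1 sharpens, >1 flattens the token distribution
    top_p=0.95,        # keep the smallest token set with 95% cumulative probability
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))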