Text-to-3D Model: shap-e

Anonymous user, July 31, 2024
Category: AI / PyTorch
Open-source repository: https://modelscope.cn/models/Lvcoco/Text.To.3D.Model_shap-e
License: Apache License 2.0

Project Details

This model currently uses the default introduction template and is in the "pre-release" stage, so the page is visible only to its owner.
Please complete the model card according to the model contribution documentation. The ModelScope platform will display the model once the model card has been completed. Thank you for your understanding.

Clone with HTTP

 git clone https://www.modelscope.cn/Lvcoco/Text.To.3D.Model_shap-e.git

Shap-E

This is the official code and model release for Shap-E: Generating Conditional 3D Implicit Functions.

  • See Usage for guidance on how to use this repository.
  • See Samples for examples of what our text-conditional model can generate.

Samples

Here are some highlighted samples from our text-conditional model. For random samples on selected prompts, see samples.md.

Sample prompts rendered by the text-conditional model include: a chair that looks like an avocado, an airplane that looks like a banana, a spaceship, a birthday cupcake, a chair that looks like a tree, a green boot, a penguin, ube ice cream cone, and a bowl of vegetables.

Usage

Install with pip install -e . (run from the root of the cloned repository).

To get started with examples, see the following notebooks:

  • sample_text_to_3d.ipynb - sample a 3D model, conditioned on a text prompt (see the sketch after this list).
  • sample_image_to_3d.ipynb - sample a 3D model, conditioned on a synthetic view image. For the best result, remove the background from the input image.
  • encode_model.ipynb - loads a 3D model or a trimesh, creates a batch of multiview renders and a point cloud, encodes them into a latent, and renders it back. For this to work, install Blender version 3.3.1 or higher, and set the environment variable BLENDER_PATH to the path of the Blender executable.
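
As a rough orientation before opening the notebooks, a minimal text-to-3D sampling flow looks like the sketch below. It assumes the package has been installed as described above; the module paths, model names ('transmitter', 'text300M'), and sampler arguments follow the upstream sample_text_to_3d notebook as of this writing and may change between versions, so treat this as an illustrative sketch rather than the definitive API.

    import torch

    from shap_e.diffusion.sample import sample_latents
    from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
    from shap_e.models.download import load_model, load_config
    from shap_e.util.notebooks import create_pan_cameras, decode_latent_images

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Load the latent decoder ("transmitter") and the text-conditional diffusion model.
    xm = load_model('transmitter', device=device)
    model = load_model('text300M', device=device)
    diffusion = diffusion_from_config(load_config('diffusion'))

    # Sample implicit-function latents conditioned on a text prompt.
    prompt = 'a chair that looks like an avocado'
    latents = sample_latents(
        batch_size=1,
        model=model,
        diffusion=diffusion,
        guidance_scale=15.0,
        model_kwargs=dict(texts=[prompt]),
        progress=True,
        clip_denoised=True,
        use_fp16=(device.type == 'cuda'),  # fp16 only makes sense on GPU
        use_karras=True,
        karras_steps=64,
        sigma_min=1e-3,
        sigma_max=160,
        s_churn=0,
    )

    # Render each latent from a ring of cameras and save the first view as a quick check.
    cameras = create_pan_cameras(64, device)
    for i, latent in enumerate(latents):
        images = decode_latent_images(xm, latent, cameras, rendering_mode='nerf')
        images[0].save(f'sample_{i}.png')

The image-conditional notebook follows the same pattern: it loads the 'image300M' checkpoint instead of 'text300M' and passes images rather than texts in model_kwargs, conditioning on a single background-free view of the object.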
