t2iadapter_depth_sd15v2

Categories: ai, pytorch, image-to-image, stable-diffusion, controlnet, t2i-adapter, art
Source: https://modelscope.cn/models/AI-ModelScope/t2iadapter_depth_sd15v2
License: apache-2.0


T2I Adapter - Depth

T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint.

This checkpoint provides conditioning on depth for the stable diffusion 1.5 checkpoint.

Model Details

  • Developed by: TencentARC (T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models)

  • Model type: Diffusion-based text-to-image generation model

  • Language(s): English

  • License: Apache 2.0

  • Resources for more information: GitHub Repository, Paper.

  • Cite as:

    @misc{mou2023t2iadapter,
      title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
      author={Chong Mou and Xintao Wang and Liangbin Xie and Yanze Wu and Jian Zhang and Zhongang Qi and Ying Shan and Xiaohu Qie},
      year={2023},
      eprint={2302.08453},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }

Checkpoints

| Model Name | Control Image Overview | Control Image |
|---|---|---|
| TencentARC/t2iadapter_color_sd14v1 | Trained with spatial color palette | An image with an 8x8 color palette. |
| TencentARC/t2iadapter_canny_sd14v1 | Trained with canny edge detection | A monochrome image with white edges on a black background. |
| TencentARC/t2iadapter_sketch_sd14v1 | Trained with PidiNet edge detection | A hand-drawn monochrome image with white outlines on a black background. |
| TencentARC/t2iadapter_depth_sd14v1 | Trained with Midas depth estimation | A grayscale image with black representing deep areas and white representing shallow areas. |
| TencentARC/t2iadapter_openpose_sd14v1 | Trained with OpenPose bone image | An OpenPose bone image. |
| TencentARC/t2iadapter_keypose_sd14v1 | Trained with mmpose skeleton image | An mmpose skeleton image. |
| TencentARC/t2iadapter_seg_sd14v1 | Trained with semantic segmentation | A custom segmentation protocol image. |
| TencentARC/t2iadapter_canny_sd15v2 | Trained with canny edge detection | A monochrome image with white edges on a black background. |
| TencentARC/t2iadapter_depth_sd15v2 | Trained with Midas depth estimation | A grayscale image with black representing deep areas and white representing shallow areas. |
| TencentARC/t2iadapter_sketch_sd15v2 | Trained with PidiNet edge detection | A hand-drawn monochrome image with white outlines on a black background. |
| TencentARC/t2iadapter_zoedepth_sd15v1 | Trained with ZoeDepth depth estimation | A grayscale depth map. |
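
The depth checkpoints expect the grayscale convention described above: black for deep (far) areas, white for shallow (near) areas. As a rough sketch of what that means in practice, a raw depth array could be normalized and inverted into this format like so (the helper below is illustrative, not part of diffusers or controlnet_aux):

```python
import numpy as np
from PIL import Image

def depth_to_control_image(depth: np.ndarray) -> Image.Image:
    """Convert a raw depth array (larger value = farther away) into the
    grayscale control-image convention: black = deep/far, white = shallow/near.
    Illustrative helper, not a library API."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # normalize to [0, 1]
    near = 1.0 - d                                  # invert: near -> bright
    return Image.fromarray((near * 255).astype(np.uint8), mode="L")

# Toy 2x2 depth map: top-left pixel is nearest, bottom-right is farthest.
img = depth_to_control_image(np.array([[0.0, 1.0], [2.0, 3.0]]))
print(img.getpixel((0, 0)), img.getpixel((1, 1)))  # 255 0
```

In the example below this step is handled by MidasDetector, which already emits depth maps in the expected convention.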

Example

  1. Dependencies
pip install diffusers transformers controlnet_aux
  2. Run code:
from controlnet_aux import MidasDetector
from PIL import Image
from diffusers import T2IAdapter, StableDiffusionAdapterPipeline
import torch

# Extract a depth map from the input image with MiDaS.
midas = MidasDetector.from_pretrained("lllyasviel/Annotators")

image = Image.open('./images/depth_input.png')
image = midas(image)
image.save('./images/depth.png')

# Load the depth adapter and attach it to the Stable Diffusion 1.5 pipeline.
adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd15v2", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16"
)
pipe.to('cuda')

# Fix the seed for reproducible output.
generator = torch.Generator().manual_seed(1)

# Generate an image conditioned on the depth map.
depth_out = pipe(prompt="storm trooper giving a speech", image=image, generator=generator).images[0]
depth_out.save('./images/depth_output.png')
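
One practical note: Stable Diffusion's VAE downsamples by a factor of 8, and the adapter's feature pyramid downsamples further, so control images whose sides are not multiples of the combined factor can trigger shape mismatches. Assuming a total factor of 64 (an assumption based on the adapter's /8 to /64 feature pyramid; diffusers exposes the exact value as the adapter's `total_downscale_factor`), a small illustrative helper to snap an input to safe dimensions might look like this:

```python
from PIL import Image

def snap_to_multiple(image: Image.Image, multiple: int = 64) -> Image.Image:
    """Resize so width and height are multiples of `multiple`, rounding down
    but never below one full multiple. Illustrative helper, not a diffusers API."""
    w = max(multiple, (image.width // multiple) * multiple)
    h = max(multiple, (image.height // multiple) * multiple)
    if (w, h) == image.size:
        return image
    return image.resize((w, h), Image.LANCZOS)

# A 700x500 control image would be snapped down to 640x448.
control = snap_to_multiple(Image.new("L", (700, 500)))
print(control.size)  # (640, 448)
```

Running the depth map through such a helper before passing it to the pipeline keeps the generated image aligned with the conditioning.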

(Images: depth_input.png, depth.png, depth_output.png)
