IP-Adapter Image Content Transfer

Categories: AI, PyTorch, Stable Diffusion, IP-Adapter, Text to Image
Model page: https://modelscope.cn/models/zcmaas/cv_ip-adapter_image-prompt-adapter_base
License: Apache License 2.0


Model Description

IP-Adapter is an effective and lightweight adapter that adds image-prompt capability to pretrained text-to-image diffusion models. It also generalizes to controllable generation with existing controllable-generation tools. For details, refer to the source repository.

[Figure: IP-Adapter architecture]
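
For context only, outside this model card's ModelScope pipeline: recent diffusers releases (0.25 or later, newer than the 0.21.4 pinned below) bundle IP-Adapter support directly. A minimal sketch, assuming a CUDA device and the public h94/IP-Adapter weights rather than this checkpoint:

import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

# Load a Stable Diffusion 1.5 base model and attach the community IP-Adapter weights
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)  # balance between the image prompt and the text prompt

ref = load_image("assets/images/woman.png")  # the reference image acts as the prompt
out = pipe(prompt="best quality, high quality",
           ip_adapter_image=ref, num_inference_steps=50).images[0]
out.save("out_diffusers.png")

Here set_ip_adapter_scale() plays the same role as the "scale" field in the inputs below.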

Operating Environment

Dependencies and Installation

# Clone the repository
git clone https://github.com/hotshotco/IP-Adapter.git
cd IP-Adapter

# Create a conda environment and activate it
conda create -n ipadapter python=3.9
conda activate ipadapter

# Install the pinned dependencies
pip install torch==2.0.1 torchvision==0.15.2 transformers==4.34.0 diffusers==0.21.4
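
As an optional sanity check, not part of the original setup steps, confirm the pinned versions are active:

import torch, torchvision, transformers, diffusers
# Expect 2.0.1, 0.15.2, 4.34.0 and 0.21.4, matching the pins above
print(torch.__version__, torchvision.__version__,
      transformers.__version__, diffusers.__version__)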

Code Example

from modelscope.models import Model
from modelscope.pipelines import pipeline

# text-to-image input: reference image only, no text prompt
t2i_input = {
    # required
    "image": "assets/images/woman.png", 

    # following are optional:
    "num_samples": 4,
    "num_inference_steps": 50,
    "seed": 42,
    "output_path": "./out_t2i.png"
}

p_input = {
    # required
    "image": "assets/images/woman.png", 
    "prompt": "best quality, high quality, wearing a hat on the beach", 
    "scale": 0.6,

    # following are optional:
    "num_samples": 4,
    "num_inference_steps": 50,
    "seed": 42,
    "output_path": "./out_t2i_p.png"
}

i2i_input = {
    # required
    "image": "assets/images/river.png", 
    "g_image": "assets/images/vermeer.jpg",

    # following are optional:
    "strength": 0.6,
    "num_samples": 4,
    "num_inference_steps": 50,
    "seed": 42,
    "output_path": "./out_i2i.png"
}

depth_input = {
    # required
    "image": "assets/images/statue.png", 
    "control_map": "assets/structure_controls/depth.png",

    # following are optional:
    "num_samples": 4,
    "num_inference_steps": 50,
    "seed": 42,
    "output_path": "./out_control.png"
}

# text to image pipeline
model = Model.from_pretrained("zcmaas/cv_ip-adapter_image-prompt-adapter_base",
            pipe_type="StableDiffusionPipeline")
inference = pipeline('ip-adapter-task', model=model)
output = inference(t2i_input)
print(f"Result saved as {output}")
# text to image pipeline with prompt
output = inference(p_input)
print(f"Result saved as {output}")

# image to image pipeline
model = Model.from_pretrained("zcmaas/cv_ip-adapter_image-prompt-adapter_base",
            pipe_type="StableDiffusionImg2ImgPipeline")
inference = pipeline('ip-adapter-task', model=model)
output = inference(i2i_input)
print(f"Result saved as {output}")

# text to image pipeline with controlnet
model = Model.from_pretrained("zcmaas/cv_ip-adapter_image-prompt-adapter_base",
            pipe_type="StableDiffusionControlNetPipeline")
inference = pipeline('ip-adapter-task', model=model)
output = inference(depth_input)
print(f"Result saved as {output}")

# Model.from_pretrained() also supports model path arguments such as 'base_model_path', 'vae_model_path', 'image_encoder_path', and 'ip_ckpt'
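
For illustration, a sketch of loading with explicit component paths; the keyword names follow the comment above, and every path here is a hypothetical placeholder:

from modelscope.models import Model
from modelscope.pipelines import pipeline

model = Model.from_pretrained(
    "zcmaas/cv_ip-adapter_image-prompt-adapter_base",
    pipe_type="StableDiffusionPipeline",
    base_model_path="./checkpoints/sd-base",           # hypothetical local SD base model
    vae_model_path="./checkpoints/vae",                # hypothetical VAE weights
    image_encoder_path="./checkpoints/image_encoder",  # hypothetical CLIP image encoder
    ip_ckpt="./checkpoints/ip-adapter.bin",            # hypothetical IP-Adapter checkpoint
)
inference = pipeline('ip-adapter-task', model=model)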