ip-composition-adapter

Categories: AI, PyTorch, IP Adapter, Stable Diffusion
Open-source repository: https://modelscope.cn/models/AI-ModelScope/ip-composition-adapter
License: apache-2.0

Project Details

IP Composition Adapter

This adapter for Stable Diffusion 1.5 and SDXL is designed to inject the general composition of an image into the model while mostly ignoring the style and content. For example, a portrait of a person waving their left hand will result in an image of a completely different person waving their left hand.

Follow Me

I do a lot of experiments and other things. To keep up to date, follow me on Twitter.

Thanks

I want to give special thanks to POM with BANODOCO. This was their idea; I just trained it. Full credit goes to them.

Usage

Use it just like the other IP+ adapters from h94/IP-Adapter. For both the SD 1.5 and SDXL variants, use the CLIP-H vision encoder.

You may need to lower the CFG to around 3 for best results, especially on the SDXL variant.
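Below is a minimal sketch of that workflow using the diffusers library. The repo id ("ostris/ip-composition-adapter"), the weight filename ("ip_plus_composition_sdxl.safetensors"), and the reference-image path are assumptions for illustration; check the model repository for the exact names.

```python
# Minimal sketch, assuming the diffusers IP-Adapter API.
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image
from transformers import CLIPVisionModelWithProjection

# Both the SD 1.5 and SDXL variants expect the CLIP-H image encoder,
# e.g. the one shipped with h94/IP-Adapter.
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16,
)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder,
    torch_dtype=torch.float16,
).to("cuda")

# Load the composition adapter weights like any other IP adapter.
pipe.load_ip_adapter(
    "ostris/ip-composition-adapter",                     # assumed repo id
    subfolder="",
    weight_name="ip_plus_composition_sdxl.safetensors",  # assumed filename
)
pipe.set_ip_adapter_scale(1.0)

# The reference image supplies only the general composition, not its style or content.
composition_image = load_image("composition_reference.png")  # placeholder path

image = pipe(
    prompt="a robot waving its left hand, studio lighting",
    ip_adapter_image=composition_image,
    guidance_scale=3.0,  # lower CFG (~3) tends to work better, especially for SDXL
    num_inference_steps=30,
).images[0]
image.save("output.png")
```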

How is it different from control nets?

Control nets are more rigid: a control net spatially aligns the output to match the control image almost perfectly. The composition adapter allows the control to be looser and more flexible.
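As a rough illustration of that flexibility (continuing the diffusers sketch above, so the same assumed names apply), the strength of the composition guidance is a single scalar you can dial down, rather than a spatial map the output has to match:

```python
# Lower scales follow the reference composition only loosely; higher scales
# follow it more closely. The exact values here are an assumption and will
# depend on the prompt and the variant.
pipe.set_ip_adapter_scale(0.6)  # looser composition guidance
loose = pipe(
    prompt="a knight waving his left hand",
    ip_adapter_image=composition_image,
    guidance_scale=3.0,
).images[0]

pipe.set_ip_adapter_scale(1.0)  # stronger composition guidance
tight = pipe(
    prompt="a knight waving his left hand",
    ip_adapter_image=composition_image,
    guidance_scale=3.0,
).images[0]
```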

SDXL Examples

(example image gallery)

SD 1.5 Examples

(example image gallery)
