Hunyuan-DiT Text-to-Image Model

Open-source repository: https://modelscope.cn/models/modelscope/HunyuanDiT
License: other


Requirements

This repo consists of DialogGen (a prompt enhancement model) and Hunyuan-DiT (a text-to-image model).

The following table shows the requirements for running the models (the TensorRT version will be updated soon):

| Model | Batch Size | GPU Memory | GPU |
|---|---|---|---|
| DialogGen + Hunyuan-DiT | 1 | 32G | V100/A100 |
| Hunyuan-DiT | 1 | 11G | V100/A100 |
  • An NVIDIA GPU with CUDA support is required.
  • We have tested V100 and A100 GPUs.
  • Minimum: The minimum GPU memory required is 11GB.
  • Recommended: We recommend using a GPU with 32GB of memory for better generation quality.
  • Tested operating system: Linux
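
As an illustration of the table above, the memory thresholds can be encoded in a small helper (this is not part of the repository, just a sketch of how the requirements map to configurations):

```python
# Illustrative helper (not part of the repo): picks which configuration from
# the requirements table fits a given amount of GPU memory, in GB.
def pick_config(gpu_memory_gb: float) -> str:
    # Thresholds from the table: 32 GB for the full DialogGen + Hunyuan-DiT
    # pipeline, 11 GB minimum for Hunyuan-DiT alone.
    if gpu_memory_gb >= 32:
        return "DialogGen + Hunyuan-DiT"
    if gpu_memory_gb >= 11:
        return "Hunyuan-DiT (use --no-enhance to skip prompt enhancement)"
    return "insufficient memory (minimum is 11 GB)"

print(pick_config(40))
print(pick_config(16))
print(pick_config(8))
```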

Dependencies and Installation

Begin by cloning the repository:

git clone https://github.com/tencent/HunyuanDiT
cd HunyuanDiT

We provide an environment.yml file for setting up a Conda environment. Conda's installation instructions are available here.

# 1. Prepare conda environment
conda env create -f environment.yml

# 2. Activate the environment
conda activate HunyuanDiT

# 3. Install pip dependencies
python -m pip install -r requirements.txt

# 4. (Optional) Install flash attention v2 for acceleration (requires CUDA 11.6 or above)
python -m pip install git+https://github.com/Dao-AILab/flash-attention.git@v2.1.2.post3
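
After installation, a quick sanity check can confirm the key packages are importable. The module list below is an assumption based on typical diffusion-model dependencies, not the repository's exact requirements:

```python
# Quick post-install sanity check (a sketch; the module names checked are an
# assumption, not the repo's exact dependency list).
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    required = ["torch", "transformers", "diffusers"]  # assumed dependencies
    missing = missing_modules(required)
    if missing:
        print("Missing dependencies:", ", ".join(missing))
    else:
        print("All listed dependencies are importable.")
```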

Download Pretrained Models

Both git clone and the ModelScope SDK are supported for downloading the model.

Then download the model using the following commands:

# Create a directory named 'ckpts' where the model will be saved, fulfilling the prerequisites for running the demo.
mkdir ckpts
# Clone the model repository from ModelScope and move its contents into 'ckpts'.
# The download time may vary from 10 minutes to 1 hour depending on network conditions.
git clone https://www.modelscope.cn/modelscope/HunyuanDiT.git
mv HunyuanDiT/* ckpts/

Note: If an error like `No such file or directory: 'ckpts/.huggingface/.gitignore.lock'` occurs during the download, you can ignore it and retry by executing `huggingface-cli download Tencent-Hunyuan/HunyuanDiT --local-dir ./ckpts`.

All models will be automatically downloaded. For more information about the model, visit the Hugging Face repository here.

| Model | #Params | Download URL |
|---|---|---|
| mT5 | 1.6B | mT5 |
| CLIP | 350M | CLIP |
| DialogGen | 7.0B | DialogGen |
| sdxl-vae-fp16-fix | 83M | sdxl-vae-fp16-fix |
| Hunyuan-DiT | 1.5B | Hunyuan-DiT |

Inference

Using Gradio

Make sure you have activated the conda environment before running the following command.

# By default, we start a Chinese UI.
python app/hydit_app.py

# Using Flash Attention for acceleration.
python app/hydit_app.py --infer-mode fa

# You can disable the enhancement model if the GPU memory is insufficient.
# The enhancement will be unavailable until you restart the app without the `--no-enhance` flag. 
python app/hydit_app.py --no-enhance

# Start with English UI
python app/hydit_app.py --lang en

Using Command Line

We provide three modes for a quick start:

# Prompt Enhancement + Text-to-Image. Torch mode
python sample_t2i.py --prompt "渔舟唱晚"

# Only Text-to-Image. Torch mode
python sample_t2i.py --prompt "渔舟唱晚" --no-enhance

# Only Text-to-Image. Flash Attention mode
python sample_t2i.py --infer-mode fa --prompt "渔舟唱晚"

# Generate an image with other image sizes.
python sample_t2i.py --prompt "渔舟唱晚" --image-size 1280 768

More example prompts can be found in example_prompts.txt
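
When scripting many generations, it can be convenient to build the command line programmatically. The wrapper below is hypothetical (not part of the repository); it only assembles the argv list for the modes shown above:

```python
# Hypothetical convenience wrapper around sample_t2i.py (not part of the repo):
# builds the argv list for the quick-start modes shown above.
def build_cmd(prompt, enhance=True, infer_mode="torch", image_size=None):
    cmd = ["python", "sample_t2i.py", "--prompt", prompt]
    if not enhance:
        cmd.append("--no-enhance")           # text-to-image only
    if infer_mode != "torch":
        cmd += ["--infer-mode", infer_mode]  # e.g. "fa" for Flash Attention
    if image_size is not None:
        w, h = image_size
        cmd += ["--image-size", str(w), str(h)]
    return cmd

print(build_cmd("渔舟唱晚", enhance=False, infer_mode="fa"))
```

The resulting list can be passed to `subprocess.run` to drive batch generation.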

More Configurations

We list some more useful configurations for easy usage:

| Argument | Default | Description |
|---|---|---|
| `--prompt` | None | The text prompt for image generation |
| `--image-size` | 1024 1024 | The size of the generated image |
| `--seed` | 42 | The random seed for generating images |
| `--infer-steps` | 100 | The number of sampling steps |
| `--negative` | - | The negative prompt for image generation |
| `--infer-mode` | torch | The inference mode (torch or fa) |
| `--sampler` | ddpm | The diffusion sampler (ddpm, ddim, or dpmms) |
| `--no-enhance` | False | Disable the prompt enhancement model |
| `--model-root` | ckpts | The root directory of the model checkpoints |
| `--load-key` | ema | Load the student model or EMA model (ema or module) |
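
The defaults above can be restated compactly as an argument parser. This is a sketch mirroring the table, not the repository's actual parser:

```python
# A sketch of an argument parser mirroring the defaults in the table above.
# This is NOT the repository's actual parser, just a compact restatement.
import argparse

def make_parser():
    p = argparse.ArgumentParser(description="Hunyuan-DiT sampling options")
    p.add_argument("--prompt", type=str, default=None)
    p.add_argument("--image-size", type=int, nargs=2, default=[1024, 1024])
    p.add_argument("--seed", type=int, default=42)
    p.add_argument("--infer-steps", type=int, default=100)
    p.add_argument("--negative", type=str, default=None)
    p.add_argument("--infer-mode", choices=["torch", "fa"], default="torch")
    p.add_argument("--sampler", choices=["ddpm", "ddim", "dpmms"], default="ddpm")
    p.add_argument("--no-enhance", action="store_true")
    p.add_argument("--model-root", type=str, default="ckpts")
    p.add_argument("--load-key", choices=["ema", "module"], default="ema")
    return p

args = make_parser().parse_args([])
print(args.seed, args.sampler)  # 42 ddpm
```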

BibTeX

If you find Hunyuan-DiT useful for your research and applications, please cite using this BibTeX:

@misc{hunyuandit,
      title={Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding},
      author={Zhimin Li and Jianwei Zhang and Qin Lin and Jiangfeng Xiong and Yanxin Long and Xinchi Deng and Yingfang Zhang and Xingchao Liu and Minbin Huang and Zedong Xiao and Dayou Chen and Jiajun He and Jiahao Li and Wenyue Li and Chen Zhang and Rongwei Quan and Jianxiang Lu and Jiabin Huang and Xiaoyan Yuan and Xiaoxiao Zheng and Yixuan Li and Jihong Zhang and Chao Zhang and Meng Chen and Jie Liu and Zheng Fang and Weiyan Wang and Jinbao Xue and Yangyu Tao and JianChen Zhu and Kai Liu and Sihuan Lin and Yifu Sun and Yun Li and Dongdong Wang and Zhichao Hu and Xiao Xiao and Yan Chen and Yuhong Liu and Wei Liu and Di Wang and Yong Yang and Jie Jiang and Qinglin Lu},
      year={2024},
}