DynamicControl: Adaptive Condition Selection for Improved Text-to-Image Generation

1Youtu Lab, Tencent 2Western University 3Nanyang Technological University

(Left) Generation results of our DynamicControl under multiple conditions. (Right) Comparison of different schemes for handling multiple conditions in the T2I task: (a) one condition is randomly selected and paired with an activated MoE encoder; (b) the number of input conditions is fixed manually and paired with a pre-trained visual encoder; (c) our DynamicControl introduces a condition evaluator and a multi-control adapter to select conditions adaptively.

Abstract

To enhance the controllability of text-to-image diffusion models, current ControlNet-like models have explored various control signals to dictate image attributes. However, existing methods either handle conditions inefficiently or use a fixed number of conditions, which fails to address the complexity of multiple conditions and their potential conflicts. This underscores the need for innovative approaches to manage multiple conditions effectively for more reliable and detailed image synthesis. To address this issue, we propose a novel framework, DynamicControl, which supports dynamic combinations of diverse control signals, allowing adaptive selection of different numbers and types of conditions. Our approach begins with a double-cycle controller that produces an initial score-based ranking of all input conditions by leveraging pre-trained conditional generation models and discriminative models. This controller evaluates the similarity between extracted conditions and input conditions, as well as the pixel-level similarity with the source image. Then, we integrate a Multimodal Large Language Model (MLLM) to build an efficient condition evaluator, which refines the ordering of conditions based on the double-cycle controller's score ranking. Our method jointly optimizes MLLMs and diffusion models, exploiting MLLMs' reasoning capabilities to facilitate multi-condition text-to-image (T2I) tasks. The final sorted conditions are fed into a parallel multi-control adapter, which learns feature maps from dynamic visual conditions and integrates them to modulate ControlNet, thereby enhancing control over generated images. Through both quantitative and qualitative comparisons, DynamicControl demonstrates its superiority over existing methods in terms of controllability, generation quality, and composability under various conditional controls.
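To make the double-cycle controller concrete, the sketch below illustrates the scoring idea as we read it from the abstract: each candidate condition is passed through a pretrained conditional generator, the same type of condition is re-extracted from the generated image and compared with the input condition, and the generated image is also compared with the source image at pixel level. The `generate_fn` and `extract_fn` interfaces, the similarity metrics, and the weighting are our assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of double-cycle consistency scoring (not the authors' code).
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def pixel_sim(x: np.ndarray, y: np.ndarray) -> float:
    # Simple MSE-based stand-in for a pixel-level metric; the paper's choice may differ.
    return 1.0 / (1.0 + float(np.mean((x - y) ** 2)))

def double_cycle_scores(conditions, source_img, generate_fn, extract_fn, w=0.5):
    """Return conditions sorted by a combined consistency score.

    conditions: list of (cond_type, cond_map) pairs.
    generate_fn(cond_map) -> image from a pretrained conditional generator (assumed).
    extract_fn(img, cond_type) -> condition re-extracted from the image (assumed).
    """
    scored = []
    for cond_type, cond in conditions:
        gen_img = generate_fn(cond)                 # condition -> generated image
        cond_back = extract_fn(gen_img, cond_type)  # generated image -> condition
        s_cond = cosine_sim(cond_back, cond)        # condition-space consistency
        s_pix = pixel_sim(gen_img, source_img)      # pixel-level consistency
        scored.append((w * s_cond + (1 - w) * s_pix, cond_type, cond))
    return sorted(scored, key=lambda t: t[0], reverse=True)
```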

Method

Overall pipeline of the proposed DynamicControl.
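In the pipeline above, the condition evaluator selects and orders the input conditions, and the parallel multi-control adapter fuses the selected condition features before they modulate ControlNet. Below is a minimal PyTorch sketch of one plausible adapter design, assuming per-condition convolutional branches and learned gates; it is an illustration of the idea, not the released implementation.

```python
# Assumed design sketch of a parallel multi-control adapter: each selected
# condition map is encoded by its own lightweight branch, branch outputs are
# fused by learned gates, and the fused feature map would be handed to
# ControlNet as its control input.
import torch
import torch.nn as nn

class MultiControlAdapter(nn.Module):
    def __init__(self, num_branches: int, in_ch: int = 3, feat_ch: int = 64):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.SiLU(),
                nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
            )
            for _ in range(num_branches)
        )
        # One learnable gate per branch; unselected branches are masked out.
        self.gates = nn.Parameter(torch.zeros(num_branches))

    def forward(self, conds: list[torch.Tensor], active: torch.Tensor) -> torch.Tensor:
        """conds: per-branch condition maps of shape (B, C, H, W);
        active: boolean mask from the condition evaluator (at least one True)."""
        fused = 0.0
        weights = torch.sigmoid(self.gates)
        for i, (branch, cond) in enumerate(zip(self.branches, conds)):
            if active[i]:
                fused = fused + weights[i] * branch(cond)
        return fused  # would replace a single-condition feature inside ControlNet

# Toy usage with two of four branches selected by the evaluator:
adapter = MultiControlAdapter(num_branches=4)
conds = [torch.randn(1, 3, 64, 64) for _ in range(4)]
active = torch.tensor([True, False, True, False])
ctrl_feat = adapter(conds, active)  # shape (1, 64, 64, 64)
```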
Comparison with Other Methods

BibTeX

@article{he2024dynamiccontrol,
  title={DynamicControl: Adaptive Condition Selection for Improved Text-to-Image Generation},
  author={He, Qingdong and Peng, Jinlong and Xu, Pengcheng and Jiang, Boyuan and Hu, Xiaobin and Luo, Donghao and Liu, Yong and Wang, Yabiao and Wang, Chengjie and Li, Xiangtai and Zhang, Jiangning},
  journal={arXiv preprint arXiv:2412.03255},
  year={2024}
}