Condition-Aware Neural Networks for Image Generation: a paper dive

Federico Peccia
3 min read · Apr 7, 2024

Image generation is a fascinating field in Artificial Intelligence, focused on generating new images from scratch based on text prompts. The main idea is simple: the user writes a prompt describing what they want to generate, and the model outputs an image resembling the provided description. These generated images can be anything, ranging from natural scenes to abstract art or anime.

Example of an image generated using Stable Diffusion, a popular text-to-image model. Prompt: “An elephant riding a bike, photorealistic”.

This month, a new paper was published on Condition-Aware Neural Networks (CAN), a technique that claims to improve this kind of model. In this article, I will explain the core idea of the paper and finish with some ethical questions about the use of generative AI (which we cannot and should not ignore). So let's dive into it together, shall we?

The premise

As you may know, the knowledge of a Neural Network (NN) is encoded inside its parameters. These are large matrices whose values are learned during the training of the NN. Once training finishes, these weights (as they are normally called) are fixed and do not change: they are tailored to the task the network was trained on.

The core idea of the paper is the Condition-Aware Neural Network (CAN), a method for enhancing control over generative models. The authors asked themselves: can we modify the weights of a NN on the fly, based on the condition (what the user wants to generate), and does this improve the generative process? Instead of keeping all weights fixed after training, a small auxiliary network generates the weights of selected layers from the condition.

Left: a schematic of the usual image generation model. Right: the new approach including the on-the-fly generation of new weights. Image from original paper.
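To make this concrete, below is a minimal PyTorch sketch of the idea (my own illustration, not the authors' code). A small "weight generator" maps a condition embedding, which could encode for example the target class or the diffusion timestep, to the weight matrix of a linear layer, so each sample in a batch is effectively processed with its own weights. All names and dimensions here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionAwareLinear(nn.Module):
    """Linear layer whose weights are generated on the fly from a
    condition embedding. Illustrative sketch, not the paper's code."""

    def __init__(self, in_features: int, out_features: int, cond_dim: int):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        # The weight generator maps the condition to a full weight matrix.
        self.weight_generator = nn.Linear(cond_dim, out_features * in_features)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # cond: (batch, cond_dim) -> one weight matrix per sample
        w = self.weight_generator(cond)
        w = w.view(-1, self.out_features, self.in_features)
        # Apply each sample's generated weights to its own input.
        return torch.einsum("boi,bi->bo", w, x)

layer = ConditionAwareLinear(in_features=64, out_features=64, cond_dim=32)
x = torch.randn(8, 64)     # batch of feature vectors
cond = torch.randn(8, 32)  # batch of condition embeddings
out = layer(x, cond)       # shape: (8, 64)
```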

Which layers should the condition be applied to?

The selection of layers to which this conditional weight generation is applied is a crucial design decision. It is not practical to make all NN layers condition-aware, as this would add a significant overhead to the training of the network.

To find the layers to apply this technique to, they performed ablation studies, comparing the model's output metrics across configurations to identify the combination of condition-aware layers that yields the most significant performance boost while maintaining computational efficiency.
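As a rough, runnable illustration of such a study (not the authors' actual pipeline), the sketch below sweeps over combinations of layer types and records a quality score for each; train_and_evaluate is a stub standing in for the real train-then-measure-FID loop.

```python
from itertools import combinations
import random

# Layer types that could be made condition-aware (illustrative list).
LAYER_TYPES = ["depthwise_conv", "patch_embedding", "output_projection",
               "qkv_projection", "mlp"]

def train_and_evaluate(condition_aware: frozenset) -> float:
    """Stub for the real pipeline: train a model with the given layers
    made condition-aware, then return its FID. Random here, so the
    sweep runs end to end."""
    return random.random()

results = {}
for r in range(1, len(LAYER_TYPES) + 1):
    for subset in combinations(LAYER_TYPES, r):
        results[frozenset(subset)] = train_and_evaluate(frozenset(subset))

# Lower FID means better image quality.
best = min(results, key=results.get)
print("Best condition-aware combination:", sorted(best))
```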

An important finding: making a layer condition-aware does not automatically improve performance. The significant boost came from making the depthwise convolutions, the patch embedding, and the output projection layers condition-aware.
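What could such a condition-aware depthwise convolution look like? Here is a hedged PyTorch sketch (the generator architecture and dimensions are my own assumptions): the per-channel kernels are produced from the condition embedding, and a grouped-convolution trick lets every sample in the batch be convolved with its own kernels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionAwareDepthwiseConv(nn.Module):
    """Depthwise convolution whose kernels are generated from a condition
    embedding. Illustrative sketch, not the paper's implementation."""

    def __init__(self, channels: int, kernel_size: int, cond_dim: int):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Generate one k x k kernel per channel from the condition.
        self.kernel_generator = nn.Linear(
            cond_dim, channels * kernel_size * kernel_size
        )

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k = self.kernel_size
        kernels = self.kernel_generator(cond).view(b * c, 1, k, k)
        # Fold the batch into the channel dimension so that, via grouped
        # convolution, each sample uses its own generated kernels.
        x = x.reshape(1, b * c, h, w)
        out = F.conv2d(x, kernels, padding=k // 2, groups=b * c)
        return out.reshape(b, c, h, w)

layer = ConditionAwareDepthwiseConv(channels=8, kernel_size=3, cond_dim=32)
x = torch.randn(2, 8, 16, 16)
cond = torch.randn(2, 32)
print(layer(x, cond).shape)  # torch.Size([2, 8, 16, 16])
```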

Results

They demonstrated that the CAN approach achieves remarkable improvements compared to existing state-of-the-art methods, with better metrics on common datasets like COCO and ImageNet. The interesting part is that the technique can be integrated into existing transformer-based image generation models like DiT and UViT, leading to substantial performance boosts while incurring minimal computational cost increases.

Conclusion and final thoughts

CAN’s adaptability and flexibility have the potential to revolutionize applications in art, design, entertainment, and other fields requiring customized image synthesis. In a broader context, CAN’s ability to tailor image generation to specific conditions opens up new possibilities for personalized content creation, automated design processes, and innovative visual storytelling.

Of course, although the technique is interesting, we cannot ignore the ethical problems that arise from using generative AI models. How do we protect intellectual property rights and copyright? How do we prevent these models from being used to create deep-fake content? How do generative models impact cultural and artistic integrity? And how do we avoid perpetuating stereotypes and biases when using these kinds of models?

Feel free to follow me on LinkedIn and let me know what you think of this article, or buy me a coffee if you really liked it!

Thanks for reading!

