What is ControlNet?

If you have been using Stable Diffusion for a while, you know how difficult it is to generate images with a precise composition or a desired pose. ControlNet, a model that works alongside Stable Diffusion, lets users control the placement and appearance of generated images.

In this guide, we will learn how to install and use ControlNet models in Automatic1111.

Install ControlNet in Automatic1111

Below are the steps to install ControlNet in Automatic1111 stable-diffusion-webui.

  • Navigate to the Extensions tab in Automatic1111.

  • Click the Install from URL tab, then copy and paste the URL below into "URL for extension's git repository".

  • Press the Install button.

Controlnet image1


  • We have observed that after you click Install, you may not see any progress bar. Wait a few seconds or minutes until you see the message: Installed into /home/stable-diffusion-webui/extensions/sd-webui-controlnet. Use Installed tab to restart.

    Controlnet image2

  • Go to the Installed tab, click Check for updates, and then click Apply and restart UI.

Controlnet image3
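If you prefer the command line, the extension can also be installed by cloning its repository into the extensions folder. Below is a minimal sketch, assuming the widely used Mikubill/sd-webui-controlnet repository and the install path shown in the message above; the actual clone command is left commented out.

```shell
# Manual alternative to the Extensions tab: clone the extension repository
# directly into the webui's extensions folder, then restart the webui.
# The repo URL is an assumption -- verify it against the link in this guide.
WEBUI_ROOT="/home/stable-diffusion-webui"
EXT_REPO="https://github.com/Mikubill/sd-webui-controlnet"
EXT_DIR="$WEBUI_ROOT/extensions/${EXT_REPO##*/}"   # strip everything up to the last '/'

echo "Would clone into $EXT_DIR"
# git clone "$EXT_REPO" "$EXT_DIR"   # uncomment to perform the clone
```

After the clone, restart the webui so the extension is picked up.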

If the extension is installed successfully, you will see a collapsible section named ControlNet in the txt2img tab, right above the Script drop-down menu.

Controlnet image3

Clicking on it expands the section, as shown below.

Controlnet image3

Let's download a few ControlNet model weights to create some awesome images in Automatic1111.

Download ControlNet model weights

We have listed below the 13 ControlNet models available on Hugging Face, with their links. Copy the link to the required model and use wget to download its weights.


You can also check here for any updated models.

  • To download the weights, use the following command.
  • After downloading the weights, use the following command to move them to the required Automatic1111 folder:
mv *.pth /home/stable-diffusion-webui/extensions/sd-webui-controlnet/models/
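As a concrete sketch of the two steps above, you can also download a checkpoint straight into the models folder, which avoids the separate mv step. The URL below is one example; substitute the link of the model you picked from the listing above. The download itself is left commented out.

```shell
# Sketch: derive the checkpoint filename from its URL and fetch it directly
# into the extension's models folder. The URL is an example, not necessarily
# one of the models listed above.
MODEL_URL="https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny.pth"
MODELS_DIR="/home/stable-diffusion-webui/extensions/sd-webui-controlnet/models"
FILENAME="${MODEL_URL##*/}"   # strip everything up to the last '/'

echo "Would fetch $FILENAME into $MODELS_DIR"
# wget -O "$MODELS_DIR/$FILENAME" "$MODEL_URL"   # uncomment to download
```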

You are all set to use the ControlNet extension in the Automatic1111 webui.

Using ControlNet – a simple example

Now that ControlNet is installed and the required weights are downloaded and placed in the right path, let's go through a simple example of generating an image using canny edge.

You should have the ControlNet extension installed to follow this section. You can verify this by checking for the ControlNet section shown below.

Controlnet image3

Text-to-image settings:

ControlNet needs to be used with a Stable Diffusion model. In the Stable Diffusion checkpoint drop-down menu, select the model you want to use with ControlNet.

Select deliberate_v2.safetensors to use the Deliberate model.

Controlnet image1

In the txt2img tab, enter the prompt and, optionally, the negative prompt to be used by ControlNet. Below are the prompts I will be using.


Prompt:

a cute cat in the garden, a masterpiece

Negative Prompt:


Set the image settings like height, width and other settings.

Controlnet image1

ControlNet settings:

Now, let's move on to the ControlNet settings. Upload the image to the image canvas.

Check ✅ the Enable, Pixel Perfect, and Allow Preview checkboxes.

When you select a Control Type, the corresponding model and its related preprocessors are automatically filtered and loaded into the respective drop-downs, along with other default setting options.

By clicking Run Preprocessor 💥, you can see a preview of the input image after it has been processed.

Controlnet image1

Now, click the Generate button to start generating images using ControlNet.

Finally, the GUI looks as below.

Controlnet image1


When you are done, uncheck the Enable checkbox to disable the ControlNet extension.


Canny

Canny, a classic edge detector, uses a multi-stage algorithm to extract outlines from input images. Its preprocessing maintains the original composition of the input image, and it then generates an output image containing the corresponding outlines.

Controlnet image1


Depth

Depth preprocesses the input image into a grayscale depth map, with black representing deep (far) areas and white representing shallow (near) areas.

Controlnet image1

There are multiple preprocessors available for the depth model.

  • depth_midas
  • depth_leres
  • depth_leres++
  • depth_zoe

Below are images generated with the depth preprocessors: a woman-cop image based on the given prompt, with 'milkman' as the input image.

Controlnet image1


OpenPose

OpenPose detects human key points such as the head, shoulders, and legs, which makes it useful for copying human poses. In simple terms, it produces a skeleton view of the image.

The OpenPose preprocessors:

  • OpenPose
  • OpenPose_face
  • OpenPose_faceonly
  • OpenPose_hand
  • OpenPose_full

Controlnet image1


OpenPose serves as the base preprocessor that detects various human body parts such as the head, hands, legs, nose, ears, knees, ankles, shoulders, etc., from the provided input image.

It then generates an output image with the same pose.

Controlnet image1


OpenPose_face performs all the essential functions of the base preprocessor and extends its capabilities by detecting facial expressions.

Controlnet image1


OpenPose_faceonly specializes in detecting facial expressions while excluding other key points. This feature is particularly useful for capturing and replicating facial expressions.

Controlnet image1


In addition to the base preprocessor's key points, OpenPose_hand detects the key points of the hands and fingers.

Controlnet image1


OpenPose_full detects everything OpenPose_face and OpenPose_hand do.

Controlnet image1


MLSD

MLSD is a straight-line detector used to detect straight lines and edges. This preprocessor is particularly useful for architectural elements such as room interiors, streets, and frames. Any curves are ignored.

Below is an image generated using the MLSD preprocessor: a classic room, with a study room as the input image.

Controlnet image1


Scribble

Scribble preprocessors turn the image into a scribble, similar to one drawn by hand.

The available scribble preprocessors are:

  • Scribble HED: the base preprocessor, suitable for recoloring and restyling an image.
  • Scribble Pidinet: detects curves and straight edges in addition to what the base preprocessor detects, producing clearer lines with more detail.
  • Scribble XDoG: an edge-detection technique; tune the XDoG threshold and observe the output.

Below are images generated using the different scribble preprocessors.

Controlnet image1


Segmentation

Segmentation labels the types of objects in the input image. These labels are used to replicate the shapes of objects in the generated images.

The available Segmentation preprocessors are:

  • seg_ofade20k
  • seg_ofcoco
  • seg_ufade20k

Below are images generated using the Seg preprocessors.

Controlnet image1


Normal map

A normal map provides information about the orientation of surfaces in the image: pixel values represent the direction a surface is facing instead of color values. This is used to replicate the 3D composition of the given image.

The Normal map preprocessors:

  • Normal Midas: good for isolating the subject from the background.
  • Normal Bae: renders details of both the subject and the background.

Below are images generated using the Normal map preprocessors.

Controlnet image1


Lineart

Lineart analyzes the image and generates a black-and-white sketch that looks like a scanned drawing.

The Lineart preprocessors are:

  • lineart_anime: analyzes anime images and allows coloring on top of them.
  • lineart_anime_denoise: analyzes anime images with more image detail.
  • lineart_realistic: analyzes images with realistic-style lines.
  • lineart_coarse: analyzes images with realistic-style lines and more image detail.

Below are the images that have been generated using the Lineart preprocessors.

Controlnet image1

Controlnet image1