Image to Image using automatic1111

In automatic1111, you can either convert one image into another or enhance the quality of an existing image. It offers different configurations such as sketch and inpainting, and we'll be learning all about these in this documentation. Ready for the ride? 🏎️

I hope you have a running instance. If not, create a new one by following the Launch instance documentation.


  • Upload an image or drag and drop an image.
  • In the resize mode, choose the Just resize option. We will learn the details of the resize mode later in this documentation.
  • I prefer using DPM++ SDE Karras for the Sampling method, with the Sampling steps set to 20.
  • Specify your desired height and width, then adjust the CFG Scale; I usually set it between 6 and 7.
  • I recommend setting the Denoising strength to around 0.5 so the model doesn't alter the image too drastically.
  • Click the generate button to preview the new image.
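The same settings can also be driven programmatically through the web UI's built-in API (started with the `--api` flag). Here is a minimal sketch of the request payload; the field names come from the `/sdapi/v1/img2img` route, the image string and prompt are placeholders, and the values mirror the steps above:

```python
def build_img2img_payload(image_b64: str) -> dict:
    """Mirror the UI settings from the steps above in an API payload.
    image_b64 is the base64-encoded source image."""
    return {
        "init_images": [image_b64],
        "resize_mode": 0,                # 0 = "Just resize"
        "sampler_name": "DPM++ SDE Karras",
        "steps": 20,
        "width": 512,                    # your desired width
        "height": 512,                   # your desired height
        "cfg_scale": 7,                  # I keep this between 6 and 7
        "denoising_strength": 0.5,       # keeps the model from changing too much
        "prompt": "your prompt here",
    }

# To send it (requires the web UI running with --api):
# import json, urllib.request
# req = urllib.request.Request(
#     "http://127.0.0.1:7860/sdapi/v1/img2img",
#     data=json.dumps(build_img2img_payload(b64)).encode(),
#     headers={"Content-Type": "application/json"},
# )
# result = json.loads(urllib.request.urlopen(req).read())
```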

Resize mode

We can resize the original image using the Resize mode option. Let's see what each option does.

Just resize

In this mode, the original image is stretched (or squashed) to fit the new size. All of the original content is kept, but the aspect ratio may be distorted if it doesn't match the target dimensions.


Crop and resize

This option crops the original image to match the new dimensions: the image is scaled to cover the target size while keeping its aspect ratio, and any pixels that overflow the frame are cropped away.


Resize and fill

If we resize the original image to a larger size, this option fills the extra space with blurry colors extended from the original image's edges.
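The three pixel-space modes differ only in how they map the source rectangle onto the target. A rough sketch of the geometry (my own illustration of the idea, not the web UI's actual code):

```python
def resize_geometry(src_w, src_h, dst_w, dst_h, mode):
    """Return the (width, height) the source is scaled to before any
    cropping or filling. mode: 'just', 'crop', or 'fill'."""
    if mode == "just":
        # Stretch straight to the target; the aspect ratio may change.
        return dst_w, dst_h
    sx, sy = dst_w / src_w, dst_h / src_h
    if mode == "crop":
        # Scale so the image covers the target, then crop the overflow.
        s = max(sx, sy)
    else:  # "fill"
        # Scale so the image fits inside the target; the leftover
        # space is filled with colors blended from the edges.
        s = min(sx, sy)
    return round(src_w * s), round(src_h * s)

# A 1024x768 photo going to a 512x512 square:
# 'just' -> (512, 512): stretched, aspect ratio distorted
# 'crop' -> (683, 512): 171 px of width are then cropped away
# 'fill' -> (512, 384): 128 px of height are then filled in
```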


Just resize (latent upscale)

This mode resizes the image during the processing stage, working in latent space rather than in pixel space. If we increase the denoising strength, we get clearer results from the original image. See the demo.

:::note
I set the denoising strength below 0.5; that's why the image is somewhat blurry. Let's learn what denoising strength is.
:::

Denoising strength

Denoising strength allows us to control how much the model changes the original image. Differences become more noticeable, especially when set above 0.5. Check out the demo below.


Model :  Dreamshaper_631bakedvae.safetensor

Prompt : change the background into cherry blossom falling, highly detailed, 4k

Sampling method : DPM++ SDE Karras

Sampling steps : 20

What exactly is happening behind the scenes?

When we set the denoising strength to around 0.6, the model changes the original image only slightly, but at 0.7 the image is transformed into something completely different. Above 0.7, the model mostly ignores the original image and sticks strictly to the prompt.
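One way to build intuition for this: the denoising strength decides how far the original image is pushed back toward noise before sampling begins, so only a fraction of the scheduled steps actually run on it. The sketch below is a simplified approximation of that relationship, not the sampler's actual code:

```python
def effective_steps(steps: int, denoising_strength: float) -> int:
    """Approximate how many sampling steps actually run in img2img.
    At strength 1.0 the full schedule runs and the original image is
    essentially ignored; at 0.0 nothing changes at all."""
    return min(steps, int(steps * denoising_strength))

# With the settings from the demo above (20 sampling steps),
# higher strength means more steps spent repainting the image:
for ds in (0.5, 0.6, 0.7, 1.0):
    print(ds, "->", effective_steps(20, ds), "steps")
```

This is why 0.5 feels "safe": only about half the schedule runs, so the original image still dominates the result.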


Sketch

We can draw something, and the model will convert the drawing into an image for us. I prefer using a black background because it highlights the content against the background, but you can use any contrasting colors.


Prompt : a beautiful looking apple, highly detailed


Inpaint

Inpainting allows us to improve specific areas of an existing image, which makes it useful when we don't want to recreate the whole image. I hope you have already learned this; if not, follow the Inpainting docs. We're not going to cover it again here.


Inpaint sketch

We can sketch on an existing image with Inpaint sketch, which works like a combination of inpainting and sketch. I'll try to convert the curly hair into blonde hair.

original image




Inpaint Upload

We can upload the original image along with our mask drawing directly, and the model will take care of everything else.
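Through the API, Inpaint Upload maps to the same `/sdapi/v1/img2img` endpoint: the mask travels as a second base64 image alongside the original. A hedged sketch of the extra fields involved (the image strings are placeholders, and the prompt is the one from the demo below):

```python
def build_inpaint_payload(image_b64: str, mask_b64: str) -> dict:
    """Extend an img2img request with a user-uploaded mask.
    White areas of the mask are repainted; black areas are kept."""
    return {
        "init_images": [image_b64],
        "mask": mask_b64,            # the mask drawing we uploaded
        "mask_blur": 4,              # soften the mask edge a little
        "inpainting_fill": 1,        # 1 = start from the original content
        "inpaint_full_res": True,    # work at full resolution inside the mask
        "prompt": "blue diamonds, crystals stones, highly detailed, 4k",
        "denoising_strength": 0.75,
        "steps": 20,
    }
```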



Prompt : blue diamonds, crystals  stones, highly detailed, 4k

Model : Dreamshaper_631bakedvae.safetensor

See the settings


End Result


Tip: Always explore and experiment with different parameters. This way, you'll gain a deeper understanding and learn more.