Easy and Complete guide to Automatic1111

Jarvislabs.ai Create Stable Diffusion

Use A1111 - Stable Diffusion web UI on Jarvislabs out of the box in less than 60 seconds ⏰. It comes with 20+ preloaded models.

Introduction

Automatic1111 is a web-based tool that makes Stable Diffusion easy to use. When you open it in your browser, you get a webpage from which you can control everything, which is much easier than running Stable Diffusion from the terminal.

Launch an instance

Start an instance with Automatic1111 as the framework, then choose the required GPUs and storage.

Access the web UI

Once the instance is up and running, right-click on your running instance and select the API endpoint.

Jarvislabs.ai Web Stable Diffusion

The page may take a few seconds to load. Then you can start your creative journey.

Launch or modify the launch parameters

When the instance starts, launch.py runs inside a tmux session named automatic. You can check this from the JupyterLab terminal.

From the terminal, run the command below:

tmux attach -t automatic

The output would look something like this.

Jarvislabs.ai Web Stable Diffusion

If you press Ctrl+C, the application will stop. You can launch it again using the commands below.

cd /home/stable-diffusion-webui/
python launch.py --xformers

Use Automatic1111 API

You can also use Automatic1111 through its API. To relaunch it in API mode, follow the steps below.

tmux kill-session -t automatic
cd /home/stable-diffusion-webui/
python launch.py --xformers --api
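With the server running in API mode, you can generate an image without opening the UI at all. The sketch below is a minimal example, assuming the endpoint URL (a placeholder here) has been copied from the Jarvislabs UI; the txt2img response returns generated images as base64 strings.

```shell
# Placeholder endpoint; replace with the API endpoint copied from the Jarvislabs UI.
API_ENDPOINT="https://your-instance.jarvislabs.ai"

# POST a prompt to txt2img and decode the first base64-encoded image to a file.
curl -s -X POST "$API_ENDPOINT/sdapi/v1/txt2img" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a watercolor fox in the snow", "steps": 20, "width": 512, "height": 512}' |
  python3 -c 'import sys, json, base64; open("txt2img.png", "wb").write(base64.b64decode(json.load(sys.stdin)["images"][0]))' ||
  echo "request failed; is the server running with --api?"
```

The same payload also accepts other familiar UI parameters, such as negative_prompt, cfg_scale, and sampler_name.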

Access the API endpoint from the UI.

Jarvislabs.ai Web Stable Diffusion

Right-click on the instance to copy the API endpoint.

Example:

curl "API_ENDPOINT/sdapi/v1/sd-models"
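The raw sd-models response is verbose JSON; to skim it, you can print only each model's title field (the endpoint URL below is a placeholder):

```shell
# Placeholder endpoint; use the one copied from the Jarvislabs UI.
API_ENDPOINT="https://your-instance.jarvislabs.ai"

# Print one model title per line from the sd-models listing.
curl -s "$API_ENDPOINT/sdapi/v1/sd-models" |
  python3 -c 'import sys, json; print("\n".join(m["title"] for m in json.load(sys.stdin)))' ||
  echo "request failed; is the server running with --api?"
```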

Learn more about the Automatic1111 API here.

Text2image

Text-to-image synthesis lets you generate images from a written prompt, with precise control over the visual result. Our guide covers the various techniques available in Automatic1111; see the text2img docs to learn more.

Image2image (inpainting)

Image-to-image synthesis generates new images based on pictures you already have, including inpainting to repaint selected regions. Our guide explains how this works in Automatic1111 and offers tips on the different techniques. Check it out to learn more: img2img
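If you launched with --api, img2img works the same way over HTTP. A minimal sketch, assuming a local input.png and a placeholder endpoint (note that -w0 is GNU base64; macOS base64 does not need the flag):

```shell
# Placeholder endpoint and input file; adjust both for your instance.
API_ENDPOINT="https://your-instance.jarvislabs.ai"

# Base64-encode the source image (-w0 disables line wrapping; GNU coreutils).
B64=$(base64 -w0 input.png 2>/dev/null)

# denoising_strength controls how far the result may drift from the original
# (0 = unchanged, 1 = fully regenerated).
curl -s -X POST "$API_ENDPOINT/sdapi/v1/img2img" \
  -H "Content-Type: application/json" \
  -d "{\"init_images\": [\"$B64\"], \"prompt\": \"same scene at sunset\", \"denoising_strength\": 0.6}" \
  -o img2img_response.json ||
  echo "request failed; is the server running with --api?"
```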

Face Restore

In Automatic1111, face restoration brings back missing detail in faces, and gives you control over how the restored faces look. See our face restore guide to learn more.
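Face restoration can also be switched on from the API: the txt2img payload accepts a boolean restore_faces field. A minimal sketch with a placeholder endpoint:

```shell
# Placeholder endpoint; use the one copied from the Jarvislabs UI.
API_ENDPOINT="https://your-instance.jarvislabs.ai"

# A txt2img call with face restoration enabled.
curl -s -X POST "$API_ENDPOINT/sdapi/v1/txt2img" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "studio portrait of an astronaut", "steps": 20, "restore_faces": true}' \
  -o restored.json ||
  echo "request failed; is the server running with --api?"
```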

Upscale image

In Automatic1111, upscaling enlarges your pictures while adding detail, and lets you choose how much enhancement you want. Take a look at our guide for simple tips on making your pictures clearer using the upscaling feature. Click here to learn more.
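Upscaling is exposed through the extras endpoint, /sdapi/v1/extra-single-image, which takes a base64 image and returns one in its image field. A sketch with a placeholder endpoint and input file; the upscaler name must match one installed on your instance (the name below is an assumption):

```shell
# Placeholder endpoint and input file; adjust both for your instance.
API_ENDPOINT="https://your-instance.jarvislabs.ai"

# Base64-encode the image to upscale (GNU coreutils base64).
B64=$(base64 -w0 input.png 2>/dev/null)

# Ask for a 2x upscale; "R-ESRGAN 4x+" is assumed to be installed.
curl -s -X POST "$API_ENDPOINT/sdapi/v1/extra-single-image" \
  -H "Content-Type: application/json" \
  -d "{\"image\": \"$B64\", \"upscaling_resize\": 2, \"upscaler_1\": \"R-ESRGAN 4x+\"}" |
  python3 -c 'import sys, json, base64; open("upscaled.png", "wb").write(base64.b64decode(json.load(sys.stdin)["image"]))' ||
  echo "request failed; is the server running with --api?"
```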

ControlNet

ControlNet is a popular technique that gives you control over the images generated with Stable Diffusion. We have created a guide to help you understand the various ways you can use ControlNet in Automatic1111. You can find the guide here.

Preloaded Models

We have preloaded 50+ models for you to start with. You can find the list of models here.

FAQs

What is Automatic1111?

Automatic1111 is a powerful web user interface (WebUI) specifically designed for Stable Diffusion, a text-to-image AI model. It provides a user-friendly and customizable platform to create stunning images based on your written prompts.

What do I need to run Automatic1111?

There are two main ways to run Automatic1111: on your own PC or using an online GPU instance. The requirements vary slightly for each method.

On Your Own PC:

Hardware

  • GPU: A decent NVIDIA GPU with at least 8GB of VRAM (e.g., RTX 3060 Ti or higher).
  • CPU: A modern processor (i5 or Ryzen 5 or higher).
  • RAM: 16GB or more.

Software

  • Operating System: Windows 10/11 or Linux.
  • Python: Python 3.10 with libraries like PyTorch and CUDA Toolkit pre-installed.
  • Command Line: Basic comfort with command prompts.
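A quick terminal sanity check for these prerequisites (the PyTorch line only succeeds if torch is importable):

```shell
# Confirm the Python version (the web UI targets Python 3.10).
python3 --version

# Check whether PyTorch is installed and can see a CUDA-capable GPU.
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())" \
  2>/dev/null || echo "PyTorch not installed"
```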

Additional

  • Download files for the Stable Diffusion model (e.g., 1.5 or 2.1) and Automatic1111 itself.
  • Sufficient storage space on your PC for models and generated images.

Using an Online GPU Instance:

Hardware

  • No powerful PC needed; processing happens on cloud servers.

Software

  • Web Browser: Any modern browser like Chrome, Firefox, Edge, etc.
  • Internet Connection: Stable and high-speed for smooth processing and file transfers.
  • Cloud Account: Sign up for a service providing access to powerful GPUs for a fee (e.g., Jarvislabs, Google Colab, Paperspace).

Choose the option that best suits your budget, technical skills, and available hardware.

Where can I get Automatic1111?

If you want to run Automatic1111 on your own PC, the source code and installation instructions are available on GitHub: Automatic1111. Alternatively, you can use an online GPU instance such as Jarvislabs.

Is Automatic1111 free?

Yes, both Automatic1111 and Stable Diffusion are open source and free to use. However, you may need to pay for cloud computing resources depending on your hardware limitations.

Can I edit existing images with Automatic1111?

Yes. See Image2Image for more details about editing existing images.