A Coding Guide to Compare Three Stability AI Diffusion Models (v1.5, v2-base & SD3-medium) Side by Side in Google Colab Using Gradio

In this hands-on tutorial, we unlock the creative potential of three industry-leading diffusion models from Stability AI: Stable Diffusion v1.5, Stable Diffusion v2-base, and the cutting-edge Stable Diffusion 3 Medium. Running entirely in Google Colab with a Gradio interface, we compare the three powerful pipelines side by side, with rapid prompt iteration and seamless GPU-accelerated inference. Whether you are a developer looking to enhance your brand's visual narrative or eager to prototype an AI-driven content workflow, this tutorial shows how Stability AI's open-source models can be deployed immediately and at zero infrastructure cost, letting you focus on storytelling, engagement, and driving real-world outcomes.
!pip install huggingface_hub
from huggingface_hub import notebook_login
notebook_login()
We install the huggingface_hub library and then import and call the notebook_login() function, which prompts you to authenticate the notebook session with your Hugging Face account, allowing you to seamlessly access and manage models, datasets, and other Hub resources.
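If you prefer a non-interactive login (for example, in automated runs), huggingface_hub also exposes a login() function that accepts a token directly. A minimal sketch, assuming you have created an access token in your Hugging Face account settings (the token string below is a placeholder):

from huggingface_hub import login
login(token="hf_xxxxxxxxxxxxxxxx")  # placeholder; paste your own access token here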
!pip uninstall -y torchvision
!pip install --upgrade torch torchvision --index-url https://download.pytorch.org/whl/cu118
!pip install --upgrade diffusers transformers accelerate safetensors gradio pillow
We first force-uninstall any existing torchvision to clear potential conflicts, then reinstall torch and torchvision from the CUDA 11.8–compatible PyTorch wheel index, and finally upgrade the key libraries, diffusers, transformers, accelerate, safetensors, gradio, and pillow, to ensure we have the latest versions for building and running GPU-accelerated generative pipelines and web demos.
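Before loading any models, it can help to confirm that the reinstall left a consistent, CUDA-enabled environment. A quick sanity check along these lines, using only the libraries installed above:

import torch, torchvision, diffusers
print("torch:", torch.__version__)            # should report a +cu118 build
print("torchvision:", torchvision.__version__)
print("diffusers:", diffusers.__version__)
print("CUDA available:", torch.cuda.is_available())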
import torch
from diffusers import StableDiffusionPipeline, StableDiffusion3Pipeline
import gradio as gr
device = "cuda" if torch.cuda.is_available() else "cpu"
We import PyTorch, the Stable Diffusion v1/v2 and Stable Diffusion 3 pipelines from the diffusers library, and Gradio for building the interactive demo. We then check for CUDA availability and set the device variable to "cuda" if a GPU is present, falling back to "cpu" otherwise, so the models run on the best available hardware.
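As an optional check, you can also print which accelerator the notebook actually received, since Colab GPU types vary:

if device == "cuda":
    print("Running on:", torch.cuda.get_device_name(0))  # e.g., a T4 or A100 in Colab
else:
    print("Running on CPU; image generation will be very slow.")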
pipe1 = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    safety_checker=None
).to(device)
pipe1.enable_attention_slicing()
We load the Stable Diffusion v1.5 model in half precision (float16) without the built-in safety checker, transfer it to the selected device (the GPU, if available), and then enable attention slicing to reduce peak VRAM usage during image generation.
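If the Colab GPU still runs out of memory with all three pipelines resident, diffusers also offers model CPU offloading as a heavier-weight alternative to attention slicing. A sketch, noting that with offloading you skip the explicit .to(device) call (the pipe1_lowmem name is ours, for illustration):

pipe1_lowmem = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    safety_checker=None
)
pipe1_lowmem.enable_model_cpu_offload()  # keeps submodules on GPU only while they run; requires accelerate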
pipe2 = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base",
    torch_dtype=torch.float16,
    safety_checker=None
).to(device)
pipe2.enable_attention_slicing()
We load the Stable Diffusion v2 base model in 16-bit precision without the default safety filter, transfer it to the chosen device, and activate attention slicing to optimize memory usage during inference.
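At this point you can smoke-test a single pipeline before wiring up the full UI; for example (the prompt and filename here are arbitrary):

test_img = pipe2("a lighthouse at dawn, watercolor style", num_inference_steps=25, guidance_scale=7.5).images[0]
test_img.save("sd2_smoke_test.png")  # quick visual check that inference works end to end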
pipe3 = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
    safety_checker=None
).to(device)
pipe3.enable_attention_slicing()
We load Stability AI's Stable Diffusion 3 Medium checkpoint in 16-bit precision (skipping the built-in safety checker), transfer it to the selected device, and enable attention slicing to reduce GPU memory usage during generation.
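Note that stable-diffusion-3-medium-diffusers is a gated repository on the Hugging Face Hub, which is why the notebook_login() step above matters. If SD3's memory footprint is still too large for your Colab GPU, the diffusers documentation describes dropping its T5 text encoder as a trade-off between memory and prompt adherence; a minimal sketch (pipe3_light is our own illustrative name):

pipe3_light = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    text_encoder_3=None,   # drop the large T5-XXL encoder to save several GB of VRAM
    tokenizer_3=None,
    torch_dtype=torch.float16
).to(device)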
def generate(prompt, steps, scale):
    img1 = pipe1(prompt, num_inference_steps=steps, guidance_scale=scale).images[0]
    img2 = pipe2(prompt, num_inference_steps=steps, guidance_scale=scale).images[0]
    img3 = pipe3(prompt, num_inference_steps=steps, guidance_scale=scale).images[0]
    return img1, img2, img3
This function runs the same text prompt through all three loaded pipelines (pipe1, pipe2, pipe3) using the specified inference steps and guidance scale, then returns the first image from each, making it easy to compare Stable Diffusion v1.5, v2-base, and 3 Medium on identical inputs.
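Because each pipeline draws its own random latents, the three outputs are not strictly comparable from run to run. For a fairer side-by-side comparison, you could seed each pipeline identically via the generator argument that diffusers pipelines accept. A hypothetical variant (generate_seeded is our own name, not part of the tutorial):

def generate_seeded(prompt, steps, scale, seed=42):
    images = []
    for pipe in (pipe1, pipe2, pipe3):
        g = torch.Generator(device=device).manual_seed(seed)  # fresh, identically seeded generator per model
        images.append(pipe(prompt, num_inference_steps=steps, guidance_scale=scale, generator=g).images[0])
    return tuple(images)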
def choose(selection):
    return f"✅ You selected: **{selection}**"

with gr.Blocks() as demo:
    gr.Markdown("## AI Social-Post Generator with 3 Models")
    with gr.Row():
        prompt = gr.Textbox(label="Prompt", placeholder="A vibrant beach sunset…")
        steps = gr.Slider(1, 100, value=50, step=1, label="Inference Steps")
        scale = gr.Slider(1.0, 20.0, value=7.5, step=0.1, label="Guidance Scale")
    btn = gr.Button("Generate Images")
    with gr.Row():
        out1 = gr.Image(label="Model 1: SD v1.5")
        out2 = gr.Image(label="Model 2: SD v2-base")
        out3 = gr.Image(label="Model 3: SD v3-medium")
    sel = gr.Radio(
        ["Model 1: SD v1.5", "Model 2: SD v2-base", "Model 3: SD v3-medium"],
        label="Select your favorite"
    )
    txt = gr.Markdown()
    btn.click(fn=generate, inputs=[prompt, steps, scale], outputs=[out1, out2, out3])
    sel.change(fn=choose, inputs=sel, outputs=txt)

demo.launch(share=True)
Finally, this Gradio app builds a three-column UI where you can enter a text prompt, adjust the inference steps and guidance scale, and then generate and display images from SD v1.5, v2-base, and 3 Medium side by side. It also features a radio selector, allowing you to pick your preferred model's output and see a simple confirmation message when a selection is made.
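If you want the favorite-model votes to feed an actual A/B comparison rather than just a confirmation message, one hypothetical extension is to log each selection with a timestamp (choose_and_log and votes.csv are illustrative names; wire it in by passing fn=choose_and_log to sel.change):

import csv, datetime

def choose_and_log(selection):
    with open("votes.csv", "a", newline="") as f:  # append one row per vote for later analysis
        csv.writer(f).writerow([datetime.datetime.now().isoformat(), selection])
    return f"✅ You selected: **{selection}**"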
In short, by integrating Stability AI's latest diffusion architectures into an easy-to-use Gradio app, you've seen how effortlessly you can prototype, compare, and deploy striking visuals that resonate on today's platforms. From A/B testing creative directions to automating campaign assets at scale, Stability AI's models provide the performance, flexibility, and vibrant community support to transform your content pipeline.
Check out the Colab Notebook.

Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in materials science, he is exploring new advancements and creating opportunities to contribute.