easydiffusion
Easiest 1-click way to create beautiful artwork on your PC using AI, with no tech knowledge. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image.
Top Related Projects
Stable Diffusion web UI
Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
High-Resolution Image Synthesis with Latent Diffusion Models
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
Quick Overview
EasyDiffusion is an open-source project that provides a user-friendly interface for running Stable Diffusion, a powerful AI image generation model. It offers a web-based UI that allows users to generate, edit, and enhance images using various AI models and techniques, making advanced AI image manipulation accessible to a wider audience.
Pros
- Easy-to-use web interface for Stable Diffusion and other AI image models
- Supports multiple AI models and techniques for image generation and editing
- Cross-platform compatibility (Windows, macOS, Linux)
- Active community and frequent updates
Cons
- Requires significant computational resources for optimal performance
- Limited customization options compared to more advanced tools
- May have a steeper learning curve for users unfamiliar with AI image generation concepts
- Dependency on external AI models and their limitations
Getting Started
1. Clone the repository:
   git clone https://github.com/easydiffusion/easydiffusion.git
2. Install dependencies:
   cd easydiffusion
   pip install -r requirements.txt
3. Download the Stable Diffusion model:
   python scripts/download_models.py
4. Run the web UI:
   python scripts/run_webui.py
5. Open a web browser and navigate to http://localhost:9000 to access the EasyDiffusion interface.
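Once the server is up, you can also drive it from a script instead of the browser. Below is a minimal sketch using Python's requests library; note that the /render endpoint name and the payload fields are illustrative assumptions, not a documented API, so inspect the web UI's network traffic for the real routes and schema:

import requests

# ASSUMPTION: the endpoint name and payload fields below are illustrative
# guesses, not a documented API -- check the web UI's requests for the
# actual schema before relying on this.
payload = {
    "prompt": "a photograph of an astronaut riding a horse",
    "width": 512,
    "height": 512,
    "num_inference_steps": 25,
}
resp = requests.post("http://localhost:9000/render", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json())  # typically job/queue information from the server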
Competitor Comparisons
Stable Diffusion web UI
Pros of stable-diffusion-webui
- More extensive feature set and advanced options
- Larger community and more frequent updates
- Greater flexibility and customization through extensions
Cons of stable-diffusion-webui
- Steeper learning curve for beginners
- More complex installation process
- Higher system requirements
Code Comparison
stable-diffusion-webui:
def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iteration=0, position_in_batch=0):
    index = position_in_batch + iteration * p.batch_size
    clip_skip = getattr(p, 'clip_skip', opts.CLIP_stop_at_last_layers)

    generation_params = {
        "Steps": p.steps,
        "Sampler": p.sampler_name,
        "CFG scale": p.cfg_scale,
        "Seed": all_seeds[index],
        "Face restoration": (opts.face_restoration_model if p.restore_faces else None),
        "Size": f"{p.width}x{p.height}",
        "Model hash": getattr(p, 'sd_model_hash', None),
        "Model": (None if not opts.add_model_name_to_info or not shared.sd_model.sd_checkpoint_info.model_name else shared.sd_model.sd_checkpoint_info.model_name.replace(',', '').replace(':', '')),
        "Variation seed": (None if p.subseed_strength == 0 else all_subseeds[index]),
        "Variation seed strength": (None if p.subseed_strength == 0 else p.subseed_strength),
        "Seed resize from": (None if p.seed_resize_from_w == 0 or p.seed_resize_from_h == 0 else f"{p.seed_resize_from_w}x{p.seed_resize_from_h}"),
        "Denoising strength": getattr(p, 'denoising_strength', None),
        "Conditional mask weight": getattr(p, "inpainting_mask_weight", shared.opts.inpainting_mask_weight) if p.is_using_inpainting_conditioning else None,
        "Clip skip": None if clip_skip <= 1 else clip_skip,
        "ENSD": None if opts.eta_noise_seed_delta == 0 else opts.eta_noise_seed_delta,
    }
easydiffusion:
def create_infotext(p, all_prompts, all_seeds, all_subseeds):
    index = 0

    generation_params = {
        "Steps": p.steps,
        "Sampler": p.sampler_name,
        "CFG scale": p.cfg_scale,
        "Seed": all_seeds[index],
        "Face restoration": (opts.face_restoration_model if p.restore_faces else None),
        "Size": f"{p.width}x{p.height}",
        "Model hash": getattr(p, 'sd_model_hash', None),
        "Model": shared.sd_model.sd_checkpoint_info.model_name,
        "Variation seed": (None if p.subseed_strength == 0 else all_subseeds[index]),
        "Variation seed strength": (None if p.subseed_strength == 0 else p.subseed_strength),
        "Denoising strength": getattr(p, 'denoising_strength', None),
    }
Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
Pros of InvokeAI
- More advanced features like inpainting, outpainting, and img2img
- Supports multiple model architectures (SD 1.5, SD 2.x, SDXL)
- Active development with frequent updates and new features
Cons of InvokeAI
- Steeper learning curve for beginners
- Requires more system resources
- Installation process can be more complex
Code Comparison
InvokeAI:
from invokeai.app.invocations.baseinvocation import BaseInvocation

class CustomInvocation(BaseInvocation):
    def invoke(self, context):
        # Custom logic here
        pass
EasyDiffusion:
from easydiffusion import generate_image
result = generate_image(prompt="A beautiful landscape", steps=50)
Summary
InvokeAI offers more advanced features and flexibility, making it suitable for experienced users and complex projects. EasyDiffusion, as the name suggests, focuses on simplicity and ease of use, making it more accessible for beginners. InvokeAI supports a wider range of models and techniques but requires more resources and setup time. EasyDiffusion provides a straightforward approach to image generation with a simpler API, but may lack some advanced capabilities found in InvokeAI.
High-Resolution Image Synthesis with Latent Diffusion Models
Pros of stablediffusion
- More advanced and feature-rich, offering a wider range of image generation capabilities
- Better documentation and community support, making it easier for developers to integrate and customize
- Regularly updated with the latest advancements in AI image generation techniques
Cons of stablediffusion
- Higher computational requirements, potentially limiting accessibility for users with less powerful hardware
- Steeper learning curve, which may be challenging for beginners or those new to AI image generation
- More complex setup process compared to the simpler, more user-friendly approach of easydiffusion
Code Comparison
stablediffusion:
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe("A beautiful sunset over the ocean").images[0]
image.save("generated_image.png")
easydiffusion:
from easydiffusion import generate_image
image = generate_image("A beautiful sunset over the ocean")
image.save("generated_image.png")
Note: The code examples are simplified for comparison purposes and may not reflect the exact implementation in each repository.
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
Pros of Diffusers
- More comprehensive library with support for various diffusion models
- Extensive documentation and integration with Hugging Face ecosystem
- Active development and frequent updates
Cons of Diffusers
- Steeper learning curve for beginners
- May require more setup and configuration
Code Comparison
EasyDiffusion:
from easydiffusion import generate_image
image = generate_image("A beautiful sunset over the ocean")
image.save("sunset.png")
Diffusers:
from diffusers import StableDiffusionPipeline
import torch
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")
image = pipe("A beautiful sunset over the ocean").images[0]
image.save("sunset.png")
EasyDiffusion provides a simpler API for quick image generation, while Diffusers offers more flexibility and control over the diffusion process. Diffusers requires explicit model loading and device management, but allows for more advanced customization.
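As a concrete example of that extra control, Diffusers lets you swap the scheduler (sampler) explicitly. A minimal sketch, using the same model ID as the example above:

from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Swap in the Euler Ancestral scheduler -- the kind of knob a one-call
# API like EasyDiffusion's does not expose.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")  # or "cpu" if no compatible GPU is available

image = pipe("A beautiful sunset over the ocean", num_inference_steps=30).images[0]
image.save("sunset_euler_a.png")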
README
Easy Diffusion 3.0
The easiest way to install and use Stable Diffusion on your computer.
Does not require technical knowledge, does not require pre-installed software. 1-click install, powerful features, friendly community.
New! Support for SDXL, ControlNet, multiple LoRA files, embeddings (and a lot more) has been added!
Installation guide | Troubleshooting guide | User guide | Discord server (for support queries and development discussions)
Installation
Click the download button for your operating system:
Hardware requirements:
- Windows: NVIDIA graphics card¹ (minimum 2 GB of VRAM), or run on your CPU.
- Linux: NVIDIA¹ or AMD² graphics card (minimum 2 GB of VRAM), or run on your CPU.
- Mac: M1 or M2, or run on your CPU.
- Minimum 8 GB of system RAM.
- At least 25 GB of space on the hard disk.
¹) CUDA Compute capability level of 3.7 or higher required.
²) ROCm 5.2 support required.
The installer will take care of whatever is needed. If you face any problems, you can join the friendly Discord community and ask for assistance.
On Windows:
1. Run the downloaded Easy-Diffusion-Windows.exe file.
2. Run Easy Diffusion once the installation finishes. You can also start it from your Start Menu, or from your desktop (if you created a shortcut).
If Windows SmartScreen prevents you from running the program, click "More info" and then "Run anyway".
Tip: On Windows 10, please install at the top level of your drive, e.g. C:\EasyDiffusion or D:\EasyDiffusion. This will avoid a common problem with Windows 10 (file path length limits).
On Linux/Mac:
1. Unzip/extract the folder easy-diffusion, which should be in your downloads folder, unless you changed your default downloads destination.
2. Open a terminal window, and navigate to the easy-diffusion directory.
3. Run ./start.sh (or bash start.sh) in the terminal.
To remove/uninstall:
Just delete the EasyDiffusion folder to uninstall all the downloaded packages.
Easy for new users, powerful features for advanced users
Features:
User experience
- Hassle-free installation: Does not require technical knowledge, does not require pre-installed software. Just download and run!
- Clutter-free UI: A friendly and simple UI, while providing a lot of powerful features.
- Task Queue: Queue up all your ideas, without waiting for the current task to finish.
- Intelligent Model Detection: Automatically figures out the YAML config file to use for the chosen model (via a models database).
- Live Preview: See the image as the AI is drawing it.
- Image Modifiers: A library of modifier tags like "Realistic", "Pencil Sketch", "ArtStation" etc. Experiment with various styles quickly.
- Multiple Prompts File: Queue multiple prompts by entering one prompt per line, or by running a text file (see the example after this list).
- Save generated images to disk: Save your images to your PC!
- UI Themes: Customize the program to your liking.
- Searchable models dropdown: Organize your models into sub-folders, and search through them in the UI.
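For example, a prompts file for the Multiple Prompts File feature is just plain text with one prompt per line; a file (the name is illustrative) might contain:

a watercolor painting of a lighthouse at dawn
a pencil sketch of a lighthouse
a photograph of a lighthouse at sunset, cinematic lighting

Each line is queued as its own image task.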
Powerful image generation
- Supports: "Text to Image", "Image to Image" and "InPainting"
- ControlNet: For advanced control over the image, e.g. by setting the pose or drawing the outline for the AI to fill in.
- 16 Samplers: PLMS, DDIM, DEIS, Heun, Euler, Euler Ancestral, DPM2, DPM2 Ancestral, LMS, DPM Solver, DPM++ 2s Ancestral, DPM++ 2m, DPM++ 2m SDE, DPM++ SDE, DDPM, UniPC.
- Stable Diffusion XL and 2.1: Generate higher-quality images using the latest Stable Diffusion XL models.
- Textual Inversion Embeddings: For guiding the AI strongly towards a particular concept.
- Simple Drawing Tool: Draw basic images to guide the AI, without needing an external drawing program.
- Face Correction (GFPGAN)
- Upscaling (RealESRGAN)
- Loopback: Use the output image as the input image for the next image task.
- Negative Prompt: Specify aspects of the image to remove.
- Attention/Emphasis: + in the prompt increases the model's attention to the enclosed words, and - decreases it. E.g. apple++ falling from a tree.
- Weighted Prompts: Use weights for specific words in your prompt to change their importance, e.g. (red)2.4 (dragon)1.2.
- Prompt Matrix: Quickly create multiple variations of your prompt, e.g. a photograph of an astronaut riding a horse | illustration | cinematic lighting (see the expansion example after this list).
- Prompt Set: Quickly create multiple variations of your prompt, e.g. a photograph of an astronaut on the {moon,earth}.
- 1-click Upscale/Face Correction: Upscale or correct an image after it has been generated.
- Make Similar Images: Click to generate multiple variations of a generated image.
- NSFW Setting: A setting in the UI to control NSFW content.
- JPEG/PNG/WEBP output: Multiple file formats.
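To make the Prompt Matrix and Prompt Set syntax concrete, the examples above would expand along these lines (the exact combination behavior is an assumption based on how similar UIs treat | variations; each {} option is substituted in place):

a photograph of an astronaut riding a horse
a photograph of an astronaut riding a horse, illustration
a photograph of an astronaut riding a horse, cinematic lighting
a photograph of an astronaut riding a horse, illustration, cinematic lighting

a photograph of an astronaut on the moon
a photograph of an astronaut on the earth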
Advanced features
- Custom Models: Use your own .ckpt or .safetensors file, by placing it inside the models/stable-diffusion folder (see the folder layout sketch after this list)!
- Stable Diffusion XL and 2.1 support
- Merge Models
- Use custom VAE models
- Textual Inversion Embeddings
- ControlNet
- Use custom GFPGAN models
- UI Plugins: Choose from a growing list of community-generated UI plugins, or write your own plugin to add features to the project!
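For the Custom Models feature, a sketch of the folder layout inside the Easy Diffusion installation (the models/stable-diffusion path is from the feature above; the file names and the vae sub-folder are illustrative assumptions):

easy-diffusion/
  models/
    stable-diffusion/
      my-custom-model.safetensors
      another-model.ckpt
    vae/
      my-custom-vae.safetensors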
Performance and security
- Fast: Creates a 512x512 image with euler_a in 5 seconds, on an NVIDIA 3060 12GB.
- Low Memory Usage: Create 512x512 images with less than 2 GB of GPU RAM, and 768x768 images with less than 3 GB of GPU RAM!
- Use CPU setting: If you don't have a compatible graphics card, but still want to run it on your CPU.
- Multi-GPU support: Automatically spreads your tasks across multiple GPUs (if available), for faster performance!
- Auto scan for malicious models: Uses picklescan to prevent malicious models (a standalone usage sketch follows at the end of this features list).
- Safetensors support: Supports loading models in the safetensors format, for improved safety.
- Auto-updater: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
- Developer Console: A developer-mode for those who want to modify their Stable Diffusion code, modify packages, and edit the conda environment.
(and a lot more)
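For reference, picklescan is also a standalone tool, so you can scan a downloaded checkpoint yourself before using it. A minimal sketch (the file path is illustrative; the --path flag is from picklescan's own documentation):

pip install picklescan
picklescan --path models/stable-diffusion/my-custom-model.ckpt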
(Screenshot: the Task Queue in the Easy Diffusion UI.)
How to use?
Please refer to our guide to understand how to use the features in this UI.
Bug reports and code contributions welcome
If there are any problems or suggestions, please feel free to ask on the discord server or file an issue.
If you have any code contributions in mind, please feel free to say Hi to us on the discord server. We use the Discord server for development-related discussions, and for helping users.
Credits
- Stable Diffusion: https://github.com/Stability-AI/stablediffusion
- CodeFormer: https://github.com/sczhou/CodeFormer (license: https://github.com/sczhou/CodeFormer/blob/master/LICENSE)
- GFPGAN: https://github.com/TencentARC/GFPGAN
- RealESRGAN: https://github.com/xinntao/Real-ESRGAN
- k-diffusion: https://github.com/crowsonkb/k-diffusion
- Code contributors and artists on the cmdr2 UI: https://github.com/cmdr2/stable-diffusion-ui and Discord (https://discord.com/invite/u9yhsFmEkB)
- Lots of contributors on the internet
Disclaimer
The authors of this project are not responsible for any content generated using this interface.
The license of this software forbids you from sharing any content that:
- Violates any laws.
- Produces any harm to a person or persons.
- Disseminates (spreads) any personal information that would be meant for harm.
- Spreads misinformation.
- Targets vulnerable groups.
For the full list of restrictions please read the License. You agree to these terms by using this software.