Taj S.


Resolve ChatGPT Blank Screen Issue with these 5 Methods

Experiencing a ChatGPT blank screen can be distracting and frustrating. It is a common issue among AI enthusiasts, so in this article we will look at its likely causes and how to fix it.

ChatGPT is a powerful text-to-text AI model that can impress you with its intelligence while answering your queries. However, a blank screen is a commonly reported problem: users suddenly see an empty page in the middle of a conversation. This is not a pleasant experience, and users naturally want to know the reasons behind it and how to fix it. That is exactly what this article covers, so let's get started.

 

ChatGPT-Blank-Screen Demonstration

 

The Impact of ChatGPT Blank Screen on User Experience

ChatGPT has been in the limelight since it launched, as it was the first of its kind. Because everyone wanted to try it, the model gained an immense number of users, likely beyond what the OpenAI team predicted. This is one possible reason ChatGPT runs into technical issues and errors now and then. Since these issues cause hindrance and inconvenience for users, fixing them should be a top priority.

 

Reasons for the ChatGPT Blank Screen

The ChatGPT blank screen is one such issue that causes frustration and inconvenience for users. Its exact cause is unknown, but users have suggested a few possibilities. According to some, a blank screen can appear when the server is overloaded or experiencing technical difficulties. Others suggest it can be triggered by a question the AI does not want to answer. That may not sound realistic, but users have reported multiple cases where, after such a question, the model seemed to hide behind a blank screen.

 

5 Ways to Fix ChatGPT Blank Screen

You can try these methods to fix the ChatGPT blank screen.

 

1. Check your Internet Connection

The very first thing to check is your internet connection. Make sure your device is connected to a stable network.

 

2. Restart the Page

Simply reload the page. This can fix a temporary ChatGPT blank screen.

 

3. Clear your Browser Cache

Browsers store cache data while you surf the web, and stale cache can sometimes cause the ChatGPT blank screen. Clearing the browser cache is therefore worth trying.

Go to your browser settings, head to “Privacy and security”, and select “Clear browsing data”. Leave “Cached images and files” checked while unchecking all the other options, then click “Clear data”. In most cases, this resolves the issue.

 

4. Change Browser

Another thing you can do is switch browsers. Try using ChatGPT in a different browser, which can potentially fix the issue.

 

5. Check the Extensions

Browser extensions can sometimes interfere with the web application. Try disabling the extensions you added recently.

Bonus: Contact Support

If none of the above methods works, you can still contact OpenAI's support team and describe the issue you are facing. They will likely be able to help.

 

Conclusion

ChatGPT was in the limelight even before it was released. As the first of its kind, it gained an immense number of users in no time, which sometimes causes technical problems due to heavy load on the service. One of the issues users face is the ChatGPT blank screen, which can be distracting and frustrating. Fortunately, there are several fixes you can try to get back on track.

 

Also Read: How AskYourPDF Website Can Answer All Your Queries from a PDF Document

using askyourpdf website on laptop

Here is How AskYourPDF Website Can Answer All Your Queries from a PDF Document

Have you ever wondered whether AI can answer critical questions from a PDF? You can actually do that on the AskYourPDF website, and this article is all about it. We will see how you can get AI to answer your questions from a PDF document by following a few simple steps. So let's get started.

 

What is the AskYourPDF Website?

As the name suggests, you can ask questions about a PDF using this website. In simple terms, you upload your PDF document, the AI analyzes it, and it can then answer the questions you ask. This can be really helpful when you are stuck on a query about a PDF document. Let's see how to use the AskYourPDF website.

 

How to Use AskYourPDF Website

  • First of all, visit the AskYourPDF website. You can either Google it or simply follow the attached link.
  • Once you are on the website, click on “Get Started” to create a free account.

 

AskYourPDF Website - Getting Started

  • Sign up by providing the relevant information.
  • After signing up, you will have to verify your account from the email you provided.
  • You will have to open your email inbox and click on “Verify Account” in the most recent email.

 

AskYourPDF Website-Verify-Account

 

  • Once you have signed up, you will be redirected to a new page as demonstrated below.
  • Click on “Start Conversation”.

 

AskYourPDF-Website-First-Screen

 

  • You will now be redirected to the page where you can either upload your document or provide a link to the document.
  • Once the upload is completed, the AI will provide you with the initial information about the document. You can now start to ask your questions.

 

AskYourPDF Website-Conversation

 

Is the AskYourPDF Website Reliable?

I tried asking simple, moderate, and complex questions, and the AskYourPDF website answered all of them correctly according to the document I provided. The document, by the way, was a children's story book, which you can read yourself to get a better idea of the demonstrations below.

Limitations of AskYourPDF Website

Despite being useful, the AskYourPDF website restricts some services on a free account. However, you can upgrade to a premium subscription that gives you unrestricted access to the service. Here are the details of the premium plans the AskYourPDF website offers.

AskYourPDF Website Pro Plans

 

Conclusion

AskYourPDF is a fabulous tool that uses the power of AI to answer your queries about a PDF document. You simply upload a PDF to the website, and it will be able to answer any type of question you ask.

Also Read: 5 Amazing Stable Diffusion Men Models You Should Try

Stable Diffusion Men Model Cover Photo

Here are 5 Amazing Stable Diffusion Men Models You Should Try

Stable Diffusion is a popular text-to-image AI model that can turn your generative thoughts into actual images. One of the features that sets it apart from its competitors is that it comes with 1000+ custom models you can use to boost your creativity. In this blog, we will discuss the 5 best Stable Diffusion men models, which are perfect if you are looking to generate male characters, and we will cover how each model works in detail. So let's get started.

 

Set Up Stable Diffusion WebUI

The first thing you need in order to try Stable Diffusion men models is the Stable Diffusion WebUI. By following a few simple steps, you can install Stable Diffusion on your own machine. Once it is installed on your PC, you can try these men models for ultimate creativity and fun.

 

Stable Diffusion Men Models

Here is the list of the 5 best Stable Diffusion men models.

 

BlueBoys_2D

Stable Diffusion Men Model Blue Boys 2D

As the name suggests, this model creates 2D anime-style male images according to your input prompts. The model focuses on simple, clear, and flat 2D style designs with vibrant and clear colors. If you are a 2D anime fan, this model is going to be a good treat for you.

Here are a few recommended settings for BlueBoys_2D to work at its best: a sampling method of Euler a / DPM++ SDE Karras, a clip skip of 2, and a Hires. fix upscaler of R-ESRGAN 4x+Anime6B. Additionally, a CFG scale of 7 to 11 and a VAE of vae-ft-mse-840000-ema-pruned / kl-f8-anime2 will work best in most cases.
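If you run the WebUI with the --api flag, you can also apply roughly the same settings from a script. The sketch below posts to the WebUI's /sdapi/v1/txt2img endpoint; the exact field names and the upscaler/VAE names depend on your WebUI version and what you have installed, so treat it as an illustration rather than a definitive recipe.

import base64
import requests

# Assumes the AUTOMATIC1111 WebUI is running locally with the --api flag.
# Field names mirror the recommended settings above; adjust to your WebUI version.
payload = {
    "prompt": "1boy, 2D anime style, flat colors, clean lineart, vibrant",
    "negative_prompt": "lowres, bad anatomy, blurry",
    "sampler_name": "Euler a",             # or "DPM++ SDE Karras"
    "steps": 28,
    "cfg_scale": 9,                        # recommended range is 7 to 11
    "width": 512,
    "height": 768,
    "enable_hr": True,                     # Hires. fix
    "hr_upscaler": "R-ESRGAN 4x+ Anime6B",
    "override_settings": {
        "CLIP_stop_at_last_layers": 2,     # clip skip 2
        "sd_vae": "vae-ft-mse-840000-ema-pruned.ckpt",  # must match an installed VAE file
    },
}

response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()
first_image_png = base64.b64decode(response.json()["images"][0])
with open("blueboys_sample.png", "wb") as f:
    f.write(first_image_png)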

 

The Three Kingdoms

Stable Diffusion Men Model The Three Kingdoms

The Three Kingdoms is one of the best Stable Diffusion men models for those who love fairy tale, ancient, king-type characters. The model will provide you with an output of characters that are similar to classic villains or heroes.

One of the best parts is that The Three Kingdoms is updated on a regular basis, so if you are not satisfied with the results of your initial attempts, you can always try again later. Additionally, the model doesn't need any trigger words, which makes it really user-friendly.

 

PastelBoys_2D

Stable Diffusion Men Model Pastel Boys 2D

PastelBoys_2D is another powerful Stable Diffusion model that can amaze you with its results. It can be a fantastic utility if you are looking to generate a handsome anime male character. The model is better than its previous version, though it still needs improvement in a few areas. Overall, its performance is impressive.

The best settings for the model to generate stunning outputs are as follows: sampling method – Euler a / DPM++ SDE Karras, clip skip – 2, Hires. fix upscaler – R-ESRGAN 4x+Anime6B, CFG scale – 7~9, and VAE – vae-ft-mse-840000-ema-pruned / kl-f8-anime2.

 

Pretty Boys

Stable Diffusion Men Model Blue Pretty Boys

Pretty Boys is the first realistic LoRA among Stable Diffusion men models and can help you create realistic male characters. The model is trained to generate handsome faces without a beard, and you can steer its output by using terms like Caucasian, Black, Asian, or Indian.

The Pretty Boys Stable Diffusion model is trained on Stable Diffusion 1.5. You can get better results if you use the VAE sd-vae-ft-mse-original. To use LoRAs in AUTOMATIC1111, you will need to update the WebUI with git pull. Note that the LoRA file should be copied into the stable-diffusion-webui/models/lora directory, and the weight should be adjusted according to the instructions as well; a rough sketch of these two steps follows below.
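As a rough sketch of those two steps (the models/lora path comes from the paragraph above; the filenames and the 0.7 weight are made-up examples), you could copy the downloaded file into place and then reference it in the prompt with the usual AUTOMATIC1111 syntax:

import shutil
from pathlib import Path

# Hypothetical filenames and paths, for illustration only.
downloaded_lora = Path.home() / "Downloads" / "pretty_boys.safetensors"
webui_lora_dir = Path("stable-diffusion-webui/models/lora")

webui_lora_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(downloaded_lora, webui_lora_dir / downloaded_lora.name)

# In AUTOMATIC1111, the LoRA is then referenced in the prompt, with the weight after the colon.
prompt = "portrait photo of a handsome man, <lora:pretty_boys:0.7>, studio lighting"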

 

Refdalorange

Stable Diffusion Men Model Refdalorange

Refdalorange can create male characters with a perfect balance between 2D and 3D design. Although the model is trained to generate male characters, it generates female characters pretty well too. The model uses the orangemix.pt VAE, which is very effective for generating high-quality character designs.

The feature that sets Refdalorange apart from other Stable Diffusion men models is it can generate male characters in almost every situation. You can create your character as a warrior, as a scholar, or anything you want.

 

Conclusion

Stable Diffusion is an ocean of amazing output, ranging from imaginative sceneries to realistic human characters, and it offers tons of models for custom generation. If you are looking to generate male characters, you have dozens of Stable Diffusion men models to choose from, ranging from 2D and 3D styles to realistic characters. Each model is discussed with its pros and cons in this blog.

Stable Diffusion Mask Blur Featured Image

Use Stable Diffusion Mask Blur & InPaint Feature to Alter the Specific Parts of the Image 2023

Stable Diffusion is an evolving text-to-image AI model that has recently become very popular. A few of its features set it apart from other AI models such as Midjourney, and one of them is Stable Diffusion mask blur, which you can use for inpainting images. In this article, we will look in detail at how you can use this feature to boost your creativity and get the results you want. So let's get started!

 

What is Stable Diffusion Mask Blur?

In simple terms, if you want to change a specific part of an image, you can do that with the Stable Diffusion mask blur feature. Suppose you generated an image and are satisfied with it overall, but there is one odd element you want to alter without disturbing the rest of the image. This is where Stable Diffusion mask blur comes into play. Let's see in detail how you can do that.

 

Let's Use Stable Diffusion Mask Blur

There are a few steps you will need to follow to successfully alter the desired part of your image. Follow the steps below to get the results you imagine.

 

Install Stable Diffusion WebUI

First, you will need some medium to use Stable Diffusion. The best option is to install Stable Diffusion on your own device. Make sure you follow the guide and get Stable Diffusion WebUI on your PC. Once done, you can move on to the next step.

 

Get an Input Image to Use in Stable Diffusion Mask Blur

Once you have installed the Stable Diffusion WebUI, you need an image to use as input. You probably already have the image that brought you to this article; if not, you can generate one. Here are a few of the best Stable Diffusion prompts you can use to get started.

We will be using the following image. I am happy with the rocks and the trees, but the water stream doesn't appeal to me; I want to make it more realistic. This is where Stable Diffusion mask blur will be helpful. Let's do it!

Smooth Stream Stable Diffusion Mask Blur

 

Getting Started with Stable Diffusion Mask Blur

Stable Diffusion Mask Blur Interface

Since you have the image, let's move on to the next step. Run the Stable Diffusion WebUI and you should see an interface like the one above. Head to the img2img section. (Incidentally, you can also turn a rough drawing into pure art using the img2img feature.) Coming back to our topic: in the img2img tab, select the Inpaint sub-section below the prompt field.

 

Prompt

Write down your idea as a detailed description in the prompt field. Try to add as many details as you can.

 

Negative Prompt

Add everything you don’t want Stable Diffusion to add in the output image. Here are 600+ Stable Diffusion negative prompts to assist you.

 

Upload Input Image

Upload your input image in the “Drop Image Here” section. You can either drag and drop the image or simply click the box and browse the image from your PC.

 

Start Painting the Mask

Once you have imported the image, start painting over the areas you want to alter using the hard black circular brush. Everything under this mask is what Stable Diffusion will alter; the rest will remain the same, so make sure you paint carefully. Here is what our final mask looks like.

Stable Diffusion Mask Blur Black Mask

 

Generate the Result

Once done, click on the “Generate” button. And you are done! Here is the result we got.

Stable Diffusion Mask Blur Final Output

 

Let's compare both images side by side to get a good idea of what we actually did!
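If you prefer scripting over clicking through the WebUI, the diffusers library offers an inpainting pipeline built around the same three ingredients: an input image, a mask, and a prompt. This is a minimal sketch rather than the exact workflow above, and the model name and file names are assumptions.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Assumed model and file names, for illustration only.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("stream.png").convert("RGB").resize((512, 512))
mask_image = Image.open("stream_mask.png").convert("RGB").resize((512, 512))  # white = area to repaint

result = pipe(
    prompt="a realistic rocky mountain stream, clear rushing water, natural lighting",
    negative_prompt="blurry, cartoon, plastic-looking water",
    image=init_image,
    mask_image=mask_image,
).images[0]

result.save("stream_inpainted.png")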

stable diffusion img2img feature image

Use Stable Diffusion img2img Feature to Transform Your Hand Sketch into a Professional Art

Do you want to draw but lack natural artistic ability? Stable Diffusion is here to help! The Stable Diffusion img2img feature can turn your hand-drawn sketches into professional works of art, leaving viewers astonished.

Stable Diffusion is a popular text-to-image AI model that comes with a list of amazing features setting it apart from other AI models. One of these features is Stable Diffusion img2img (image-to-image). In this article, we will discuss the feature in detail: how it works, its benefits, and more. So let's dive in!

 

What is Stable Diffusion img2img Feature?

Stable Diffusion img2img (image-to-image) is a feature that lets you provide an image as input alongside a text prompt. The input image acts as a guide for the output, accompanying the text prompt: Stable Diffusion follows the style, color, and composition of the input image.

 

How does Stable Diffusion img2img Feature work?

The Stable Diffusion img2img feature takes an image as input along with a text prompt and generates a result that combines both. Even if your input lacks vibrancy and detail, Stable Diffusion can enhance it to produce amazing results that align with the text prompt you provided.

As an example, this blog uses a rough hand-drawn sketch of an apple. We will see how, along with a text prompt, we can use the image below as our input (the one on the right) and turn it into a polished drawing (the one on the left).

Stable Diffusion img2img side by side comparision

 

Setting up Stable Diffusion img2img Feature

To use Stable Diffusion img2img, you will need to install the Stable Diffusion GUI called AUTOMATIC1111. Follow this step-by-step guide on how to install the Stable Diffusion GUI on your machine.

 

Enabling Color Sketch Tool

Once you have installed the Stable Diffusion GUI, you will have to enable the color sketch tool for the img2img feature to work well. By default, the color sketch tool is inactive, but you can activate it by following these steps.

  • Go to the location where you installed Automatic1111.
  • Head to the stable-diffusion-webui folder.
  • Locate the webui-user.bat file.
  • Right-click on the file and click “Edit”.
  • Now change the following line

set COMMANDLINE_ARGS=

to

set COMMANDLINE_ARGS=--gradio-img2img-tool color-sketch

 

  • If an argument already exists after the equals sign, append --gradio-img2img-tool color-sketch to it, separated by a space, as in the example below.
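For example, if the line already contained another argument (the --xformers flag here is only an illustration), the edited line would look like this:

set COMMANDLINE_ARGS=--xformers --gradio-img2img-tool color-sketch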

Double click the file to start the GUI.

 

Using Color Sketch Tool

To use the color sketch tool, head to the img2img tab and set up the starting image on the canvas. Select the color palette icon and then the solid color button. You should see the color sketch tool as demonstrated below.

Stable Diffusion img2img Color Sketch Tool

Now you are free to draw anything you want!

 

Transforming a Rough Drawing into a Professional Image

We will use an apple as the subject to transform a rough hand-drawn image into a professional drawing. Follow these simple steps to go along.

 

1. Creating a Background

You can use a canvas of 512×512 pixels with either a white or black background, whichever you prefer.

 

2. Draw Your Subject

Now draw your subject on the canvas; we will be using an apple. Note that you don't need to spend too much time on the drawing. The main point is to give the Stable Diffusion img2img feature the colors and composition of the subject as a reference; the rest is up to Stable Diffusion. We drew something like this.

Stable Diffusion img2img sample image, drawing

 

3. Stable Diffusion img2img Feature

For the Stable Diffusion v1.5 model, select v1-5-pruned-emaonly.ckpt in the checkpoint dropdown. You can also play around with other models.

Next up, you need to provide a prompt that best describes your imagination of the final image as accurately as possible. Here is the sample prompt used in our demonstration.

photo of perfect green apple with stem, water droplets, dramatic lighting

Then, use the settings demonstrated below.

stable diffusion img2img all settings

 

The two parameters you can play around with are CFG Scale and Denoising strength. To get started, set CFG Scale to 11 and Denoising strength to 0.75.

Now, hit Generate to get the results. Here is what we got.

Stable Diffusion img2img result

You can experiment with CFG Scale and Denoising strength after you have read the complete guide on the CFG scale.
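If you would rather experiment with these two parameters from a script, here is a minimal sketch using the diffusers library instead of the WebUI; the model name and file names are assumptions, and strength / guidance_scale play the roles of Denoising strength and CFG Scale.

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumed model and file names, for illustration only.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

sketch = Image.open("apple_sketch.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="photo of perfect green apple with stem, water droplets, dramatic lighting",
    image=sketch,
    strength=0.75,       # Denoising strength in the WebUI
    guidance_scale=11,   # CFG Scale in the WebUI
).images[0]

result.save("apple_img2img.png")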

Once you are happy with the results you are getting, save the image.

 

Another round for Stable Diffusion img2img

If you are fully satisfied with your results, you can stop here. Otherwise, you can go for another round of img2img. This time, however, use the output image as your new input and repeat the process. Here is what the second round of img2img produced.

Stable Diffusion img2img second round result

Use these 3 Helpful Methods to Make Stable Diffusion Restore Faces

Artificial intelligence is becoming very popular nowadays, and there has been significant progress in text-to-image technology. Images generated with AI are becoming increasingly realistic, and many applications use them, from advertising to movies to video games. The problem with AI-generated images, however, is that they can sometimes produce undesirable or unexpected results, for example faces that come out “messed up” or distorted in images generated by a Stable Diffusion model.

 

Why are Faces Being Distorted on Stable Diffusion?

Stable Diffusion is a model trained on a large dataset of images using a neural network to generate high-quality images. One of the limitations of this technique is that it sometimes produces “messed up” or distorted faces. The reason is that the neural network cannot capture all the details and variations of human faces, which leads to unreal or distorted results.

 

How Do You Make Stable Diffusion Restore Faces?

Luckily, there are some ways to restore faces that have been “messed up” or distorted by Stable Diffusion. The most effective way is to use the AUTOMATIC1111 stable-diffusion-webui, an open-source tool designed specifically for generating images with Stable Diffusion, which offers multiple features and options for getting the best possible results.

 

Restore Faces with AUTOMATIC1111 stable-diffusion-webui

Some features of the stable-diffusion-webui are:

• Inpainting: fill in missing or distorted parts of the image, such as eyes or a mouth.

• Color correction: adjust the colors of the image for a more natural look.

• Image enhancement: sharpen the image and improve its overall quality.

• Face restoration: improve faces in pictures using either CodeFormer or GFPGAN.

(You can find all of its features at this link.)

 

Stable Diffusion restore faces

 

When creating new images, you can simply select the “Restore Faces” option in the menu:

 

Stable Diffusion restore faces

 

If you want to fix the eyes of an already existing image, just go to the “Extras” tab and upload your image.

 

Stable Diffusion restore faces

 

Set the impact (from 0 to 1) of CodeFormer or GFPGAN. You might need different configurations depending on the image.

 

Inpainting with AUTOMATIC1111 stable-diffusion-webui

Inpainting is a powerful feature of the AUTOMATIC1111 stable-diffusion-webui that lets you fill in distorted or missing parts of an image. It is particularly useful for restoring faces that have been “messed up” or distorted. You select the area of the image you wish to fill in, and the tool automatically generates a new image with the missing parts filled in.

For this, go to the img2img tab and choose “Inpaint”.

 

Stable Diffusion restore faces

 

Select the eyes, and the missing piece will be filled in.

 

Stable Diffusion restore faces

 

The following configuration works well for me.

 

Stable Diffusion restore faces

 

How Can Stable Diffusion Restore Faces without the AUTOMATIC1111 WebUI?

To make Stable Diffusion restore faces without the AUTOMATIC1111 stable-diffusion-webui, you can install the same tools it uses, CodeFormer and GFPGAN, in your own implementation.

Here you can find the sczhou Hugging Face Space that restores faces using CodeFormer, and here is a tutorial by EdXD explaining how to use GFPGAN.
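As a rough sketch of what calling GFPGAN directly can look like (the weight filename is an assumption, and the API may differ slightly between GFPGAN versions):

import cv2
from gfpgan import GFPGANer

# Assumes the gfpgan package is installed and a GFPGAN checkpoint has been downloaded.
restorer = GFPGANer(
    model_path="GFPGANv1.4.pth",  # assumed filename of the downloaded weights
    upscale=2,
)

img = cv2.imread("distorted_face.png", cv2.IMREAD_COLOR)
_, _, restored_img = restorer.enhance(img, paste_back=True)
cv2.imwrite("restored_face.png", restored_img)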

Also Read: Here is How to Download and Install Stable Diffusion’s Anything V4.5 Model

What is CFG Scale in Stable Diffusion? Here is How You can Use it for Best Results

Ever heard of the CFG scale and wondered what is CFG scale in Stable Diffusion? Keep on reading to find out!

Stable Diffusion has recently become a very popular text-to-image AI model that surprises its users with its image generation capabilities. A key benefit of Stable Diffusion is that it is open source and available to anyone, with 1000+ models, which means you can install and run it on your own machine.

The classifier-free guidance (CFG) scale is another feature that makes Stable Diffusion stand out from competing AI models, and it is today's topic. In this article, we will look at what the CFG scale is in Stable Diffusion, how it works, how you can use it to improve your results, and more. So let's not wait any further.

 

What is CFG Scale in Stable Diffusion?

The Classifier-Free Guidance scale, or CFG scale, is a parameter that determines how closely the generated image follows the user's input. By adjusting the CFG scale, users can tweak the level of similarity and the quality of the generated image.

 

What Does the CFG Scale Mean in Stable Diffusion?

If you are still wondering what the CFG scale in Stable Diffusion means, here is your answer in simple terms: your generated image is compared against the prompt you provided, and the higher the CFG scale, the more the generated image will match your input. At very high values, however, the quality of the image may be compromised.

For better-quality images, the CFG scale may be kept lower. But that may lead to decreased resemblance between the prompt and the generated image. It’s totally up to you. If you want the output to be more aligned with the input, you can choose a higher CFG scale. If you care about the quality and the resemblance doesn’t matter to you, you can go for a low CFG scale.
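To make the trade-off concrete, here is a small sketch using the diffusers library, which exposes the CFG scale as the guidance_scale parameter; the model name is an assumption. It renders the same prompt at a low and a high CFG value so you can compare the results.

import torch
from diffusers import StableDiffusionPipeline

# Assumed model name, for illustration only.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a watercolor painting of a lighthouse at sunset"

# Low CFG: looser interpretation of the prompt, often softer images.
loose = pipe(prompt, guidance_scale=4).images[0]

# High CFG: follows the prompt more literally, sometimes at the cost of image quality.
strict = pipe(prompt, guidance_scale=14).images[0]

loose.save("lighthouse_cfg4.png")
strict.save("lighthouse_cfg14.png")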

 

How to Use CFG Scale in Stable Diffusion

Here are some simple steps you can follow to use the CFG scale in Stable Diffusion.

Choose your Platform

You can use the CFG scale on platforms like DreamStudio, Playground AI, or Lexica. Decide where you want to use the CFG scale.

 

Sign up or Log in

For DreamStudio or Playground AI, you will need to Sign up or log in. If you are using Lexica, you are good to go.

Enter Your Prompt

Now enter your imaginative prompt to get output from Stable Diffusion. If you are unsure how to craft good prompts, here is a guide for you.

Locate CFG Scale

Depending on the platform you are using, you will need to locate the CFG scale setting.

In DreamStudio you can find the “CFG Scale” slider on the right-hand side.

What is CFG Scale in Stable Diffusion

 

For Playground AI, you will find CFG Scale as “Prompt Guidance” on the right-hand side.

What is CFG Scale in Stable Diffusion

 

For Lexica, you will find CFG Scale as a “Guidance Scale” after clicking the “Generate” button.

What is CFG Scale in Stable Diffusion

 

Adjust the CFG Scale

The next step is to adjust the CFG scale value according to your preferences. As discussed earlier, you can use a high CFG scale value for more resemblance of the prompt and output, while compromising on the quality, and vice versa.

 

Generate the Image

Once you have set the scale to your preference, you can generate the image. The button's label depends on the platform; it may be “Generate”, “Dream”, or something else.

 

What is the best CFG Scale Value?

The best CFG scale value depends on your requirements. However, the optimum value is said to be around 7 to 11. You can set the value according to your preferences.

 

Conclusion

Stable Diffusion is one of the most popular text-to-image AI models and comes with some extra, handy features. One of these is the CFG scale. In this article, we covered what the CFG scale is in Stable Diffusion, how it works, and how you can use it to get the best results for your images.

 

Here is How to Download and Install Stable Diffusion’s Anything V4.5 Model

If you are into Stable Diffusion models, you are probably familiar with one of the best Stable Diffusion anime models, Anything V3. You will get some insight into that model as we proceed through the blog. Moreover, you will learn what the Anything V4.5 Stable Diffusion model is, how to download it, and how to install it on your machine. If you love anime, you are in for a treat. So let's get started!

 

The Origin of Anything V3 and Anything V4.5 Stable Diffusion

Anything V3 model is famous for its fantastic output in the anime genre. It is considered one of the best anime Stable Diffusion models to exist. Surprisingly, very few people know where it came from. Someone from China anonymously launched it over the internet. Despite its popularity, no one claimed to be its creator.

Building on this, someone took Anything V3 a step further, leading to Anything V4 and eventually Anything V4.5 Stable Diffusion. So, obviously, Anything V4.5 was not developed by the creator of Anything V3; instead, it was developed by someone who built on the existing Anything V3 source.

 

Anything V3 Vs Anything V4.5 Stable Diffusion

Here is a demonstration of how both models are practically different from each other. While they share similarities, you will notice more details in the Anything V4.5 model.

Anything V3 – PC: Pirate Diffusion
Anything V4.5
Anything V3
Anything V4.5

How to Download and Install Anything V4.5 Stable Diffusion

To download Anything V4.5 Stable Diffusion, you have to follow these simple steps.

  • Go to Hugging Face and log in.
  • If you don't have an account, sign up.
  • Search for “Anything V4.5” in the search bar.
  • Choose “Airic/Anything-V4.5”.

Airic/Anything-V4.5 Stable Diffusion

  • Head to the Files and Versions section.
  • Download your preferred model format (.ckpt or .safetensors); if you prefer to script the download, see the sketch after this list.
  • Check out the difference between the model formats here.
  • Click here to check out the step-by-step guide on how to install the model on your PC.
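If you prefer to script the download, the huggingface_hub package can fetch a file from the same repository. The repo id below comes from the steps above, but the filename is a placeholder, so copy the exact name from the Files and Versions tab.

from huggingface_hub import hf_hub_download

# Repo id from the steps above; the filename is a placeholder -- use the exact name
# shown in the repository's "Files and versions" tab.
local_path = hf_hub_download(
    repo_id="Airic/Anything-V4.5",
    filename="anything-v4.5-pruned.safetensors",
)
print(local_path)  # downloaded file, ready to be moved into your WebUI models folder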

Conclusion

Anything V3 is considered one of the best anime Stable Diffusion models ever, originally released anonymously by someone in China. Taking it a step further, the Anything V4.5 Stable Diffusion model was launched recently; some of the differences between the two are demonstrated in this blog. Additionally, you can download and install the model on your machine by following the few simple steps shared above.

10 Best Stable Diffusion Models You Should Try Now!

Stable Diffusion is a popular text-to-image model that can convert your imagined words into images, and as the technology matures, its outputs keep improving. In addition, Stable Diffusion comes with 1000+ models trained with custom approaches. In this article, we will discuss what Stable Diffusion models are and how they work, and I will share a list of the 10 best Stable Diffusion models you should try. So, with a lot of excitement, let's get started!

 

What is a Stable Diffusion Model?

In simple words, a Stable Diffusion model provides a deeper level of customization for what the user wants to create. For example, if you want to generate anime art with Stable Diffusion, there is a separate model for that; similarly, there is a dedicated model for generating oil-painting-style art.

In this way, dedicated models give users full customization. Further in the article, I have listed the 10 best Stable Diffusion models worth trying, so make sure to read till the end.

 

Difference between “.ckpt” and “.safetensors”

Machine learning models store what they learn in model data files. There are two common formats, “.ckpt” and “.safetensors”. Because .safetensors files do not rely on Python's pickle module when they are loaded, they are relatively safer.

The pickle data inside a .ckpt file, on the other hand, can be exploited by threat actors, so if you are going to use a .ckpt model, make sure to get it from a trusted source.
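A minimal sketch of what that difference looks like when loading the two formats in Python (the filenames are placeholders):

import torch
from safetensors.torch import load_file

# .ckpt files are loaded via torch.load, which unpickles arbitrary Python objects --
# this is why an untrusted .ckpt file can execute malicious code when loaded.
ckpt_state = torch.load("model.ckpt", map_location="cpu")

# .safetensors files are read as plain tensor data, with no code execution involved.
safe_state = load_file("model.safetensors", device="cpu")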

 

10 Best Stable Diffusion Models

Now let’s discuss each of the best Stable Diffusion models one by one.

AbyssOrangeMix3 (AOM3)

abyssmodel best stable diffusion model

PC: SoftwareKeep

AbyssOrangeMix3 is an amazing Stable Diffusion Model which produces the best results with minimal prompting. If you are into less realistic, anime-style illustrations, this model is for you.

You don't need to provide too many details in your prompt; even generic prompts will generate impressively detailed results.

 

Anything V3

best stable diffusion models
PC: SoftwareKeep

 

Anything V3 is a model trained for “Anime Style” image generations. This is one of the best Stable Diffusion models which fall under the Anime genre. Anime fans can enjoy creating detailed and stunning anime illustrations using this model.

Given a simple prompt of “a girl”, the model generated a young anime girl in its illustration-based style without requiring any additional details. If you are an anime addict, you will enjoy generating your imagined characters.

 

DreamShaper

best stable diffusion models
PC: SoftwareKeep

DreamShaper is more of an illustration model, generating amazing artwork from your imaginative words, and it can also create stunning landscapes with vibrant colors. If you are looking to create illustrations with AI, DreamShaper is one of the best Stable Diffusion models you can go for.

 

MeinaMix

PC: SoftwareKeep

This is yet another model for creating stunning anime characters from your imaginative thoughts. MeinaMix is similar to Anything V3, but it is better at creating more complex illustrations with a distinct painting quality.

The model is a merge of several other models, so its output draws on information from all of them, giving you the best of each relevant model.

 

Elldreths Retro Mix

PC: SoftwareKeep

This model is inspired by vintage artwork and uses a combination of colors, shapes, and textures to create the perfect retro vibe for your artwork. So if you are a fan of retro and vintage artwork, Elldreths Retro Mix is one of the best Stable Diffusion models for you.

 

OpenJourney

PC: SoftwareKeep

As the name suggests, this model is inspired by MidJourney. It mainly focuses on creating realistic and stunning images; note that the more detailed your prompts are, the more realistic the output will be.

If you are a fan of MidJourney but can’t afford it for some reason, this model is absolutely for you. Moreover, you can check out the differences between MidJourney and Stable Diffusion here.

 

Protogen

PC: SoftwareKeep

Protogen is yet another Stable Diffusion model focused on creating realistic images, and it is geared more toward generating people than objects or landscapes. The model uses a machine-learning strategy that favors fine-tuned adjustments over sweeping changes to the output.

If you are looking to create stunning and realistic human faces, this is undoubtedly the best Stable Diffusion model for you.

 

Deliberate

PC: SoftwareKeep

According to the creator of this model, you can generate anything you want with this model. You have to provide a detailed prompt for this model to give you optimum performance. Deliberate blends digital art and realism to provide realistic outputs.

 

Realistic Vision

PC: SoftwareKeep

As humans, we are subconsciously trained to spot imperfection and unrealism, and it is hard for a machine to produce something that looks fully real to us. Realistic Vision, however, is a model that puts all of its power into generating realistic images from your input, producing output that is almost perfectly aligned with your prompt.

 

Modelshoot

Best Stable Diffusion Models
PC: SoftwareKeep

Modelshoot generates amazing output from simple inputs. The output from this model seems like it is a photograph captured by a premium camera. If you are looking to generate such images with AI, Modelshoot can be one of the best Stable Diffusion models for you.

Step by Step Guide to Install Stable Diffusion on Windows 10

Are you tired of using Stable Diffusion on Discord? Do you want to push your creativity limits? Well, you are in the right place. In this blog, we will learn how to install Stable Diffusion on Windows 10 and 11. We will cover the topic in detail and will go through all the steps one by one. So without waiting any further, let’s get started.

 

System Requirements to Install Stable Diffusion on Windows 10

To run Stable Diffusion on your PC, your system needs to match the following requirements:

  • Any good AMD or Intel CPU.
  • A minimum of 16 GB of RAM.
  • At least 256 GB SSD.
  • At least 10 GB of free disk space.
  • A GeForce RTX GPU with a minimum of 8 GB of GDDR6 memory.

 

What Will You Need?

Here is the list of things you will need to install Stable Diffusion on Windows 10.

 

Hugging Face Account

First of all, you will need an account on Hugging Face. You can create one for free by simply clicking here and entering the relevant information.

 

GitHub Account

The second thing you will need is an account on GitHub. You can create it for free too. Just click here and sign up by following the on-screen instructions.

 

Stable Diffusion Models

You will need to download a Stable Diffusion model from Hugging Face. Head to Hugging Face and type “stable diffusion” in the search bar; you will see several models.

how to install stable diffusion on windows 10

Each of them is a bit different from the others, and you can find details about each model on the internet. For this tutorial, I installed the Stable Diffusion v1.4 Original model.

You can search for the same model on Hugging Face, or simply click here. Under “Download the weights”, click on “sd-v1-4.ckpt” and the model should start downloading.

how to install stable diffusion in windows 10

Git for Windows

The next thing you will need is a software called Git for Windows. If you are familiar with Linux OS, you must have heard the command “git clone”. That is exactly what this software is going to do for us in Windows.

To download the software, simply click here. Click on the Download button and the software should start downloading.

Once the software is downloaded, double-click on the setup to install it and complete the installation process by following the on-screen instructions.

 

Python

The next thing you will need to install Stable Diffusion on Windows 10 is the Python programming language. Click here and scroll down until you see “Files”, then click “Windows Installer | Windows | Recommended” as shown below.

how to install stable diffusion on windows 10

It will start downloading Python for you.

Once the downloading is completed, double-click on the Python setup file and install it by following the on-screen instructions.

That was all we needed to install Stable Diffusion on Windows 10.

Now let’s begin installing Stable Diffusion.

 

Location to Install Stable Diffusion on Windows 10

First of all, you need to decide where you want to install Stable Diffusion. Remember, you will need at least 10 GB of free space, and it is highly recommended to install it on an SSD for better performance.

Once you have decided where to install Stable Diffusion, create a new folder in that location. You can name it whatever you want; for this tutorial, we will name it “AI – Stable Diffusion”. Now copy this folder's path.

 

Changing Directory

Now open Git for Windows and type cd followed by a space, then paste the path of the “AI – Stable Diffusion” folder by right-clicking and selecting “Paste”. This changes the Git for Windows working directory to the folder where we want to install Stable Diffusion.

how to install stable diffusion on windows 10

 

Cloning Stable Diffusion in Windows 10

Now you need to clone the AUTOMATIC1111 Stable Diffusion WebUI. Visit this link and click on “Code”; you should see a “Copy” button next to the repository URL. Click it, or just copy the URL directly from here: https://github.com/AUTOMATIC1111/stable-diffusion-webui.git.

Now go back to Git for Windows and issue the command “git clone [URL]”, replacing [URL] with the address you just copied.

Hit Enter. It will take a minute or so. Wait for it to be completed.

 

Putting in Checkpoints

Now copy the sd-v1-4.ckpt file you downloaded earlier and head to the “AI – Stable Diffusion” folder. Navigate to “Location\AI - Stable Diffusion\stable-diffusion-webui\models\Stable-diffusion”.

In this folder, you will see a text file saying “Put Stable Diffusion checkpoints here”. Paste the file you copied earlier.

 

Downloading Torch

Navigate to “Location\AI - Stable Diffusion\stable-diffusion-webui” and scroll to the end until you see the file called “webui-user.bat”.

how to install stable diffusion on windows 10

Double-click this file and it should initiate the download. Wait for it to be completed.

how to install stable diffusion on windows 10

 

Here You Go!

Once that's completed, you should see “Running on local URL” with a web address next to it.

How to Install Stable Diffusion on Windows 10

Simply copy the address and paste it into a new tab in your browser.

Bingo! You are inside the Stable Diffusion Web UI.

You can generate anything you imagine, without any restrictions! There are a lot of settings you can play around with to find what works best for you.

If you are looking for amazing prompts to get started with, here they are. Additionally, if you run into any error while running Stable Diffusion on your PC, you can find a solution here. If you encounter a “Cuda out of Memory Error”, here are 7 ways to resolve that error.

 
