SDXL Refiner in ComfyUI, Part 3: adding an SDXL refiner for the full SDXL process

 
In this installment we are opening a new topic: another way of working with Stable Diffusion, the node-based ComfyUI. Longtime viewers of this channel know that I have always used the webUI for demos and explanations, but in Part 3 we add an SDXL refiner to get the full SDXL process.

This part of the series comes with workflows included. SDXL ships as two checkpoints: the SDXL 1.0 Base model, used in conjunction with the SDXL 1.0 Refiner (a clear step up from the SD 1.5 base model and its later iterations). Both files go in the folder ComfyUI/models/checkpoints. To keep the workflow simple, set up the base generation and the refiner pass with two separate Checkpoint Loader nodes. SDXL has two text encoders on its base model and a specialty text encoder on its refiner, so the CLIP encodes matter more if you intend to do the whole process with SDXL. You can type in bare text tokens, but it will not work as well as natural language, and SDXL places very heavy emphasis on the beginning of the prompt and favors text there, so put your main keywords first.

With SDXL 1.0 now published for download, a sensible local deployment is A1111 plus ComfyUI sharing the same model folder so you can switch between them freely. In older A1111 builds the SDXL refiner had to be separately selected, loaded, and run in the img2img tab after the initial output was generated with the base model in txt2img; ComfyUI, by contrast, runs base and refiner in one graph, and in practice it is more stable than the web UI for SDXL. When you load a shared workflow you should see the full graph appear, but you may need to re-select your refiner and base model in the loader nodes. If the refiner never seems to kick in, check the samplers: a refiner sampler with end_at_step left at 10000 (and the seed stuck at 0) is a common misconfiguration. Watch memory too: SDXL can peak close to 20 GB of system RAM, which can cause memory faults and rendering slowdowns on a 16 GB machine, and some people found SDXL locally in ComfyUI basically unusable (even on cards like a GTX 1060 with 6 GB VRAM and 16 GB RAM) until they upgraded to 32 GB.

The ecosystem has grown quickly. Stability's user-preference chart shows SDXL 1.0, with and without the refinement pass, being preferred over SDXL 0.9 and earlier releases, and the sdxl-0.9-usage repo is a tutorial intended to help beginners use the earlier stable-diffusion-xl-0.9 release. The Impact Pack's Face Detailer custom node can regenerate faces using the SDXL base and refiner models, and there are workflow variants with ControlNet, hires fix, an SD 1.5 refined model, and a switchable face detailer. Embeddings/Textual Inversion support exists but is early and not finished, more advanced examples cover "Hires Fix" aka 2-pass txt2img, and an AnimateDiff beta is out as well. If a downloaded workflow reports missing nodes, click "Install Missing Custom Nodes" and install or update each of them; recent ComfyUI builds also add 'ctrl + arrow key' node movement. The accompanying video covers how to download the SDXL model files (base and refiner), and later parts of the series cover the Img2Img ComfyUI workflow (with the initial image supplied through a Load Image node) and scaling and compositing latents with SDXL. If you stick with the web UI, note that SDXL requires a recent v1.x release, and in packaged workflows such as AP Workflow you enable the Refiner in the "Functions" section and set the "End at Step / Start at Step" switch to 2 in the "Parameters" section.

Either way, the model description is simple: this is a model that can be used to generate and modify images based on text prompts, and there are two ways to use the refiner. You can use the base and refiner models together to produce a refined image in one pass, where the base starts generating the image and the refiner finishes it off; or you can generate the normal way with the base alone and then send the image to img2img and use the SDXL refiner model to enhance it. In the ComfyUI SDXL workflow examples the refiner is an integral part of the generation process, following the first approach.
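If you want to see that first approach outside of any UI, the diffusers library implements the same base-to-refiner hand-off. The sketch below follows the pattern from the diffusers SDXL documentation; the Hugging Face model IDs and the 0.8 hand-off point are common defaults rather than anything mandated by the workflow above, so adjust them to taste.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Load the base and the refiner (sharing the second text encoder and the VAE saves VRAM).
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a majestic lion jumping from a big stone at night"
steps, handoff = 30, 0.8  # base handles the first 80% of the denoising

# Stage 1: the base model runs the early steps and returns a still-noisy latent.
latent = base(
    prompt=prompt,
    num_inference_steps=steps,
    denoising_end=handoff,
    output_type="latent",
).images

# Stage 2: the refiner picks up at the same point and finishes the image.
image = refiner(
    prompt=prompt,
    num_inference_steps=steps,
    denoising_start=handoff,
    image=latent,
).images[0]
image.save("sdxl_base_plus_refiner.png")
```

The key detail is that the base hands over a latent rather than a decoded image, so nothing is lost at the switch-over point.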
In this video I dive into the exciting new features of SDXL 1.0, the latest version of Stable Diffusion XL, including its high-resolution training, and build the SDXL two-staged denoising workflow in ComfyUI. ComfyUI was created by comfyanonymous, who made the tool to understand how Stable Diffusion works; the SDXL base checkpoint can be used in it like any regular checkpoint, and a KSampler setup designed to handle SDXL gives you an enhanced level of control over image details. Txt2Img is achieved by passing an empty image to the sampler node with maximum denoise, so the same graph covers text-to-image and image-to-image. You can use the base model by itself, but for additional detail you should move on to the second, refiner stage. One caveat: in Automatic1111's high-res fix and in a naive ComfyUI node setup, the base model and refiner use two independent k-samplers, which means the sampler momentum is largely wasted and the sampling continuity is broken; the two-staged workflow avoids this by handing the partially denoised latent straight from the base sampler to the refiner sampler.

Setup is straightforward: download the SDXL models (base and refiner) into models/checkpoints, optionally install a custom SD 1.5 model alongside them (SD 1.5 can be downloaded separately), grab the SDXL control models if you plan to use ControlNet, make the required settings, and generate. If you were experimenting with the leaked 0.9 files, you do not need the separate pytorch/vae/unet downloads; the packaged base and refiner checkpoints install like any other model, and many people simply kept re-using the VAE from SDXL 0.9, which was yielding good results already. Helper packs are worth installing too: Comfyroll Custom Nodes, and the SDXL Prompt Styler Advanced node for more elaborate workflows with linguistic and supportive style terms (the SDXL Discord server likewise has an option to specify a style). An updated community workflow bundles SDXL Base+Refiner with an XY Plot, Control-LoRAs, ReVision, ControlNet XL OpenPose and an upscaler. You can also disable the refiner, or individual nodes, whenever you want a base-only render. If you cannot run locally, SDXL works on Google Colab with a pre-configured notebook and a ready-made workflow file, and as a fallback you can run ComfyUI in a Colab iframe when the localtunnel route does not work (the UI then appears inside the iframe). To get started on your own machine, check the installation guide for Windows and WSL2 or the documentation on ComfyUI's GitHub.

Be realistic about hardware. With the refiner in the loop, currently only people with 32 GB of RAM and a 12 GB graphics card are going to make anything in a reasonable timeframe, although loading the SDXL models themselves usually stays below 9 seconds. The most balanced laptop settings I could find were an image size of 1024x720, 10 base steps plus 5 refiner steps, and carefully chosen samplers/schedulers, so SDXL remains usable without an expensive, bulky desktop GPU. SDXL 1.0 with both the base and refiner checkpoints also runs in AUTOMATIC1111's Stable Diffusion WebUI, Fooocus and ComfyUI supported the 1.0 models early, and for Invoke AI a separate refiner pass may not be required at all, since it is supposed to do the whole process in a single image generation. Results without the refiner can be very inconsistent, which is exactly what the second stage addresses.

The models also mix and match. You can generate with SD 1.5 and send the latent to the SDXL Base, an idea that was shown to already work in ComfyUI, and you can use the base and/or refiner to further process any kind of image by going through img2img (out of latent space) with proper denoising control.
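Here is a minimal sketch of that second, img2img-style use of the refiner, again with diffusers; the file names and the 0.25 strength are assumptions chosen to illustrate a light polish rather than a recommended setting.

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

# Only the refiner is loaded here; the input image can come from any model or UI.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

init_image = load_image("base_output.png")  # assumed: an image you generated earlier

refined = refiner(
    prompt="the same prompt you used for the original image, highly detailed",
    image=init_image,
    strength=0.25,           # low denoise: only the last ~25% of the schedule is redone
    num_inference_steps=30,  # effective refiner steps are roughly steps * strength
).images[0]
refined.save("refined_output.png")
```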
It's official: Stability AI has released Stable Diffusion XL (SDXL) 1.0, with fp16 variants of the checkpoints available. As identified in the release thread, the VAE that shipped initially had an issue that could cause artifacts in fine details of images, so switch to the fixed VAE if you see them. On a 3070 with 8 GB VRAM and 16 GB RAM, a generation takes around 18-20 seconds using xformers in A1111. One A1111 caveat: if you generate with the base model without activating the refiner extension (or simply forget to select the refiner model) and only activate it later, you are very likely to hit an out-of-memory error when generating; ComfyUI is more forgiving here because, on systems with less than 16 GB, it aggressively offloads data from VRAM to RAM as you generate to save memory. If A1111 is inexplicably slow or broken, the VAE is the first suspect; otherwise make sure everything is updated, since custom nodes can fall out of sync with the base ComfyUI version. And again: do not reuse the SD 1.5 text encoders.

After gathering some more knowledge about SDXL and ComfyUI and experimenting for a few days with both, a basic (no upscaling) two-stage base + refiner workflow is all you need: change dimensions, prompts and sampler parameters as you like, the flow itself stays the same. Download and drop the sdxl_v1.0 workflow JSON onto the canvas to load it. In that layout, the prompt group at the top left holds the Prompt and Negative Prompt as String nodes wired to both the Base and Refiner samplers, the Image Size node in the middle left is set to 1024x1024, and the checkpoint loaders at the bottom left hold the SDXL base, the SDXL Refiner and the VAE; it is one of the better organised workflows for seeing the difference between a preliminary, base-only and base-plus-refiner setup. The only important constraint is that the resolution should be 1024x1024 or another resolution with the same number of pixels at a different aspect ratio. Keep the refiner in the same folder as the base model, and note that with the refiner some UIs cannot go higher than 1024x1024 in img2img. Fine-tunes such as DreamShaper SDXL 1.0 run through the same flow, and the beauty of this approach is that the models can be combined in any sequence, for example generating an image with SD 1.5 and refining it with the SDXL refiner. Installing ControlNet for Stable Diffusion XL works the same on Windows or Mac, and the loader pattern extends to upscalers. One more convenience: all images generated in the main ComfyUI frontend have the workflow embedded in the image metadata (anything produced through the ComfyUI API currently does not), which makes sharing and reproducing setups easy; a small sketch for reading it back appears further below.

This is also why SDXL generations work so much better in ComfyUI than they did in Automatic1111 at launch: ComfyUI supports using the Base and Refiner models together in the initial generation. Conceptually, Img2Img works by loading an image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0; the two-model setup exploits exactly that, because the base model is good at generating original images from 100% noise while the refiner is good at adding detail once only a little noise is left. For step counts, 20 steps shouldn't surprise anyone, and the refiner should use at most half the number of steps used for the generation, so 10 is a sensible maximum.
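To make the arithmetic of that hand-off concrete, here is a tiny hypothetical helper. The field names mirror the widgets on ComfyUI's KSampler (Advanced) node (start_at_step, end_at_step, add_noise), and the usual convention is that the base sampler adds noise while the refiner does not; the dictionary itself is only an illustration, not any real API.

```python
# Minimal sketch of the base/refiner step split described above: given a total
# step count and the fraction the base model should handle, return the values
# you would type into two "KSampler (Advanced)" nodes in ComfyUI.
def split_steps(total_steps: int, base_fraction: float = 0.8) -> dict:
    switch = round(total_steps * base_fraction)
    return {
        "base":    {"start_at_step": 0,      "end_at_step": switch,      "add_noise": True},
        "refiner": {"start_at_step": switch, "end_at_step": total_steps, "add_noise": False},
    }

# 20 total steps with an 80% hand-off: base runs steps 0-16, refiner 16-20,
# which also respects the "refiner gets at most half the steps" rule of thumb.
print(split_steps(20, 0.8))
```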
A few more building blocks are worth knowing. T2I-Adapter is an efficient plug-and-play model that provides extra guidance to pre-trained text-to-image models while freezing the original large text-to-image models. ComfyUI itself is an open-source workflow engine specialized in operating state-of-the-art AI models for use cases like text-to-image or image-to-image transformations; note that in ComfyUI, txt2img and img2img are literally the same node. Anyone who has worked with connectors in 3D programs for shader creation will recognize the trade-off: the sheer, sometimes unnecessary complexity of the networks you can mistakenly build for marginal gains, so keep graphs as simple as the result allows. All of this detailed SDXL generation can be handled in the same node-based way, and with explanations of the node differences versus Automatic1111 now appearing, it is becoming hard to ignore. On the animation side, the improved AnimateDiff integration for ComfyUI was initially adapted from sd-webui-animatediff but has changed greatly since then, and a related approach that is not AnimateDiff but a different structure entirely was also made to work by the author of the AnimateDiff ComfyUI nodes, who worked with one of its creators to figure out the right settings for good outputs.

Running SDXL 0.9 in ComfyUI with both the base and refiner models together already achieved a magnificent quality of image generation, and the published numbers back it up: the SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. Although SDXL works fine without the refiner, you really do need the refiner model to get the full use out of it, so the workflow should generate images first with the base and then pass them to the refiner for further refinement. For 0.9, the base model was trained on a variety of aspect ratios at a resolution of 1024^2, and for server use there are two commonly recommended samplers that behave consistently. Comparisons of the Automatic1111 web UI and ComfyUI for SDXL are easy to find, and there are plenty of fine-tuned SDXL checkpoint models beyond the base and refiner stages, many of them published on Civitai.

Practically: install SDXL into models/checkpoints and your LoRAs into models/loras, then restart ComfyUI; there is also an install-models button in the Manager, at least 8 GB of VRAM is recommended, and shared workflows arrive as a .json file that is easily loadable into the ComfyUI environment. SD 1.5 models can still serve for refining and upscaling stages. If you are on SD.Next, activate its environment first (conda activate automatic). The 0.9 models also run through diffusers, which is where the much-pasted snippet beginning with `import torch` and `from diffusers import StableDiffusionXLImg2ImgPipeline` comes from (a full version appears earlier in this article), though some people found that stable-diffusion-xl-base-0.9 worked fine while adding stable-diffusion-xl-refiner-0.9 caused issues. If the base checkpoint works but the refiner fails, the refiner file is mostly just corrupted: several people hit this without realizing the download was bad and fixed it by re-downloading directly into the checkpoint folder, and a corrupted or incomplete file is also a likely culprit when A1111 hangs at 99% even after updating the UI. A quick integrity check is sketched below.
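A quick way to test that theory before re-downloading gigabytes: if a safetensors file's header parses and a tensor can actually be read, the file is probably intact. This is only a rough sketch, and the checkpoint path is an assumption.

```python
from safetensors import safe_open

# Rough integrity check for a downloaded checkpoint: if the safetensors header
# parses and a tensor can be read, the file is probably not truncated.
path = "ComfyUI/models/checkpoints/sd_xl_refiner_1.0.safetensors"  # assumed path

try:
    with safe_open(path, framework="pt", device="cpu") as f:
        keys = list(f.keys())
        _ = f.get_tensor(keys[0])  # force a real read of one tensor
    print(f"Header OK, {len(keys)} tensors found.")
except Exception as exc:
    print(f"Checkpoint looks corrupted or incomplete: {exc}")
```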
(The zoomed-in views shown here were created to examine the details of the upscaling process and how much detail survives it.) Not every image needs the second stage: a fine-tuned SDXL model, or just the SDXL Base, can produce finished images that require no refiner, and there is even a trick for using the SDXL refiner as the base model. All of the early example images were created using ComfyUI + SDXL 0.9. For some of us the switch has been tough, but the absolute power and efficiency of node-based generation is hard to argue with: ComfyUI fully supports SD 1.x, SD 2.x, SDXL and Stable Video Diffusion, runs an asynchronous queue system, and lets users drag and drop nodes to design advanced AI art pipelines or start from libraries of existing workflows (Sytan's SDXL ComfyUI workflow is a popular starting point). Hi-res-fix-style upscaling works in ComfyUI as well, though newcomers often struggle to get an upscale dialed in, and it is explained in detail in the accompanying video.

Packaged workflows such as AP Workflow layer conveniences on top: a switch to choose between the SDXL Base+Refiner models and the ReVision model, a switch to activate or bypass the Detailer, the Upscaler, or both, and a simple visual prompt builder; to configure it, start from the orange section called Control Panel. Underneath, the core trick never changes: set up a quick workflow that does the first part of the denoising on the base model, stops early instead of finishing, and passes the noisy result on to the refiner to finish the process. Chains like Refiner > SDXL base > Refiner > RevAnimated are trivial to wire up in ComfyUI, whereas in Automatic1111 the same picture would require switching models four times at roughly 30 seconds per switch. According to the official documentation, SDXL needs the base and refiner models used together for the best results, and ComfyUI is currently the best tool for that kind of multi-model use; the widely used WebUI (which the popular one-click packages are based on) can only load one model at a time, so to approximate the same effect there you must first run txt2img with the base model and then img2img with the refiner.

A few practical notes: download the SDXL 1.0 Base and Refiner models into your ComfyUI model folders, navigate to your installation folder, and run the update script if you are on an older build. With SDXL, the most accurate results often come from ancestral samplers, and for good images around 30 sampling steps with the SDXL Base will typically suffice; a prompt such as "A historical painting of a battle scene with soldiers fighting on horseback, cannons firing, and smoke rising from the ground" is a good test of what the refiner pipeline adds. On a 3070 with 8 GB the SDXL + refiner combination is workable. Do not expect Automatic1111 and ComfyUI to give you the same images for the same seed, though: the seed and noise generation differ, so you would have to change some settings in Automatic1111 to match ComfyUI. Because every ComfyUI workflow is just a JSON graph fed into that asynchronous queue, it can also be driven programmatically, as sketched below.
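This is a minimal sketch of that programmatic route, based on the pattern ComfyUI's own API example script uses; it assumes a local server on the default port 8188 and a workflow exported from the UI with "Save (API Format)" (visible once the dev-mode options are enabled).

```python
import json
import urllib.request

# Load a workflow that was exported from the ComfyUI interface in API format.
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Queue the workflow on the local ComfyUI server.
payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # the server replies with a prompt_id
```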
Speed varies between runs: usually, on the first run just after the model is loaded, the refiner manages roughly 1.5 s/it, but it can go up to 30 s/it, which is where ComfyUI's faster startup and better VRAM handling pay off; it supported SDXL weeks before the web UI did, a big part of its current surge in popularity. Using the SDXL Refiner in AUTOMATIC1111 is still possible: make the following changes after the base pass, selecting the refiner (sd_xl_refiner_1.0, or sd_xl_refiner_0.9 on the older release) in the Stable Diffusion checkpoint dropdown and running img2img on the result. That was the best balance I could find, but as I ventured further and tried adding the SDXL refiner into more elaborate mixes, things got clumsy fast. Wildcard files still work for prompt variation as usual. On the Windows package, click run_nvidia_gpu to launch the program, or the CPU .bat if you do not have an NVIDIA card. Another low-effort install path is to copy your existing SD folder wholesale, rename the copy to something like "SDXL", and add the Refiner extension there; that guide assumes you have already run Stable Diffusion locally, and the linked environment-setup page covers the rest if you have not. The 0.9 checkpoints were published under the SDXL 0.9 Research License, while 1.0 is now available via GitHub.

Inside ComfyUI the refiner gets its own loader: create a Load Checkpoint node and select the sd_xl_refiner checkpoint in it, and remember that only the refiner takes the aesthetic-score conditioning. Upscalers go in ComfyUI's upscale-model folder. The refiner typically handles roughly the last 35% of the noise in the image generation, and AP Workflow 6.0 exposes that boundary directly through denoising_start and denoising_end options, giving you more control over the denoising process for fine work, alongside a simple function that prints status in the terminal. In the case you want to generate an image in 30 steps, you could also just use the SDXL base for a 10-step DDIM KSampler pass, convert it to an image, and continue on an SD 1.5 model. A typical shared workflow generates a text-to-image result such as "Picture of a futuristic Shiba Inu" with the negative prompt "text, watermark" using the SDXL base and then runs it through the refiner and many different custom nodes to showcase the different functions; in the "SDXL Base+Refiner" variants, all images are generated using both the Base model and the Refiner model, each automatically configured to perform a certain amount of the diffusion. The 'ctrl + arrow key' movement mentioned earlier aligns the node(s) to the configured ComfyUI grid spacing and moves the node in the direction of the arrow key by the grid spacing value; holding Shift in addition moves it by ten times the grid spacing. Launch the ComfyUI Manager from the sidebar whenever you need missing nodes, remember that you can disable refiner nodes and enable them back later, and inpainting with SDXL is covered separately. One known limitation: if SDXL wants an 11-fingered hand, the refiner gives up rather than fixing it. The workflow shared here was updated last week, cleaning up the layout a bit and adding many functions, and if you run the dev branch with the latest updates, expect minor differences.

Finally, a note for people preparing training data for SDXL: captioning happens in the Kohya interface, under the Utilities tab, Captioning subtab, WD14 Captioning subtab. In "Prefix to add to WD14 caption", write your TRIGGER followed by a comma and then your CLASS followed by a comma, like so: "lisaxl, girl, ".
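If you prefer to apply that prefix outside the Kohya GUI, a hypothetical helper like the one below does the same thing to caption files on disk; the folder layout and file naming are assumptions, so adapt them to your dataset.

```python
from pathlib import Path

# Mirror the "Prefix to add to WD14 caption" field: prepend "TRIGGER, CLASS, "
# to every caption .txt file in a training folder, skipping files already done.
PREFIX = "lisaxl, girl, "        # trigger word and class from the example above
caption_dir = Path("train/img")  # assumed dataset folder

for caption_file in caption_dir.glob("*.txt"):
    text = caption_file.read_text(encoding="utf-8")
    if not text.startswith(PREFIX):
        caption_file.write_text(PREFIX + text, encoding="utf-8")
        print(f"prefixed {caption_file.name}")
```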
With captions out of the way, back to generation strategy. You can generate a bunch of txt2img images using the base alone and refine the keepers afterwards, but that route uses more steps, has less coherence, and also skips several important factors in between, so you may want to grab the refiner checkpoint from the start. A technical report on SDXL is now available if you want the reasoning behind the two-stage design. It is possible to treat the refiner as an occasional img2img polish, but the proper, intended way to use it is a two-step text-to-image: ComfyUI allows processing the latent image through the refiner before it is rendered (like hires fix), which is much closer to the intended usage than a separate img2img process. Keep in mind that the refiner is only good at refining the noise still left from the original creation; push it harder on a finished image and it will give you a blurry result. Upscale the refiner result, or don't use the refiner at all, depending on the image. Side-by-side tests make the trade-off clear: a single 1024 image at 25 base steps with no refiner versus 20 base steps plus 5 refiner steps shows everything improved except the lapels, and a ladder of base-only, then base plus 5, 10 and 20 refiner steps shows where the returns taper off. With the 0.9 base + refiner on a constrained machine the system would sometimes freeze, with render times stretching to 5 minutes for a single render, which is the memory situation described earlier.

Outside ComfyUI things have caught up: the web UI now officially supports the refiner model (the first attempts were very wacky), SD.Next includes it as well, and diffusers ships both SDXL 1.0 and SDXL-refiner-1.0. ComfyUI still gets teased for the name ("the UI that is absolutely not comfy at all"), but if you need a workflow for SDXL 0.9 or 1.0 and want an explanation of how to use the refiner, the easiest route is simply to use someone else's published workflow; there is a node explicitly designed to make working with the refiner easier, and the examples shown here make use of helpful node packs such as the WAS Node Suite. Yet another week, yet more new tools, so one must play and experiment with them. These workflows are meticulously fine-tuned to accommodate LoRA and ControlNet inputs and demonstrate the interactions between them; one of the example images was created with the ControlNet depth model running at a ControlNet weight of 1.0, and another used Dream ShaperXL 1.0. For inpainting with SDXL 1.0 in ComfyUI, three methods are commonly used: the Base Model with a Latent Noise Mask, the Base Model with InPaint VAE Encode, and the UNET "diffusion_pytorch" inpaint-specific model from Hugging Face. SD 1.5 models remain an option for later stages, although many people do not get good results from the SD 1.5 upscalers on SDXL output. Hardware reports span everything from research organizations with early SDXL access to laptops like an Asus ROG Zephyrus G15 GA503RM with 40 GB of DDR5-4800 and two M.2 drives; whatever you are on, restart ComfyUI after dropping in new model files. On Colab, set the GPU runtime and run the cell, and the comfyui_colab notebook opens with everything in place. In A1111, sending a result onward automatically opens it in the img2img tab.

Image metadata is saved with every render (this works in Vlad's SDNext too). So if ComfyUI or the A1111 web UI cannot read the image metadata, open the last image, for example a refiner_output_*.png, in a text editor to read the details.
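You can also pull that metadata out programmatically. ComfyUI writes the prompt and the full workflow as PNG text chunks, which is exactly what shows up in a text editor; the sketch below just dumps whatever text chunks a file carries, since the exact key names ("prompt", "workflow") are an assumption about current builds, and the filename is only an example of ComfyUI's default naming pattern.

```python
from PIL import Image

# Dump whatever text metadata a generated PNG carries (workflow, prompt, etc.).
img = Image.open("refiner_output_01033_.png")
metadata = getattr(img, "text", None) or img.info  # PNG text chunks

for key, value in metadata.items():
    print(f"--- {key} ---")
    print(str(value)[:400])  # workflow JSON can be long; show just the start
```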
When you scale the step counts up or down, I recommend trying to keep the same fractional relationship between base and refiner steps; something like 13/7 keeps it in good shape, and pairing the base with the 0.9 refiner model has also been tried. This repo contains examples of what is achievable with ComfyUI, including the sdxl_v0.9 workflow JSON; study the workflow and its notes to understand the basics of ComfyUI, SDXL, and the refiner flow, and the ComfyUI-generated base and refiner images are compared in the accompanying video. If you have no idea how any of this works, the ComfyUI Basic Tutorial is a good place to start, and all the art here is made with ComfyUI. AP Workflow v3 includes an SDXL Base+Refiner function among others, and there are dedicated variants such as SDXL_LoRA_InPAINT, SDXL_With_LoRA, SDXL_Inpaint and SDXL_Refiner_Inpaint; do not mix in SD 1.5 models unless you really know what you are doing. Readers who are well into A1111 but new to ComfyUI keep asking for an img2img version of these workflows, and new versions are uploaded regularly. Fooocus is worth a look as well: drawing inspiration from Stable Diffusion WebUI, ComfyUI, and Midjourney's prompt-only approach to image generation, it is a redesigned version of Stable Diffusion that centers around prompt usage and automatically handles the other settings. This post is also just one entry in a longer series that began with Part 1 on Stable Diffusion SDXL 1.0, and a separate summary covers running SDXL in ComfyUI end to end. In practice, SDXL 1.0 generations land anywhere from roughly 5 to 38 seconds depending on hardware, and ComfyUI's offloading makes it usable on some very low-end GPUs at the expense of higher RAM requirements. If you are having issues with the refiner in ComfyUI, double-check that the sd_xl_refiner and sd_xl_base safetensors files are the ones actually selected in the loaders. Eventually the web UI will add this feature too, and many people will return to it because they don't want to micromanage every detail of the workflow.