Here's what I've found: when I pair the SDXL base model with my LoRA in ComfyUI, things click and work well. You can also use the SDXL refiner as an img2img model and feed it your own pictures; restart ComfyUI afterwards so the change is picked up. SDXL 1.0 is an open model representing the next evolutionary step in text-to-image generation.

My first tests used SDXL 0.9 with updated checkpoints: nothing fancy, no upscales, just straight refining from latent. Per the SDXL 0.9 release notes, the refiner has been trained to denoise small noise levels of high-quality data, and as such it is not expected to work as a text-to-image model on its own. I don't want things to get to the point where people are just making models designed around looking good at rendering faces.

ComfyUI provides a powerful yet intuitive way to harness Stable Diffusion through a flowchart interface. AP Workflow v3 includes, among other functions, an SDXL Base+Refiner pipeline. A good place to start if you have no idea how any of this works is the ComfyUI Basic Tutorial VN: all the art in it is made with ComfyUI.

On SDXL resolution: if you want a fully latent upscale, make sure the second sampler after your latent upscale runs at a high enough denoise. The two-model setup that SDXL uses means the base model is good at generating original images from 100% noise, while the refiner is good at adding detail at low noise levels. You get two SDXL 0.9 models, base and refiner. (I am unable to upload the full-sized image.) What I have done is recreate the parts for one specific area.

Next, download the SDXL models and VAE. There are two kinds of SDXL model: the standard base model, and the refiner model that improves image quality. Either can generate images on its own, but the usual flow is to generate an image with the base model and then finish it with the refiner. Now, let's generate. The sudden interest in ComfyUI after the SDXL release was perhaps too early in its evolution; in any case, just grab SDXL and try it.

Efficiency Nodes for ComfyUI is a collection of custom nodes that help streamline workflows and reduce total node count. I'm also creating some cool images with SD1.5 checkpoints, and I'll keep playing with ComfyUI to see if I can get somewhere while keeping an eye on the A1111 updates. ComfyUI fully supports SD1.x, SD2.x and SDXL.

In AUTOMATIC1111, the setting is located just above the "SDXL Refiner" section. Make the following changes: in the Stable Diffusion checkpoint dropdown, select the refiner sd_xl_refiner_1.0. I've created these images using ComfyUI; if you want to learn the tool, you really want to follow a guy named Scott Detweiler. (Video chapter, 11:56: side-by-side Automatic1111 Web UI SDXL output vs ComfyUI output.)

This workflow uses concepts similar to my iterative workflow, with multi-model image generation consistent with the official approach for SDXL 0.9. This is SDXL in its complete form. I recommend you do not use the same text encoders as SD1.5 models unless you really know what you are doing. Your results may vary depending on your workflow.

Finally, the SDXL Prompt Styler node lets you apply predefined styling templates, stored in JSON files, to your prompts effortlessly.
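To make the JSON-template idea concrete, here is a minimal sketch of what such a style file and its application could look like. The schema below (name / prompt / negative_prompt with a `{prompt}` placeholder) and the `apply_style` helper are illustrative assumptions, not the node's verified format; the node itself performs this substitution inside ComfyUI.

```python
import json

# A hypothetical style file; the real SDXL Prompt Styler ships its own JSON templates.
STYLES_JSON = """
[
  {"name": "cinematic",
   "prompt": "cinematic still of {prompt}, shallow depth of field, film grain",
   "negative_prompt": "cartoon, illustration, low quality"}
]
"""

def apply_style(style_name: str, prompt: str, negative: str = "") -> tuple[str, str]:
    """Substitute the user prompt into the chosen template."""
    styles = {s["name"]: s for s in json.loads(STYLES_JSON)}
    style = styles[style_name]
    positive = style["prompt"].replace("{prompt}", prompt)
    negative_out = ", ".join(x for x in (style.get("negative_prompt", ""), negative) if x)
    return positive, negative_out

pos, neg = apply_style("cinematic", "a lighthouse at dusk")
print(pos)  # cinematic still of a lighthouse at dusk, ...
```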
SDXL 0.9 was already yielding impressive results, and SDXL ships with a roughly 6.6B-parameter refiner model, making it one of the largest open image generators today.

What's new in the 1.0 workflow: Shared VAE Load, meaning the loading of the VAE is now applied to both the base and refiner models, optimizing your VRAM usage and enhancing overall performance. ComfyUI-CoreMLSuite now supports SDXL, LoRAs and LCM. For reference, my hardware is an Asus ROG Zephyrus G15 GA503RM with 40GB of DDR5-4800 RAM and two M.2 drives.

It's official: Stability AI has released SDXL 1.0. To use the refiner model conveniently you need a recent ComfyUI build, so if you haven't updated in a while, do that first. Always use the latest version of the workflow JSON file with the latest version of the custom nodes! This matters especially with SDXL, which can work in plenty of aspect ratios. You can also run everything on Google Colab; SDXL-OneClick-ComfyUI gives you the option of the full SDXL Base+Refiner workflow or the simpler SDXL Base-only workflow.

I know a lot of people prefer Comfy. Extract the zip file and you get a simplified interface. The checkpoint files go in the folder ComfyUI\models\checkpoints, as requested. However, the SDXL refiner obviously doesn't work with SD1.5 checkpoints. So I already shared the workflow; it is in the examples, and I think it is the best balanced one. Model type: diffusion-based text-to-image generative model.

Well, SDXL has a refiner, and I'm sure you're asking right about now: how do we get that implemented? Although SDXL works fine without the refiner, you really do need to use it to get the full value out of the model. Alternatively, I have optimized a UI for SDXL by removing the refiner model; it might come in handy as a reference.

20 steps shouldn't surprise anyone; for the refiner, use at most half the number of steps you used to generate the picture, so 10 would be the maximum. Put the refiner in the same folder as the base model, although with the refiner I can't go higher than 1024x1024 in img2img. Elsewhere it's suggested the SDXL 1.0 base should get at most half the steps of the whole generation. But I'll add to this: currently, only people with 32GB of RAM and a 12GB graphics card are going to make anything in a reasonable timeframe if they use the refiner.

With SDXL as the base model, the sky's the limit. Yes, even an 8GB card works: my ComfyUI workflow loads both SDXL base and refiner models, a separate XL VAE, three XL LoRAs, plus Face Detailer with its SAM model and bbox detector model, and Ultimate SD Upscale with its ESRGAN model, all fed from the same base SDXL model, working together. I had experienced this too; I didn't know the checkpoint was corrupted, but it actually was, so perhaps download directly into the checkpoint folder. I tried SDXL in A1111, but even after updating the UI the images take a very long time and never finish; they stop at 99% every time. Make a folder in img2img for batch refining.

Basic setup for SDXL 1.0: at that time I was only half aware of the first point you mentioned. With SDXL base I often get the most accurate results with ancestral samplers, unlike the previous SD 1.x models. The CLIP refiner is built in for retouches, which I didn't need since I was too flabbergasted with the results SDXL 0.9 produces out of the box. There are significant improvements in certain images depending on your prompt plus parameters like sampling method, steps and CFG scale. Study this workflow and its notes to understand the setup.

For example, 896x1152 or 1536x640 are good SDXL resolutions.
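For reference, here is a quick sketch that checks which width/height pairs keep roughly the same pixel count as 1024x1024. The bucket list below is the commonly cited set of SDXL training resolutions, reproduced from memory, so treat it as an assumption rather than an official table.

```python
# Commonly cited SDXL resolution buckets (~1024*1024 pixels each);
# reproduced from memory, verify against the official documentation.
SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

TARGET = 1024 * 1024
for w, h in SDXL_BUCKETS:
    aspect = w / h
    pixel_ratio = (w * h) / TARGET  # should stay close to 1.0
    print(f"{w:>4}x{h:<4} aspect {aspect:4.2f}, pixel count {pixel_ratio:4.2f}x of 1024^2")
```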
But if I run the base model (creating some images with it) without activating that extension, or simply forget to select the refiner model and activate it later, it very likely runs out of memory (OOM) when generating images. ComfyUI avoids much of this by driving SDXL 1.0 through an intuitive visual workflow builder, so let me explain the basics of ComfyUI.

For captioning a training set: in the Kohya interface, go to the Utilities tab, then the Captioning subtab, then click the WD14 Captioning subtab. There is also an SDXL 0.9 safetensors + LoRA workflow + refiner setup. ComfyUI got attention recently because the developer works for StabilityAI and was able to be the first to get SDXL running. Re-download the latest version of the VAE and put it in your models/vae folder. Examples are included; I hope someone finds them useful.

Load times are not the problem: about 5 seconds for models based on SD1.5 and always below 9 seconds for SDXL models; I don't know what you are doing wrong to wait 90 seconds. Note that only the refiner has the aesthetic-score conditioning. Install your SD1.5 model (directory: models/checkpoints), install your LoRAs (directory: models/loras), then restart.

Using the SDXL refiner in AUTOMATIC1111 has mostly worked for me; the only issues I've had with it were isolated ones. Generate a bunch of txt2img images using the base first. The only really important thing is that for optimal performance the resolution should be set to 1024x1024, or another resolution with the same number of pixels but a different aspect ratio. I can't emphasize that enough, especially for faces.

If you are on Colab, set the runtime to GPU and run the cell. There are several options for how you can use the SDXL model; see "How to install SDXL 1.0" for details, plus a sample workflow for ComfyUI below that picks up pixels from SD 1.5. (Video chapter, 23:48: how to learn more about how to use ComfyUI.)

Use the SDXL VAE. On my machine the base runs at about 1.5 s/it, but the refiner goes up to 30 s/it. To use the refiner, one of SDXL's distinctive features, you need to build a flow that actually invokes it. SD-XL 0.9: I've been having a blast experimenting with SDXL lately. Save the image and drop it into ComfyUI to load its embedded workflow.

I'm new to ComfyUI and struggling to get an upscale working well. NOTE: you will need to use the linear (AnimateDiff-SDXL) beta_schedule. I wanted to share my configuration for ComfyUI, since many of us are using our laptops most of the time. Outputs will not be saved in this notebook; here, then, is a summary of how to run SDXL in ComfyUI. Thanks for your work; I'm well into A1111 but new to ComfyUI. Is there any chance you will create an img2img workflow? This notebook is open with private outputs.

This workflow is meticulously fine-tuned to accommodate LoRA and ControlNet inputs and demonstrates interactions with embeddings as well. Judging from other reports, RTX 3xxx cards are significantly better at SDXL regardless of their VRAM. With Vlad's SD.Next release hopefully coming tomorrow, I'll just wait on that. Copy the base and refiner .safetensors files into the checkpoints folder inside ComfyUI_windows_portable.

SDXL includes a refiner model specialized in denoising low-noise-stage images, generating higher-quality images from the base model's output. In ComfyUI this can be accomplished by feeding the output of one KSampler node (using the SDXL base) directly into the input of another KSampler node (using the refiner); this node arrangement is explicitly designed to make working with the refiner easier.
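Expressed in ComfyUI's API (prompt) JSON format, that handoff could look roughly like the sketch below. The node IDs, the wiring to loader and encode nodes (omitted here), and the particular step split are illustrative assumptions; export a real workflow from your own ComfyUI install to check the exact field names.

```python
# Sketch of the base->refiner handoff in ComfyUI API format (as a Python dict).
TOTAL_STEPS, HANDOFF = 25, 20  # base runs steps 0-20, refiner finishes 20-25

prompt = {
    "base_sampler": {
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["base_ckpt", 0], "positive": ["pos_base", 0],
            "negative": ["neg_base", 0], "latent_image": ["empty_latent", 0],
            "add_noise": "enable", "noise_seed": 42, "steps": TOTAL_STEPS,
            "cfg": 7.0, "sampler_name": "euler", "scheduler": "normal",
            "start_at_step": 0, "end_at_step": HANDOFF,
            "return_with_leftover_noise": "enable",  # keep noise for the refiner
        },
    },
    "refiner_sampler": {
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["refiner_ckpt", 0], "positive": ["pos_refiner", 0],
            "negative": ["neg_refiner", 0],
            "latent_image": ["base_sampler", 0],  # noisy latent handed off directly
            "add_noise": "disable", "noise_seed": 42, "steps": TOTAL_STEPS,
            "cfg": 7.0, "sampler_name": "euler", "scheduler": "normal",
            "start_at_step": HANDOFF, "end_at_step": TOTAL_STEPS,
            "return_with_leftover_noise": "disable",
        },
    },
}
```

The key design point is `return_with_leftover_noise`: the base sampler stops early and hands over a still-noisy latent, so the refiner continues the same denoising trajectory instead of starting a fresh img2img pass.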
Start with something simple where it will be obvious that it's working. Note: I used a 4x upscaling model, which produces a 2048x2048 output; using a 2x model should get better times, probably with the same effect. This applies to SD1.5 and 2.x as well. A beta version is currently out; you can find info about it at the AnimateDiff repo.

Hires fix isn't a refiner stage. ComfyUI also seems to work with the stable-diffusion-xl-base-0.9 checkpoint: SD XL Base 0.9 Alpha plus the SD XL Refiner, the same 6.6B-parameter refiner. Mind your VRAM settings. And to run the refiner model (shown in blue in the workflow), I copy the latent across. It isn't as quick as my SD1.5 renders, but the quality I can get on SDXL 1.0 makes up for it. (Video chapters, 23:06: how to see which part of the workflow ComfyUI is processing; 11:29: ComfyUI-generated base and refiner images.) There is also an SD1.5 + SDXL Refiner workflow on r/StableDiffusion that I'm going to discuss. (Version 3.999 RC, August 29, 2023: testing.)

Here's how to use SDXL easily on Google Colab: by using pre-configured code you can set up the SDXL environment in a few clicks. The tricky parts of ComfyUI are skipped, and a ready-made workflow file, designed for clarity and flexibility, lets you start generating AI illustrations right away.

Upcoming features are listed in the repo. This is an image I created using ComfyUI, with DreamShaperXL 1.0 as the model. The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. Do a git pull for the latest version. While the normal text encoders are not "bad", you can get better results using the special SDXL encoders.

In the second step, we use a specialized high-resolution model and apply a technique called SDEdit. Using the refiner is highly recommended for best results. ComfyUI allows processing the latent image through the refiner before it is rendered (like hires fix), which is closer to the intended usage than a separate img2img process. My first attempt didn't work out, and I'm not sure this will help your particular use case, because it uses SDXL programmatically and it sounds like you might be using ComfyUI; not totally sure.

I've been having a blast experimenting with the SDXL 0.9 and Stable Diffusion 1.5 safetensors checkpoints lately. Drag the .json workflow file into the ComfyUI window to load it. My chain is SDXL base, then SDXL refiner, then hires fix/img2img (using Juggernaut as the model at a low denoise). Got playing with SDXL and wow! It's as good as they say.

In the ComfyUI SDXL workflow example, the refiner is an integral part of the generation process, and there is an SDXL LoRA + Refiner workflow as well. StabilityAI have released Control-LoRA for SDXL, which are low-rank-parameter fine-tuned ControlNets for SDXL.

All you need is the courage to try ComfyUI. If you're thinking "it looks difficult and scary", watch my video first to build a mental picture of ComfyUI before diving in. I also just wrote an article on inpainting with the SDXL base model and refiner. Discover how to use SDXL 1.0 with ComfyUI's Ultimate SD Upscale custom node in this illuminating tutorial, now with ControlNet, hires fix and a switchable face detailer.

This is pretty new, so there might be better ways to do it; however, this works well, and we can stack LoRA and LyCORIS easily, then generate our text prompt at 1024x1024 and let Remacri double it. Put the model downloaded here and the SDXL refiner in the folder ComfyUI_windows_portable\ComfyUI\models\checkpoints (for example sdxl_base_pruned_no-ema.safetensors alongside the refiner). Remember that the refiner is an img2img model, so you have to use it in that role; there is an SD1.5 refiner node for the equivalent trick with 1.5 models.

All images here are generated using both the SDXL base model and the refiner model, each automatically configured to perform a certain number of diffusion steps according to the "Base/Refiner Step Ratio" formula defined in the dedicated widget.
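As one concrete reading of that formula, here is a small helper that splits a total step budget between base and refiner. The function name, default ratio and rounding are my assumptions, not the widget's verified implementation.

```python
def split_steps(total_steps: int, base_ratio: float = 0.8) -> tuple[int, int]:
    """Allocate a total diffusion-step budget between base and refiner.

    base_ratio=0.8 means the base model handles the first 80% of the steps
    and the refiner denoises the remaining 20%. Illustrative only.
    """
    base_steps = round(total_steps * base_ratio)
    return base_steps, total_steps - base_steps

for total in (20, 25, 40):
    base, refiner = split_steps(total)
    print(f"{total} total -> {base} base + {refiner} refiner")
```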
tinyterraNodes adds "Reload Node (ttN)" to the node right-click context menu. (I am unable to upload the full-sized image.) Launch the ComfyUI Manager using the sidebar in ComfyUI. If execution complains about the missing file "sd_xl_refiner_0.9.safetensors", that is the refiner checkpoint; a common question is "what is this model and where do I get it?" I tried Fooocus yesterday and was getting 42+ seconds for a "quick" generation (30 steps). There is a dedicated SDXL 0.9 refiner node. The SDXL base checkpoint, by contrast, can be used like any regular checkpoint in ComfyUI.

Not positive, but I do see your refiner sampler has end_at_step set to 10000 and the seed set to 0. You can get the workflow here; it was made by NeriJS. NOTICE: all experimental/temporary nodes are in blue. The refiner is only good at refining the noise still left over from the image's creation, and it will give you a blurry result if you push it further than that. The SDXL 1.0 base model is used in conjunction with the SDXL 1.0 refiner model: launch as usual and wait for it to install updates.

The video covers four things: first, style control; second, how to connect the base and refiner models; third, regional prompt control; and fourth, regional control of multi-pass sampling. With ComfyUI node graphs, understanding one flow unlocks them all; as long as the logic is correct you can wire things however you like, so the video doesn't belabor every connection.

You can type in text tokens, but it won't work as well. Updated ComfyUI workflow for the SDXL 1.0 refiner model: SDXL (Base+Refiner) + XY Plot + Control-LoRAs + ReVision + ControlNet XL OpenPose + Upscaler. Just wait until SDXL-retrained models start arriving; the workflow file is the sdxl_v1 JSON. Note that the SDXL base mixes the OpenAI CLIP and OpenCLIP text encoders, while the refiner is OpenCLIP-only, so SD1.5 prompting habits transfer imperfectly. Workflow 1 ("Complejo") covers Base+Refiner plus upscaling; SDXL 1.0 almost nails it on its own.

I described my idea in one of my posts, and Apprehensive_Sky892 showed me it's already working in ComfyUI. The refiner files are on Hugging Face under stabilityai/stable-diffusion-xl-refiner-1.0.

AnimateDiff for ComfyUI: traditionally, working with SDXL required two separate KSamplers, one for the base model and another for the refiner. Download the SDXL models; ComfyUI fully supports SD1.x, SD2.x, SDXL and Stable Video Diffusion with an asynchronous queue system (see the ComfyUI installation notes), and see this workflow for combining SDXL with an SD1.5 model.

Yup, all images generated in the main ComfyUI frontend have the workflow embedded in the image like that (right now anything that uses the ComfyUI API doesn't have that, though). If you want the JSON for a specific workflow, you can copy it from the prompt section of the image metadata of images generated with ComfyUI; keep in mind ComfyUI is pre-alpha software, so this format will change a bit.

How to use Stable Diffusion XL 1.0: this release seems to give some credibility and license to the community to get started. Fooocus and ComfyUI also use the v1.0 models. Do I need to download the remaining files (pytorch, vae and unet)? And is there an online guide for these leaked files, or do they install the same as 2.x? ComfyUI was created by comfyanonymous, who made the tool to understand how Stable Diffusion works; it includes LoRA support. For a steps comparison: the second picture is base SDXL, then SDXL + refiner at 5 steps, then 10 steps, then 20 steps. A1111 1.5.0 added SDXL support on July 24; the open-source Automatic1111 project (A1111 for short), also known as Stable Diffusion WebUI, is the other mainstream option.

The workflow should generate images first with the base and then pass them to the refiner for further refinement. Basically, it starts the image with the base model and finishes it off with the refiner model, and version 1.0 introduces denoising_start and denoising_end options, giving you fine control over that handoff.
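In the diffusers library, that base-then-refiner handoff looks like the sketch below. The 0.8 fraction and 40-step budget are arbitrary example values; the model repo ids are the official Stability AI ones.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,  # share components to save VRAM
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a majestic lion jumping from a big stone at night"
# The base handles the first 80% of the noise schedule and hands off a latent...
latent = base(prompt=prompt, num_inference_steps=40,
              denoising_end=0.8, output_type="latent").images
# ...and the refiner denoises the remaining 20%.
image = refiner(prompt=prompt, num_inference_steps=40,
                denoising_start=0.8, image=latent).images[0]
image.save("lion.png")
```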
The refiner refines the image, making an existing image better. I also deactivated all extensions and tried keeping only some afterwards. The whole thing ships as a .json file which is easily loadable into the ComfyUI environment; use at your own risk. It is not AnimateDiff but a different structure entirely; however, Kosinkadink, who makes the AnimateDiff ComfyUI nodes, got it working, and I worked with one of the creators to figure out the right settings to get good outputs (seed for the example render: 640271075062843).

Hello everyone, I'm 小志Jason, a programmer exploring latent space. Today let's dig into the SDXL workflow and how SDXL differs from the old SD pipeline; the official preference-test data collected through the Discord chatbot informs a lot of these choices. Download the SDXL 1.0 base and refiner models to ComfyUI. After about three minutes a Cloudflare link appears and the model and VAE downloads complete. I think you can try 4x upscaling if you have the hardware for it.

SDXL-OneClick-ComfyUI (SDXL 1.0): the refiner is entirely optional here and could be used equally well to refine images from sources other than the SDXL base model. Other than that, the same rules of thumb apply to AnimateDiff-SDXL as to AnimateDiff. The workflow runs the SDXL 1.0 base and refiner plus two more passes to upscale to 2048px. If you already have the SDXL 1.0 files, just run the launcher .bat file. To experiment with it, I re-created a workflow similar to my SeargeSDXL workflow. SDXL 1.0 with SDXL-ControlNet Canny is covered in Part 7 of this series.

With SDXL 0.9 base+refiner my system would freeze, and render times would extend up to 5 minutes for a single render. ComfyUI is a powerful, modular graphical interface for Stable Diffusion models that allows you to create complex workflows using nodes; today we'll go deeper into the more advanced SDXL node logic in ComfyUI. SDXL 1.0 on A1111 vs ComfyUI with 6GB of VRAM: some thoughts. There is a VAE selector (it needs a VAE file; download the SDXL BF16 VAE, plus a VAE file for SD 1.5). I also have a 3070, and base model generation sits at roughly 1 to 1.5 s/it. Download the SDXL VAE encoder too.

Running SDXL 0.9 in ComfyUI (I would prefer to use A1111): on my RTX 2060 6GB VRAM laptop it takes about 6-8 minutes for a 1080x1080 image with 20 base steps and 15 refiner steps. Edit: I'm using Olivio's first setup (no upscaler). Edit: after the first run I get a 1080x1080 image (including the refining) with "Prompt executed in 240 s".

On the software side, when you define the total number of diffusion steps you want the system to perform, the workflow will automatically allocate a certain number of those steps to each model according to the refiner_start parameter. With some higher-resolution generations I've seen RAM usage go as high as 20-30GB. The SDXL_1 workflow (right click and save as) has the SDXL setup with refiner and the best settings. The other difference is the RTX 3xxx series versus older cards.

On the img2img ComfyUI workflow: both ComfyUI and Fooocus are slower for generation than A1111, but your mileage may vary. For some workflow examples, and to see what ComfyUI can do, check out the ComfyUI Examples page and its installation and feature notes. Generating 48 images in batch sizes of 8 at 512x768 takes roughly 3-5 minutes depending on the steps and the sampler (e.g., with Realistic Stock Photo). In fact, ComfyUI has been more stable for me than the WebUI; as shown in the figure, SDXL can be used directly in ComfyUI (@dorioku).

In Python, the programmatic route starts from `import torch` and `from diffusers import StableDiffusionXLImg2ImgPipeline`; that pipeline is configured to refine images generated with the SDXL 1.0 base. Tested with SDXL 1.0.
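As a sketch of the refiner-as-img2img idea mentioned earlier (feeding your own pictures to the refiner), the snippet below runs the refiner alone over an existing image. The input filename, strength value and aesthetic-score settings are example assumptions; the aesthetic-score conditioning exists only on the refiner, as noted above.

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

init = load_image("my_render.png").resize((1024, 1024))  # your own picture
image = refiner(
    prompt="high quality, detailed photo",
    image=init,
    strength=0.25,                 # low strength: refine, don't repaint
    aesthetic_score=6.0,           # refiner-only conditioning
    negative_aesthetic_score=2.5,
).images[0]
image.save("my_render_refined.png")
```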
In Automatic1111's hires fix and in ComfyUI's node system, the base model and refiner use two independent k-samplers, which means the sampler momentum is largely wasted and the sampling continuity is broken. (SDXL 1.0 resource update on Civitai.) With grid alignment enabled, the arrow keys snap the node(s) to the configured ComfyUI grid spacing and move them in the direction of the arrow key by the grid-spacing value.

Back in Kohya, in "Image folder to caption" enter /workspace/img. SEGSDetailer performs detailed work on SEGS without pasting it back onto the original image, while SEGSPaste pastes the results of SEGS onto the original image. ComfyUI ControlNet aux is a plugin with preprocessors for ControlNet, so you can generate ControlNet-guided images directly from ComfyUI.

A comparison at 1024: a single image with 25 base steps and no refiner versus a single image with 20 base steps + 5 refiner steps; everything is better in the second except the lapels. Image metadata is saved, but I'm running Vlad's SDNext (on A1111 I run SD1.5 at 512). On drivers, 531.61 is the safe one; to quote the reports, the drivers after that introduced the RAM + VRAM sharing tech, but it creates a massive slowdown when you go above roughly 80% of VRAM.

Searge-SDXL: EVOLVED v4.x. Warning: that workflow does not save the image generated by the SDXL base model. And if SDXL wants an 11-fingered hand, the refiner gives up on fixing it. I trained a LoRA model of myself using the SDXL 1.0 base. For a cleaner handoff, set up a quick workflow that does the first part of the denoising on the base model but, instead of finishing, stops early and passes the noisy result on to the refiner to finish the process (a way to submit such a workflow programmatically is sketched below).

(SDXL 0.9) Tutorial: step 1 is to get the base and refiner from the torrent. I was just using Sytan's workflow with a few changed settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, as mentioned. I just uploaded the new version of my workflow; a second upscaler has been added. In my ComfyUI workflow, I first use the base model to generate the image and then pass it to the refiner.

Run the update-v3.bat file. Run ComfyUI with the Colab iframe (use this only if the localtunnel method doesn't work); you should see the UI appear in an iframe. For Stable Diffusion XL 1.0, play around with different samplers and different amounts of base steps (30, 60, 90, maybe even higher). See also "AI Art with ComfyUI and Stable Diffusion SDXL: Day Zero Basics For an Automatic1111 User" and the Searge SDXL nodes.

The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9; the SDXL 0.9 VAE and LoRAs are covered as well. On the ComfyUI side it works, but it separates LoRA into another workflow (and it's not based on SDXL either). Prerequisites: 🧨 Diffusers. Here's the guide to running SDXL with ComfyUI; in A1111, click "Send to img2img" below the image to refine it there. This SDXL 1.0 ComfyUI workflow, with nodes for both the SDXL base and refiner models, is what we dive into in this tutorial, including the negative prompts specific to SDXL.
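To run a handoff workflow like the one sketched earlier without opening the browser UI, you can POST it to ComfyUI's local HTTP endpoint. The /prompt route is ComfyUI's standard API entry point, but the port and payload shape here assume a default local install; adjust for yours.

```python
import json
import urllib.request

# `prompt` is an API-format workflow dict like the two-sampler sketch above.
def queue_prompt(prompt: dict, host: str = "127.0.0.1:8188") -> dict:
    """Submit a workflow to a locally running ComfyUI instance."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(f"http://{host}/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes the queued prompt_id

# result = queue_prompt(prompt)
# print(result["prompt_id"])
```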
Although SDXL works fine without the refiner (as demonstrated above), you really do need to use the refiner model to get the full use out of the model. SDXL 0.9 tutorial ("better than Midjourney AI"): Stability AI recently released SDXL 0.9. One recipe I've seen runs the refiner at 0.236 strength for a total of 21 steps; the second setting flattens the image a bit and gives it a smoother appearance, a bit like an old photo. I am using SDXL + refiner on a 3070 with 8GB. The default graph will load a basic SDXL workflow that includes a bunch of notes explaining things.

In the top-left of that graph, the Prompt Group contains Prompt and Negative Prompt string nodes, each connected to both the base and the refiner samplers. The Image Size node in the middle-left sets the image size; 1024x1024 is the right choice. In the bottom-left, the checkpoint loaders are SDXL base, SDXL refiner, and the VAE. This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far for showing the difference between the preliminary, base, and refiner setups. The project's goal is to be simple-to-use, high-quality image-generation software. Please keep posted images SFW. (Commit date: 2023-08-11.)

There is also a workflow with the SD1.5 refined model and a switchable face detailer, plus improved AnimateDiff integration for ComfyUI, initially adapted from sd-webui-animatediff but changed greatly since then. Set the base ratio in the "Parameters" section. License: SDXL 0.9.

So, with a little bit of effort it is possible to get ComfyUI up and running alongside your existing Automatic1111 install and to push out some images from the new SDXL model, and it works amazingly. Put the VAEs into ComfyUI\models\vae (both the SDXL and the SD15 one). Stability AI has now released the first of the official Stable Diffusion SDXL ControlNet models; grab the 0.9 safetensors file. A later post will explain the ComfyUI interface shortcuts and ease of use. (Outputs are not saved by default; you can disable this in the notebook settings.) ComfyUI is a powerful and modular GUI for Stable Diffusion that lets you create advanced workflows using a node/graph interface. I can do latent upscales (1.5x), but I can't get the refiner to work there yet; the workflow file is the sdxl_v0.9 JSON (link introduced 11/10/23).

This episode opens a new topic: another way to run Stable Diffusion, the node-based ComfyUI. Longtime viewers of this channel know I've always used the WebUI for demos and explanations, so this is new ground.

One last note: the SDXL CLIP-encode nodes take more inputs if you intend to do the whole process natively in SDXL, because they make use of SDXL's extra size conditioning.
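Those extra inputs correspond to SDXL's size micro-conditioning. In diffusers the same signals are exposed as pipeline arguments, as in this sketch; the specific values are just examples.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

# SDXL was trained with size/crop conditioning; telling it the "original"
# image was large and uncropped tends to bias it toward cleaner compositions.
image = pipe(
    prompt="a watercolor painting of a harbor at dawn",
    width=1024, height=1024,
    original_size=(1024, 1024),        # claimed source resolution
    target_size=(1024, 1024),          # desired output resolution
    crops_coords_top_left=(0, 0),      # no crop offset
).images[0]
image.save("harbor.png")
```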