

ComfyUI Pony workflow (Reddit)


Using the basic Comfy workflow from Hugging Face, the sd3_medium_incl_clips model, the latest version of Comfy, and all default workflow settings on an M3 Max MacBook Pro, all I can produce are these noise images.

Using SD1.5 LoRAs with SDXL, plus upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

This workflow can use LoRAs and ControlNets, enables negative prompting with KSampler and dynamic thresholding, and supports inpainting and more.

It encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units represented as nodes.

It's becoming very overwhelming and counterproductive to my workflow. I share many results, and many people ask me to share the workflow itself.

I use a lot of the merges on CivitAI, and one other key I've found is using a low CFG.

You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them.

Jul 9, 2024 · How the workflow progresses: initial image generation, hands fix, watermark removal, Ultimate SD Upscale, eye detailer, save image. This workflow contains custom nodes from various sources, all of which can be found using ComfyUI Manager.

Pony Diffusion and EpicRealism seem to be my "go to" options, but then I try something like Juggernaut or RealVis and I'm back to racking my brain.

It'll add nodes as needed if you enable LoRAs or ControlNet or want the result refined at 2x scale, whatever options you choose, and it can output your workflows as Comfy nodes if you ever want that.

Welcome to the unofficial ComfyUI subreddit.

A sample workflow for running CosXL Edit models, such as my RobMix CosXL Edit checkpoint.

Hey Reddit! I built a free website where you can share & discover thousands of ComfyUI workflows: https://comfyworkflows.com/ How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

Offers various art styles.
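Pipelines like the one above (generation, hands fix, upscale, detailer, save) can also be queued without the browser: ComfyUI exposes the same HTTP endpoint its own web UI posts to. A minimal sketch, assuming a default local server at 127.0.0.1:8188 and a graph exported from the ComfyUI menu with "Save (API Format)" to a hypothetical `workflow_api.json`:

```python
import json
import urllib.request
import uuid

def build_payload(workflow: dict, client_id: str) -> bytes:
    # ComfyUI's /prompt endpoint expects {"prompt": <api-format graph>, "client_id": ...}
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_workflow(workflow: dict, server: str = "127.0.0.1:8188") -> dict:
    # POST the graph to a running ComfyUI instance; the response carries a prompt_id
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=build_payload(workflow, uuid.uuid4().hex),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (but don't send) a payload for a tiny one-node graph, just to show the shape:
payload = json.loads(build_payload({"3": {"class_type": "KSampler", "inputs": {}}}, "demo"))
print(sorted(payload))  # ['client_id', 'prompt']
```

With a server running, `queue_workflow(json.load(open("workflow_api.json")))` queues one render; outputs land in ComfyUI's output folder as usual.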
What I'm thinking of is setting up a workflow that uses Pony, then running the result back through a second img2img pass with IPAdapter, feeding in the image from the Pony pipeline, and seeing how that goes.

Number 1: this will be the main control center. Ending workflow.

After all: the default workflow still uses the general CLIP encoder, CLIPTextEncode.

Hi, is there a tutorial on how to build a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know where to go from there. What samplers should I use? How many steps? What am I doing wrong?

Some people there just post a lot of very similar workflows just to show off the picture, which makes it a bit annoying when you want to find new and interesting ways to do things in ComfyUI. So I just made this workflow myself.

A higher clipskip (in A1111 terms; lower, or more negative, in ComfyUI's terms) equates to LESS detail from CLIP (not to be confused with detail in the image).

Just upload the JSON file, and we'll automatically download the custom nodes and models for you, plus offer online editing if necessary.

I was confused by the fact that in several YouTube videos by Sebastian Kamph and Olivio Sarikas they simply drop PNGs into an empty ComfyUI canvas.

Please share your tips, tricks, and workflows for using this software to create your AI art.

For your all-in-one workflow, use the Generate tab.

I hope that having a comparison was useful nevertheless.

For a dozen days I've been working on a simple but efficient workflow for upscaling. Just my two cents.

Here goes the philosophical thought of the day: yesterday I blew up my ComfyUI (gazillions of custom nodes had wrecked it, and half of my workflows no longer worked because the dependency differences between the packages those workflows needed were so huge that I had to do basically a full-blown reinstall).

Also, if this is new and exciting to you, feel free to post.

Comfy's inpainting and masking ain't perfect.
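The clipskip remark above has a direct node equivalent: in ComfyUI you insert a CLIPSetLastLayer node between the checkpoint loader and the text encoders, where `stop_at_clip_layer: -1` means no skip and `-2` matches A1111's "Clip skip 2". A sketch of the relevant API-format fragment (node IDs are arbitrary, and the checkpoint filename is assumed):

```python
# Fragment of an API-format graph: CheckpointLoaderSimple -> CLIPSetLastLayer -> CLIPTextEncode.
# Each node input that reads from another node is written as [source_node_id, output_index].
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "ponyDiffusionV6XL.safetensors"}},  # assumed filename
    "2": {"class_type": "CLIPSetLastLayer",
          "inputs": {"clip": ["1", 1],            # CLIP is the loader's second output
                     "stop_at_clip_layer": -2}},  # equivalent of A1111 "Clip skip 2"
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 0],            # encode with the clipped CLIP
                     "text": "score_9, score_8_up, scenic vista"}},
}
print(graph["2"]["inputs"]["stop_at_clip_layer"])  # -2
```

Any text encoder wired to node "2" instead of directly to the loader gets the skipped-layer CLIP, which is what Pony-family models expect.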
0 and upscalers.

I just released version 4.0 of my AP Workflow for ComfyUI. Its default workflow works out of the box, and I definitely appreciate all the examples for different workflows.

Very proficient in furry, feet, and almost every kind of NSFW content.

Besides that: if you have a large workflow built out but want to add in a section from someone else's workflow, open the other workflow in another tab. You can hold Shift and select each node individually to select a bunch (or hold down Ctrl and drag around a group of nodes you want to copy), Ctrl+C, then in your workflow Ctrl+V.

I know you can do this by generating an image of two people using one LoRA (it will make the same person twice) and then inpainting the face with a different LoRA, using OpenPose / regional prompting.

ComfyUI needs a standalone node manager, IMO: something that can handle the whole install process and make sure the correct install paths are being used for modules.

The graphic style, I think, was 3DS Max.

I call it 'The Ultimate ComfyUI Workflow': easily switch from txt2img to img2img, with a built-in refiner, LoRA selector, upscaler & sharpener.

I'm not sure if IPAdapter will. Starting workflow. Nobody needs all that, LOL.

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area-composition ones.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched. If the term "workflow" is something that has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes".

Please keep posted images SFW.

Nothing fancy.

3 - At least to my eyes, a 2-step LoRA @ 5 steps is better than a 4-step LoRA @ 5 steps.

Hello good people!
I need your advice, or some ready-to-go workflow, to recreate this one workflow from A1111 in Comfy. Step 1: generating images while adding some (2-3) additional LoRAs.

AP Workflow 3.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder). Tutorial | Guide. I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use.

You can't change clipskip and get anything useful from some models (SD2.x, for example) because of how their CLIP is encoded.

Less is more is my approach.

Share, discover, & run thousands of ComfyUI workflows. Help me make it better!

BTW, 1-step LoRAs are unusable on both.

Take a LoRA of person A and a LoRA of person B, and place them into the same photo (SD1.5, not XL).

(I've also edited the post to include a link to the workflow.)

Under 4K: generate at base SDXL size with extras like character models or ControlNets -> face / hand / manual-area inpainting with differential diffusion -> UltraSharp 4x -> unsampler -> second KSampler with a mixture of inpaint and tile ControlNet (I found that using only the tile ControlNet blurs the image).

Pony is weird. 2-step LoRAs @ 2 steps are also very bland; 4-step LoRAs @ 4 steps, same.

If you see any red nodes, I recommend using ComfyUI Manager's "install missing custom nodes" function.

So, up until today, I figured the "default workflow" was still always the best thing to use.
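Stacking "some (2-3) additional LoRAs" in ComfyUI is just chaining LoraLoader nodes, each taking the previous node's MODEL and CLIP outputs. A sketch in API format, with made-up file names and weights:

```python
# Chain of two LoraLoader nodes in an API-format graph; each LoraLoader
# consumes the previous node's MODEL (output 0) and CLIP (output 1).
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "ponyDiffusionV6XL.safetensors"}},  # assumed file
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "style_a.safetensors",              # hypothetical LoRA
                     "strength_model": 0.8, "strength_clip": 0.8}},
    "3": {"class_type": "LoraLoader",
          "inputs": {"model": ["2", 0], "clip": ["2", 1],             # chained from node 2
                     "lora_name": "character_b.safetensors",          # hypothetical LoRA
                     "strength_model": 0.6, "strength_clip": 0.6}},
    # downstream CLIPTextEncode / KSampler nodes would read from node "3"
}
print(graph["3"]["inputs"]["model"])  # ['2', 0]
```

A third LoRA is just one more LoraLoader wired to node "3"; only the last node in the chain feeds the samplers and encoders.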
I don't have much time to type, but: the first option is to use a model upscaler, which works off your image node. You can download those from a website that has dozens of models listed; a popular one is ESRGAN 4x.

It's not for beginners, but that's OK.

A few examples of my ComfyUI workflow for making very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

I'm finding it hard to stick with one, and I'm constantly trying different combinations of LoRAs and checkpoints.

ComfyUI is a completely different conceptual approach to generative art.

A CosXL Edit model takes a source image as input alongside a prompt, and interprets the prompt as an instruction for how to alter the image, similar to InstructPix2Pix.

The UI feels professional and directed.

Hey everyone, we've built a quick way to share ComfyUI workflows through an API and an interactive widget.

Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes.

Belittling their efforts will get you banned.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

I need an img2img Pony workflow.

Mar 23, 2024 · My review of Pony Diffusion XL: skilled in NSFW content.

Just load your image and prompt, and go. I've color-coded all related windows so you always know what's going on.

And Pony, for example, I think always needs a clip skip of 2, because of how its CLIP is encoded.

Comfy Workflows: May 19, 2024 · Download the workflow and open it in ComfyUI.

ComfyUI is usually on the cutting edge of new stuff. Upcoming tutorial: SDXL LoRA + using SD1.5.
I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. It's the kind of thing that's a bit fiddly to use, though, so using someone else's workflow might be of limited use to you. Hopefully this will be useful to you.

It was one of the earliest to add support for Turbo, for example.

A lot of people are just discovering this technology and want to show off what they created. And above all, BE NICE.

What's New in 4.0? I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions.

Aug 2, 2024 · You can then load or drag the following image into ComfyUI to get the workflow: this image contains the workflow (https://comfyanonymous.github.io/ComfyUI_examples/flux/flux_dev_example.png). Flux Schnell is a distilled 4-step model.

This is gonna replace lightning LoRAs when using Pony, at least for me.

It's become such a different model that most of the LoRAs don't work with it.

There are plenty of ways; it depends on your needs. Too many to count.

YMMV, but a lower CFG with Pony has TREMENDOUSLY reduced my frustration with it. Like 2.5-5 most of the time.

Anyone have a workflow to do the following?

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there.

2 - At least with Pony, Hyper seems better.
It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

Hi. Oh, and if you would like to try out the workflow, check out the comments! I couldn't put it in the description, as my account awaits verification.

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for like a month and it's finally ready, so I also made a tutorial on how to use it.

Any suggestions? That's awesome! ComfyUI had been one of the two repos I keep installed, the SD-UX fork of auto and this.

But it's reasonably clean to be used as a learning tool, which is and will always remain the main goal of this workflow.

From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.

That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

But mine do include workflows, for the most part, in the video description.

So I'm happy to announce today: my tutorial and workflow are available. I really, really love how lightweight and flexible it is.

Allo!
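The latent-scaling variant of the 2-pass hires fix mentioned above is structurally simple: sample at base resolution, LatentUpscale, then a second KSampler at reduced denoise so the composition survives. A sketch of the core fragment in API format (node IDs, sizes, and sampler settings are illustrative, and nodes "1", "4", "5", "6" stand in for the usual loader, prompt, and latent nodes):

```python
# Latent-scaling hires fix: KSampler -> LatentUpscale -> KSampler (lower denoise).
graph = {
    "10": {"class_type": "KSampler",
           "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                      "latent_image": ["6", 0], "seed": 42, "steps": 20, "cfg": 5.0,
                      "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "11": {"class_type": "LatentUpscale",
           "inputs": {"samples": ["10", 0], "upscale_method": "nearest-exact",
                      "width": 1664, "height": 2432, "crop": "disabled"}},
    "12": {"class_type": "KSampler",
           "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                      "latent_image": ["11", 0], "seed": 42, "steps": 20, "cfg": 5.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 0.5}},  # low denoise keeps the first pass's composition
}
print(graph["12"]["inputs"]["denoise"])  # 0.5
```

The non-latent variant swaps node "11" for a VAEDecode, an image-space upscale, and a VAEEncode back into latent space; the rest of the shape is the same.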
I am beginning to work with ComfyUI, moving from A1111. I know there are so, so many workflows published to Civitai and other sites; I am hoping to find a way to dive in and start working with ComfyUI without wasting much time on mediocre or redundant workflows, and am hoping someone can help me by pointing me toward a resource for finding some of the better-developed Comfy workflows.

You can just use someone else's 0.9 workflow (just search YouTube for "sdxl 0.9 workflow"; the one from Olivio Sarikas's video works just fine) and just replace the models with 1.5 models.

The problem with using the ComfyUI Manager is that if your ComfyUI won't load, you are SOL for fixing it.

I have a question about how to use Pony V6 XL in ComfyUI. SD generates blurry images for me.

In your workflow, HandsRefiner works as a detailer for properly generated hands; it is not a "fixer" for wrong anatomy. I say this because I have the same workflow myself (unless you are trying to connect some depth ControlNet to that detailer node).

Hi everyone, I've been using SD / ComfyUI for a few weeks now, and I find myself overwhelmed with the number of ways to do upscaling.