EVERY
LORA.
ZERO CLICKS.
A ComfyUI node that automatically steps through every LoRA in a directory on each generation run. Increment, decrement, randomize, or hold fixed. No rewiring. No manual switching. Queue and walk away.
Installation
cd ComfyUI/custom_nodes
git clone https://github.com/OATH-Studio/comfy-LoRA-iterator
# Zero extra pip installs required.
# comfy.sd · comfy.utils · folder_paths
# are all part of standard ComfyUI.
# Find the node under loaders/lora → LoRA Iterator
# Wire: LoRA Iterator → KSampler
# increment → walks every LoRA in order
# randomize → random LoRA each run
# Queue N runs and walk away.
Workflow Connection
Wire lora_name into your filename prefix node so every saved image is automatically named after the LoRA that generated it.
Features
Directory Scoped
Dropdown shows every subdirectory in your loras folder. Scope to a model family like "zimage" so mismatched LoRAs never run on the wrong model.
Increment Mode
Steps forward one LoRA after each generation. Queue 12 runs and all 12 LoRAs run automatically — alphabetical order, no manual switching.
Decrement Mode
Steps backward through the list. Useful when you want to re-test in reverse order or walk back from a known good result.
Randomize Mode
Picks a random LoRA from the directory on each run. Good for discovery runs where you want unpredictable variation across a prompt.
Fixed Mode
Locks to the current selection and never advances. ComfyUI caches the patched model so it does not reload unnecessarily between runs.
IS_CHANGED Aware
Uses IS_CHANGED correctly — non-fixed modes always re-execute so the index actually advances. Fixed mode returns a stable hash so caching still works.
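The pattern described above can be sketched as follows. This is an illustrative sketch, not the node's actual source: the class and argument names are assumptions, and it relies on ComfyUI's convention that `IS_CHANGED` returning a value that never compares equal (such as NaN) forces re-execution.

```python
# Hedged sketch of the IS_CHANGED pattern; names are illustrative.
class LoRAIteratorSketch:
    @classmethod
    def IS_CHANGED(cls, mode="increment", **kwargs):
        if mode == "fixed":
            # Stable value: ComfyUI sees no change and reuses its cache,
            # so the patched model is not rebuilt between runs.
            return "fixed"
        # NaN never compares equal to itself, so ComfyUI treats every
        # run as changed and re-executes the node, advancing the index.
        return float("nan")
```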
Mode Behaviour
Each mode determines which LoRA is selected on the next run. State is tracked per node instance so multiple nodes in the same graph iterate independently.
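The per-instance stepping could look something like this minimal sketch, keyed by ComfyUI's hidden `unique_id` input. The function and variable names are hypothetical, not the node's source:

```python
import random

# Hypothetical per-instance state: unique_id -> index for the next run.
_state = {}

def next_index(unique_id, mode, count, rng=random):
    """Pick this run's list index and update the stored state."""
    idx = _state.get(unique_id, 0)
    if mode == "increment":
        _state[unique_id] = (idx + 1) % count   # wraps past the end
    elif mode == "decrement":
        _state[unique_id] = (idx - 1) % count   # wraps below zero
    elif mode == "randomize":
        idx = rng.randrange(count)              # fresh pick each run
        _state[unique_id] = idx
    # "fixed": state is left untouched, so the index never advances
    return idx
```

Because state lives in a dict keyed by `unique_id`, two iterator nodes in the same graph advance independently.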
Node Inputs
Node Outputs
Directory Layout
Organise LoRAs into subdirectories by model family. The directory dropdown lets you scope iteration to only the LoRAs that are compatible with the model currently loaded in your workflow.
Recommended Structure
ComfyUI/models/loras/
zimage/
AnimeMix_Zturbo.safetensors
Realism_Zturbo.safetensors
Portrait_Zturbo.safetensors
sdxl/
detail_enhancer.safetensors
cinematic_lighting.safetensors
flux/
style_transfer.safetensors
my_lora.safetensors
Dropdown Result
Selecting zimage means only those 3 LoRAs are iterated — no risk of running a Flux LoRA on a zimage model.
FAQ
Can I have multiple LoRA Iterator nodes in one graph?
Yes. Each node instance tracks its own index independently using ComfyUI's unique_id hidden input. Two nodes set to increment will walk through their respective directories separately without interfering with each other.
What happens when it reaches the last LoRA?
It wraps. Increment goes back to index 0, decrement goes back to the last index. You can queue more runs than you have LoRAs and it will cycle through again from the start.
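The wrap is plain modular arithmetic; in Python, the `%` operator keeps even negative indices in range:

```python
count = 12                    # e.g. 12 LoRAs in the directory
assert (11 + 1) % count == 0  # increment past the last index wraps to 0
assert (0 - 1) % count == 11  # decrement below 0 wraps to the last index
```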
Does it work with nested subdirectories?
The directory dropdown shows only the immediate subdirectories of your loras root. All files inside those subdirectories are included in the filtered list. Deeper nesting (subdirectories inside subdirectories) is not currently scanned.
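A one-level scan like the one described can be sketched as below. This is an assumption-laden illustration, not the node's source; the function name and the extension filter are hypothetical:

```python
import os

def list_loras(loras_root, subdir):
    """Return LoRA files directly inside loras_root/subdir (no deeper nesting)."""
    base = os.path.join(loras_root, subdir)
    return sorted(
        f"{subdir}/{name}"
        for name in os.listdir(base)                  # one level only
        if name.endswith(".safetensors")
        and os.path.isfile(os.path.join(base, name))  # skips nested dirs
    )
```

Sorting the result gives the alphabetical order that increment mode walks through.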
How do I embed the LoRA name in my saved filenames?
Wire the lora_name STRING output into a filename prefix node or a text concatenation node before your SaveImage node. The output is the full relative path as ComfyUI knows it, e.g. "zimage/AnimeMix.safetensors".
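If you prefer a bare name over the full relative path, a small string-manipulation node (this helper is illustrative, not part of the package) could reduce it like so:

```python
import os

def lora_prefix(lora_name):
    """Strip directory and extension: 'zimage/AnimeMix.safetensors' -> 'AnimeMix'."""
    return os.path.splitext(os.path.basename(lora_name))[0]
```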
Does fixed mode reload the LoRA file every run?
No. Fixed mode returns a stable hash from IS_CHANGED so ComfyUI sees no change and uses its cached output. The patched model is not rebuilt on every run, which is the correct behaviour for a static selection.
Need Custom
AI Tooling?
This node is a small example of what we build. We design and develop custom AI pipelines, local inference tooling, ComfyUI integrations, and production workflows for studios and independent creators who want control over their stack.
- Custom ComfyUI nodes and workflow automation
- Local LLM integration and prompt engineering
- vLLM / Ollama deployment and optimisation
- End-to-end AI image and video pipelines
- On-premise — your hardware, your data
Technical details
Free · Open Source · MIT License
QUEUE IT.
WALK AWAY.
Built by OATH Studio. We make open tools for AI artists and studios, and take on custom development work for teams who need something specific. No cloud dependencies. No subscriptions.