Using a node-based workflow with ComfyUI, being able to draw yourself, training a LoRA on your own images, and effectively using ControlNets and masks: different story...
In the near future I can see artists adopting a workflow where they draw a sketch themselves carrying the composition information, use that as the base for 'rendering' the image, then clean it up with masking and hand drawing, lowering the time to output images.
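Roughly that pipeline can already be strung together with the diffusers library; a minimal sketch (the prompts, model choices and file names are just placeholders I picked, not anyone's actual setup):

    import torch
    from diffusers import (ControlNetModel, StableDiffusionControlNetPipeline,
                           StableDiffusionInpaintPipeline)
    from diffusers.utils import load_image

    # stage 1: 'render' the artist's sketch with a scribble ControlNet
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16)
    render = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16).to("cuda")

    sketch = load_image("my_sketch.png")  # hand-drawn composition sketch
    image = render("a lighthouse at dusk, oil painting",
                   image=sketch, num_inference_steps=30).images[0]
    image.save("render.png")

    # stage 2: masked cleanup, repaint only the flawed region
    inpaint = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",
        torch_dtype=torch.float16).to("cuda")
    mask = load_image("fix_mask.png")  # white = repaint, black = keep
    fixed = inpaint("a hand holding a lantern",
                    image=image, mask_image=mask).images[0]
    fixed.save("render_fixed.png")

The node-graph versions of this are the same two passes (sketch-conditioned render, then masked cleanup), just with every intermediate step exposed so you can rerun or swap any of them.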
Commercial artists will be competing on many aspects that have nothing to do with the quality of the art itself. Two of those factors are speed and quantity; others are marketing, sales, and attention.
Just like the artisan weavers back in the day who were competing with inferior-quality automatic looms. Focusing on quality above all else misses what it means to be part of a society and to meet its needs.
Sometimes good enough is better than the best if it's more accessible/cheaper.
I see no such tooling à la ComfyUI available for text generation... everyone in that space seems to rely on one-shotting results.
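You can hack the multi-step equivalent together against any OpenAI-compatible local server; a toy sketch, not a real tool (the URL and model name are placeholders):

    import requests

    API = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint
    MODEL = "local-model"                               # placeholder name

    def step(system, user):
        r = requests.post(API, json={
            "model": MODEL,
            "messages": [{"role": "system", "content": system},
                         {"role": "user", "content": user}],
        })
        return r.json()["choices"][0]["message"]["content"]

    # chain explicit stages like nodes in a graph instead of one-shotting
    outline = step("You are an editor.", "Outline a short essay on hand looms vs. power looms.")
    draft   = step("You are a writer.", "Write the essay from this outline:\n" + outline)
    final   = step("You are a copy editor.", "Tighten the prose, keep the structure:\n" + draft)
    print(final)

But there's no shared graph UI around it the way ComfyUI wraps the image pipelines.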
Aside from the terrible name, what does ComfyUI add? This[1] all screams AI slop to me.
Basically it's way beyond just "typing a prompt and pressing enter": you control every step of the way.
[1]https://blog.comfy.org/p/nano-banana-via-comfyui-api-nodes
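To make "every step" concrete: a ComfyUI workflow is just an explicit graph, and you can also drive it over the local HTTP API. Rough sketch of a bare txt2img graph (the checkpoint filename is a placeholder):

    import json, urllib.request

    # every node and every connection is explicit, so any step can be
    # swapped, masked, or re-run in isolation
    graph = {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "sd15.safetensors"}},
        "2": {"class_type": "CLIPTextEncode",
              "inputs": {"text": "a lighthouse at dusk", "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",
              "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 512, "height": 512, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "negative": ["3", 0], "latent_image": ["4", 0],
                         "seed": 42, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "out"}},
    }

    req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                                 data=json.dumps({"prompt": graph}).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)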
They’re about as similar as oil and water.
One that surprised me was that "-amputee" significantly improved Stable Diffusion 1.5 renderings of people.
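If I understand that trick right, it maps to a plain negative prompt; a minimal diffusers sketch (the positive prompt here is my own example, not the original poster's):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

    # "-amputee" style syntax in various UIs becomes negative_prompt here:
    # the sampler steers away from that concept during denoising
    image = pipe("portrait photo of a person, detailed, natural light",
                 negative_prompt="amputee",
                 num_inference_steps=30).images[0]
    image.save("portrait.png")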