> The problem is that it's an AI program's impression, not an artist's impression.
I just made a similar reply, but I disagree with this. The artist iterates on their prompts to the AI tool to get what they wanted. So when they stopped tweaking the prompt, they were satisfied that the result was their impression.
"AI art generators enable the creation of ignorant and lazy illustrations by outsourcing understanding to an idiot robot."
"Yes, but is it not the intent of the artist to be ignorant and lazy?"
It is possible to repeatedly iterate with an AI art generator and get what you want, but that's not what happened here. And even so, it's not at all the same thing as drawing a picture: "iterating on what you want" is equivalent to curating art, not creating it. In the US you can copyright curation, and that extends to curation of AI art - the US Copyright Office correctly said that tweaking prompts is the same thing as tweaking a Google Images search string for online image curation. But you can't copyright the actual AI-gen pictures; they are automatically public domain (unless they infringe someone else's copyright).
I am specifically talking about DALL-E or Stable Diffusion; your link describes something very different. The point was the "Google Images" analogy, which applies to 99.999% of AI art, but this is an exception.
> I am specifically talking about DALL-E or Stable Diffusion, your link describes something very different.
No, it doesn't. It describes artwork done on Invokeai, one of the popular hosted web frontends for Stable Diffusion (and some similar models), with a process very much like what many AI art hobbyists use (whether with hosted frontends or locally-hostable ones like A1111, Forge, ComfyUI, or Fooocus.)
I don't understand your ridiculous pedantry! I am talking about DALL-E and Stable Diffusion. I am not talking about other front ends to these services, nor did I dispute that your example deserved copyright protection. Invoke is very very different from plain text-to-image generation, WHICH IS WHAT I WAS TALKING ABOUT.
I think it's best if I log off and ignore your replies.
> I am talking about DALL-E and Stable Diffusion. I am not talking about other front ends to these services
Stable Diffusion is a series of models, not a service. There are various services (including the first-party Stable Assistant service from StabilityAI) and self-hosted frontends that use the models, most of which (including Stable Assistant) support considerably more interaction than simple text-to-image.
See the other reply for a half-counterexample, but the major difference is that the specific software is more like a generative Photoshop, and the final image involved a lot of manual human work. Simply tweaking a prompt is not enough - again, you can get copyright for the curation, just not the images.
Of course AI can't be credited with copyright - neither can a random-character generator, even if it monkeys its way into a masterpiece. You need legal standing to sue or be sued in order to hold copyright.
Isn't it also possible that it wasn't an artist at all, but someone whose job was never illustrating scientific articles (like a manager or a random intern), who put the text in the prompt and went "that looks pretty sciency, good enough", and the person responsible for the publication went "great, we just got a sciency image and saved $XX!"
Yeah, the intern is now an "artist", but I think lumping them together is muddling the discussion.