"Prompt engineer" didn't exactly make sense to me until a coworker (graphic design) talked about the prompts that he'd see in Midjourney's discord server. Particularly when he mentioned specifications around camera lenses and perspective and other things I'm not familiar with. Very specific choices, continuing to be added and refined.
Then, seeing what guidance[0] is intended to do (LLM prompt templating), it becomes obvious what people have in mind for this. It won't obviate understanding the lower-level code -- in fact, I expect the value of that skill to increase over time as fewer people bother to learn it -- but it will cut down the time it takes to scaffold an application.
If anything, Midjourney is an argument against prompt engineering. Each successive iteration of Midjourney (currently at v5) has made prompting progressively easier and dispensed with the need for Stable Diffusion-specific terminology.
I’ve heard similar things from the same individual about how the results have progressed; in fact, the things I mentioned in my first comment are what he told me a few months ago. Still, it sounds like a current problem is that it’s difficult to reliably get visually near-identical output when making incremental changes. That’s one area where LLM prompt templating can be useful.
This lack of reliability comes up often with these generative technologies. It seems to me that “how do I get the same T-Rex with lasers shooting out of its eyes, but with different-colored lasers?” and “how do I always get the same JSON structure in my output, and only the JSON with no other text?” are roughly the same problem in “prompt engineering”.
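To make the JSON case concrete, here's a rough sketch of the kind of template guidance enables, adapted from memory of the project's README; the exact class names, template syntax, and result accessors here are assumptions and may differ from the current API:

    import guidance

    # pick a backend model (guidance also supports local Transformers models)
    guidance.llm = guidance.llms.OpenAI("text-davinci-003")

    # the template pins the JSON skeleton; the model only fills in the holes,
    # so every run yields the same structure with nothing outside the braces
    character = guidance("""The following is a character profile in JSON.
    {
        "name": "{{gen 'name' stop='"'}}",
        "age": {{gen 'age' pattern='[0-9]+' stop=','}},
        "weapon": "{{#select 'weapon'}}sword{{or}}bow{{or}}laser{{/select}}"
    }""")

    # generated values are exposed on the executed program (assumed accessor)
    result = character()
    print(result["name"], result["age"], result["weapon"])

The image case is the same idea in spirit: hold the fixed parts of the prompt constant and only vary the piece you want changed, instead of re-rolling the whole thing.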
[0] https://github.com/microsoft/guidance