When I dictate to a PM how they should present my stuff in their useless PowerPoints, that result could be called the PM’s impression. More specifically, their impression of my impression of the subject.
But after I’ve iterated with them N times, fixing their various misunderstandings of my impression, eventually the impact of the PM’s own impression approaches zero, and the result is clearly representative solely of my impression.
With sufficient iterations and prompts against the model, you can’t really say it’s the AI’s fault if the result is erroneous. The first pass, sure, but not the 100th.
Except, I guess, in the aspects where you can’t fix the PM’s/AI’s understanding. Then you could say their impression isn’t fully removable.
And why are we assuming that someone lazy enough to use AI is doing more than 1 prompt to make something, rather than just going with the first result? They’ve lost all benefit of the doubt in my eyes.
I’m not making that assumption. I’m suggesting that the difference between an AI impression and an Artist’s Impression is the number of iterations, and that’s what this disagreement actually hinges on.
Both terms are acceptable to use; this argument is fundamentally about how lazy you should assume these users to be. So argue about that directly.
Except, despite all of your contempt, project managers are people who can learn. LLMs are trained once and can’t learn anything after that. They have a very, very short sliding context window, and information starts getting dropped from it as you add more.
Why does it matter whether they can learn? If I let them run off after a single pass, then yes, their understanding of my understanding is relevant. After the Nth review, it’s not. The latter is the ideal; otherwise you get a game of telephone.
The question is whether their understanding still contributes to the end product, not who does the mechanical action of entering data or drawing images.
…what? If I tell the PM the info needed, and he produces the PPT without error, then it’s done, isn’t it? Why would I need further back-and-forths to make it count?
The number of passes/reviews is directly tied to error rate. If the PM/AI is able to produce my impression at 100% success, then great, there’s no further work to do.
The only thing learning matters for is that, with sufficient learning, they might reach a state where they no longer need to be reviewed, because they’ve learned not to inject their own interpretation into the work. At that point they’re a straightforward extension of my own being, and they’ve generated the documents as I would have done (had I the requisite mechanical time/skill/interest for producing whatever is in question).