> The constant here is "agency". LLMs inherently lack it. So, it has to come from somewhere.
I don't understand your use of "inherently" here. Even if you define LLMs as not having agency, I don't see any inherent limitation against tacking agency on top of them. As you alluded to, even just a basic loop of `if (!goalAchieved()) { promptWithToolCalling() }` is arguably agency, no?
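Something like the following, say (the helper names are just placeholders for a real goal check and a real tool-calling API, not any particular library):

```typescript
// Minimal "agency tacked on top of an LLM" loop, per the sentence above.
// goalAchieved() and promptWithToolCalling() are hypothetical stand-ins.

type AgentState = { output: string; done: boolean };

let state: AgentState = { output: "", done: false };

function goalAchieved(s: AgentState): boolean {
  // In a real system this would check the environment or run tests.
  return s.done;
}

function promptWithToolCalling(s: AgentState): AgentState {
  // Placeholder: call the model, let it invoke tools, return the new state.
  return { output: s.output + ".", done: s.output.length > 3 };
}

// The "agency" lives entirely in this human-written loop:
// the model is only ever invoked as a response to it.
while (!goalAchieved(state)) {
  state = promptWithToolCalling(state);
}
```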
You actually suggested putting the LLM directly between the product and the customer, such that the customer specifies the goal. What's stopping tech from going in this direction?
I agree that LLMs inherently lack agency, and while I think it's a pretty subtle distinction, it's an extremely consequential one. AI cannot self-initiate anything; it can only produce an output as a response to an input. The impetus for its action is always an idea that came from a human's mind, no matter how indirect that is. That much is inarguable imo. The possible point of contention is whether the same can be said about humans, and that's a very philosophical question, but I am very convinced it's not the case (I won't get into my own philosophical beliefs here).
Despite the distinction being very philosophical, I think the implications are very practical. Without its own initiating energy, everything an AI produces will be a response to an input, and that response will be constrained by the bounds implied by the input. The specific kind of dialectic between the programmer and the person giving requirements, which is what produces the ACTUAL requirements, can't happen with an AI, because a dialectic requires two opposed agents/forces, and an AI is incapable of being an opposing force: it is only a derivative or product of whatever force provides its input. Basically, it is constrained inside a box defined by the input it is given, and precisely what is needed for true synthesis (new ideas/thoughts, as opposed to an analytic breaking-down of the already proposed ideas) is a whole separate box to interact with the one defined by the input.
My explanation is extremely abstract and will probably only make sense to someone who almost agrees with me already, but that's the best I could do. I'm sure there is a more down-to-earth way to explain this, but I guess my understanding isn't good enough to find it yet. In my defense, I do think this particular issue of agency in AI is one of the most subtle and philosophical problems in the world right now that actually has practical implications.
I agree with most of this. The part I question is whether both sides of the requirements-exploration process need to have agency, or whether it is sufficient to have just one human participant (ideally the customer/user).
My thinking is that the adversarial or opposing agency could be provided by way of strict business rules written with human intent. So, when the customer makes a request to the model, a tool could get called that raises an exception also designed by a human. But physically, only one person is involved in that transaction.
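Roughly something like this (the order example, names, and threshold are made up, just to show the shape of the idea):

```typescript
// Sketch: the "opposing force" is a human-authored business rule enforced
// inside a tool the model has to go through. Everything here is hypothetical.

interface Order { quantity: number; pricePerUnit: number }

class BusinessRuleViolation extends Error {}

// Written by a human, encoding intent the model cannot override.
function validateOrder(order: Order): void {
  if (order.quantity * order.pricePerUnit > 10_000) {
    throw new BusinessRuleViolation("Orders over $10,000 need human approval");
  }
}

// The model's tool call is forced through the rule before anything happens;
// the raised exception is what gets fed back to the model as pushback.
function placeOrderTool(order: Order): string {
  validateOrder(order);
  return `Order placed for ${order.quantity} units`;
}
```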