> no real point in comparing the intelligence of a person vs. an LLM
Say you're deciding whether to employ an AI on a fairly open-ended family of tasks, or put out a want-ad to hire someone. What do you call what you're doing?
You're not comparing their intelligence in some general sense; you're comparing their ability to perform the collection of tasks that define a role. And more likely, you'll end up doing both, because each turns out to be better at some of the tasks that make up the role. E.g., instead of hiring two positions, you'll hire one and task the LLM with some of the work.
I said "fairly open-ended" and didn't limit the question to a mid-2023 LLM.
The point I'm aiming at is that an increasing ability to solve an open-ended range of problems affects the choices we have to make, whether or not you object to calling it "intelligence".
Yeah, I agree, and that's one problem with focusing on what "intelligence" means: it kind of connotes, as you say, abstract or academic problems over practical shrewdness. (It doesn't have to carry that restriction, but people sure seem to read it that way a lot.)