I just wanted to share an essay I liked. I didn't think you'd pay it much mind. But I can see now that you are a person devoted to science. If you want to know what I believe, I think computers in the '50s were intelligent. I think GPT-2 probably qualified as AGI if you take the meaning of the acronym literally. At this point we've blown so far past all expectations in terms of intelligence that I've come to agree with Karpathy that it's time to start moving the goalposts to other words, like agency, since agents are an unsolved problem, and agency may prove to be more important/powerful/rare/difficult than intelligence.
I reacted negatively to the idea earlier that agency should be considered an aspect of intelligence. I think separating the concepts helps me better understand people, their unique strengths, and puzzles like why people who aren't geniuses who know everything and can rotate complex shapes are sometimes very successful, and, most importantly, why LLMs continue to feel like they're lacking something compared to people, even though they're so outrageously intelligent. It's one thing to be smart, another thing entirely to be useful.
> I reacted negatively to the idea earlier that agency should be considered an aspect of intelligence.
In the hopes of clarifying any misunderstandings of what I mean... I said "agent" in Russell's sense -- a system with goals that has sensors and actuators in some environment. This is a common definition in CS and robotics. (I tend to shy away from using the word "agency" because sometimes it brings along meaning I'm not intending. For example, to many, the word "agency" suggests free will combined with the ability to do something with it.)
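To make that concrete, here's a minimal sketch of the sense-act loop that definition implies. This is illustrative code of my own, not anything from Russell's book; all class and method names are made up.

```python
# A minimal sketch of "agent" in Russell's sense: a system with goals that
# senses and acts on an environment. All names here are illustrative.
from abc import ABC, abstractmethod

class Agent(ABC):
    @abstractmethod
    def perceive(self, observation) -> None:
        """Sensor side: take in an observation of the environment."""

    @abstractmethod
    def act(self):
        """Actuator side: choose an action intended to further the goal."""

class Environment(ABC):
    @abstractmethod
    def observe(self):
        """Produce what the agent's sensors can currently detect."""

    @abstractmethod
    def step(self, action) -> None:
        """Apply the agent's action to the environment's state."""

def run(agent: Agent, env: Environment, steps: int) -> None:
    # The sense-act loop. Note that nothing here mentions free will --
    # only goal-directed coupling between sensors and actuators.
    for _ in range(steps):
        agent.perceive(env.observe())
        env.step(agent.act())
```

Notice the definition says nothing about an observer finding the agent impressive; it's entirely about the coupling between the agent, its environment, and its goal.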
> My own motivation for studying AI is to create and understand intelligence as a general property of systems, rather than as a specific attribute of humans. I believe this to be an appropriate goal for the field as a whole...
To continue my earlier comment... I prefer not to call an LLM "intelligent", much less "outrageously intelligent". Why? The main reason is communication clarity -- and by communication I mean a sender conveying a meaning to a receiver: not just symbolic information (a la Shannon), but a faithful representation in the recipient. The phrase "outrageously intelligent" can have many conflicting interpretations in one's audience, so using it generates more confusion than clarity.
To put my point a different way: intelligence is contextual. I'm not using "contextual" as some sort of vague excuse to avoid getting into the details. I'm not saying that intelligence cannot be quantified at all. Quite the opposite. Intelligence can be quantified fairly well (in the statistical sense) once a person specifies what they are talking about. Like Russell, I'm saying intelligence is multifaceted and depends on the agent (what sensors it has, what actuators it has), the environment, and the goal.
So what language would I use instead? Rather than speaking about "intelligence" as one thing that people understand and agree on, I would point to task- and goal-specific metrics. How well does a particular LLM do on the GRE? The LSAT?
Sooner or later, people will want to generalize over the specifics. This is where statistical reasoning comes in. With enough evaluations, we can start to discuss generalizations in a way that can be backed up with data. For example, one might say things like "LLM X demonstrates high competence on text summarization tasks, provided that it has been pretrained on the relevant concepts" or "LLM Y struggles to discuss normative philosophical issues without falling into sycophancy, unless extensive prompt engineering protocols are used".
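As a sketch of what I mean by backing generalizations with data (the scores, names, and helper below are all hypothetical, just to show the shape of the reasoning):

```python
# Hypothetical sketch: aggregating task-specific evaluations so that a
# claim like "LLM X is competent at summarization" is a statistical
# statement, not a vibe. All data here is invented for illustration.
from statistics import mean, stdev

evals = [  # one record per (agent, environment/task, goal) evaluation
    {"agent": "LLM-X", "task": "summarization", "score": 0.91},
    {"agent": "LLM-X", "task": "summarization", "score": 0.88},
    {"agent": "LLM-X", "task": "normative-philosophy", "score": 0.55},
    {"agent": "LLM-Y", "task": "summarization", "score": 0.72},
]

def competence(agent: str, task: str) -> tuple[float, float]:
    """Mean score and spread for one agent on one kind of task."""
    scores = [e["score"] for e in evals
              if e["agent"] == agent and e["task"] == task]
    spread = stdev(scores) if len(scores) > 1 else 0.0
    return mean(scores), spread

print(competence("LLM-X", "summarization"))         # high mean, low spread
print(competence("LLM-X", "normative-philosophy"))  # weaker showing
```

The point isn't the arithmetic; it's that the generalization is scoped to a named agent and a named task, so two people can disagree about it productively.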
I think it helps to remember this: if someone asks "Is X intelligent?", one has the option to reframe the question. One can use it as an opportunity to clarify and teach and get into a substantive conversation. The alternative is suboptimal. But alas, some people demand short answers to poorly framed questions. Unfortunately, the answers they get won't help them.
Intelligence is closely related to the concepts of attractiveness and gravitas. You say it depends on the agent. I say it's in the eye of the beholder. People aren't very good at explaining what attracts them either.
The closest thing we have to a definition for intelligence is probably the LLMs themselves. They're very good at predicting words that attract people. So clearly we've figured it out. It's just such a shame that this definition for intelligence is a bunch of opaque tensors that we can't fully explain.
LLMs don't just defy human reasoning and understanding. They also challenge the purpose of intelligence itself. Why study and devise systems, when gradient descent can figure it out for you? Why be cleverer when you can just buy more compute?
I don't know what's going to make the magical black pill of machine learning more closely align with our values. But I'm glad we have those values. For example, I think it's good that people still hold objectivity as a virtue and try to create well-defined benchmarks that let us rank the merits of LLMs using numbers. I'm just skeptical about how well our efforts to date have predicted the organic processes that ultimately decide these things.
> Intelligence is closely related to the concept of attractiveness and gravitas.
Interesting. I wonder why you make this connection. Do you know?
Your choice of definition seems to be what I would call "perception of intelligence". But why add that extra layer of indirection; why require an observer? I claim this extra level of indirection is not necessary. I eschew definitions with unnecessary complexity (a.k.a. "accidental complexity", in Rich Hickey's phrasing).
Here are some examples that might reveal problems with the definition above:
- Deep Blue (decisively beating Kasparov in 1997) showed a high level of intelligence in the game of chess. The notion of "being good at the game" is simpler (conceptually) than the notion of "being attractive to people who like the game of chess". See what I mean?
- A group of Somali pirates working together may show impressive tactical abilities, including the ability to raid larger ships, which I would be willing to call a form of tactical intelligence in service of their goals. I grant the intelligent behavior even though I don't find it "attractive", nor do I think the pirates need any level of "gravitas" to do it. Sure, the pirates might use leadership, persuasion, and coordination to accomplish their goals, but those are a means to the end of accomplishing the goal, not necessary traits in themselves. Since intelligent behavior can be defined without those concepts, why include them? Why pin them to the definition?
- The human brain is widely regarded as an intelligent organ in a wide variety of contexts relating to human survival. Whether or not I find it "attractive" is irrelevant w.r.t. intelligence, I say. If the neighboring tribe wants to kill me and my tribe (using their tribally-oriented brains), I would hardly call their brains attractive, nor their methods nuanced enough to involve "gravitas".
My claim is then: Intelligence should be defined by functional capability which leads to effectiveness at achieving goals, not by how we feel about the intelligence or those displaying it.
Intelligence is an amalgamation of things. I read somewhere once that scientists tried to figure out which gene is the high-IQ gene and found that many contribute. It isn't a well-defined game like chess. Being good at chess is to intelligence like having great legs might be to attractiveness.
You don't like pirates? You're either in the Navy or grandstanding. People love pirates and even killers, but only if they're successful. Otherwise One Piece wouldn't be the most popular manga of all time.
Achieving goals? Why not define it as making predictions? What makes science science? The ability to make predictions. What do the brain and neural networks do? They model the world to make predictions. So there you have it.
This whole conversation has been about reducing intelligence to its defining component. So I propose this answer to your question. Take all the things you consider intelligent and order them topologically; then define intelligence as whatever comes out on top. Achieving goals depends on the ability to make predictions, so prediction is the better candidate for defining intelligence.
> Achieving goals? Why not define it as making predictions?
Because "achieving goals" subsumes "making predictions". Remember, Russell's goal is to find a definition of intelligence that is broader than humans -- and even broader than sentient beings. But using the "achieving goals" definition, one can include system that accomplishes goals, even if we can't find any way to verify it is making predictions. For example, even a purely reactive agent (e.g. operating on instincts) can display intelligent behavior if its actions serve its purposes.
If you are seeking one clear point of view about the nature of intelligence, I highly recommend Russell's writing. You don't have to "agree" with his definition, especially not at first, but if you give it a fair reading, you'll probably find it to be coherent and useful for the purposes he lays out.
Russell has been thinking about and teaching these topics in depth for 40+ years, so it is sensible to give his ideas serious consideration. There are also scholars who disagree with Russell's definition or accentuate different aspects. Wherever a person lands, these scholars provide a clear foundation that is all too often lacking in everyday conversation.
> This whole conversation has been about reducing intelligence to its defining component.
Not really, but I can see why you might say this. Neither Russell nor I are attempting to define "the one component" of intelligence -- we're saying that there is no single kind of intelligence. Only when one defines a particular (agent, environment, goal) triple can one start to analyze it statistically and tease apart the related factors. You and I agree that the result will be multifaceted.
I wouldn't say I'm trying to "reduce" anything. I would say I've been attempting to explain a general definition of intelligence that works for a wide variety of types of intelligence. The goal is to reduce unnecessary confusion about it. It simply requires taking some extra time to spell out the (agent, environment, goal) triple.
Once people get specific about a particular triple, then we have a foundation and can start to talk about patterns across different triples. If one is so inclined, we can try to generalize across all intelligent behavior, but frankly, only a tiny fraction of people have put in the requisite thought to do this rigorously. Instead, many people latch onto one particular form of intelligence (e.g. abstract problem solving or "creativity" or whatever) and hoist these preferred qualities into their definition. This is the tail wagging the dog in my opinion. But this is another topic.