The Vatican published an interesting document on AI [1], which attributes a number of quotes to Pope Francis:
* As Pope Francis noted, the machine “makes a technical choice among several possibilities based either on well-defined criteria or on statistical inferences. Human beings, however, not only choose, but in their hearts are capable of deciding."
* In light of this, the use of AI, as Pope Francis said, must be “accompanied by an ethic inspired by a vision of the common good, an ethic of freedom, responsibility, and fraternity, capable of fostering the full development of people in relation to others and to the whole of creation.”
* As Pope Francis observes, “in this age of artificial intelligence, we cannot forget that poetry and love are necessary to save our humanity.”
* As Pope Francis observes, “the very use of the word ‘intelligence’” in connection with AI “can prove misleading”
I rarely feel this way about someone of Pope Francis' age and social position, but I've genuinely admired Francis as a thinker. He was a bona fide Jesuit, through and through. The next pope has big shoes to fill.
I heard yesterday on some Catholic TV channel that Benedict had already done the theological clarification work during his pontificate, and that Francis - who had been the runner-up to Benedict and knew he was likely to be next in line - understood his task would be more about preaching. Hence his strong media game (and it suited him well personally, too; he seemed genuinely approachable and outgoing).
Note that Antiqua et Nova was authored by the Church. With its profound philosophical tradition, the Church offers insights in this text that surpass anything ever written by Silicon Valley entrepreneurs.
I’ll also add that many of his admirers as well as his detractors exaggerated his virtues, his merits, and his flaws. He was both the victim of a media and film industry all too eager to spin him into the “progressive pope” — never shying away from quoting him out of context to push an agenda — and the issuer of problematic and ambiguous documents and off-the-cuff remarks that only served to generate confusion.
Intellectually, Benedict XVI and John Paul II were in a different league. As for the Jesuits: I know that in the popular imagination they are seen as some kind of “progressive”, intellectually superior order, but historically they were more like the shock troops of the Church. They certainly have merits to their name, and while they did become involved in education, they drew on the Church's existing educational traditions. Education and scholarship, however, are not their charism. Compare that with the Dominicans, for example, whose mission is teaching and education (Thomas Aquinas is probably their most famous member).
> But jesuits are historically linked to education and the sciences, this is a fact.
Isn't that what I said? I merely said it is not their charism, not their specialty. The point is that in the popular imagination, people elevate them above orders who not only have a better historical record, but whose mission is to educate, study, etc.
That isn't to downplay the good contributions of the Jesuits, but I can point you to Jesuits (who, as an order, are in poor shape these days, tbh) who would say the same thing. The popular imagination is simply ignorant or tendentious on this point in its exaggeration relative to the others.
Jesuit scholarship, especially in the last 100 years, is noteworthy for generating impressive literature while contributing close to nothing to the Church. See Rahner, Balthasar, de Lubac, Chardin... Garbage through and through
Knowledge is well described (not necessarily explained, but described) in information theory. Intelligence, sentience, consciousness, even whether something is alive, are fuzzy concepts.
Biology has a working definition of "living organism" that even includes a way to estimate the likelihood that something is a living organism, but it is still probabilistic.
Understanding is another concept that depends on philosophy of mind rather than on concrete physical processes.
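For what it's worth, the information-theoretic description of knowledge mentioned above really is crisp enough to compute. A toy sketch of Shannon entropy, the standard measure of information content (the example distributions are mine, just for illustration):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits: a precise,
    computable quantity -- unlike "intelligence" or "consciousness",
    there is nothing fuzzy about it."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # a fair coin carries exactly 1.0 bit
print(shannon_entropy([0.25] * 4))   # four equally likely outcomes: 2.0 bits
```

That is the sense in which information is "well described": you can put an exact number on it, which nobody has managed to do for sentience.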
> At some point you will understand that you will never have absolute and complete axioms from which to build everything on [1], and you have to work with what you have.
To have hardware that displays blue, and code that manipulates blue, you must have a very clear and unambiguous definition of what blue means. Notice I did not say correct, only clear and unambiguous. Your whole point seems to be that words mean whatever a native speaker of the language understands them to mean. That is useful in linguistics and in the editing of dictionaries, but the context of this discussion is the representation of a concept in symbols that a computer can process, which is a different thing. Indeed, it's possible that the difference between code and 'vibes' will have to be addressed in some way by those very definitions of knowledge and intelligence, so I think these are relevant questions that can't be hand-waved away.
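To make that concrete, here is a toy Python sketch of a "clear and unambiguous, though not necessarily correct" definition of blue (the RGB thresholds are arbitrary values I picked for illustration):

```python
def is_blue(r: int, g: int, b: int) -> bool:
    """An unambiguous -- not necessarily *correct* -- definition of blue:
    the blue channel is strong and dominates both others by a margin."""
    return b > 128 and b > r + 40 and b > g + 40

# The machine now answers with no ambiguity at all, even where
# a human speaker might dispute the verdict.
print(is_blue(30, 60, 200))   # a deep blue -> True
print(is_blue(0, 255, 255))   # cyan: many humans would say "blue-ish" -> False
```

The definition may draw the boundary in the wrong place, but the computer is never uncertain about it, which is exactly the property "knowledge" and "intelligence" currently lack.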
The code will function, in the sense of executing, whether the underlying concepts are sufficiently well-understood or not. Considering the ramifications of that statement might lead you to seeing why people want to understand what they're building before they build it.
I gave you the benefit of the doubt that you were asking questions in good faith, but I'm not sure that's true anymore, so good luck.
Yes, but blue doesn't have a "Definitions of Blue" Wikipedia page.
There are nuances to definitions of common words "what is blue, what is a bicycle, what is a dollar, really?", but the magnitude of variance in definition is not shared with something like "knowledge" or "intelligence."
With these high-level concepts, most people are operating only on a "I know it when I see it" test (to reference the Supreme Court case on obscenity).
>Yes, but blue doesn't have a "Definitions of Blue" Wikipedia page.
Oh, I understand, so the criterion is having a Wikipedia page like that?
You know what's interesting: I couldn't find any of these:
* تعريفات المعرفة
* 知識嘅定義
* Définitions de la connaissance
* Definiciones de conocimiento
Should we add "and it has to be written in English" as a requirement?
I know this is arguing ad absurdum, but the point is, again, that if you choose to be that strict, you wouldn't even be able to communicate with other people, because your desired perfect 1:1 map of concepts among them doesn't even exist.
No, I mean to illustrate that "blue" and "knowledge" have vastly different degrees of variation in definition.
Like you say, all words of course have different definitions between individuals, but you and I are obviously able to communicate without specifying every definition. There exists a spectrum between well-agreed-upon definitions (like "and") and fuzzier ones. The definition of "knowledge" is divisive enough that many people disagree vehemently on definitions, which is illustrated by the fact that there is a whole Wikipedia article on it.
If there is a "midwit trap" related to this, there is certainly a Sorites paradox trap as well - that because all words have varying definitions, that it is no use to point out that some words' definitions are more variable than others.
I think they mean there are many hues that some people will call blue and others will disagree about. And certainly, if you try to buy paint and just say you want "blue", there's a huge spectrum of things you might get.
It is probably exactly because he spent a career considering the cognition behind language that he is not as impressed by LLMs as many others are. I'll readily admit to being expert in neither language and linguistics nor AI, but I am skeptical that anything going on inside an LLM is properly described as "cognition."
does it really matter if it can be described as cognition or not? to me these models are useful for how effective they are, and that's literally it. the processes going on within them are extremely complex and at times very impressive, and whether some arbitrarily undefined word applies or not does not really matter. I think sometimes people forget that words are not maths or logic. when words come into language, no one sits down and makes sure that they're 100% logically and philosophically sound, they just start to be used, usually based on a feeling, and slowly gather and lose meaning over time. perhaps when dictionaries were first written there was some effort to do this, but for lots of words its probably impossible or incredibly difficult even now, never mind 200 years ago, if they could even be bothered in the first place.
to give an example, a quite boring "philosophy question" that's bandied around, usually by children, is "if a tree falls in the forest and no one hears it, does it make a sound?". the answer is that "sound" is a word without a commonly-accepted, logically-derived meaning, for the reasons given above. so if to you the word sound is something human, then the answer is no, but if to you a sound is not something human, then the answer is yes. there's nothing particularly interesting or complex about the thought experiment, it's just a poorly defined word
> does it really matter if it can be described as cognition or not?
Yes... it does. "AI", aka the modern flavor of LLMs as we understand them today, just does certain things that humans can do, but orders of magnitude faster. What exactly is impressive about its ability to succinctly sum up any topic under the Sun, aside from the speed? It will never create a new genre of music. It will never create a new style of art from the ground up. It lacks the human spark of ingenuity. To even suggest that what it does is anything close to human cognition is egregiously insulting.
isn't it funny that half the time when you see criticism of LLMs it's almost like the words have been stolen from someone else?
the opinion you're parroting here completely misses the point of LLMs. their purpose is not to start artistic movements or liberally think for themselves and no one is claiming it is. their purpose is to accelerate information retrieval and translation and programming tasks, which they by and large are incredible at. even if they had the capacity to invent artistic movements, which in theory they most certainly do, starting an artistic movement is pretty much intrinsically a human thing, and it requires desire, inclination, trust and a grounding in the real world, such as it is. your "spark of ingenuity" is not lacking because of some issue or lack of creativity, it's lacking because it's not the point and no one wants it to be.
whether it is "cognition" or not is completely irrelevant to their purpose and use, and it's a complete waste of time trying to litigate whether it is, because the word itself is poorly defined. if you're trying to figure out if j=k but you can't define j or k, and you know that k isn't a big factor in the usefulness of the system, then what is the point? is it jealousy? fear? I assure you, LLMs are not a threat to the special ingenuity of your mind
this opinion is the equivalent of watching the invention of the pocket calculator and complaining that it can't write calculus equations on a black board
> their purpose is not to start artistic movements or liberally think for themselves and no one is claiming it is.
> your "spark of ingenuity" is not lacking because of some issue or lack of creativity, it's lacking because it's not the point and no one wants it to be.
There are plenty of people/communities online that want it to be exactly that and want to remove the pesky human element from the equation. Dismissing them because it doesn't fit your argument doesn't mean they don't exist.
Re: "does it really matter if it can be described as cognition or not?"
To Chomsky? He'd have to speak for himself, but I suspect the answer is "yes, obviously, at least to be of interest to me."
Note that I'm not saying LLMs are useless or even that what they do is usefully described as "plagiarism."
But it seems entirely unsurprising to me that Chomsky would be unimpressed and uninterested -- even to the point of dismissiveness, he's pretty much like that -- precisely because they are unrelated to "cognition."
I suspect the disappointment wasn't about whether LLMs exhibit cognition-like properties or not, but rather about the negative connotations tied to the word "plagiarism". Yeah, they replicate patterns from their training data. So do we (OK, to be fair, I have no idea about others, but I believe I know that I do), and that's normal.
I believe knowledge is what you know based on facts and experience; wind sensors could gather data and store it in a database without a human touching it beyond initial setup. With enough data, and basic information about where the sensors are located, the computer becomes very knowledgeable about wind in a region without human intervention.
I believe intelligence goes beyond that: knowing that such a system is a solution to an observed problem, architecting said system, using the output to solve a problem, analyzing the results, and deciding where to deploy additional systems.
I think both examples above can be done by AI (if not now, then soon)—but only after being prompted carefully by a human. However, a generalized AI that can do all of the above for any problem in the known universe is likely very far off.
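The "knowledgeable without human intervention" half of this is easy to make concrete. A minimal sketch (the schema, sensor names, and readings are all invented for illustration): once readings accumulate, the store can answer questions about wind that nobody ever typed in directly.

```python
import sqlite3

# Hypothetical schema: each row is one automated sensor reading.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (sensor TEXT, speed_kmh REAL)")

# After initial setup, readings arrive with no human in the loop.
samples = [("ridge-1", 42.0), ("ridge-1", 38.5), ("valley-2", 12.3)]
db.executemany("INSERT INTO readings VALUES (?, ?)", samples)

# The accumulated data now yields facts no one explicitly entered.
rows = db.execute(
    "SELECT sensor, AVG(speed_kmh) FROM readings GROUP BY sensor"
).fetchall()
for sensor, avg in rows:
    print(f"{sensor}: average wind {avg:.1f} km/h")
```

The intelligence half - deciding that such a system solves an observed problem, and where to deploy the next one - is exactly what this sketch does not contain.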
if knowledge is a justified true belief, i’m down for saying LLMs have beliefs. to the extent that they are incorrigible, their faith may actually be superhuman.
This is what intentionality is about. No intentionality, no truth.
An LLM doesn't deal with propositions, and it is propositions that are the subjects of truth claims. LLMs produce strings of characters that, when interpreted by a human reader, can look like propositions and result in propositions in the human mind, and it is those propositions that have intentionality and truth value. But what the LLMs produce are not the direct expression of propositional content, only the recombination of a large number of expressions of propositional content authored by many human authors.
People are projecting subjective convention onto the objective. The objective truth about LLMs is far poorer in substance than the conventional readings we give to what LLMs generate. There is a good deal of superstition and magical thinking that surrounds LLMs.
I think this is wrong. For nearly a hundred years, popular media has been priming the public to understand that artificial intelligence is superficially intelligent but is very prone to malfunction in inhuman ways. All that media in which AIs go haywire used to make nerds roll their eyes, but resonated with the general public then and has proved prescient now.
Question it, but perhaps open your heart to recent research... There is neural tissue in and around the heart. There are studies reporting personality changes and memory transfer as a result of heart transplants: recipients end up with memories, and sometimes traits, of the donor.
Are we sure? No. But neither should you be. Question but be open to answers you may not expect.
I'm not sure, but I'd bet £200 that within 10 years it'll still not be something that 99.9% of medical schools will teach. just because I'm not sure, it doesn't make both cases remotely equally likely.
He’s speaking to a popular audience in a poetic fashion. No one believes a pump in your chest is the seat of intelligence, even if it may be involved in some extended and removed manner with the expression of intelligence.
If Francis held Thomistic views on the subject, then even the brain, while necessary for human intelligence, does not suffice for its operation: functions like abstraction require the intellect, which cannot be entirely physical in operation, since a form cannot exist in matter without being instantiated in that matter, something by definition opposed to abstraction.
> He’s speaking to a popular audience in a poetic fashion. No one believes a pump in your chest is the seat of intelligence, even if it may be involved in some extended and removed manner with the expression of intelligence.
others on this very thread are proposing this exact extension.
You're misreading what I've written and being intentionally obtuse.
I didn't say the heart plays a role in intelligence. I simply allowed for the possibility for the sake of argument. The central claim is that no one (here, Francis and his writers) who uses the word "heart" colloquially is making the claim that the heart-as-organ is the seat of intelligence or what have you.
You're committing a vulgar equivocation fallacy that the average person with common sense would recognize. I have a difficult time believing you don't understand something so obvious.
People downvote this guy because obviously nobody actually thinks with their blood-pumping organ.
But as for the quote, it's incredibly clear how little empathy most people have towards others, to the point where AI will easily out-empathy them, both as a conversationalist and as a robotic assistant (such as a 24/7 robot nurse with no other patients).
[1] https://www.vatican.va/roman_curia/congregations/cfaith/docu...