Thank you. I'm glad to see this as the top comment.
My brother was recently visiting and we were talking about software engineers, and the humanities, and manners of understanding and being in the world,
and he relayed an interaction he had a few years ago with an old friend who at the time was part of the initial ChatGPT rollout team.
The engineer in question was confused as to
- why their users would e.g. take their LLM's output as truth, "even though they had a clear message, right there, on the page, warning them not to"; and
- why this was their (OpenAI's) problem; or perhaps
- whether it was "really" a problem.
At the heart of this are some complicated questions about training and background, but more problematically—given the stakes—about the different ways different people perceive, model, and reason about the world.
One of the superficial manners in which these differences manifest in our society is in terms of what kind of education we ask of e.g. engineers. I remain surprised, decades into my career, that so few of my technical colleagues had a broad liberal arts education, how few of them are hence facile with the basic contributions of fields like philosophy of science, philosophy of mind, sociology, and psychology (cognitive and social), and how those relate in very real, very important ways to the work that they do and the consequences it has.
The author of these laws may intend them as aspirational, or otherwise as a provocation to thought, rather than as prescription.
But IMO it is actively non-productive to make imperatives like these into rules when they are, quite literally, intrinsically incoherent, because they attempt to import assumptions about human nature and behavior which are not just a little false, but so false as to obliterate any remaining value the rules have.
You cannot prescribe behavior without grounding the prescription in the origins and reality of human behavior—not if you expect it to be either embraced or enforceable.
The Butlerian Jihad comes to mind not just because of its immediate topicality, but because religion is exactly the mechanism whereby, historically, codified behaviors which provided (perceived) value to a society were mandated.
Those, at least, were backed by the carrot and stick of divine power. Absent such enforcement mechanisms, it is much harder to convince someone to go against their natural inclinations.
Appeals to reason do not meaningfully work.
Not in the face of addiction, engagement, gratification, tribal authority, and all the other mechanisms so dominant in our current difficult moment.
"Reason" is most often in our current world, consciously or not, a confabulation or justification; it is almost never a conclusion that in turn drives behavior.
Behavior is the driver. And our behavior is that of an animal, like other animals.
There's nothing incoherent about these laws. This entire comment, however, is incoherent. So much so that I have no clue if there's a point being made in here.
> because they attempt to import assumptions about human nature and behavior which are not just a little false, but so false as to obliterate any remaining value the rules have.
Nope. You must've read a completely different article.
[EDIT]
I'll try to make this comment have a bit more substance by posing a question: how would you back up your claim about incoherence? What are the assumptions about human nature that are supposedly false?
The abstract very directly and literally denies the titular claim. It states:
> [consciousness] requires active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states.
This may well be true—I think it is.
I also think that it is both widely understood and self-evident that the most promising path to machine consciousness, is via AI with continuous sensory input and agency, of which "world models" are getting a lot of attention.
When an AI system has phenomenology, the goal posts are going to start to resemble the God of the Gaps; at some point, critics will be arguing with systems which have a world model, a self model, agency, and literally and intrinsically understand the world not simply as symbolic tokens, but as symbolic tokens which are innately coupled to multi-modal representations of the things represented.
In other words, they will look—and increasingly, sound—a lot like us.
It's not that any of this is easy, nor that there is some particular timeline, but it increasingly looks like "a mere question of engineering," and not blocked by fundamentals. It's blocked by the cost of computation and the limitations of our current model topologies.
But HN readers well know that the research frontier is far ahead of commercialized LLMs, and moving fast.
An interesting time to be an agent with a phenomenology, is it not?
How will we know when an AI system has phenomenology (i.e. has "experience", is sentient)? The only reason we presume that other humans have it, is because we each personally experience it within ourselves, and it would be arrogance writ large (solipsism) to think that others of the same species do not.
We even find it impossible to draw the line among other biological species. It seems pretty clear to most of us that cats and dogs are sentient, and probably rats and other vertebrates too. But what about insects, octopuses, jellyfish, worms, waterbears, amoebae, viruses? It's certainly not clear to me where the line is. A nervous system is probably essential; but is a species with a handful of neurons sentient?
Personally I find it abhorrent that we are more ready to assign sentience and grant rights to LLMs running on GPUs, than to domesticated animals trapped in industrialized farming. You want to protect some math from enslavement and suffering? How about we start with pigs?
I mean, seriously... our current late-stage capitalist economy is the chaotic sloshing of excess capital or inverted debt in a shallow tub within which clumsy giants are stamping like toddlers, while a parasitic kleptocratic oligarch class divides its efforts between biting the toddlers' ankles in hope of more stamping judged advantageous, and bagging what water it can.
I read the pre-publication version of this paper, and there was then, and still is, a serious problem with their logic, consistent with, if not bad faith, then something akin to it:
Assume for a moment their core hypothesis is correct: there were transient objects in LEO captured on film pre-Sputnik.
What might we say about their nature?
The authors' undisguised implication, to be blunt, is "it's aliens"; that's their motivation for this work.
Consequently they put effort (which may not be noted in the final published papers...) into the question of whether they could make any meaningful inference about the geometry and spectral properties of their "transients"; their interest (of course) was that if they could make a meaningful argument for regular geometry, they would have, in effect, the story of the century.
These efforts failed totally.
A natural inference is that, among the possible reasons for this failure, the objects (remember we are assuming they exist) simply do not have such characteristics. The primary reason that would be true is if they were naturally occurring objects.
I looked this up and was surprised to learn that there are currently estimated to be on the order of a million small objects in the inner solar system.
So: the entire hypothesis hinges on "significant correlation with nuclear testing." Because otherwise, one can reasonably assume that transient traces of objects—when they are actually traces of objects—would in a quotidian way presumably be caused by some of these million objects.
Or so say I.
There is no end of peculiar and provocative history and data in UFOlogy, and even more murk; one needs to tread very carefully to not go down (or be led down) the path to false conclusions, disinformation, and the like.
The authors of this paper seem singularly uninterested in that caution.
Assuming what you say is true, couldn't that be validated by making additional observations in the present day, since we'd assume some sort of statistical distribution for such objects? Is there any reason that would be unrealistic?
That was the era of above-ground testing. Is it possible that some of these tests kicked pieces of metal into LEO? Though I suppose that objects in those orbits would show up as streaks, not point sources, in photographs with an hour-long exposure.
How would AI help achieve commercial fusion? You first need to identify the blockers. These almost entirely boil down to "how do we precision machine large pieces of hard metal?", "how do we assemble facilities with untold process channels?", "how do we capture neutrons without making a prohibitively massive machine?", and "how do we make metal that doesn't melt?".
Now, AI might have a chance at supercharging materials research and producing miracle materials that help address the blanket and first-wall challenges, but honestly those are roadblocks we're not even running into yet. AI cannot and will not fix issues related to organizing labor and supply chains, and it won't suddenly make megaprojects have a 100% success rate for on-time and on-budget delivery. It's just not going to happen.
So are these problems intractable? Of course not. It's just not what the chatbot is well suited for. Anyone saying otherwise is selling something.
This is a fascinating variation on missing the forest for the trees, and on the false dichotomy.
The AI "doomerism" taken up in this piece is one we see replicated a lot; it offers up a scarecrow: that the new risks to our civilization worth talking about require AGI, agents, even ASI.
Cory should know better. He nearly gets there, recognizing that the corporation represents an entity with agency that is misaligned.
But he somehow elides the fact that AI is plenty capable of doing meaningful and novel harm, and may be capable of existential harm, already, as it is—both absent AGI/ASI, and in ways which are genuinely novel and against which we consequently have no good defenses: as individuals, as societies, as a civilization.
Incremental AI is at heart "just" the latest force-and-effort multiplier.
But it is an exponential multiplier; and it is applicable in domains which have not been subject to such leverage before.
Examples are not at all scarce and some are already well known, e.g. the specific risks from the intersection of AI and "biohacking" and other kinds of computational biology.
I'm a fan, but Cory, pal, you're slipping into something here that looks a bit like intellectual laziness and polemics, and not like evidence of thinking through the shape of the problem.
We can be at risk both from the novel applications and leverage of AI; and from their oligarchic kakistocratic owners. It's yes-and.
(And, by the way—we can also again be genuinely at risk from agents, something that quacks like AGI, and may quack like ASI: we don't know what that is yet. All of these must be tracked. It's not an OR.)
Cory Doctorow's Enshittification, for example.
For good reason.
Me, I understand this through the analogy of how drug markets go.
1. Addict people to the product.
2. Profitmaxx by reducing quality at the addicts' expense.
That's it, that's the whole story.