If those demands made any sense, they would be enforced by the languages themselves. It's mostly a way of claiming to be productive by renaming constants and moving code around.
Last month I worked on some new features (like unlocking units) and some fixes for my browser autobattler: https://lfarroco.itch.io/mana-battle
I've been working on adding an async PVP mode to it (using a Supabase DB and edge functions); it should be up in the next few weeks.
Learned a lot about shipping Electron apps and using shaders with WebGL; might write a blog post about it later.
I've been working with American companies since 2019. I have experience with web applications end-to-end, from user interactions in the browser down to endpoint processing and AWS provisioning. I'm also a hobbyist gamedev, but that was mostly a way of learning about running shaders in the browser (the game's name is Mana Battle; there's a free version on itch).
Creating a web autobattler game: https://lfarroco.itch.io/mana-battle
It has been a good experience for learning how to work with shaders, and for seeing how well Electron apps run.
You can replicate all calculations done by LLMs with pen and paper. It would take ages to calculate anything, but it's possible. I don't think that pen and paper will ever "think", regardless of how complex the calculations involved are.
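To make that concrete, here's a toy sketch (made-up weights and inputs, not from any real model) of the kind of arithmetic step an LLM chains together billions of times. Every operation in it is doable by hand:

```typescript
// Toy illustration with invented numbers: one weighted-sum-plus-nonlinearity
// step, the basic unit an LLM repeats billions of times. Nothing here is
// beyond pen and paper; it's just multiplication, addition, and a comparison.
const weights = [0.2, -1.3, 0.7]; // made-up parameters
const inputs = [1.0, 0.5, -2.0];  // made-up activations

// Weighted sum: multiply each input by its weight and add them up.
const sum = weights.reduce((acc, w, i) => acc + w * inputs[i], 0);

// Nonlinearity (ReLU): just a comparison against zero.
const activation = Math.max(0, sum);

console.log(activation); // 0.2*1.0 + (-1.3)*0.5 + 0.7*(-2.0) = -1.85, so 0
```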
And the counter argument is also exactly the same. Imagine you take one neuron from a brain and replace it with an artificial piece of electronics (e.g. some transistors) that only generates specific outputs based on inputs, exactly like the neuron does. Now replace another neuron. And another. Eventually, you will have the entire brain replaced with a huge set of fundamentally super simple transistors, i.e. a computer.

If you believe that consciousness or the ability to think disappears somewhere during this process, then you are essentially believing in some religious metaphysics or soul-like component in our brains that can not be measured. But if it can not be measured, it fundamentally can not affect you in any way, so it doesn't matter for the experiment in the end, because the outcome would be exactly the same. The only reason you might think that you are conscious and the computer is not is because you believe so. But to an outside observer, belief is all it is. Basically religion.
It seems like the brain "just" being a giant number of neurons is an assumption. As I understand it, this is still an area of active research, for example the role of glial cells. The complete function may or may not be pen-and-paper-able.
There are indeed many people trying to justify this magical thinking by seeking something, anything in the brain that is out of the ordinary. They've been unsuccessful so far.
Penrose comes to mind, he will die on the hill that the brain involves quantum computations somehow, to explain his dualist position of "the soul being the entity responsible for deciding how the quantum states within the brain collapse, hence somehow controlling the body" (I am grossly simplifying). But even if that was the case, if the brain did involve quantum computations, those are still, well, computable. They just involve some amount of randomness, but so what? To continue with grandparent's experiment, you'd have to replace biological neurons with tiny quantum computer neurons instead, but the gist is the same.
You wouldn't even need quantum computer neurons. We can simulate quantum nature on normal circuits, albeit not very efficiently. But for the experiment this wouldn't matter. The only important thing would be that you can measure it, which in turn would allow you to replicate it in some non-human circuit. And if you fundamentally can't measure this aspect for some weird reason, you will once again reach the same conclusion as above.
You can simulate it, but you usually use PRNG to decide how your simulated wave function "collapses". So in the spirit of the original thought experiment, I felt it more adequate to replace the quantum part (if it even exists) by another actually quantum part. But indeed, using fake quantum shouldn't change a thing.
It could well be the case that the brain can be simulated, but presently we don't know exactly what variables/components must be simulated. Does ongoing neuroplasticity, for example, need to be a component of the simulation? Are there some as-yet-unknown causal mechanisms or interactions that may be essential?
All of those examples could still be done on pen and paper, or otherwise simulated with a different medium, though.
AFAICT, your comment above would need some mechanism that is physically impossible and incalculable in order to make its argument, and then somehow have that mechanism operate in a human brain despite being physically impossible and incalculable.
> component in our brains that can not be measured.
"Can not be measured", probably not. "We don't know how to measure", almost certainly.
I am capable of belief, and I've seen no evidence that the computer is. It's also possible that I'm the only person that is conscious. It's even possible that you are!
That appears to be your own assumptions coming into play.
Everything I've seen says "LLMs cannot think like brains" is not dependent on an argument that "no computer can think like a brain", but rather on an understanding of just what LLMs are—and what they are not.
I don't understand why people say the Chinese Room thing would prove LLMs don't think. To me it's obvious that the person doesn't understand Chinese but the process does. Similarly, the CPU itself doesn't understand the concepts an LLM can work with, but the LLM itself does; a neuron doesn't understand concepts, but the entire structure of your brain does.
The concept of understanding emerges at a higher level from the way the neurons (biological or virtual) are connected, or from the way the instructions being followed by the human in the Chinese room process the information.
But really this is a philosophical/definitional thing about what you call “thinking”
Edit: I see my take on this is listed on the page as the “System reply”
If 100 top-notch philosophers disagree with you, that means you get 100 citations from top-notch philosophers. :-P
Check out, e.g., Dennett, or his opinions about Searle. Have fun with, e.g., this:
"By Searle’s own count, there are over a hundred published attacks on it. He can count them, but I guess he can’t read them, for in all those years he has never to my knowledge responded in detail to the dozens of devastating criticisms they contain;"
I don't see the relevance of that argument (which other responders to your post have pointed out as Searle's Chinese Room argument). The pen and paper are of course not doing any thinking, but then the pen isn't doing any writing on its own, either. It's the system of pen + paper + human that's doing the thinking.
The idea of my argument is that I notice that people project some "ethereal" properties over computations that happen in the... computer. Probably because electricity is involved, making things show up as "magic" from our point of view, making it easier to project consciousness or thinking onto the device. The cloud makes that even more abstract. But if you are aware that the transistors are just a medium that replicates what we already did for ages with knots, fingers, and paint, it gets easier to see them as plain objects.
Even the resulting artifacts that the machine produces are only something meaningful from our point of view, because you need prior knowledge to read the output signals. So yeah, those devices end up being an extension of ourselves.
Your view is missing the forest for the trees. You see individual objects but miss the aggregate whole. You have a hard time conceiving of how exotic computers can be conscious because we are scale chauvinists by design. Our minds engage with the world on certain time and length scales, and so we naturally conceptualize our world based on entities that exist on those scales. But computing is necessarily scale independent. It doesn't matter to the computation if it is running on some 100 GHz substrate or a 0.0001 Hz one. It doesn't matter if it's running on a CPU chip the size of a quarter or spread out over the entire planet. Computation is about how information is transformed in semantically meaningful ways. Scale just doesn't matter.
If you were a mind supervening on the behavior of some massive time/space scale computer, how would you know? How could you tell the difference between running on a human making marks with pen and paper and running on a modern CPU? Your experience updates based on information transformations, not based on how fast the fundamental substrate is changing. When your conscious experience changes, that means your current state is substantially different from your prior state and you can recognize this difference. Our human-scale chauvinism gets in the way of properly imagining this. A mind running on a CPU or a large collection of human computers is equally plausible.
A common question people like to ask is "where is the consciousness" in such a system. This is an important question if only because it highlights the futility of such questions. Where is Microsoft Word when it is running on my computer? How can you draw a boundary around a computation when there are a multitude of essential and non-essential parts of the system that work together to construct the relevant causal dynamic? It's just not a well-defined question. There is no one place where Microsoft Word occurs, nor is there any one place where consciousness occurs in a system. Is state being properly recorded and correctly leveraged to compute the next state? The consciousness is in this process.
"'where is the consciousness' in such a system": One could ask the same of humans: where is the consciousness? The modern answer is (somewhere) in the brain, and I admit that's likely true. But we have no proof--no evidence, really--that our consciousness is not in some other dimension, and our brains could be receiving different kinds of signals from our souls in that other dimension, like TV sets receive audio and video signals from an old fashioned broadcast TV station.
This brain-receiver idea just isn't a very good theory. For one it increases the complexity of the model without any corresponding increase in explanatory power. The mystery of consciousness remains, except now you have all this extra mechanism involved.
Another issue is that the brain is overly complex for consciousness to just be received from elsewhere. Typically a radio is much less complex than the signal being received, or at least less complex than the potential space of signals it is possible to receive. We don't see that with consciousness. In fact, consciousness seems to be far less complex than the brain that supports it. The issue of the specificity of brain damage and the corresponding specificity in conscious deficits also points away from the receiver idea.
If you put a droplet of water in a warm bowl every 12 hours, the bowl will remain empty as the water will evaporate. That does not mean that if you put a trillion droplets in every twelve hours it will still remain empty.
The point I was trying to make was that the time you use to perform the calculation may change whether there is an "experience" on behalf of the calculation. Without specifying the basis of subjectivity, you can't rule anything out as far as what matters and what doesn't. Maybe the speed or locality with which the calculations happen matters. Like the water drops: given the same amount of time, eventually all the water will evaporate in either case, leading to the same end state, but the intermediate states are very different.
You can replicate the entire universe with pen and paper (or a bunch of rocks). It would take an unimaginably long time, and we haven't discovered all the calculations you'd need to do yet, but presumably they exist and this could be done.
Does that actually make a universe? I don't know!
The comic is meant to be a joke, I think, but I find myself thinking about it all the time!!!
Even worse, as we are part of the universe, we would need to simulate ourselves and the very simulation that we are creating. You would also need to replicate the simulation of the simulation, leading to an eternal loop that would demand infinite matter and time (and would still not be enough!). Probably, you can't simulate something while being part of it.
It doesn’t need to be our universe, just a universe.
The question is, are the people in the simulated universe real people? Do they think and feel like we do—are they conscious? Either answer seems like it can’t possibly be right!
You're arguing against Functionalism [0], of which I'd encourage you to at least read the Wikipedia page. Why would doing the brain's computations on pen and paper rather than on wetware lead to different outcomes? And how?
Connect your pen-and-paper operator to a brainless human body, and you've got something indistinguishable from a regular living human.
> You can simulate a human brain on pen and paper too.
That's an assumption, though. A plausible assumption, but still an assumption.
We know you can execute an LLM on pen and paper, because people built them and they're understood well enough that we could list the calculations you'd need to do. We don't know enough about the human brain to create a similar list, so I don't think you can reasonably make a stronger statement than "you could probably simulate..." without getting ahead of yourself.
I can make a claim much stronger than "you could probably." The counterclaim here is that the brain may not obey physical laws that can be described by mathematics. This is a "5G causes covid" level claim. The overwhelming burden of proof is on you.
There are some quantum effects in the brain (for some people, that's a possible source of consciousness).
We can simulate quantum effects, but here comes the tricky part: even if our simulation matches the probability, say 70/30, of something happening, what guarantees that our simulation would take the same path as the object being simulated?
We don't have to match the quantum state, since the brain still produces a valid output regardless of what each random quantum probability ended up as. And we can include random entropy in an LLM too.
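For illustration, a minimal sketch (invented logits, hypothetical function name) of how that entropy typically enters: sampling the next token from a softmax distribution. Math.random() is a PRNG here, but any entropy source could be plugged in without changing the structure:

```typescript
// Hypothetical sketch: pick the next token index from made-up logits.
// The randomness source is interchangeable; the structure stays the same.
function sampleToken(logits: number[], temperature = 1.0): number {
  const scaled = logits.map((l) => l / temperature);
  const maxLogit = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - maxLogit)); // numerically stable softmax
  const total = exps.reduce((a, b) => a + b, 0);
  let r = Math.random() * total; // entropy enters here
  for (let i = 0; i < exps.length; i++) {
    r -= exps[i];
    if (r <= 0) return i;
  }
  return exps.length - 1;
}

console.log(sampleToken([2.0, 1.0, 0.5])); // 0, 1, or 2, weighted by probability
```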
This is just non-determinism. Not only can't your simulation reproduce the exact output, but neither can your brain reproduce its own previous state. This doesn't mean it's a fundamentally different system.
Consider for example Orch OR theory. If it or something like it were to be accurate, the brain would not "obey physical laws that can be described by mathematics".
Orch OR is probably wrong, but the broader point is that we still don’t know which physical processes are necessary for cognition. Until we do, claims of definitive brain simulability are premature.
This is basically the Church-Turing thesis, and one of the motivations for using tape (paper) and an arbitrary alphabet in the Turing machine model.
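For anyone who hasn't seen the model spelled out, a minimal sketch (a toy, not any canonical formulation): a tape over a small alphabet plus a finite state table, here incrementing a binary number:

```typescript
// A toy Turing machine: a tape of symbols from an arbitrary alphabet plus a
// finite state table. This one increments a binary number, with the head
// starting on the rightmost bit. Illustrative sketch only.
type Sym = "0" | "1" | "_";
type State = "carry" | "halt";

// Each rule: given (state, symbol) -> [symbol to write, head movement, next state]
const table: Record<State, Partial<Record<Sym, [Sym, number, State]>>> = {
  carry: {
    "1": ["0", -1, "carry"], // 1 + carry = 0, carry propagates left
    "0": ["1", 0, "halt"],   // 0 + carry = 1, done
    "_": ["1", 0, "halt"],   // ran off the left edge: new most significant 1
  },
  halt: {},
};

function run(tape: Sym[], head: number): Sym[] {
  let state: State = "carry";
  while (state !== "halt") {
    if (head < 0) { tape.unshift("_"); head = 0; } // tape is "infinite": grow on demand
    const rule = table[state][tape[head] ?? "_"];
    if (!rule) break;
    const [write, move, next] = rule;
    tape[head] = write;
    head += move;
    state = next;
  }
  return tape;
}

console.log(run(["1", "0", "1", "1"], 3).join("")); // "1100" (1011 + 1 = 1100)
```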
It's been kinda discussed to oblivion over the last century; it's interesting that people don't seem to realize there's existing literature and repeat the same arguments (not saying anyone is wrong).
The simulation isn't an operating brain. It's a description of one. What it "means" is imposed by us, what it actually is, is a shitload of graphite marks on paper or relays flipping around or rocks on sand or (pick your medium).
An arbitrarily-perfect simulation of a burning candle will never, ever melt wax.
An LLM is always a description. An LLM operating on a computer is identical to a description of it operating on paper (if much faster).
What makes the simulation we live in special compared to the simulation of a burning candle that you or I might be running?
That simulated candle is perfectly melting wax in its own simulation. Duh, it won't melt any in ours, because our arbitrary notions of "real" wax are disconnected between the two simulations.
If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?
>If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?
I can smell a "real" candle, a "real" candle can burn my hand. The term real here is just picking out a conceptual schema where its objects can feature as relata of the same laws, like a causal compatibility class defined by a shared causal scope. But this isn't unique to the question of real vs simulated. There are causal scopes all over the place. Subatomic particles are a scope. I, as a particular collection of atoms, am not causally compatible with individual electrons and neutrons. Different conceptual levels have their own causal scopes and their own laws (derivative of more fundamental laws) that determine how these aggregates behave. Real (as distinct from simulated) just identifies causal scopes that are derivative of our privileged scope.
Consciousness is not like the candle because everyone's consciousness is its own unique causal scope. There are psychological laws that determine how we process and respond to information. But each of our minds is causally isolated from the others. We can only know of each other's consciousness by judging behavior. There's nothing privileged about a biological substrate when it comes to determining "real" consciousness.
That's a fair reading but not what I was going for. I'm trying to argue for the irrelevance of causal scope when it comes to determining realness for consciousness. We are right to privilege non-virtual existence when it comes to things whose essential nature is to interact with our physical selves. But since no other consciousness directly physically interacts with ours, it being "real" (as in physically grounded in a compatible causal scope) is not an essential part of its existence.
Determining what is real by judging causal scope is generally successful but it misleads in the case of consciousness.
I don't think causal scope is what makes a virtual candle virtual.
If I make a button that lights the candle, and another button that puts it off, and I press those buttons, then the virtual candle is causally connected to our physical reality world.
But obviously the candle is still considered virtual.
Maybe a candle is not as illustrative, but let's say we're talking about a very realistic and immersive MMORPG. We directly do stuff in the game, and with the right VR hardware it might even feel real, but we call it a virtual reality anyway. Why? And if there's an AI NPC, we say that the NPC's body is virtual -- but when we talk about the AI's intelligence (which at this point is the only AI we know about -- simulated intelligence in computers) why do we not automatically think of this intelligence as virtual in the same way as a virtual candle or a virtual NPC's body?
Yes, causal scope isn't what makes it virtual. It's what makes us say it's not real. The real/virtual dichotomy is what I'm attacking. We treat virtual as the opposite of real, therefore a virtual consciousness is not real consciousness. But this inference is specious. We mistake the causal scope issue for the issue of realness. We say the virtual candle isn't real because it can't burn our hand. What I'm saying is that, actually the virtual candle can't burn our hand because of the disjoint causal scope. But the causal scope doesn't determine what is real, it just determines the space and limitations of potential causal interactions.
Real is about an object having all of the essential properties for that concept. If we take it as essential that candles can burn our hand, then the virtual candle isn't real. But it is not essential to consciousness that it is not virtual.
> If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?
A candle in Canada can't melt wax in Mexico, and a real candle can't melt simulated wax. If you want to differentiate two things along one axis, you can't just point out differences that may or may not have any effect on that axis. You have to establish a causal link before the differences have any meaning. To my knowledge, intelligence/consciousness/experience doesn't have a causal link with anything.
We know our brains cause consciousness the way we knew in 1500 that being on a boat for too long causes scurvy. Maybe the boat and the ocean matter, or maybe they don't.
I think the core trouble is that it's rather difficult to simulate anything at all without requiring a human in the loop before it "works". The simulation isn't anything (well, it's something, but it's definitely not what it's simulating) until we impose that meaning on it. (We could, of course, levy a similar accusation at reality, but folks tend to avoid that because it gets uselessly solipsistic in a hurry)
A simulation of a tree growing (say) is a lot more like the idea of love than it is... a real tree growing. Making the simulation more accurate changes that not a bit.
I believe that the important part of a brain is the computation it's carrying out. I would call this computation thinking and say it's responsible for consciousness. I think we agree that this computation would be identical if it were simulated on a computer or paper.
If you pushed me on what exactly it means for a computation to physically happen and create consciousness, I would have to move to statements I'd call dubious conjectures rather than beliefs - your points in other threads about relying on interpretation have made me think more carefully about this.
Thanks for stating your views clearly. I have some questions to try and understand them better:
Would you say you're sure that you aren't in a simulation while acknowledging that a simulated version of you would say the same?
What do you think happens to someone whose neurons get replaced by small computers one by one (if you're happy to assume for the sake of argument that such a thing is possible without changing the person's behavior)?
It seems to me that the distinction becomes irrelevant as soon as you connect inputs and outputs to the real world. You wouldn't say that a 737 autopilot can never, ever fly a real jet and yet it behaves exactly the same whether it's up in the sky or hooked up to recorded/simulated signals on a test bench.
It's not that open. We can simulate smaller systems of neurons just fine; we can simulate chemistry. There might be something beyond that in our brains for some reason, but it seems doubtful right now.
Our brains actually do something; that may be the difference. They're a thing happening, not a description of a thing happening.
Whatever that something it actually does in the real, physical world is, it produces the cogito in "cogito, ergo sum", and I doubt you can get it just by describing what all the subatomic particles are doing, any more than a computer or pen-and-paper simulated hurricane can knock your house down, no matter how perfectly simulated.
You're arguing for the existence of a soul, for dualism. Nothing wrong with that, except we have never been able to measure it, and have never had to use it to explain any phenomenon of the brain's working. The brain follows the rules of physics, like any other object in the material world.
A pen and paper simulation of a brain would also be "a thing happening" as you put it. You have to explain what is the magical ingredient that makes the brain's computations impossible to replicate.
You could connect your brain simulation to an actual body, and you'd be unable to tell the difference from a regular human, unless you crack it open.
Doing something merely requires I/O. Brains wouldn't be doing much without that. A sufficiently accurate simulation of a fundamentally computational process is really just the same process.
Why are the electric currents moving in a GPU any less of a "thing happening" than the firing of the neurons in your brain? What you are describing here is a claim that the brain is fundamentally supernatural.
Thinking that making scribbles that we interpret(!!!) as perfectly describing a functioning consciousness and its operation, on a huge stack of paper, would manifest consciousness in any way whatsoever (hell, let's say we make it an automated flip-book, too, so it "does something"), but that if you made the scribbles slightly different it wouldn't work (!?!? why, exactly, not ?!?!), is what's fundamentally supernatural. It's straight-up Bronze Age religion kind of stuff (which fits—the tech elite is full of that kind of shit, like mummification—er, I mean—"cryogenic preservation", and millenarian cults—er, I mean—The Singularity, etc.)
Of course a GPU involves things happening. No amount of using it to describe a brain operating gets you an operating brain, though. It's not doing what a brain does. It's describing it.
(I think this is actually all somewhat tangential to whether LLMs "can think" or whatever, though—but the "well of course they might think, because if we could perfectly describe an operating brain, that would also be thinking" line of argument often comes up, and I think it's about as wrong-headed as a thing can possibly be, a kind of deep "confusing the map for the territory" error; see also comments floating around this thread offhandedly claiming that the brain "is just physics"—like, what? That's putting the cart before the horse! No! Dead wrong!)
The brain follows the laws of physics. The laws of physics can be closely approximated by mathematical models. Thus, the brain can be closely approximated by mathematical models.
Well, promises' computations start as soon as they are created, so they are not composable. And there's no cancellation/resource control either. So I guess the criticism is valid.
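A quick illustrative sketch of that eagerness (nothing library-specific): the Promise executor runs synchronously at construction, before anything is awaited; wrapping construction in a thunk is the usual way to get laziness back:

```typescript
// Eager: the executor body runs the moment the Promise is constructed.
const eager = new Promise<number>((resolve) => {
  console.log("runs immediately, before any .then()");
  resolve(42);
});

// Lazy: a thunk defers the work until explicitly invoked, which is what
// makes composition and resource control possible.
const lazy = () =>
  new Promise<number>((resolve) => {
    console.log("runs only when called");
    resolve(42);
  });

eager.then((n) => console.log("eager value:", n));
lazy().then((n) => console.log("lazy value:", n)); // work starts here
```

That thunk shape is essentially what Task-style abstractions (e.g. fp-ts's Task) formalize.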
The model that Google uses to handle requests on its search page is probably dumber than the other ones, for cost savings. Not sure if this is a smart move, as search with ads is their flagship product. It would be better to have no AI in search at all.
But then the product manager wouldn't get a promotion. They don't seem to care about providing a good service anymore.
> probably dumber than the other ones for cost savings
It's amusing that anyone at Google thinks offering subpar, error-prone AI search results won't damage their already-battered reputation further.
It's making stuff up, giving bad or fatal advice, promoting false political narratives, stealing content and link juice from actual content creators. They're abusing their anti-competitively dominant position, and just burning good will like it's gonna last forever. Maybe they're too big to fail, and they no longer need reputation or the trust of the public.
Bad information is inherently better for Google than correct information. If you get the correct information, you only do one search. If you get bad or misleading information that requires you to perform more searches, that is definitely better for Google.
This is a variation of the parable of the broken window. It is addressed in what may be the most influential essay in modern economics, "That Which is Seen, and That Which is Not Seen."
I've never liked that parable; it seems to me an incredibly poor argument standing on its own. The essay itself contrasts the definite circulation of money in the destruction case with money that "could" be spent on other things. Or it could not: he could have kept it, waiting for another opportunity later, reducing the velocity of money and contributing to inequality.
It doesn't even cover non-renewable resources, or note that the intact window is a form of wealth on its own!
I'm not naive; I'm sure thousands have made these arguments before me. I do think intact windows are good. I'm just surprised that particular framing is the one that became the standard.
I don't think most people care if the information is true; they just want an answer. Google destroyed the value of search by encouraging and promoting SEO blog spam, the horrible ai summary that confidently tells you some lie can now be sold as an improvement over the awful thing they were selling, and the majority will eat it up. I have to assume the ad portion of the business will be folded into the AI results at some point. The results already suck, making them sponsored won't push people any further away.
That might be possible by asking it to create a 3D model with animations (based on a template) and then capturing the sprites. But then again, I'm not sure building it would be worthwhile, because 1) OpenAI might add that as a native product (like what happened with .ppt generation) or 2) the capability to do so might be six months away.