Personally, I would be surprised if we are less than 3 years or more than 20 years from humans being obsolete. That is, humans would be economic dead weight: any job could be done better by AI/robots, and "comparative advantage" wouldn't apply because it's cheap enough to just make more robots. At that point, the average human would be completely useless to the billionaires (or to the AIs, if the billionaires fail to control the AIs).
I can see two major delaying factors here:
1. Current-generation LLM technology won't scale to true AGI. It's missing a number of critical things, and a lot of effort is being spent on fixing those limitations. Until they are overcome, humans will be needed to "manage" LLMs and work around their limitations, just as programmers do today.
2. Generalist robotics is far behind LLMs for multiple reasons, including insufficient sensors and fine motor control. This would require multiple scientific and engineering breakthroughs to fix. Investors will, presumably, spend a large chunk of the world's wealth to improve robotics to replace manual labor. But until they do, human hands will still be needed in the physical world.
The real danger is if AI passes a point where it starts contributing substantially to its own development, speeding up the pace of breakthroughs. If we ever hit that tipping point, then things will get weird, and not in a good way.
I broadly agree with a 3-20 year timeline for a majority of office work. But some important qualifying statements I would add:
- some jobs will stay with humans even when AI would be better at them. We already see a lot of this even with pre-AI automation. Neither markets nor companies are perfectly efficient
- at the point where AI is better than the average human, half of all humans are still better than AI. For companies or departments built around employing lots of average people the cutover point will be a lot earlier than for shops that aim to employ the best of the best. Social change is inevitable long before the best are out of work
- the actual benchmark for "replacement" is not human vs machine, but human plus machine vs machine alone. But the difference doesn't matter much, because efficiency increases still displace workers
- I don't think robots will advance enough to meet this timeline. This is not just a software issue. Humans have an amazing suite of sensors and actuators. Just replicating a human hand is insanely complex. Walking, jumping robots are crude automatons in comparison. We can cover a lot with specialized robots, but we won't replace humans in physical jobs in 20 years
I agree that robots are much further off than people expect, in raw technical terms. As you point out, the sensors and actuators in a human hand are far beyond the state of the art.
But all of that is assuming a world where research is being done by humans, or by some mix of humans and something like current LLMs. The bottlenecks would ultimately come down to human judgement and human oversight, and that's a significant limiting factor. Plus, you have to push matter around, which takes time, and you have to extract a lot of information out of limited experiences, which LLMs are bad at.
But if someone is reckless and clever enough to build AIs that can completely replace engineers, or that only need humans as hands, then I don't think we can count on robotics remaining intractable for more than a decade or so. In a wide variety of circumstances, it's possible to make do with worse actuators than the human hand, or with specialized actuators. We can already build incredibly precise motors and specialized sensors. The trouble comes with trying to pack enough of them together to replicate the full generality of the human hand. (I have actually helped build task-specific actuators before that did quite well with a single motor and a single visual sensor.)
So to put my position more precisely: we cannot automate manual labor with robotics without having previously automated creative intellectual labor. But conditional on automating creative research, I expect worryingly rapid advances in robotics.
To be clear, I think that developing fully-general replacements for human intellectual and physical labor would potentially be the biggest disaster in all of human history.
> Personally, I would be surprised if we are less than 3 years or more than 20 years from humans being obsolete.
I think we are as far from it as we were 10 years ago. Or 100 years ago. I think LLMs are a dead-end technology: useful, but they won't get anywhere beyond what they are.
But that's the thing, "personally", "I think", etc. Not much of a debate to be had there.
AI making humans obsolete is not really something that causes me any anxiety.
The other article on France's gold reserves mentioned that France sold their older gold bars in the US, and used the money to purchase higher-standard gold bars in Europe. In their case, they did that over many decades and just finished now.
> I trust none of us would presume that the decentralized labor of pen & paper calculations somehow instantiated a “psychology” in the sense of a mind experiencing various levels of despair
Your argument is based on an appeal to intuition. But the scenario that you ask people to imagine is profoundly misleading in scale. Let's assume a modern frontier model, around 1 trillion parameters. Let's assume that the math is being done by an immortal monk, who can perform one weight's calculations per second.
The monk will generate the first "token", about 4 characters, in 31,688 years. In a bit over 900,000 years, the immortal monk will have generated a single Tweet.
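The arithmetic can be sanity-checked in a few lines (the 1-trillion-parameter model is the assumption stated above, and a ~115-character tweet at ~4 characters per token is my assumption for the "a bit over 900,000 years" figure):

```rust
// Back-of-envelope check: one weight's worth of arithmetic per second,
// one full forward pass per token.
fn main() {
    let params: f64 = 1.0e12; // ~1 trillion weights (assumed model size)
    let seconds_per_year = 365.25 * 24.0 * 3600.0;

    let years_per_token = params / seconds_per_year;
    println!("{years_per_token:.0} years per token"); // ≈ 31,688 years

    // A short tweet of ~115 characters at ~4 characters per token:
    let tweet_tokens = 115.0 / 4.0;
    let years_per_tweet = years_per_token * tweet_tokens;
    println!("{years_per_tweet:.0} years per tweet"); // a bit over 900,000 years
}
```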
At that point, I no longer have any intuition. The sort of math I could do by hand in a human lifetime could never "experience" anything.
But I can't rule out the possibility that 900,000 years of math might possibly become a glacial mind, expressing a brief thought across a time far greater than the human species has existed.
As the saying goes, sometimes quantity has a quality all its own.
(This is essentially the "systems reply" to Searle's "Chinese room" argument. It's an old discussion.)
I don't personally believe LLMs are sentient, but I've always enjoyed this thought experiment: https://xkcd.com/505. I have a signed copy framed on my wall.
In discussions like this, we're always going to bottom out at certain assumptions we bring with us, so I agree.
One reason I like bringing up examples like this (the xkcd in sister reply is also good) is that it makes really visible what our assumptions are. The scales are big both in space and time in order to emphasize what weight is given to functional equivalence.
I feel pretty confident most people wouldn't presume that doing a bunch of math by hand on paper can create glacial epiphenomenal experiences (though I like the term).
Another thing that's interesting to me is that the converse assumption, i.e. one with a strong allegiance to functionalism, ends up feeling far more idealistic than you might expect. A box of gas, left on its own for long enough, will engage in a pattern of collisions that in a certain interpretative framework correspond to an LLM forward pass. In another, it can be a game of minesweeper.
The individual particles of course, couldn't care less whether you see them as part of one or the other. Yet your ability to see them in light of the first one is perhaps enough for the lights to truly turn on, if transiently, in some mind somewhere.
> A box of gas, left on its own for long enough, will engage in a pattern of collisions that in a certain interpretative framework correspond to an LLM forward pass.
That's a fun thought experiment. Greg Egan based a delightful science fiction novel on this premise. Permutation City, I believe.
To be clear, I don't necessarily think that current LLMs have subjective experiences. If I had to guess, I'd say "probably not." But:
- If I came from another universe, and if you asked me whether chemistry could have subjective experiences, I'd answer "probably not." And I would be wrong.
- Even if no current frontier models are "aware", it's possible that future models might be. Opus 4.6, for example, behaves far more like a coherent mind than last year's 3 billion parameter toy models. So future 100 trillion parameter models with different internal architectures might be even more like minds. (To be clear, I do not think we should build such models.)
- Awareness and intelligence might be different. Peter Watts' Blindsight is a fun exploration of this idea. Which leads me to conclude that it wouldn't necessarily matter whether an AI like SkyNet has subjective awareness or not. What matters is what kind of long-term plans it could pull off and how much it could reshape the world.
> just look at the relationship subreddit the first answer is always divorce, it’s become a meme
As someone who has been married for a couple of decades, I, too, would recommend divorce to many of the (often-fictional) people asking Reddit for relationship advice. A marriage has a huge impact on whether your life is basically good, or if you pass a big chunk of your time on this Earth in misery. And many of the people (or repost bots) asking for advice on Reddit appear to be in shockingly awful relationships. Especially for people who don't have kids, if your marriage is making you miserable, leave.
(But aside from this, yeah, don't ask Reddit for relationship advice. Reddit posters are far more likely to be people who spend their life indoors posting on Reddit, and their default advice leans heavily towards "never interact with anyone, ever.")
As someone who has moved back and forth between Mac and Linux around 3 or 4 times since 1992, Linux is actually surprisingly reasonable. For laptops, I just buy from Dell, with Ubuntu preloaded, and everything works. (Dell's build quality isn't as good as Apple's, so I usually spend extra for Dell's next-day on-site service.) For workstations, it's usually pretty straightforward to get something that Just Works.
After that, I've got Chrome, Visual Studio Code, Steam and a full suite of command-line tools, which covers my personal essentials. But if you rely heavily on something like Photoshop or the macOS OmniFocus application, then you might find much larger holes on the Linux side.
As a matter of principle, I consider myself too old to troubleshoot Linux without getting paid for it. It turns out that I virtually never do that, so I'm pretty happy. Really, buying pre-loaded and fully supported Linux laptops eliminates 80% of the pain, and nearly all of the remaining 20% can be avoided by refusing to get clever.
Then you just implement Serialize and Deserialize for TypeWithDifferentSerialization.
This covers most occasional cases where you need to work around the orphan rule. And semantically, it's pretty reasonable: if a type behaves differently, then it really isn't the same type.
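A minimal sketch of that newtype pattern, using a stand-in `Serialize` trait rather than the real serde one (all names here are hypothetical, and since everything lives in one crate the orphan rule doesn't actually fire; this only illustrates the shape of the workaround):

```rust
use std::time::Duration;

// Stand-in for a trait defined in another crate (e.g. serde's Serialize).
trait Serialize {
    fn serialize(&self) -> String;
}

// We couldn't `impl Serialize for Duration` if both the trait and the type
// came from other crates. Wrapping the foreign type in a local newtype
// makes the impl legal:
struct Millis(Duration);

impl Serialize for Millis {
    fn serialize(&self) -> String {
        // A deliberately different wire format: whole milliseconds.
        format!("{}ms", self.0.as_millis())
    }
}

fn main() {
    let d = Millis(Duration::from_secs(2));
    assert_eq!(d.serialize(), "2000ms");
}
```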
The alternative is to have a situation where you have library A define a data type, library B define an interface, and library C implement the interface from B for the type from A. Very few languages actually allow this, because you run into the problem where library D tries to do the same thing library C did, but does it differently. There are workarounds, but they add complexity and confusion, which may not be worth it.
The gotcha is what happens when TypeWithSomeSerialization is not something you’re using directly but is contained within SomeOtherTypeWithSomeSerialization which you are using directly. Then things get messy.
We can't say with certainty how an unspecified in-the-future library might work, so I'm going to use serde as a stand-in.
You can implement `Serialize` for a wrapper type and still serialize `SomeOtherTypeWithSomeSerialization` (which might be used by the type being wrapped directly or indirectly) differently. It might not be derivable, of course, but "I don't want the default" sort of makes that a given.
You might think you can keep 16-year-olds from looking at porn if they want to. You can't. You have never been able to. All you can do is teach them that the law is stupid and pointless and that they should treat rules with contempt. But they'll still be able to look at porn.
What you can do is allow the government and private companies to track everyone, everywhere, all the time. And you can create more gatekeepers that hold personal identity data, misuse it, and leak it.
Yeah, I agree with this. I think age-related content moderation is a losing fight and one that will create more contempt for laws, more surveillance, and much more PII surface area that will be exploited.
There are really two "core" issues at play:
1. The prudish nature of US society
2. The fact that we don't have data privacy laws and restrictions on digital surveillance by private companies
Sixteen-year-olds? Sure, mysterious Forest Porn and the older brother who'd give you skin mags have always existed. And Cinemax at night, catching the odd frame that somehow gets through the scrambler. Whatever.
But we can't realize all the supposed glorious promise of all this tech bullcrap for education and free exploration of younger kids if we can't at least come pretty damn close to guaranteeing that an eight-year-old won't stumble on Rotten.com or hardcore porn if an adult isn't looking over their shoulder constantly. And whatever that solution is needs to work for parents who don't have the know-how or time to be sysadmins for their household.
I'm not overly concerned with 16 year olds. But the tools for protecting younger children suck. A consistent account setting and header would do a lot to improve parental controls.
> What you can do is allow the government and private companies to track everyone, everywhere, all the time. And you can create more gatekeepers that hold personal identity data, misuse it, and leak it.
This is already happening. A central setting would improve privacy over the way things are right now.
> A central setting would improve privacy over the way things are right now.
What? How? What improvement are you seeing that I'm not?
Putting all our PII into one huge repository and then letting corps and govts access it sounds like a dystopian nightmare. This is why we don't like Palantir.
What happens if a bad guy steals that data and your identity? They go and look at CSAM using your ID? The police turn up at your door and cart you off to prison? Are you really going to be able to argue that it wasn't you? If so, what is the point of the system? If we're relying on IP addresses and other evidence for access (so you can fight these charges) can't we just use them in the first place?
I don't know what you're talking about, but it's not what this kind of bill is about.
This kind of bill is about the OS telling things whether you're: 0-12, 13-15, 16-17, 18+
No databases, no stealable identity, only the barest sliver of 2 bits of PII.
As for how it's an improvement, we already have sites asking to see your driver's license or pictures of your face for much worse age verification paradigms. If most of those changed to a local age setting, privacy would go up.
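What that local signal could look like, roughly (everything here is hypothetical, not from any actual bill or OS API): the OS keeps the date of birth on the device and only ever exposes one of the four brackets.

```rust
// Hypothetical sketch: the OS derives a 2-bit bracket from a locally
// stored age. The birth date never leaves the device; services only see
// the bracket, which updates on its own as the user gets older.
fn age_bracket(age_years: u32) -> &'static str {
    match age_years {
        0..=12 => "0-12",
        13..=15 => "13-15",
        16..=17 => "16-17",
        _ => "18+",
    }
}

fn main() {
    assert_eq!(age_bracket(12), "0-12");
    assert_eq!(age_bracket(16), "16-17");
    assert_eq!(age_bracket(40), "18+");
}
```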
How does the OS know that you moved from the "13-15" bracket to the "16-17" bracket without knowing your DoB?
And this is the thin edge. Because in a few years there'll be a bill saying something like "too many children are lying about their age online. We need to verify their age" and then we're capturing IDs and storing them somewhere.
> The OS could require the parent to manually update it.
How is their age verified?
At some point one of two things is required:
1) A promise that the user is a certain age
- Which puts us exactly where we are
2) Official identification is used to verify age
- Which creates a PII nightmare
That's it. There's only those two options. You may not believe #2 is going to be a privacy nightmare, but we're already seeing it happen with Discord/OpenAI/LinkedIn and everyone else that uses Persona[1]. They aren't doing the minimal security things and already aren't doing what they claimed (processed on device, then deleted). This "hack" couldn't happen if that were true.
The difference here is that it can be set by the parent at the OS level and locked, requiring the equivalent of sudo to change.
The way it is now, there's nothing stopping an under-18 user from logging out of a 'parental control enabled' account and making a new account without those controls on any service from Facebook to Steam. So the only effective option at that point is to entirely block that app or service.
This gives more power to parental control software. And yeah moves the responsibility from the service to the parents, which is what the services want cuz COPPA and other similar laws.
But you do bring up another issue people aren't discussing. That the default setting is under 18.
So we protect the children from adults by... having no way to actually verify someone is a child?
The problem is less kids getting access to porn and more pedos getting accounts to spaces designed for children. Places like Club Penguin or very famously Roblox.
Here's the problem, you can't verify children. They don't have identification in the same way adults do. And worse, if we gave them that then it only makes them more vulnerable!
Then we have the whole problem of a global internet. VPN usage is already skyrocketing to circumvent these policies.
So the only real "solution" to this is global identification systems where essentially everyone is carrying around some dystopian FIDO key (definitely your phone) that has all your personal information on it and you sign every device you touch. Because everything from your fridge to your car is connected to the Internet.
But that's a cure worse than the poison. I mean what the fuck happens to IOT devices? Do we just not allow them on the internet? That they're assumed 18+? So all kids need to do is get a raspberry pi? All they need to do is install a VM on their phone? On their computer? You might think that kids won't do this but when I was in high school 20 years ago we all knew how to set up proxies. That information spread like wildfire and you bet it got easier as the smarter kids put in the legwork.
This is a losing battle. It's not a cat-and-mouse game; it's Wile E. Coyote vs. the Road Runner.
We're on HN FFS. If there's anywhere on the Internet that the average user is going to understand how impossible this is it should be here. We haven't even talked about hacking! And yes, teenage script kiddies do exist.
These policies don't protect kids, they endanger them. On top of that they endanger the rest of us. Seriously, just try to work it out. Try to create a solution and then actually try to defeat your solution. Don't be fucking Don Quixote.
> But you do bring up another issue people aren't discussing. That the default setting is under 18.
Some things do that. This law doesn't have a default. If the admin sets all the user accounts to 18+, then the users are stuck with the setting being 18+.
> I mean what the fuck happens to IOT devices? Do we just not allow them on the internet?
Sounds pretty good to me.
But yeah they need a different handling of some manner. Maybe a "give no access to anything age-gated" category, though is that really different from under-13 in practice?
> So all kids need to do is get a raspberry pi? All they need to do is install a VM on their phone? On their computer? You might think that kids won't do this but when I was in high school 20 years ago we all knew how to set up proxies.
Just delaying unrestricted access until high school would already solve most of the problem.
> These policies don't protect kids, they endanger them. On top of that they endanger the rest of us.
They do not. Some totally different system could endanger people, but this one doesn't.
Really? Be a bit more serious now. There are a lot of things that connect to the internet, and not just for stupid data harvesting reasons. I gave other examples. I think you can understand that this gets pretty hairy pretty quickly. If you don't, then dig in deeper to how the networking is done. You're an older account so I'm assuming you actually understand computers.
> They do not.
They definitely do. I explicitly stated how that happens too. If you want me to take you seriously you have to respond with something better than "trust me bro".
There is no evidence that these companies are actually handling that data properly. There is a lot of evidence that they are handling it improperly. That data being leaked does in fact, endanger kids.
I'm also unconvinced these things even achieve the goals they claim to be after. Which is keeping pedos away from kids. i.e. the reason I said you're missing the point. So either it is not achieving that goal, or lulling people into a false sense of security. Imagine if Roblox was saying "we don't allow adults on the platform" and so now all the tech illiterate parents and kids think their kids are exclusively talking to other kids. That's just a worse situation than now.
> They definitely do. I explicitly stated how that happens too. [...] data being leaked
Again "Some totally different system could endanger people, but this one doesn't."
Any system that has companies handling personal data and able to leak it is not the system this kind of law talks about.
> false sense of security. Imagine if Roblox was saying
In that situation, Roblox is the problem, not the law.
> So what do these laws even solve?! I'm serious
If widely implemented, a parent can set a single toggle and then the accounts their kids make will all be appropriately restricted.
It wouldn't replace direct checks from the parent on what their kids are doing, but it would greatly reduce the risk profile. And making it simple and built-in means that non-tech-expert parents can set it.
>> Be a bit more serious now.
> The serious answer is in the next line.
> ...
> Again "Some totally different system could endanger people, but this one doesn't."
>> If you want me to take you seriously you have to respond with something better than "trust me bro".
I do have a hard time taking you seriously
> If widely implemented, a parent can set a single toggle and then the accounts their kids make will all be appropriately restricted.
People keep telling you option 1 is the correct one, and that it's not actually useless.
You keep describing privacy problems that only exist with option 2.
This law is not option 2. Stop interpreting people as if they're badly defending option 2. They're not.
> HOW
They take an OS where only admins can change the age setting. They set the age on a non-admin account, which they give their child access to. The OS passes the age setting along to programs, which pass it along to services that need to restrict behavior.
This is not the same as how it works today. It's impossible for a parent to do this today. The best they can do is try to keep track of every account their child has and dig through the settings manually.
Heard exactly the same thing about VPN use (kids won't know how to set up a VPN). Then Australia age verification kicked in, and VPN use went through the roof [0]
And, of course, the response so far has included similar thoughts as the UK about banning VPNs [1]
> How does the OS know that you moved from the "13-15" bracket to the "16-17" bracket without knowing your DoB?
The OS has the birth date. Of probably 1-5 people.
> And this is the thin edge. Because in a few years there'll be a bill saying something like "too many children are lying about their age online. We need to verify their age" and then we're capturing IDs and storing them somewhere.
Those things are already happening. I see this kind of mechanism as significantly more of an alternative to privacy invasion than an enabler of privacy invasion.
The political establishment used to be able to control what you read, through control of the media. Then 1995 happened and everyone got access to anything they wanted. The establishment have wanted to put that genie back in the bottle ever since. This is part of that effort.
> Requiring the central database is the scary part.
Yes, agreed.
And this type of proposal has no central database, so it removes the scary part.
(Unless you're talking about the local accounts on each computer storing dates of birth for a single household as a "central database" in which case you're being ridiculous and please stop doing that.)
We have been able to automatically inline functions for a few decades now. You can even override inlining decisions manually, though that's usually a bad idea unless you're carefully profiling.
Also, it's pointer indirection in data structures that kills you, because uncached memory is brutally slow. Function calls to functions in the cache are normally a much smaller concern except for tiny functions in very hot loops.
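In Rust, for example, those manual overrides are just attributes on the function (an illustrative sketch, not from the article; whether the hints help is something only profiling can tell you):

```rust
// The optimizer usually makes good inlining decisions on its own;
// these attributes override them.
#[inline(always)] // force-inline a tiny hot-path helper
fn square(x: u64) -> u64 {
    x * x
}

#[inline(never)] // keep a cold path (e.g. error reporting) out of callers
fn report(x: u64) -> String {
    format!("value was {x}")
}

fn main() {
    assert_eq!(square(7), 49);
    assert_eq!(report(7), "value was 7");
}
```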
I'm not sure Rust's `async fn` desugaring (which involves a data structure for the state machine) is inlineable. (To be precise: maybe the desugared function can be inlined, but LLVM isn't allowed to change the data structure, so there may be extra setup costs, duplicate `Waker`s, etc.) It's probably true that there is a performance cost. But I agree with the article's point that it's generally insignificant.
For non-async fns, the article already made this point:
> In release mode, with optimizations enabled, the compiler will often inline small extracted functions automatically. The two versions — inline and extracted — can produce identical assembly.
I am fairly doubtful that it makes sense to be using async function calls (or waits) inside of a hot loop in Rust. Pretty much anything you'd do with async in Rust is too expensive to be done in a genuinely hot loop where function call overhead would actually matter.
I have actually very convincingly recreated a moderately complex 70s-era mainframe app by having an LLM reimplement it based on existing documentation and by accessing the textual user interface.
The biggest trick is that you need to spend 75% of your time designing and building very good verification tools (which you can do with help from the LLM), and having the LLM carefully trace as many paths as possible through the original application. This will be considerably harder for desktop apps unless you have access to something like an accessibility API that can faithfully capture and operate a GUI.
But in general, LLM performance is limited by how good your validation suite is, and whether you have scalable ways to convince yourself the software is correct.
> I haven't seen this much hype and hopium since the dot com boom.
The notion that 99% of the workforce and military will be AIs isn't "hopium", it's grounds for absolute terror. One of two things will be true:
1. The AIs will be controlled by the Epstein class, who will then have no use for most of humanity, either as workers or soldiers.
2. Or the AIs will be controlled by the AIs themselves, which also seems worrisome.
Really, any situation where 99% of the workforce and military are AIs should be deeply concerning, for reasons that should be obvious to any student of history or evolution.
And, sure, maybe we won't get there in our lifetimes. But if we did, I wouldn't expect an automatic utopia.