I imagine my future will involve spending 40–60 hours a week using LLMs to do the work of multiple roles instead of just one, while wishing I could spend my remaining time doing other things.
I think your final sentence is more accurate than your churn argument. AI doesn't double output, but actually writing the code is only a small part of the job.
I understand the vision, but how does this work on a global scale? E.g., American employees refuse to build this, but China's don't.
Edit: I originally ended with "What would have happened if Germany had a nuclear bomb and America didn't?", but I think it distracted from the point I was trying to make, so I'm moving it to an edit. I'm not trying to ask "is the US the bad guy". I'm trying to ask how to balance personal anti-war sentiments with the realities of the world (specifically, in this case, keeping up in an arms race).
>American employees refuse to build this, but China's don't.
How about you articulate the threat from an AI powered China to people outside of AI powered China and discuss potential methods to counter that, instead of insisting capabilities be developed just in case.
>is the US the bad guy
Yes
>I'm trying to ask how to balance personal anti war sentiments with the realities of the world
Insist on open information, never surrender consent willingly and demand justification for everything. As always.
"geopolitics" are an abstraction that tries to pave over guilt. But Abstractions are only useful if they provide some benefit. Geopolitics provide no benefit when assigning guilt so its easily ignored.
PRC isn't going to do any of the things you are asking for, and no one expects them to. The threat of an AI powered China is really obvious to me, but apparently the idea of "IP theft and industrial sabotage, but at scale with AI agents instead of human meat sacks" is hard to clearly articulate.
One method, beyond AI powered kill chains, to counter an AI powered China is of course strategic weapons.
Well, game theory aside, the reality is that if the PRC weaponizes AI, there's a chance they may use it in the future. If the US weaponizes AI, they'll definitely be using it to kill people within the calendar year. Employees have to factor that in: for the PRC worker, the killing is hypothetical; for the US worker, it's inevitable.
Do the same thing we did with the nuclear arms race: Treaties to limit and control it.
Obviously, we would have had more political leverage if our leaders had started working on a treaty before they crossed enough moral red lines to start a tech revolt, but we did not elect the sort of leaders that would do that.
The obvious countries sign those treaties for political reasons. There are countries officially pretending they don't have any nuclear warheads, but everyone knows they have plenty, and the capacity to deliver them. They sandbag for political reasons, but also for religious ones, and that's scary.
Treaties are pointless without a practical means of monitoring and enforcement. We're fortunate that nuclear weapons programs are difficult to hide from overhead imagery, seismographs, and fallout detectors. We can't verify what code is running in a Chinese data center or missile guidance system. It's just a ridiculous notion to equate AI or autonomous weapons with nukes in any way.
Treaties worked because both sides had large quantities of bombs. In this case, certain people do not want the US to have the AI bomb, while China and others will have it.
> American employees refuse to build this, but China's don't.
It's not American employees vs. China employees. No need to villainize China at every opportunity. Most Chinese employees are more similar to American employees than you think.
It's that {top candidates who have their pick of employers} have the luxury of refusing to build this.
The mid-tier dev who can't land a job at any of the top AI companies, can code with Cursor, and is trying to pay their rent or medical bills will absolutely build AI for the military in return for having their rent paid.
This is regardless of whether it is in the US or China.
The reason it works is that when you have fewer participants in an effort, you get slower progress in that endeavor. Brilliant employees pushing their entire org not to support the development of bad things prevents less brilliant employees from doing bad things.
It is sort of like how computers are amazing but can also be a privacy nightmare. Software engineers don't help or coordinate with black hat hackers, so black hat hackers have a harder time refining their systems.
Well, then military use of some US commercial AI systems will be subject to minimal restrictions while Chinese AI might not be.
Thus some people avoid having to see their work used for killing people or in mass surveillance, so that they're actually able to contribute to AI development instead of leaving the field.
As I understand it, Anthropic refused two things: domestic surveillance in the US, and weapons automated such that they could kill without a human in the loop. I don't think either of these would hamper the US against China in any meaningful way.
That's exactly why I think the principled position is naive in the tragedy-of-the-commons situation we're in. It isn't a sci-fi story with a happy ending; it's the Manhattan Project, and 70+ years ago Nazi and Japanese data centers doing foundational model training would've been bombed to smithereens at any cost.
The PRC is comparable to early 30s Germany/Japan. We're on a dangerous course toward a devastating conflict if both sides can't reach a stable understanding.
No need to even use China: Microsoft, Palantir, etc. will continue to support the US military, likely using Google technology in the process (Guava, gRPC maybe, k8s assuredly, etc.).
Sorry, but if you truly believe your technology shouldn't be used in bad contexts, the only way to avoid it is to change careers. The issue with news like this is that it's hard to actually trust the protesters; they're probably happy to clear their consciences personally while continuing to reap the benefits of living in the tech industry. Have your cake and eat it too.
Sometimes people do quit - they're probably the ones you want to hire if you care about ethics. Most don't though.
I'm going to give a shout out here to an episode of the excellent podcast Hardcore History, specifically Episode 59: The Destroyer of Worlds [1].
The development of the atomic bomb created a debate in American policy circles about how the US should react. Within a few years, the same debate occurred over developing thermonuclear weapons. The same question kept coming up: what if the enemy has these weapons and we don't?
Dan Carlin's position, which I happen to agree with, is that America chose wrong. It became both belligerent and paranoid to a degree that just wasn't the case before WW2. If you look up the history of regime changes at the hands of the US [2] then you can see it went into overdrive after 1945.
Part of the problem here I think is projection, the psychological phenomenon. It's also a cultural phenomenon. So, for example, when you have a historically oppressed people who are being potentially freed, the oppressors will fret that the formerly oppressed will rise up and kill them. This is projection.
We saw this exact thing play out with Emancipation. There was no mass revenge violence by the former slaves. If anything, there was more violence by the former oppressors against freed slaves, and a system that excused the violence (e.g. the Colfax massacre [3]).
I think nations can be guilty of this too. The US sees any other global power as a potential hegemonic, imperialist power that will dominate and exploit everyone around them because, well, that's what we do.
We also see this in how we view AI as a resource. We see it as something to be owned and gatekept such that some US company will become insanely wealthy further extracting every last dollar from every person on Earth.
So your comment betrays a common fear that China will displace us as a global hegemonic, imperialist power despite there being zero evidence that China behaves in that fashion. American propaganda runs deep and the projection is strong, so this will immediately cause some to say "but Tibet" or "but Taiwan" without really knowing anything about any of those situations.
As just one example, the One China policy is the official policy of the US, the EU and almost every nation on Earth. "They might invade," I preemptively hear. They won't, partly because they can't, but really because they don't need to. If the world already has the One China policy, why do anything? And I said they can't because they really can't: they don't have that military capability. If you think they do, you don't know anything about war. Crossing 100 miles of ocean to invade an island with an army of over 500,000 is simply not possible.
Let me put it this way: the 17 or so miles of the English Channel stopped the German war machine, despite its millions of soldiers.
Anyway, back to the point: this whole argument of "what if China does military AI?" is (IMHO) projection. If anything, China has shown that they won't allow a US tech company to control and gatekeep AI (e.g. by releasing DeepSeek). And if China gets AI, they're more than likely to use it to further raise people out of poverty and automate away more menial jobs without making those displaced workers homeless.
> The US sees any other global power as a potential hegemonic, imperialist power that will dominate and exploit everyone around them because, well, that's what we do.
In the Cold War, this was the correct approach; the USSR was that.
> And if China gets AI, they're more than likely to use it to further raise people out of poverty and automate away more menial jobs without making those displaced workers homeless.
Your comment is very optimistic. But the quoted part reminded me of something I heard (again) about China using slave labor in their lithium mines.
It was a meta point. Sorry if I gave you the impression that I was weighing in on the particulars of jmyeet's essay. Rather, it was a high-level point that if you know a ton of little facts but you're only seeing half of the story, then you need to improve and broaden out your intake.
I would have the same opinion of a poster who was so one-sidedly pro-America and anti-China.
And maybe you can read a book about adding to the conversation instead of navel gazing, oh superior intelligent one who has read so many books but can't add a comment or reference a book to point to a concept that could help add to the shared pool of meaning.
The good books, unlike the good podcasts, can rarely be reduced to a single forum comment. You don't read them to cite them as a zinger in an online back-and-forth. You read lots of them, and you cross-reference them with the world around you, to slowly build up a view of the world that's irreducibly complex. You read them to escape yourself and your times -- the exact opposite of "navel gazing", in a sense.
Most books add to "the shared pool [of] meaning", as you say. Pick any one; I didn't have a specific one in mind. The commenter to whom I was responding is in a state where pretty much any well-written book about history would help them out a lot. Something written before 1980 might be especially illuminating.
It might take many books, if they want their comprehension of history to actually be "hardcore".
You seem to be laboring under the naive belief that mainland China is a rational actor which will refrain from attacking Taiwan over fear of heavy losses and possible defeat. You might have been correct at some point, but that situation no longer obtains. Xi Jinping has successfully purged all potential rivals and personally taken over centralized control of all important decisions. We have no visibility into his thinking, so we have to assume the worst. If he orders the PLA to go then they'll go, regardless of consequences. Part of preparing for the eventuality involves building more effective autonomous weapons. There is no realistic alternative.
So I follow a number of China scholars and experts and I've yet to see any consensus about what these military purges actually mean.
It could be about corruption. You see this in the Russian military where paid-for tanks didn't exist because the generals had pocketed the money. It could be to have an expansionist policy. It could well be to not have an expansionist policy. The point is that nobody really knows yet.
But the string I really wanted to pull at was this idea that China isn't a "rational actor". It's lazy and really a thought-terminating cliche. It's certainly no basis for analysis or policy-making. It's kind of the final boss of justification. "Putin/Saddam/Xi/Castro/Maduro is crazy". That really just means you don't understand what's going on or want to ignore the facts.
We now have 50+ years (since really the end of the Cultural Revolution) of China acting in a very rational, very intentional and very long-term way. Xi's own history here is pretty interesting. He went from privileged child (his father was one of Mao's lieutenants), to being banished, to working his way up through the party's ranks over decades.
It's a mistake (IMHO) to view Xi as a singular actor, let alone as an irrational autocrat. While the PRC and the CCP might be relatively new, the systems and political structures can probably be traced back thousands of years. I'm thinking particularly of the bureaucratic reforms of the Qin Dynasty some ~2300 years ago.
What cannot be ignored is that a billion Chinese have seen a massive improvement in their living conditions during their lifetimes. Almost all of the people pulled out of extreme poverty in the 20th century were because of China (~800M). So although China is authoritarian, the government is extremely popular because of that increase in living conditions. It's something that we in the West have a hard time fathoming because our living conditions have been in decline since at least the 1970s.
Interesting but irrelevant. Hope is not a strategy. In intelligence analysis you have to look at capabilities and intents. We have no clear understanding of Xi's true intents, so for national security purposes we have to assume they're negative.
This is not answering the question... and HN ain't US-only.
You can say the same for any other country... What if Japanese employees refuse, but American ones want it anyway? What if Chinese employees refuse, but Russian employees want it anyway?
The implications are still the same: society, culture, jurisdiction, national interest, and company interest don't share the same boundaries and don't align on their priorities.
No, I don't know that at all. The differences so far are only incremental. There is the potential for another revolution in military affairs due to autonomous systems but so far it hasn't actually arrived.
Is there any reason to think that autonomous weapons are a critical strategic capability? It's hard to see what an unpiloted drone can do that a remotely piloted drone can't, other than perhaps human rights violations.
The simple version: Weapons systems are quickly advancing to the point where many of them can navigate and operate independent of human control. The obvious question here is at which point do we give these platforms release authority for lethal weapons. It becomes impractical to require (or even imagine, really) there to be a human "pilot" operating every single drone when you have hundreds or thousands of them operating in theater. That's really what this is about.
Think of it this way: mines installed in the seabed in wars past were "dumb", in that a passing ship had to happen into them. Imagine systems deployed underwater that were mobile, contained multiple torpedoes, and could strike warships with little to no warning given their small acoustic signature. It's the same principle as a mine (you leave it in one spot and hope an enemy ship comes by), but the capabilities are far more advanced. If the system is not at least semi-autonomous, then it might as well be a dumb mine again.
Remotely piloted drones can't operate at long ranges in a conflict against a near-peer adversary such as China. All of the high-bandwidth communications links will be degraded by a combination of jamming, cyber attacks, and anti-satellite weapons. Remote piloting will only be reliable using fiber optic cables (very short range) or direct line-of-sight transmission. So hardly practical in the Pacific theater of operations.
In an existential conflict no one cares about human rights. That's something for the winners to worry about after the shooting stops.
Purely anecdotal, but my friend's dad was a professor at a well-respected university in California doing cancer research and recently moved to China, even though he didn't want to, because the money was too much for him to pass up.
Admittedly I haven't used C# in a few years, but to my knowledge it is much more ergonomic than Java, and personally it's my preferred language. The only thing stopping me from using it more is that it has a much smaller community than Java/Python etc. Wondering what you think is missing.
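For what it's worth, here's a minimal sketch of the kind of ergonomics I mean (the Employee type and the data are made up for illustration): records, target-typed new, and LINQ cut a lot of the ceremony you'd write in Java.

    // Records give you a constructor, value equality, and ToString() in one line.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public record Employee(string Name, string Team, decimal Salary);

    public static class Demo
    {
        public static void Main()
        {
            var employees = new List<Employee>
            {
                new("Ada", "Platform", 120_000m),
                new("Lin", "Platform", 110_000m),
                new("Sam", "Mobile", 105_000m),
            };

            // LINQ: group, average, and sort with no explicit loops.
            var avgByTeam = employees
                .GroupBy(e => e.Team)
                .Select(g => new { Team = g.Key, Avg = g.Average(e => e.Salary) })
                .OrderByDescending(x => x.Avg);

            foreach (var row in avgByTeam)
                Console.WriteLine($"{row.Team}: {row.Avg:N0}");
        }
    }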
I worked for a publicly traded corporate elearning company that was written this way. Mainly sprocs with a light mapping framework. I agree this is better as long as you keep the sprocs for accessing data and not for implementing application logic.
ORMs are way more trouble than they're worth: it's almost easier to write the actual SQL and just map the resulting rows yourself.
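To make that concrete, here's a minimal sketch of the sproc-plus-light-mapping pattern (the sproc name, columns, and User type are made up, and it assumes the Microsoft.Data.SqlClient package):

    using System.Collections.Generic;
    using System.Data;
    using Microsoft.Data.SqlClient;

    public record User(int Id, string Email);

    public static class UserQueries
    {
        // Calls a stored procedure and maps rows by hand; the entire
        // "light mapping framework" is the while loop below.
        public static List<User> GetActiveUsers(string connectionString)
        {
            var users = new List<User>();
            using var conn = new SqlConnection(connectionString);
            using var cmd = new SqlCommand("dbo.GetActiveUsers", conn)
            {
                CommandType = CommandType.StoredProcedure
            };
            conn.Open();
            using var reader = cmd.ExecuteReader();
            while (reader.Read())
            {
                users.Add(new User(reader.GetInt32(0), reader.GetString(1)));
            }
            return users;
        }
    }

Swap the sproc for inline SQL and it's the same shape; either way, the mapping stays visible and debuggable instead of hiding inside an ORM.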
Is it true that it's bad for learning new skills? My gut tells me it's useful as long as I don't use it to cheat the learning process and I mainly use it for things like follow up questions.
It is; it can be an enormous learning accelerator for new skills, for both adults and genuinely curious kids. The gap between low and high performers will explode. I can tell you that if I'd had LLMs, I would've finished schooling at least 25% quicker while learning much more. When I say this on HN, some are quick to point out the fallibility of LLMs, ignoring that the huge majority of human teachers are many times more fallible. Now, this is a privileged place where many have been taught by what is indeed the global top 0.1% of teachers and professors, so it makes sense that people respond this way. Another source of these responses is simply fear.
In e.g. the US, it's a huge net negative because kids aren't properly taught these values and the required discipline. So the overwhelming majority does use it to cheat the learning process.
I can't tell you if this is the same inside e.g. China. I'm fairly sure it's not nearly as bad though as kids there derive much less benefit from cheating on homework/the learning process, as they're more singularly judged on standardized tests where AI is not available.
I don't get this line of thinking. Never in my life have I heard the reasoning "replacing effort is the problem" when talking about children who are able to afford 24/7 brilliant private tutors. Having access to that has always been seen as an enormous privilege.
Having an actual human who is a "brilliant private tutor" is an enormous privilege. A chatbot is not a brilliant private tutor. It is a private tutor, yes, but if it were human it would be guilty of malpractice. It hands out answers but not questions. A tutor's job is to cause the child to learn, to be able to answer similar questions. A standard chatbot's job is to give the child the answer, thus removing the need to learn. Learning can still happen, but only if the child forces it themselves.
That's not to say that a chatbot couldn't emulate a tutor. I don't know how successful it would be, but it seems like a promising idea. In actual practice, that is not how students are using them today. (And I'd bet that if you did have a tutor chatbot, that most students would learn much more about jailbreaking them to divulge answers than they would about the subject matter.)
As for this idea that replacing effort not being a problem, I suggest you do some research because that is everywhere. Talk to a teacher. Or a psychologist, where they call it "depth of processing" (which is a primary determinant of how much of something is incorporated, alongside frequency of exposure). Or just go to a gym and see how many people are getting stronger by paying 24/7 brilliant private weightlifters to do the lifting for them.
Regarding your concerns about tutor emulation, your argument seems to be that students use chatbots as a way to cheat rather than as a tutor.
My pushback is that it's very easy to tell a chatbot to give you hints that lead to the answer, and to deepen understanding by asking follow-up questions, if that's what you want. Cheating vs. putting in the work has always been a choice students face, though, and I don't think AI is going to change the number of students making each choice (or if it does, it won't be by a huge percentage). The gap in skills between the groups will grow, but there will still be a group of people who became skilled because they valued education and a group that cheated and didn't learn anything.
> A standard chatbot's job is to give the child the answer, thus removing the need to learn.
An LLM's job is not to give the child the answer (implying "the answer to some homework/exam question"), it's to answer the question that was asked. A huge difference. If you ask it to ask a question, it will do so. Over the next 24 hours as of today, December 5th 2025, hundreds of thousands of people will write a prompt that includes exactly that - "ask me questions".
> Learning can still happen, but only if the child forces it themselves.
This is literally what my original comment said, although "forcing" is a purely negative framing; rather, "learning can still happen, if the child wants it to". See this:
>In e.g. the US, it's a huge net negative because kids aren't properly taught these values and the required discipline. So the overwhelming majority does use it to cheat the learning process.
I never claimed that replacing effort isn't a problem either, just that such a downside was never brought up in the context of access to a brilliant tutor, yet it suddenly becomes an impossible-to-overcome issue when it comes to LLMs.
I learnt the most from bad teachers#, but only when motivated. I was forced to go away and really understand things rather than get a sufficient understanding from the teacher. I had to put much more effort in. Teachers don't replace effort, and I see no reason LLMs will change that. What they do, though, is reduce the time it takes to find the relevant content, but I expect at some poorly defined cost.
# The truly good teachers were primarily motivation agents, providing enough content, but doing so in a way that meant I fully engaged.
I think what it comes down to, and where many people get confused, is separating the technology itself from how we use it. The technology itself is incredible for learning new skills, but at the same time it incentivizes people not to learn. Just because you have an LLM doesn't mean you can skip the hard parts of doing textbook exercises and thinking hard about what you are learning. It's a bit similar to passively watching YouTube videos. You'd think that having all these amazing university lectures available on YouTube makes people learn much faster, but in reality it makes people lazy, because they believe they can passively sit there, watch a video, do nothing else, and expect that to replace a classroom education. That's not how humans learn. But it's not because YouTube videos or LLMs are bad learning tools; it's because people use them as a mental shortcut where they shouldn't.
I fully agree, but to be fair these chatbots hack our reward systems. They present a cost/benefit ratio where for much less effort than doing it ourselves we get a much better result than doing it ourselves (assuming this is a skill not yet learned). I think the analogy to calculators is a good one if you're careful with what you're considering: calculators did indeed make people worse at mental math, yet mental math can indeed be replaced with calculators for most people with no great loss. Chatbots are indeed making people worse at mental... well, everything. Thinking in general. I do not believe that thinking can be replaced with AI for most people with no great loss.
I found it useful for learning to write prose. There's nothing quite like instantaneous feedback when learning. The downside was that I hit the limit of the LLM's capabilities really quickly. They're just not that good at writing prose (overly flowery and often nonsensical).
LLMs were great for getting started though. If you've never tried writing before, then learning a few patterns goes a long way. ("He verbed, verbing a noun.")
Agree, but at the same time, talking about leaving the rat race and glorifying the simple life is old hat for anyone over the age of 25. It gets annoying reading trite advice written by someone who sees it as profound insight.