
I'm also experimenting with it more and more. Now I'm trying to create a 2D side-scrolling shooter with it, running in the browser. When the project was relatively small, it did a good job. As the codebase and the docs/ files I'm using get larger, it starts hallucinating, especially when the context gets to about 50% usage (Codex w/ gpt5.5). As in, it'll literally forget to update parts of the code.

E.g., I changed the player velocity to '200' and the bullet velocity to '300', and it only updated the bullet velocity. It then told me the player was already 'at the correct value' even though it was still set to 150. Things like that... :)
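Concretely, the kind of change I mean. A tiny sketch in TypeScript; the constant names are illustrative, not my actual code:

    // Hypothetical game config. The ask: bump both velocities.
    export const PLAYER_VELOCITY = 200; // was 150, and Codex left it at 150
    export const BULLET_VELOCITY = 300; // Codex updated only this one

Two one-line edits. It's only once the context fills up that one of them silently gets dropped.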


For me, unless there is a concrete way of proving the work is correct, you can't rely on AI coding. tsz has super strict tests around correctness, performance, and architectural boundaries.

If I understood you correctly, I think I'm less extreme than that. Most code written by humans is also not provably correct. But I'm assuming you mean provably correct like Lean: https://lean-lang.org/, and not just "passes tests".

If you mean 'passes tests', that can be tackled by AI. Although having AI write its own tests and then implement its own code is definitely not a foolproof strategy.


More or less. The tsz solver is pure enough (it doesn't know about the AST) that it might be possible to formally validate it. But in my case I'm lucky to have tsc as a baseline: anything that produces different output than tsc is a bug.
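That makes the test harness conceptually just a diff loop against the oracle. A minimal sketch (the tszc binary name and the fixture layout are assumptions, not the real setup):

    // diff-against-tsc.ts: differential testing with tsc as the oracle.
    // Any fixture where the two compilers disagree is, by definition, a bug.
    import { spawnSync } from "node:child_process";
    import { readdirSync } from "node:fs";
    import { join } from "node:path";

    // Run a compiler on one file and collect its diagnostics as plain text.
    function diagnostics(compiler: string, file: string): string {
      const r = spawnSync(compiler, ["--noEmit", "--pretty", "false", file], {
        encoding: "utf8",
      });
      return (r.stdout + r.stderr).trim();
    }

    let failures = 0;
    for (const name of readdirSync("tests/fixtures")) {
      if (!name.endsWith(".ts")) continue;
      const file = join("tests/fixtures", name);
      const expected = diagnostics("tsc", file); // the baseline
      const actual = diagnostics("tszc", file); // the implementation under test
      if (expected !== actual) {
        failures++;
        console.error(`MISMATCH ${name}\n--- tsc ---\n${expected}\n--- tszc ---\n${actual}`);
      }
    }
    process.exit(failures ? 1 : 0);

The nice property is that the oracle is free: tsc already defines the expected behaviour for any input you can throw at it.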

At this point, "GH is down" posts are competing with "Newest LLM Hype" for the HN front-page week over week.

For my personal project, I've been considering moving everything over to Codeberg. Stability of GH being one reason, but I also like the idea of an alternative that is not strictly tied to a big tech company.


And yet, you haven’t. That’s the problem with dominant platforms: Slight inconveniences + inertia are enough to ensure no-one moves (even without monopolistic abuse – and I’m talking about Microsoft here).

"Claude Code is basically magic" spam hit the hardest. Temporarily side lined by GH status post. Maybe Claude Advertising is in a lull right now.

Your name summarizes all the GitHub uptime crap.

_Usually_ the blast radius isn't "GH is down globally across all functionality". So it can work for you while still being either down for other regions, or at least degraded.

Pushing commits over SSH is often the most "reliable" thing, though you can get some fun situations where a commit is pushed and runners never ran, causing downstream FUN eventually.
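If you want to catch that failure mode, you can ask the Actions API whether any runs exist for the pushed commit. A sketch (OWNER/REPO are placeholders, and it assumes a GITHUB_TOKEN env var):

    // check-runs.mts: after `git push`, verify GitHub actually created
    // workflow runs for the commit SHA passed as the first argument.
    const sha = process.argv[2];
    const res = await fetch(
      `https://api.github.com/repos/OWNER/REPO/actions/runs?head_sha=${sha}`,
      { headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` } },
    );
    const { total_count } = (await res.json()) as { total_count: number };
    if (total_count === 0) {
      console.warn(`Commit ${sha} was pushed, but no workflow runs exist.`);
    }

Wiring something like that into a post-push step at least turns "runners never ran" into a loud failure instead of a silent one.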

I’ve seen this behaviour but IMO it’s a fatal design flaw of Actions. It shouldn’t be possible.

That’s just a US quirk. In many countries you can buy alcohol when you’re 18+.

It’s odd that you can do everything but drink. Like you can go to work, drive home to your wife and kids, but can’t have a beer lol.


> That’s just a US quirk

My understanding is it’s because of car culture. Drunk-driving deaths drove up the drinking age [1].

[1] https://www.britannica.com/topic/Why-Is-the-US-Drinking-Age-...


They could fix this by lowering the drinking age, not by raising it. If you have two years of experience being drunk and then start driving, you know exactly what you're in for.

If you’ve been confidently driving for years and then suddenly pair that with alcohol… complete opposite effect.


Also, in theory it would increase the illegal transfer of alcohol to underage peers. But high school kids who want beer have no major problems getting it already.

It’s up to each state, but the federal government threatens funding if they get out of line.


Seems like they should have raised the driving age. Also, having taken driving exams in both NA and the EU, the NA one is laughably easy. So it's not a surprise that drivers are unprepared, especially young ones.

It’s hard in many transit-desert communities to get around without a car; heck, where I lived the DL age was 15.

Paywalled so I can't read the article.

However, is this exclusive to young people? I'm a millennial (early 90s) and I share their sentiment. I might not share it for the same reason, though. Personally, I'm concerned about what AI usage would do to my cognitive ability, and as such I try to limit my use. I can't avoid using it at work (we're being tracked on "AI Adoption") and it does genuinely speed up some of my tasks. And I do play around with AI coding tools, mostly because I think I _should_ know them in this day and age.

But apart from that, I'm not using it. I'm using DDG searches rather than asking ChatGPT for solutions, I still go around reading websites and papers instead of AI summaries, and I don't outsource my writing to it. (I.e., I write my own emails, my own blogs, my own poorly worded HN comments, etc.)



I don't think it's exclusive to young people, no. I'm a couple years older than you. All of my friends also hate it and make fun of it. Like some of the people in the article, I'm also looking to get out of the tech industry and find something else to do other than be forced to talk to shitty robots. If they want to fire me for not using their crappy tech enough, fine. I don't care anymore.

Fellow millennial here. I rarely use AI, for similar reasons. Not only am I worried about cognitive decline, I also have plenty of ethical concerns and I don't want to become even more dependent on US megacorps. Fortunately, I'm writing my own software and nobody can tell me which tools to use :)


They have better PR than OpenAI but they are not a more ethical company. They do a bunch of shady stuff and are just as much involved in military applications. Cal Newport’s recent podcast had a good discussion about this: https://youtu.be/BRr3pAPsQAk?si=jaRJYJ_XQE7VpxPN

Pet peeve of mine is people saying "hey, this thing is totally shady/false, I've got proof right here <links to hour-long podcast>".

It happens surprisingly often.


I understand not everyone has the interest or time to sit through an hour-long podcast. But last I checked this is HN, and I think that podcast is right up the alley for many of us here. Cal Newport is not exactly a 'random podcaster'.

Next time I'll summarize some of the talking points in my comment, but I didn't want to poorly regurgitate the arguments when they were readily available in the video lol.

Although I see another poster has commented the key takeaways :)


What I want is for people to give evidence that can be checked within a few minutes at most.

But claiming you have proof and expecting me to a) just believe you or b) invest an hour of my time to dispute or agree with you... That's just a selfish way of having a conversation.

If you gave me some timestamps in that hour, that would be fine. Or if you gave a much shorter and easier to consume piece of evidence and then said that it's also discussed in the podcast if someone wants to invest more time into this, also fine.


Sometimes people aren't looking for an argument, they're just sharing something. Discussions don't have to strictly be about right vs wrong, unless you're on reddit.

Podcasts are still short form if we're talking about something as complex as "is this company ethical". Issues involving human players and disagreements over philosophy/ethics take a huge amount of information to understand at anything beyond a vibes level.

You can understand almost any controversial issue better than almost everyone commenting on it by reading 1-3 books on the subject. It's becoming more of an x-factor as people get conditioned to expect everything to fit in a headline, chat response, or 10 second social media video.


Podcasts (and video) are very low-throughput, low-density information channels. Essays and articles are superior. To demonstrate this, you can just compare the transcript of a typical podcast — even a high-quality, well-researched one — with a typical high-quality, well-researched blog post, essay, or journalistic article.

It's odd that people don't understand this. It's not about TikTok brain. I would rather read a book or a dense article than listen to people meander on a podcast and pad their time.

Sure, but the other angle is time investment. I only listen to podcasts sporadically but I can definitely see why people like it. Not as a substitute to reading but _in addition_ to reading. Listening to a podcast can be done while driving, or cooking, etc. It beats sitting in traffic and just listening to music (to some people).

I don't think multitasking and giving some of your attention to two people talking in conversation format is a great way to get information. There's a strong argument it's a recipe for getting misinformation, because you aren't verifying a single thing they say, and you may be going off the vibes of the host. An uncritical "Wow that's crazy" from podcast hosts is really all it takes for people to believe anything.

And some people are so socially isolated and exposed to so much toxicity online that just listening to two people talk like friends severely lowers their guard to misinformation.


There's a world of difference between a tweet and a podcast, which is designed NOT to deliver information efficiently.

Cal Newport and tech commentator Ed Zitron discussed this disparity between Anthropic's public image and their actual practices. Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has been deeply integrated with the US military, having been installed with classified access since June 2024. The podcast highlights that Claude has been actively utilized during the "Venezuela incursion" and the ongoing "war in Iran".

Despite this active involvement, CEO Dario Amodei released a statement attempting to publicly distance the company from the Department of Defense by declaring they would not allow their technology to be used for "mass domestic surveillance" or "fully autonomous weapons". Zitron categorizes this as a highly calculated PR maneuver, pointing out that LLMs are fundamentally incapable of controlling autonomous weapons anyway. The stunt successfully manufactured a wave of positive press—with celebrities and commentators praising Anthropic as an ethical objector—right when the company was trying to secure an IPO or a massive ~$100 billion valuation, all while they quietly remained an active part of the war effort.

Beyond their military contracts, the podcast details several highly questionable business practices Anthropic has used to artificially inflate their numbers:

1. During a lawsuit regarding their military contract, Anthropic's CFO filed a sworn affidavit revealing the company had only made $5 billion in its entire lifetime. This directly contradicted leaked media reports suggesting they made $4.5 billion in 2025 alone. It revealed that the company's publicly perceived run rate was heavily exaggerated through the "shady revenue math" popular in Silicon Valley, a major discrepancy that most financial journalists ignored.

2. When the open-source agent library OpenClaw first launched, Anthropic deliberately allowed users to connect a $200/month "max account" and essentially burn through thousands of dollars of API compute at Anthropic's expense. Zitron points out that Anthropic knowingly let this happen to temporarily boost their usage metrics and hype while they raised a $30 billion funding round. Just weeks after securing the funding, they abruptly cut off access for these users, a move Zitron cites as proof of them being an "unethical company".

Furthermore, the company has faced criticism for gaslighting users, maintaining poor service availability, and silently degrading model performance while rug-pulling users on rate limits. As Zitron summarizes, it is highly unlikely that either Anthropic or OpenAI actually care about these ethical boundaries beyond how they can be weaponized for better PR and higher valuations.


In my experience Anthropic positions itself as the "safe" AI company more than the "ethical" AI company. They're related but not the same thing.

The only way you could be surprised that Anthropic wants to be in bed with the US military is if you just never listened to anything Dario has said publicly. He's very open about wanting the US government and the US military to use Claude to win against China. That's why Claude was in the Pentagon before all the others in the first place.

>LLMs are fundamentally incapable of controlling autonomous weapons anyway

This is obviously false, though that's not surprising from what I've seen from Zitron. Claude is probably too slow and clunky to go full mech warrior for the time being, but it would be trivial to hook Claude up to an autonomous drone with missile strike capabilities. Those things are mostly autonomous already, they just require a human to tell them where to shoot. Claude can easily do that with a simple API.

The rest is valid. I wouldn't describe Anthropic as an ethical company. On the contrary, if you believe that you losing the AI race is an existential threat to humanity, then it's easy to justify all sorts of unethical behavior for the greater good.


There's some validity to these criticisms, but it would be a lot more credible to cite someone whose job isn't "loudly promote any claim that sounds negative for AI, regardless of how well-founded it is."

> Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has taken 10s of billions from investors just like everyone else has. There is no such thing as "ethics" or "morality" when the scale of obligation is that large.

So yes, this is obvious despite whatever image they try to cultivate.


> There is no such thing as "ethics" or "morality" when the scale of obligation is that large.

At that scale, ethics and morality should become more important, not be discarded.


Alternatively, finance at that scale ought not be permitted to exist, because of the moral hazard it represents.

You will find that morals and ethics at that scale are too expensive to maintain.

Then that scale should not be allowed to exist and we should fight aggressively to prevent it

Anthropic is a public benefit corporation which limits liability to shareholders.

Just because they screwed up their billing doesn't mean every ethical commitment they've ever made is bunk.


> Anthropic is a public benefit corporation which limits liability to shareholders

What does this have to do with their ethics? This seems irrelevant unless your understanding of ethics ends at fiduciary duty to investors.


It's the opposite. Parent comment was saying they must be unethical due to their duty to investors. As a public benefit corporation, they can take ethics into account even if it harms shareholders. The extent to which they do so is still up to them, as I understand it, but they aren't forced to be evil as parent was suggesting.

Ah, got you.

Ed Zitron has absolutely zero credibility, meaning these claims have zero credibility.

I think all the AI companies want to hook up with the US military, as it's the only way they'll cover their debt to investors.

"You must destroy the economy to keep us afloat, because National Security!" has been a clear goal of the LLM hucksters for a long time.

"LLMS are fundamentally incapable of controlling autonomous weapons" -- This was Anthropic's stance too, right?

"Quietly remained an active part of the war effort" - anthropic was totally transparent about it, but yeah not great.

"Leaks were wrong" - and that's Anthropic's fault?

OpenAI agreed to assist the DoD with zero boundaries and then lied about it. Can we at least give them credit for not doing that? If we just throw up our hands and say "they're all awful, whatever" then the result is reduced pressure on them to be better. Like it or not, I do not think AI is going away and as far as I can tell, despite billing problems, Anthropic's still the least bad frontier lab.


Probably some Slopcoded bot which posts fake comments to drive people to their content.

After all, if you’re paying hundreds of millions to buy these shitty podcasts, you might as well host some bots.


Account is from 2016 with 6k karma? :doubt:

Why assume people would not buy and sell Hacker News accounts?

Seems unlikely. I had a hell of a time finding someone to sell me this one.

Did you even check the link? It's a podcast from Cal Newport, quite a well-known figure (at least in software engineering / compsci circles). So it's not exactly a random shitty podcast. And it's also (obviously) not my content.

I hadn't heard of him until he got famous last month for slagging off the AI industry.

Agreed, they are better at the PR game. Some developers are grasping at straws, looking for ways to not feel guilty and to justify that their usage of LLMs is from the "good guys". Anthropic is currently filling this role, but eventually people will see behind the smoke and mirrors and realize it's not all that different from OpenAI or some of the other AI labs, who are willing to sacrifice any amount of ethics if it means they get the right paycheck or get to stroke their ego that they were on the team that built digital god.

Compared to other countries I've lived in, Belgium doesn't do too bad of a job in promoting 'green energy'. Although I've not lived there for some years, they used to subsidize things like solar panels on roofs (at least when my parents installed them 20-ish years ago). And there are 'green energy' companies as far as I'm aware, so you don't have to stick with the larger energy providers.

That said, my information is outdated.


Belgian greens are remarkably less crazy than German "greens".

Even someone like De Sutter didn't come across as crazy in the European Parliament -- but the German ones, my gods!

https://en.wikipedia.org/wiki/Petra_De_Sutter


Students score lower on standardized tests in the 2020s than they did in the 1990s. So your stance feels misguided. Although I don’t think Google and calculators are the main culprits, I do think it’s due to the larger technology/internet landscape.

> Although I don’t think Google and calculators are the main culprits, I do think it’s due to the larger technology/internet landscape.

That's extremely speculative, especially given there was a major event in 2020 which massively disrupted education worldwide.


It has been declining since the 1990s, not just since the 2020s.

Yeah what a strange “guard” to put in place. No clue why they’d do it this way.

I first thought it’d be a “I’m 18+ pop-up” lol.


It's probably underpinned by the same sort of "we're legally/contractually obligated to ask but we really don't care" type situation.

It'll be because of ads. You can only advertise prescription drugs to medical professionals.

Generally medical websites do it in the UK, as a warning to sick people who Google their disease looking for advice and land on research papers or scary information for specialists

My first thought was a conversation with a med student friend about the tension between medical research transparency and public policy. For example, it's good to get vaccinated, but some small fraction of people do have lasting side effects, and vaccine skeptics blow it out of proportion to support their views. So, medical professionals may be tempted to downplay vaccine injury to support public vaccination. Of course, doing so just erodes trust further if people notice. Anyways, perhaps this website is afraid people will hurt themselves with ambiguous information.

If you’re making this change now, I wonder how the technical leadership evaluated GitHub and its competitors... and then still landed on GH.

What made it better than e.g. GitLab?

