DebtDeflation's comments | Hacker News

Honestly, just learn it like anything else. Understand the basic components of an internal combustion engine (block, crankshaft, rods, pistons, camshafts, cylinder heads, valves, intake and exhaust manifolds), the 4 cycles the engine goes through (intake, compression, power, and exhaust), how fuel delivery and ignition systems work. And then there are tons of resources on tuning and you can get the software for a laptop.

It isn't that simple. I've been learning to work on my own car over the last few years. I'm not even doing anything crazy, just fixing up an older vehicle and modernising some parts of it (mainly the interior).

I had to fix the wiper system. You would think that with something like the wiper system it wouldn't matter much whether the parts are aftermarket or not. I was very wrong: parts that look almost identical may not work properly, due to differences in tolerances.

There are also different revisions of particular parts, and revisions become obsolete. You can lose an afternoon on the internet just working out which one you need.

Then there are the tools. I've spent a small fortune on them. I have 3 torque wrenches, 3 sets of sockets, 3 sets of spanners, and loads of weird specialist tools like special pliers. There are many jobs I can't do myself because they need specialist knowledge to do properly, e.g. gearboxes.

You have to be prepared to spend potentially years on it and a huge amount of money, even on relatively simple projects.

There is a reason a lot of guys get into old 4x4 pickups and do those up: they are a known quantity and parts are readily available.


Then there is the building of the engine and understanding clearances for specific applications and RPM ranges, plus valvetrain harmonics once things start getting to crazy high revs like 9,500.

Still very learnable, but outside the scope of standard engine rebuild stuff.


>Honestly, just learn it like anything else.

If you're starting from zero, that's probably a decade-long commitment before you're able to start executing a project like this. There's a YouTube series, 'Project Binky', where a pair of professional car tuners rebuild a Mini Cooper and stuff a Celica engine in it. They already have all the skills, own a shop and all the tools, and it still took them years.


Similarly, there's a YouTube channel called Mighty Car Mods that does builds as well, and even the ones they "rush" can take months and thousands of work hours from people across multiple disciplines (body repair, paint, electrical work, tuning, etc.). Not cheap at all.

A decade would be very quick. The amount of specialist knowledge that went into every part of this project is crazy. After a decade's worth of projects I doubt I'd be confident enough to tackle the steering and suspension design on something like this, let alone all the aero.

I've been working on cars for 20 years, I weld, I have done CAD/CAM/CAE stuff, rebuilt and modified engines, done custom suspension work... and there are still so many aspects of a project like this that are just completely unknown to me, to the point that I wouldn't even know where to start. Many aspects of this build are not things you can really learn or research on your own.


More like financial engineering. When Wall Street demands ever increasing EPS growth, but revenues are flat or declining and you don't have cash to buy back shares, cost cutting is the only option. Unfortunately, labor costs are the bulk of most companies' costs, so that means layoffs. It would be nice if companies had a financial horizon that extended beyond the current quarter but that seems more and more like a pipe dream.

That's been true for the last year or two, but it feels like we're at an inflection point. All of the announcements from OpenAI for the last couple of months have been product focused - Instant Checkout, AgentKit, etc. Anthropic seems 100% focused on Claude Code. We're not hearing as much about AGI/Superintelligence (thank goodness) as we were earlier this year, in fact the big labs aren't even talking much about their next model releases. The focus has pivoted to building products from existing models (and building massive data centers to support anticipated consumption).

Meta hiring researchers en masse at $100m+ pay packages is fairly new, as of this summer.

I don't know if that's indicative of the market as a whole though. Zuck just seems really gutted they fell behind with Llama 4.


> Meta hiring researchers en masse at $100m+ pay packages is fairly new, as of this summer.

En masse? Wasn't it just a couple of outliers?


en deux

I think it was more than 2 though... en few? En feux.

It was a few researchers, and they have already stopped doing that. In a way, the AI market in H1 2025 was already VERY different from the market in H2 2025.

This summer was a lifetime ago.

"en masse" is a stretch

A lot of them left in the first days on the job. I guess they saw what they were going to work on and peaced out. No one wants to work on AI slop and mental abuse of children on social media.

I don't understand how an intelligent person could accept a job offer from Facebook in 2025 and not understand what company they just agreed to work for.

It’s probably a VC fundraising strategy, “Meta gave me 100s of millions so you should give me more”.

It helps when they hand you comically large sacks with dollar signs on them

I wonder if the 100 million dollar guys got a signing bonus like athletes. Just take 10 mill and chuck up deuces.

Those people are intelligent, they’re just selfish and have no qualms over making money off the repugnant crap they’re doing.

In that case, how come they "left in the first days of the job" because "they saw what they were going to work on and peaced out"?

They may have found that the work they were called upon to perform was worse than what they'd been led to expect. Maybe much worse.

With the amount of money Facebook was offering I could see them having a hard time refusing. If someone offered me 100 million dollars to work on AI I know I would have a hard time refusing.

Stated without a shred of evidence and getting no pushback. Classic for a nonsense claim about big tech company HN doesn't like, lol.



Stated with no more evidence than the figure of $100M of compensation, which originated with Sam Altman on his brother's podcast. But surprisingly everyone seems to be entirely fine with this wild claim and not asking for proof.

> No one wants to work on AI slop and mental abuse of children on social media.

If this was true, we wouldn't have AI slop and mental abuse of children. Since we do, we know your comment is just flat out incorrect.


You must not watch broadcast television (e.g. American football). Anthropic is doing a huge ad blitz, trying to get end customers to use their chatbot.

Anthropic, frankly, needs to in ways the other big names don't.

It gets lost on people in techcentric fields because Claude's at the forefront of things we care about, but Anthropic is basically unknown among the wider populace.

Last I'd looked a few months ago, Anthropic's brand awareness was in the middle single digits; OpenAI/ChatGPT was somewhere around 80% for comparison. MS/Copilot and Gemini were somewhere between the two, but closer to OpenAI than Anthropic.

tl;dr - Anthropic has a lot more to gain from awareness campaigns than the other major model providers do.


Anthropic feels like a one-trick pony, as most users don't need or want Anthropic's products.

However, I speak with a small subset of our most experienced engineers and they all love Claude Sonnet 4.5. Who knows if this lead will last.


Claude is ChatGPT done right. It's just better by any metric.

Of course OpenAI has tons of money and can branch off in all kinds of directions (image, video, an n8n clone, now RAG as a service).

In the end I think they will all be good enough, and both Anthropic's and OpenAI's leads will evaporate.

Google will be left to win because they already have all the customers through GSuite, and OpenAI will be absorbed at a massive loss into Microsoft, which is already selling to all the Azure customers.


Anthropic are mostly selling to, and having the most success with, business customers (incl. selling API access for Claude Code).

This is the reason they haven't bothered to provide an image generator yet - because Chat users are not their focus.


Lately Claude has switched over to ASCII art when doing explanations...

>Anthropic feels like a one trick pony as most users dont need or want anthropic products.

I don't see what the basis for this is that wouldn't be equally true for OpenAI.

Anthropic's edge is that they very arguably have some of the best technology available right now, despite operating at a fraction of the scale of their direct competitors. They have to start building mindshare and market share if they're going to hold that position, though, which is the point of advertising.


If Claude Code is Anthropic’s main focus why are they not responding to some of the most commented issues on their GitHub? https://github.com/anthropics/claude-code/issues/3648 has people begging for feedback and saying they’re moving to OpenAI, has been open since July and there are similar issues with 100+ comments.

Hey, Boris from the Claude Code team here. We try hard to read through every issue, and respond to as many issues as possible. The challenge is we have hundreds of new issues each day, and even after Claude dedupes and triages them, practically we can’t get to all of them immediately.

The specific issue you linked is related to the way Ink works, and the way terminals use ANSI escape codes to control rendering. When building a terminal app there is a tradeoff between (1) visual consistency between what is rendered in the viewport and scrollback, and (2) scrolling and flickering which are sometimes negligible and sometimes a really bad experience. We are actively working on rewriting our rendering code to pick a better point along this tradeoff curve, which will mean better rendering soon. In the meantime, a simple workaround that tends to help is to make the terminal taller.
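
To make that tradeoff concrete, here is a toy TypeScript/Node sketch of an in-place redraw loop (illustrative only, not Ink's or Claude Code's actual renderer): the last N lines are repainted every frame with ANSI cursor moves, and anything that has scrolled past the top of the viewport becomes scrollback the app can no longer repaint.

    // Toy in-place renderer. Each frame we move the cursor back up over what
    // we drew last time, clear it, and repaint. Repainting the whole region
    // each frame is what can show up as flicker; NOT repainting it is what
    // leaves stale or garbled lines in scrollback.
    const lines: string[] = [];
    let prevDrawn = 0;

    function draw(newLine: string): void {
      lines.push(newLine);
      const viewportHeight = Math.min(lines.length, (process.stdout.rows ?? 24) - 1);

      if (prevDrawn > 0) {
        // Move up over the previous frame and clear from the cursor to the end of screen.
        process.stdout.write(`\x1b[${prevDrawn}A\x1b[0J`);
      }
      process.stdout.write(lines.slice(-viewportHeight).join("\n") + "\n");
      prevDrawn = viewportHeight;
    }

    let i = 0;
    const timer = setInterval(() => {
      draw(`log line ${++i}`);
      if (i === 60) clearInterval(timer);
    }, 50);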

Please keep the feedback coming!


It’s surprising to hear this get chalked up to “it’s the way our TUI library works”, while e.g. opencode is going to the lowest level and writing their own TUI backend. I get that we can’t expect everyone to reinvent the wheel, but it feels symptomatic of something that folks are willing to chalk their issues up to an unfortunate and unavoidable symptom of a library they use, rather than deeming that unacceptable and going to the lowest level.

CC is one of the best and most innovative pieces of software of the last decade. Anthropic has so much money. No judgment, just curious, do you have someone who’s an expert on terminal rendering on the team? If not, why? If so, why choose a buggy / poorly designed TUI library — or why not fix it upstream?


We started by using Ink, and at this point it’s our own framework due to the number of changes we’ve made to it over the months. Terminal rendering is hard, and it’s less that we haven’t modified the renderer, and more that there is this pretty fundamental tradeoff with terminal rendering that we have been navigating.

Other terminal apps make different tradeoffs: for example Vim virtualizes scrolling, which has tradeoffs like the scroll physics feeling non-native and lines getting fully clipped. Other apps do what Claude Code does but don’t re-render scrollback, which avoids flickering but means the UI is often garbled if you scroll up.


As someone who's used Claude Code daily since the day it was released, I remember the sentiment back then (sooo many months ago) was that the agent CLI coding TUIs were kind of experimental proofs of concept. We have seen them be incredibly effective, and the CC team has continued to add features.

Tech debt isn't something that even experienced large teams are immune to. I'm not a huge TypeScript fan, so their choice to run the app on Node felt to me like a trade-off: development speed, using the experience the team already had, at the expense of long-term growth and performance. I regularly experience pretty intense flickering, rendering issues, high CPU usage, and even crashes, but that doesn't stop me from finding the product incredibly useful.

Developing good software especially in a format that is relatively revolutionary takes time to get right and I'm sure whatever efforts they have internally to push forward a refactor will be worth it. But, just like in any software development, refactors are prone to timeline slips and scope creep. A company having tons of money doesn't change the nature of problem-solving in software development.


> CC is one of the best and most innovative pieces of software of the last decade...

Oh come on! Aider existed before it, and so did many other TUI AI agents. I'd say Rust and Elixir were bigger innovations than CC.


That issue is the fourth most-reacted issue overall, and the third most-reacted among open issues. And the two things above it are feature requests. It seems like you should at the very least have someone pop in to say "working on it" if that's what you're doing, instead of letting it sit there for 4 months?


Thanks for the reply (and for Claude Code!). I've seen improvement on this particular issue already with the last major release, to the extent that it's not a day-to-day issue for me. I realise GitHub issues are not the easiest comms channel, especially with 100s coming in a day, but occasional updates on some of the top 10 most-commented issues could perhaps be manageable and beneficial.

How about giving us the basic UX stuff that all other AI products have? I've been posting this ever since I first tried Claude: Let us:

* Sign in with Apple on the website

* Buy subscriptions from iOS In App Purchases

* Remove our payment info from our account before the inevitable data breach

* Give paying subscribers an easy way to get actual support

As a frequent traveller I'm not sure if some of those features are gated by region, because some people said they can do some of those things, but if that is true, then that still makes the UX worse than the competitors.


It's entirely possible they don't have the ability in house to resolve it. Based on the report this is a user interface issue. It could just be some strange setting they enabled somewhere. But it's also possible it's the result of some dependency 3 or 4 levels removed from their product. Even worse, it could be the result of interactions between multiple dependencies that are only apparent at runtime.

>It's entirely possible they don't have the ability in house to resolve it.

I've started breathing a little easier about the possibility of AI taking all our software engineering jobs after using Anthropic's dev tools.

If the people making the models and tools that are supposed to take all our jobs can't even fix their own issues in a dependable and expedient manner, then we're probably going to be ok for a bit.

This isn't a slight against Anthropic; I love their products and use them extensively. It's more a recognition of the fact that the more difficult aspects of engineering are still quite difficult, and difficult in a way LLMs just don't seem well suited for.


AGI/ASI does not need perfect terminal rendering to crush all humans like bugs.

Seems these users are getting it in VS Code, while I am getting the exact same thing when using Claude Code on a Linux server over SSH from Windows Terminal. At this point their app has to be the only thing in common?

That's certainly an interesting observation. I wonder if they produce one client that has some kind of abstraction layer for the user interface, and that abstraction layer has hidden or obscured this detail?

> some kind of abstraction layer

You mean a terminal interface?


The novelty of LLMs is wearing off, people are beginning to understand them for what they are and what they are capable of, and performance has been plateauing. I think that's why people are starting to worry that the AI bubble is a repeat of the dotcom bubble, which accompanied a similarly revolutionary technology.

Did they even fail? Llama2 was groundbreaking for open source LLMs, it defined the entire space. Llama3 was a major improvement over Llama2. Just because Llama4 was underwhelming, it's silly to say they failed.

Any exponential growth is failing in a market which demands superexponential growth.

> Generative AI in final outputs or productive work undermines the foundation of their future success vis a vis discounting or dismissing IP Law and Rights

It goes beyond just IP law compliance. Creativity is their core competency and competitive differentiator. If you replace that with AI slop, then your product becomes almost indistinguishable from that of everyone else producing AI slop.

IMO, they're striking exactly the right balance: use AI as a creative aid and productivity booster, not as the thing that produces the critical aspects of the final product.


That seems like something that should be handled with a simple takedown request, and those behind archive.is would almost certainly comply. 99.999% of people using archive.is are using it to bypass news article paywalls, nothing more. Which, if we're honest, is the real reason the FBI is going after them.

Personal anecdote, but I almost never use these archive sites to bypass paywalls. I only use them when I want to see how establishment news sites somehow, sometimes, accidentally tell the truth and then, when they get the call, try to purge their original reporting. Again, it might be my personal bias, but in my opinion this is the main reason they are going after them: because these websites let people prove the hypocrisy and the lies.

I remember that when[0] Reuters took down that one story about organized crime, and further DMCA'd the Internet Archive to take down their version, archive.ORG cheerfully did the memory-hole thing—while archive.IS stayed up.

If the (Western) internet were to turn into a monoculture of Western-domiciled big corporations, that kind of censorship would be *effective*. Our systems aren't robust against bad-faith actors attacking the free flow of information. (And the root cause of the planet-spanning censorship cascade in that example was, unambiguously, bad actors: a crime syndicate based in India.)

The fact that the internet is global and freely connects to legal jurisdictions and cultures very different from the West's is to the West's benefit: it creates an escape hatch for things that fall between the cracks of our nascent totalitarian technologies.

[0] https://news.ycombinator.com/item?id=39065981#39065996 ("A Judge in India Prevented Americans from Seeing a Blockbuster Report")


>But if you add a feedback loop where it can use tools, investigate external files or processes, and then autocomplete on the results, you get to see something that is (close to) thinking

It's still just information retrieval. You're just dividing it into internal information (the compressed representation of the training data) and external information (web search, API calls to systems, etc.). There is a lot of hidden knowledge embedded in language, and LLMs do a good job of teasing it out in a way that resembles reasoning/thinking but really isn't.


No, it's more than information retrieval. The LLM is deciding what information needs to be retrieved to make progress and how to retrieve this information. It is making a plan and executing on it. Plan, Do, Check, Act. No human in the loop if it has the required tools and permissions.
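
As a sketch of what that loop looks like in practice (hypothetical callModel/runTool functions, not any particular vendor's SDK):

    // Minimal Plan-Do-Check-Act tool loop. The model decides the next action,
    // we execute it, and the observation is fed back in for the next plan.
    type Step =
      | { kind: "tool"; name: string; args: Record<string, unknown> }
      | { kind: "done"; answer: string };

    async function agentLoop(
      task: string,
      callModel: (history: string[]) => Promise<Step>,
      runTool: (name: string, args: Record<string, unknown>) => Promise<string>,
    ): Promise<string> {
      const history: string[] = [`task: ${task}`];
      for (let i = 0; i < 20; i++) {            // hard cap so it can't loop forever
        const step = await callModel(history);  // Plan: decide the next action
        if (step.kind === "done") return step.answer;
        const result = await runTool(step.name, step.args);     // Do: execute the tool
        history.push(`tool ${step.name} returned: ${result}`);  // Check: observe the result
        // Act: on the next iteration the model revises its plan based on that result
      }
      return "gave up after 20 steps";
    }

The "retrieval" framing undersells the middle of that loop: which tool to call next isn't stored anywhere, it's decided at each step.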


> LLMs do a good job of teasing it out that resembles reasoning/thinking but really isn't.

Given the fact that "thinking" still hasn't been defined rigorously, I don't understand how people are so confident in claiming they don't think.


reasoning might be a better term to discuss as it is more specific?


It too isn't rigorously defined. We're very much at the hand-waving "I know it when I see it" [1] stage for all of these terms.

[1] https://en.wikipedia.org/wiki/I_know_it_when_I_see_it


I can't speak for academic rigor, but it is very clear and specific to my understanding at least. Reasoning, simply put, is the ability to come to a conclusion after analyzing information using a logic-derived deterministic algorithm.

* Humans are not deterministic.

* Humans that make mistakes are still considered to be reasoning.

* Deterministic algorithms have limitations, like Goedel incompleteness, which humans seem able to overcome, so presumably, we expect reasoning to also be able to overcome such challenges.


1) I didn't say we were, but when someone is called reasonable or acting with reason, then that implies deterministic/algorithmic thinking. When we're not deterministic, we're not reasonable.

2) Yes, but to reason doesn't imply being infallible. The deterministic algorithms we follow are usually flawed.

3) I can't speak much to that, but I speculate that if "AI" can do reasoning, it would be a much more complex construct that uses LLMs (among other tools) as tools and variables like we do.


I Google News searched Poolside and found this:

https://techfundingnews.com/nvidia-prepares-up-to-1b-investm...

>Poolside operates across the US and Paris, focusing on coding automation tailored for government and defence clients.

>The company is also working on bold infrastructure expansion. Earlier this month, Poolside partnered with CoreWeave to build one of the largest data centres in the US under an initiative called Project Horizon. Set in West Texas, the facility is slated to reach 2 gigawatts of capacity, which is enough to power about 1.5 million homes.

Searching for Project Horizon led to this:

https://poolside.ai/blog/announcing-project-horizon

So another data center company, but focused on the Defense/Intelligence community. A hardware equivalent to Palantir?


> Poolside partnered with CoreWeave to build one of the largest data centres

Nvidia has a backstop deal with CoreWeave [1]. I am sure this is all above board, but seeing how these giants all have incestuous relationships with each other makes me uneasy about putting money in the markets.

[1]:https://www.reuters.com/business/coreweave-nvidia-sign-63-bi...


> seeing how these giants all have incestuous relationship with each other makes me uneasy about putting money in the markets

If you invest in index funds (instead of picking single stocks), you shouldn’t be too worried IMO.

E.g. something tracking MSCI World/All-World or just the S&P500 (US-only though).

You won’t get 100x homeruns (more like 10-15% avg returns/year), but you will drastically lower your risk of losing your money.


Yeah, the first article I linked says Poolside plans to spend much of the $1B investment from Nvidia on GB300's. It's all just one circular flow at this point with Nvidia giving everyone cash that they agree to use to buy GPUs from them.


Nothing brings joy and optimism like giving $1B to a "coding automation for defence" startup.

If there's one person I don't want to lose their job to a shonky AI agent, it's Stanislav Petrov.


Near as I can tell, the only valid point the author makes is that since mortality rates increase as cancer progresses through stages, and since progression through stages takes time, a 5-year mortality rate is not a great metric; it would be better to also have 10- and 15-year mortality rates to determine the degree to which early detection + treatment actually increases life expectancy.
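
A toy illustration of why the horizon matters (made-up numbers): take the same patients with the same dates of death and only move the date of diagnosis earlier. The 5-year figure "improves" even though nobody lives a day longer, while a 10- or 15-year figure is much harder to flatter this way.

    // Lead-time toy example: earlier diagnosis inflates 5-year survival
    // without changing anyone's lifespan. All numbers are hypothetical.
    const yearsFromOnsetToDeath = [6, 7, 8, 9, 10];

    function fiveYearSurvival(yearsFromDiagnosisToDeath: number[]): number {
      const alive = yearsFromDiagnosisToDeath.filter((y) => y > 5).length;
      return alive / yearsFromDiagnosisToDeath.length;
    }

    // Diagnosed 4 years after onset (symptomatic) vs 1 year after onset (screened).
    const lateDiagnosis = yearsFromOnsetToDeath.map((y) => y - 4);  // [2, 3, 4, 5, 6]
    const earlyDiagnosis = yearsFromOnsetToDeath.map((y) => y - 1); // [5, 6, 7, 8, 9]

    console.log(fiveYearSurvival(lateDiagnosis));  // 0.2
    console.log(fiveYearSurvival(earlyDiagnosis)); // 0.8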


Also, and I can’t tell if this point is made, but cancers that are more progressed are more likely to be detected even without screening, so extra screening may just increase the proportion of detected cancers that were never going to be deadly.


That point is made.


WTF does it even mean to run a quantum error correction algorithm on a (classical) FPGA?


It means the next tech hype cycle is about to begin

