
Yes, GPT-5 is more of an iteration than anything else, and to me this says more about OpenAI than the rest of the industry. However, I think the majority of the improvements over the past year have been difficult to quantify using benchmarks. Users often talk about how certain models "feel" smarter on their particular tasks, and we won't know if the same is true for GPT-5 until people use it for a while.

The "GPT-5 will show AGI" hype was always a ridiculously high bar for OpenAI, and I would argue that the quest for that elusive AGI threshold has been an unnecessary curse on machine learning and AI development in general. Who cares? Do we really want to replace humans? We should want better and more reliable tools (like Claude Code) to assist people, and maybe cover some of the stuff nobody wants to do. This desire for "AGI" is delivering less value and causing us to put focus on creative tasks that humans actually want to do, putting added stress on the job market.

The one really bad sign in the launch, at least to me, was that the developers were openly admitting that they now trust GPT-5 to develop their software MORE than themselves ("more often than not, we defer to what GPT-5 says"). Why would you be proud of this?



> Users often talk about how certain models "feel" smarter on their particular tasks, and we won't know if the same is true for GPT-5 until people use it for a while.

The idea that models “feel” smarter may be 100% human psychology. If you invest in a new product, it’s hard to admit that it isn’t better than what you had. So when users say a model “feels” smarter, that alone doesn’t tell us it really is.

Also, if users manage to improve the quality of responses after using it for a while, who’s to say they couldn’t have reached similar results by sticking with the old tool and tweaking their prompts to make that model perform better?
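
One way to separate “feels smarter” from “is smarter” is a blinded pairwise comparison on your own tasks. Here is a minimal Python sketch, assuming hypothetical callables old_model, new_model, and judge (none of these are real APIs; the judge sees the two answers in random order, so it can’t favor a brand name):

  import random

  def blinded_comparison(prompts, old_model, new_model, judge):
      # old_model / new_model: prompt -> answer (placeholder clients).
      # judge: (prompt, answer_a, answer_b) -> "a", "b", or "tie".
      wins = {"old": 0, "new": 0, "tie": 0}
      for prompt in prompts:
          answers = {"old": old_model(prompt), "new": new_model(prompt)}
          order = ["old", "new"]
          random.shuffle(order)  # hide which model wrote which answer
          verdict = judge(prompt, answers[order[0]], answers[order[1]])
          if verdict == "a":
              wins[order[0]] += 1
          elif verdict == "b":
              wins[order[1]] += 1
          else:
              wins["tie"] += 1
      return wins

If the new model doesn’t clearly win under blinding, the “feels smarter” effect is probably just the psychology described above.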


> Do we really want to replace humans?

AGI doesn't really replace humans, it merely provides a unified model that can be hooked up to carry out any number of tasks. Fundamentally no different than how we already write bespoke firmware for every appliance, except instead of needing specialized code for each case, you can simply use the same program for everything. To that extent, software developers have always been trying to replace humans — so the answer from the HN crowd is a resounding yes!

> We should want better and more reliable tools

Which is what AGI enables. AGI isn't a sentience that rises up to destroy us. There may be some future where technology does that, but that's not what we call AGI. As before, it is no different from us writing bespoke software for every situation, except instead of needing a different program for each one, you have a single program that can be installed into any number of them. Need a controller for your washing machine? Install the AGI software. Need a controller for your car's engine? Install the same AGI software!
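
To make that "one program, many sockets" idea concrete, here is a toy Python sketch (every name in it is invented for illustration): the general controller is the same object everywhere, and the only per-device code left is the glue that wires it to sensors and actions.

  class GeneralController:
      # Toy stand-in for "the AGI software": one decision loop that
      # only sees abstract goals, sensor readings, and action names.
      def decide(self, goal, sensors, actions):
          # A real general model would reason here; the toy version
          # just picks the first action mentioned in the goal.
          for action in actions:
              if action in goal:
                  return action
          return actions[0]

  # Hypothetical device harnesses: bespoke firmware reduced to glue code.
  def run_washing_machine(controller):
      sensors = {"drum_load_kg": 4.2, "water_temp_c": 30}
      print("washer:", controller.decide("spin the drum", sensors,
                                         ["fill", "spin", "drain"]))

  def run_engine(controller):
      sensors = {"rpm": 2100, "coolant_temp_c": 88}
      print("engine:", controller.decide("adjust fuel mix", sensors,
                                         ["adjust fuel mix", "idle"]))

  agi = GeneralController()  # "install the same software" everywhere
  run_washing_machine(agi)
  run_engine(agi)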

It will replace the need to write a lot of new software, but I suppose that is ultimately okay. Technology replaced the loom operator, and while it may have been devastating to those who lost their loom operator jobs, is anyone today upset about not having to operate a loom? We found even more interesting work to do.


> Which is what AGI enables.

I appreciate the well-crafted response, but respectfully disagree with this sentiment, and I think it's a subtle point. Remember the no free lunch theorems: no general program will be the best at all tasks. Competent LLMs provide an excellent prior from which a compelling program for a particular task can be obtained by finetuning. But this is not what OpenAI, Google, and Anthropic (to a lesser extent) are interested in, as they don't really facilitate it. It's never been a priority.
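
For reference, the classic statement from Wolpert and Macready (1997), in LaTeX form: averaged over all objective functions f, any two search algorithms a_1 and a_2 perform identically,

  \sum_{f} P(d^y_m \mid f, m, a_1) = \sum_{f} P(d^y_m \mid f, m, a_2)

where d^y_m is the sequence of objective values observed after m evaluations. Being better on some tasks is necessarily paid for by being worse on others, which is exactly why a task-specific finetune can beat the general model it started from.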

They want to create a digital entity for the purpose of supremacy. Aside from DeepMind, these groups really don't care about how this tech can assist in problems that need solving, like drug discovery or climate prediction or discovery of new materials (e.g. batteries) or automation of hell jobs. They only care about code assistance to accelerate their own progress. I talk to their researchers at conferences and it frustrates me to no end. They want to show off how "human-like" their model is, how it resembles humans in creative writing and painting, how it beats humans on fun math and coding competitions that were designed for humans with a limited capacity to memorize, how it provides "better" medical opinions than a trained physician. That last use case is pushing governments to outlaw LLMs for medicine entirely.

A lab that claims to push toward AGI is not interested in assisting mankind toward a brighter future. They want to be the first for bragging rights, hype, VC funding, and control.


> no general program will be the best at all tasks.

Perhaps I wasn't entirely clear, but AGI isn't expected to be the best at all tasks. The bar is only a comparison to a human, who also isn't the best at all tasks.

But you are right that nobody knows how to make one program good at even a modest range of tasks. Hence the focus on LLMs writing code: after all, if you had "true" AGI, what would you need to write code for? It is widely understood that AGI in that strong sense isn't going to happen. What many are banking on, however, is that AGI can be simulated if LLMs can pull off being good at just one task: coding.
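
That bet can be sketched as a toy Python loop (generate_code is a placeholder for any code-generation model; everything here is illustrative): instead of one model performing every task directly, the model writes a task-specific program on demand and you run that program instead.

  def generate_code(task_description: str) -> str:
      # Placeholder: a real system would call a code model here.
      # The stub emits a trivial program so the sketch stays runnable.
      return "def solve(x):\n    return x  # task: " + task_description

  def pseudo_agi(task_description, task_input):
      source = generate_code(task_description)
      namespace = {}
      exec(source, namespace)  # build the bespoke program on demand
      return namespace["solve"](task_input)

  print(pseudo_agi("echo the input", 42))  # -> 42

If that one capability (writing correct code) can be made reliable, generality gets approximated without anything resembling "true" AGI.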

> They want to be the first for bragging rights, hype, VC funding, and control.

That's the motivation for trying to create AGI (or at least for pretending to), but it's a fact about the labs, not about AGI itself.


Fair enough. I respect the objective of making a better coding assistant, and I use LLMs for this purpose all the time. I think this is why I would give Anthropic a pass on more things than some of the others, since they are clearly interested in that application, while the others seemed almost begrudgingly pushed into it. If the others had focused on this application early on, the agentic approach probably would have progressed faster.

But I think we do the discipline a disservice by referring to coding assistance as AGI. Also, having them be good enough that they can write their own code autonomously is a nightmare scenario to me, but I know many others don't feel that way.


> Why would you be proud of this?

Isn't it obvious? They have a huge vested interest in getting people to believe that it's very useful, capable, etc.


> Do we really want to replace humans?

Unfortunately, for a substantial number of people, the answer to this question seems to be a resounding "yes".


With those people being business owners, investors, etc, 100% of the time.

The other 99% of people would like automation to make their lives easier. Who wouldn't want the promised tech utopia? Unfortunately, that's not what's happening, so it's understandable that people are more concerned than joyous about AI.


>With those people being business owners, investors, etc, 100% of the time.

How can one run a business by replacing humans, if no humans are left with enough income to buy your products?

I suspect that the desire to "replace humans" runs far deeper than just shortsighted business wants.


I’m not sure the typical small business owner is thinking about the second- and third-order effects of reducing their labor costs from a Kantian categorical imperative perspective.


> How can one run a business by replacing humans, if no humans are left with enough income to buy your products?

If you control all of the wealth and resources and you have fully automated all of the production of everything you could ever want, then why would you need other humans to buy anything?


Two things.

One, a lot of human jobs have been replaced by machines before. Refrigerators eliminated the jobs of all those guys who used to deliver ice for your icebox every day. And so on. Those were real human beings and I'm sure many of them had families. There was real pain but it was ultimately probably a huge net positive. On a much larger scale, the microcomputer revolution of ~1975-present certainly does not seem to have reduced the number of human jobs.

Two, I am not the biggest fan of capitalism, but this is an area where it works pretty well as a self-balancing system because companies still need to compete with each other. If competing companies A and B each eliminate a bunch of human jobs thanks to AI, they're still locked in an existential struggle. They need to outcompete and outperform each other. They will shift that money to other expenditures: on AI tech, humans doing other jobs, capital expenditures, whatever. Jobs will be created or sustained in other companies providing those goods and services.

It's not foolproof, and it can certainly devastate particular regions, because the money may now flow out of those regions instead of being spent on local salaries.

There is a lot of change, and a lot of very very real pain to come, but if it is anything at all like past technology revolutions the net gains will also be real.


>One, a lot of human jobs have been replaced by machines before. Refrigerators eliminated the jobs of all those guys who used to deliver ice for your icebox every day. And so on. Those were real human beings and I'm sure many of them had families.

The fallacy here is in supposing that the mechanisms that kept those people from starving in the 1920s still exist and remain effective, that the "people replaced" have some other industry to move into. But we live in a post-industry nation... all that got offshored. There is nothing more to make, or build, or repair, not at any scale that would employ everyone meaningfully. And while I suppose some like you imagine that we'll all sit around day trading and speculating on bitcoin for a living, that would mean places like China would have to manufacture everything and grow everything, and that they'd be willing to do it just to have the bitcoin tablescraps you toss them from time to time.

I've heard your argument all my life, starting all the way back in the late 1980s when the government was first talking about granting China the "most favored nation" status that would permit all of this. Maybe back then people could still believe it, but now it rings hollow as hell.

>Two, I am not the biggest fan of capitalism,

I am. I am a big fan: when it's used well, everyone benefits. But you still have to police it a little to deter fraud, and we've all been the victims of the biggest fraud ever. And we can't even talk about it here; it hurts too many feelings.

>They will shift that money to other expenditures: on AI tech, humans doing other jobs

Or, maybe instead of shifting to "humans doing other jobs", someone runs the numbers and discovers there are still 30 years' worth of profit (or even just 10 years) in selling the product to Europe or wherever even if they never hire any more humans, and since this exceeds their projected career duration, there's no need to look past that very distant horizon. And it doesn't matter that here and there you're even correct that some companies might shift to "humans doing other jobs": I only have to be partially correct, while you have to be entirely correct. If some companies do as I hint, those companies outcompete yours, which go under, and it still results in massive unemployment.

The fixes for all of these things are simple, clear, and effective, but politically untenable. Even if people could eventually have been persuaded that they were necessary, those people are now outvoted by many more who have been brought in, who have no loyalty to this country (and this really applies to many countries, not just the one I'm in), and who would cockblock the fixes.


The devs have been co-opted into marketing roles now, too - they have to say it's that good to keep the money coming in. IMO this reinforces the original post - this all feels like a scramble.

Whether it's indicative of patterns beyond OpenAI remains to be seen, but I don't expect much originality from tech execs.



