
We are at a very early part of the exponential curve. Doesn't make it any less exponential compared to what we had in the past two decades.



I am still praying for this to hit a local maximum soon, because I don't want to lose my job. If GPT-5 and 6 arrive at the same pace, gain the ability to be trained on proprietary code bases, and become able to automagically solve most tickets under supervision, most software engineering jobs are done for. I have become a luddite.


Well, I might as well come out and say it: libertarian meritocracies are fun when you're a winner at being productive, but it won't be long before we're all in the exact same position as hardline-communist Starbucks baristas with liberal arts PhDs.

People tend to choose their beliefs based on what benefits them, and although I don't think dialectical materialism is true in its originally stated form, I do think a great deal of the dialogue we see is ultimately material.


Luckily the current world hegemon doesn't just kill people that it cannot find a use for, just to make powerful people richer via weapons sales.


But what is at the end?

I don't see any real understanding, only human-like appearance.

So we don't get new knowledge but better spam and disinformation campaigns.


>But what is at the end?

We don't know yet, because that information is only available in the future.

>I don't see any real understanding, only human-like appearance.

There isn't, but trying to find that in currently available LLMs just means you are seeking the wrong things. Did the workers who wove magnetic core memories in the 1950s expect those devices to store LLMs with billions of parameters? Yet the design and operation of those devices were crucial stepping stones toward the computer memory that exists today. The future will look at GPT-4 the same way we look at magnetic core memory in the present.


AI will prove to be an excellent mechanism for extracting and retaining tacit (institutional) knowledge. (Think 'Outsourcing to AI')

A lot of institutional verbiage, formalisms, procedures, and mechanisms are ~gibberish to the general public but meaningful within the domain. Training machines that can informationally interact within that universe of semantics is powerful, and something these machines will likely do quite well.

If you have domain knowledge, you should ramp up on your prompting skills. That way, there will be a business case for keeping you around.
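A minimal sketch of what "domain knowledge plus prompting skills" can mean in practice: prepending institutional glossary entries and procedure notes to a question so a text generator answers within the domain's semantics. The function name and all example data below are illustrative assumptions, not any real system's API.

```python
# Sketch: ground a prompt in tacit/institutional knowledge before asking.
# All names and data here are hypothetical, for illustration only.

def build_domain_prompt(glossary: dict[str, str],
                        procedure_notes: list[str],
                        question: str) -> str:
    """Assemble a prompt that carries domain knowledge alongside the question."""
    glossary_block = "\n".join(f"- {term}: {meaning}"
                               for term, meaning in glossary.items())
    notes_block = "\n".join(f"- {note}" for note in procedure_notes)
    return (
        "You are answering within a specific institutional domain.\n"
        f"Domain glossary:\n{glossary_block}\n"
        f"Relevant procedures:\n{notes_block}\n"
        f"Question: {question}\n"
        "Answer using only the terms and procedures above."
    )

prompt = build_domain_prompt(
    glossary={"RFA": "Request for Allocation, the internal budget-approval form"},
    procedure_notes=["RFAs above $10k require two sign-offs"],
    question="Who needs to sign off on a $15k RFA?",
)
print(prompt)  # the assembled, domain-grounded prompt
```

The point is not the string assembly itself but the workflow: the person who knows which glossary entries and procedures matter for a given question is the one with a business case for staying around.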


I tried ChatGPT multiple times with real technical questions (use of custom code and custom assemblies in SSRS), and I got beautiful answers with code samples and such, but they were all wrong.

I was told to use features that don't exist, and when I pointed that out, I was told it was because I was using an old version of the software. But the feature doesn't exist in any version.

So I highly doubt that it will be a reliable source of information.

These programs are text generators, not AI. They are Chinese rooms on steroids, without any understanding.

Impressive, as long as you don't look behind the curtain.


> These programs are text generators

The applications I listed are not assuming anything beyond a text generator that can be trained on a domain's explicit and tacit knowledge. They are not going to "innovate" in the domain, they will automate the domain.



Not from ChatGPT


Doesn't mean technological singularity won't be coming. GPT not being the direct cause of it is not a reason to dismiss it.



