Hacker News

Well, as someone who kinda bought into the hype that we'd have self-driving cars 3 years ago, I'd say beware of underestimating just how big the problem is.

However, I agree with you in general: there's a certain amount of surprise in laughing off the creation of an AI agent that could pass as a reasonably intelligent person if it just weren't straight up wrong about random things.




FSD is different. The last 0.01% kills children.

The last x% to make this bot a full-fledged programmer does not matter. Clients will accept bugs that would be fatal in FSD. We will lose our jobs at an astonishing rate in some kinds of companies in the coming years. I guess agile sweatshops will be first, but I have no clue how far it will go.

Even if you happen to be safe, there will be downward pressure on wages. Algorithm cranking might be largely obsolete, and our main focus might become skeleton writing. It also removes a big obstacle to becoming a programmer: the "good at math in school" requirement.


Are you a programmer? Because I’m not sure from reading your comment that you understand the profession…

Being good at math in school isn't even a requirement to be a good programmer nowadays. Maybe in certain sub-disciplines, but not all.

Being a programmer is much more than coding “in the small”. It’s about analyzing requirements and creating high-level abstractions. There has been pressure toward reducing the amount of “coding in the small” ever since languages started incorporating standard libraries. Then there have been web frameworks, open source packages, API services, etc. Despite all this, the need for developers has exploded and there is a perennial supply gap for talent…

Then there is the question of who is going to be manipulating these tools? Programmers.

The same promise was made of low-code tools, and what do we have now?

- As many app devs as before

- Most low-code tools (at least in the enterprise) are operated by… app devs?


I know we don't do this here, but what a great point.

Additionally, it is hard to put my finger on a good explanation, but FSD is a very specific problem that we're trying to handcraft an AI solution for.

You have to appreciate how broad this model is and how decent the results it produces are without being told specifically how to do it.


As a reasonably intelligent person who is sometimes straight up wrong about random things, I feel I should get it to help me write a blog post to explain how I feel about it.


ChatGPT is wrong about basically everything, though, as long as you give it the right prompts. It has all the right answers but also all the wrong answers; that makes it much dumber than you, who are reliably correct about some things at least.


>ChatGPT is wrong about basically everything, though, as long as you give it the right prompts.

So is my uncle. This is very much a "human-level intelligence" problem.


I really doubt that. The value of a human is what they are right about, not what they are wrong about; as long as your uncle knows some area fairly well, he is valuable to society and worth his salary. Human intelligence is the sum of such humans: each one is wrong about a lot of stuff but adds some bits of correctness, and the sum is extremely accurate at solving problems, or we wouldn't have computers or cars or rockets. ChatGPT doesn't know any area well. If you know the area yourself, you can fiddle with ChatGPT until it gives you what you want, but it isn't an expert on anything on its own.

ChatGPT is impressive at generating text, but it doesn't generate better information than GPT-3; it just hides its ignorance better behind vaguer, more political language, so its errors are harder to find. To me that is regression, not progress: the results look better but are harder for humans to parse correctly.


These are not random things.

When the creators of this tool present it as the frontier of machine intelligence, and its persona revolves around being intelligent, authoritative, and knowledgeable, and yet it gets some basic (not random) stuff awfully wrong, you can't really discount the skeptical sentiments expressed in the comments here like this.


Skeptical about what?

You’re assuming that this will only be used when it’s perfect, and in helpful ways.

This will be used at scale THIS YEAR, and every subsequent year, to infiltrate social networks, including this one, and amass points / karma / followers / clout. And also to write articles that will eventually dwarf all human-generated content.

With this, plus deepfakes and image/video generation, the age of trusting or caring about internet content, or about what your friends share online, is coming to an end. But it will be a painful 10 years, as online mobs and outrage will erupt over and over because people think they’re reacting to their friends’ posts of real things.

No, forget violent killbots. Today’s tech puts the nail in the coffin of human societal organization and systems of decision making and politics over the next 10 years. And AI doesn’t have to be right about stuff to destroy our systems.

We’ve been analyzing the performance of ONE agent among, say, 9 humans at a poker table. But imagine untold swarms of them, owned by competing groups, infiltrating ALL human content exchange.

Not much different than what happened in trading firms over the last 20 years. Bots will be WELCOMED because they perform better on many metrics but will F everyone on the others.



