
I tried to do some AI database cleanup this weekend - simple stuff like zip lookup and standardizing spacing and caps - and ChatGPT managed to screw it up over and over. It's the sort of thing where a little error means the answer is totally wrong, so I spent an hour refining the query and then addressing edge cases, etc. I could have just done it all in Excel in less time, with less chance of random (hard-to-catch) errors.
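
For what it's worth, the deterministic version of that cleanup is only a few lines. A minimal sketch in pandas, where the column names and the zip lookup table are hypothetical stand-ins for the real data:

    import pandas as pd

    # Toy input: messy spacing and caps, plus a zip column to validate.
    df = pd.DataFrame({
        "city":  ["  new YORK ", "chicago", " austin  "],
        "state": ["ny", "IL", "tx"],
        "zip":   ["10001", "60601", "73301"],
    })

    # Standardize spacing: trim ends, collapse internal runs of whitespace.
    for col in ["city", "state"]:
        df[col] = df[col].str.strip().str.replace(r"\s+", " ", regex=True)

    # Standardize caps.
    df["city"] = df["city"].str.title()
    df["state"] = df["state"].str.upper()

    # Zip lookup against a known-good table (hypothetical data here).
    zips = pd.DataFrame({"zip": ["10001", "60601", "73301"],
                         "zip_city": ["New York", "Chicago", "Austin"]})
    df = df.merge(zips, on="zip", how="left")
    print(df)

Boring, but every row succeeds or fails for a reason you can actually inspect.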


Similar experience.

In fields I have less experience with, it seems feasible. In fields I'm an expert in, I know it's dangerous. That makes me worry about how applicable it really is in the former case, and how critically people are evaluating the whole idea.

I err on the side of "run away".


If the SQL took you an hour just to clean up and you're an expert, that is some pretty complex SQL. I could understand how it could get it wrong.


The point is that these problems will follow the same growth trajectory as every other tech bug. In other words, they will go away eventually.

But the Rubicon has been crossed all the same. There is a general-purpose computer system that understands human language and can write real-sounding human language. That's a sea change.


> "understands human language"

I've got some oceanfront property in Wyoming to sell you.


> will follow the same growth trajectory as every other tech bug

What you're referring to isn't a bug. It's inherent to the way LLMs work. It can't "go away" in an LLM because...

> understands human language

...they don't. They are prediction machines. They don't "understand" anything.
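
To make the claim concrete: an LLM maps a context to a probability distribution over the next token and samples from it, in a loop. A toy sketch of that loop, with a made-up bigram table standing in for a real model's learned weights (everything here is illustrative):

    import random

    # Hypothetical bigram "model": context -> probabilities over next token.
    # Nothing here understands the text; it only predicts what comes next.
    BIGRAMS = {
        "the": {"cat": 0.5, "dog": 0.3, "sea": 0.2},
        "cat": {"sat": 0.7, "ran": 0.3},
        "sat": {"down": 1.0},
    }

    def next_token(context):
        dist = BIGRAMS.get(context, {"<eos>": 1.0})
        tokens, probs = zip(*dist.items())
        return random.choices(tokens, weights=probs)[0]

    tokens = ["the"]
    while tokens[-1] in BIGRAMS:
        tokens.append(next_token(tokens[-1]))
    print(" ".join(tokens))  # e.g. "the cat sat down"

Real models replace the lookup table with billions of learned parameters, but the loop is the same: predict, sample, repeat.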


> What you're referring to isn't a bug. It's inherent to the way LLMs work. It can't "go away" in an LLM because...

The 'bug' presented above is a simple case of the model not understanding correctly. Larger models, mixture-of-experts (MoE) models, models with truth gauges, better selection functions, etc. will make this better in the future.

> ...they don't. They are prediction machines. They don't "understand" anything.

Implementation detail.


Prediction without understanding is just extrapolation. I think you're just extrapolating your prediction of the abilities of future LLM-based prediction machines.


What do you mean by "understand"?



