Hacker News

I have been a little skeptical of exaggerated claims about AI, but I tried ChatGPT and I've been amazed. My trial was not searching for anything. I asked the AI to code a traffic capture program with DPDK. Funnily enough, it knew several alternative approaches for the code. I could also ask it to fill in certain parts, add parsing code for the packets... It's definitely far better than a bunch of 8 year olds knowing how to use Google.



People seem to be especially impressed by code generation, like they were with chess, because we associate those tasks with intelligence. But they are constrained environments with well-defined rules and a lot of data. Anyone who has followed the progress of RL knows that strong performance in such conditions does not mean it is comparable to human intelligence, or that the current architecture will scale even close to general human intelligence. It is much more comparable to a calculator.


I'm not claiming "general human intelligence". Just that to me it's impressive that it's able to generate code for a niche library (not too many code samples out there for DPDK), correct it according to instructions, add more code as needed... From what I've seen, it's even capable of detecting bugs. That, to me, is impressive. And I don't think programming is as much a constrained environment as chess is.


It is not as constrained but still fairly constrained and with a lot of data. And I was referring to the comparison to children with a search engine.


Well, I don't know what kind of 8 year olds you know, but it's definitely better at programming than any 8 year old I know. In any case, I still think it's fairly impressive compared to what existed before.


You are making what I think of as the standard "game move" of the AI skeptic. Chess-playing skill was considered a marker of intelligence, but then AI solved it, so it turns out it doesn't require "real" intelligence. 15 years later, neither does playing Jeopardy, it's "just" a clever Google search, that's not really intelligence. Now according to you, programming doesn't require real intelligence (and apparently neither does poetry writing, or essay writing, or any of the other things GPT can do).

In a few years, perhaps we will discover that medicine, law, the social sciences, accounting, financial advice, and white collar work generally speaking don't require intelligence either, and the only things left that will require "real" intelligence will be the building trades and maybe truck driving.


No, again we just discovered that it only does well in constrained environments with well-defined rules and a lot of data. Either that, or in subjective domains where it can afford a lot of inaccuracy without looking totally stupid. It literally can't generalise to some basic logic problems, just like AlphaZero is not going to be cooking your meals anytime soon.

If that's OK with you, fantastic. But if you compare what we have so far to something intelligent, rather than seeing it as more of a calculator, then you are totally misrepresenting it.


So intelligence is a robot that can cook? Not so much an engineer or poet?


A key component of intelligence is generalisation, which, ironically, you failed to do with the examples I gave.


What's wrong with updating our definition of "intelligence" based on new data? 500 years ago people presumably thought that lifting heavy objects required "muscles". Nowadays you can do it with metal and hydraulics. Surely robots must therefore have muscles. If you deny that then you're shifting the "muscles" goalposts?!



