
Have you used them to code? I think that's the tipping point: the level of interaction gets so high when the models are troubleshooting that you can start to get a little philosophical.

The models make many mistakes and are not great software architects, but watch one or even two models work together to solve a bug and you'll quickly rethink "text transformers".



All the time. And what stands out is that when I hit a problem in a framework that's new and badly documented, they tend to fall apart. When there are years of documentation, StackOverflow discussions, etc. about how something works, they do an excellent job of digging those insights up. So it fits within my mental model of "find stuff on the web and run a text transformer" pretty well. I don't mean to underestimate the capabilities, but I don't think we need philosophy to explain them either.

If we ever develop an LLM that's able to apply symbolic logic to text, for instance to assess an argument's validity or develop an accurate proof step by step, and do this at least as well as many human beings, then I'll concede that we've invented a reasoning machine. Such a machine might very well be a miraculous invention in this age of misinformation, but I know of no work in that direction, and I'm not at all convinced it's a natural outgrowth of LLMs, which are so bad at math (perhaps they'd be of use in the implementation).
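To be concrete about the kind of symbolic task I mean (this is just my own toy illustration, not anything an LLM does today): checking a propositional argument's validity can be done mechanically by enumerating truth assignments, e.g. a few lines of Python:

    from itertools import product

    # An argument is valid iff every assignment that makes all the
    # premises true also makes the conclusion true.
    def valid(premises, conclusion, names):
        for values in product([True, False], repeat=len(names)):
            env = dict(zip(names, values))
            if all(p(env) for p in premises) and not conclusion(env):
                return False  # counterexample found
        return True

    # Modus ponens: from "p -> q" and "p", infer "q".
    premises = [lambda e: (not e["p"]) or e["q"], lambda e: e["p"]]
    conclusion = lambda e: e["q"]
    print(valid(premises, conclusion, ["p", "q"]))  # True

The interesting question is whether an LLM can reliably translate a natural-language argument into that kind of formal structure and follow the steps, not whether the brute-force check itself is hard.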


I mean, it is like asking a person to whiteboard a framework they have never seen. However, when I then say, "go ahead and research the correct or recommended methods of doing x with framework y," the model is usually capable of figuring out what to do.



