Hacker News | donperignon's comments

trial, pray, error, trial ... such a waste of energy and talent

an llm will never reason. what looks like reasoning is an emergent behavior of these systems that is poorly understood. neurosymbolic systems, combined with llms, will be what defines the future of AI


What are neurosymbolic systems supposed to bring to the table that LLMs can't in principle? A symbol is just a vehicle with a fixed semantics in some context. Embedding vectors of LLMs are just that.


Pre-programmed, hard-and-fast rules for manipulating those symbols, which can automatically be chained together according to other preset rules. That makes the system reliable and observable. Think Datalog.

IMO, symbolic AI is way too brittle and case-by-case to drive useful AI, but as a memory and reasoning system for more dynamic and flexible LLMs to call out to, it's a good idea.
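To make the "preset rules, chained automatically" idea concrete, here is a minimal forward-chaining sketch in the Datalog spirit, written in Python. The relations and rules (parent/ancestor) are made up for illustration and not taken from any particular system:

    # Facts are plain tuples: (relation, subject, object).
    # (Illustrative data only.)
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    # Two preset rules, applied repeatedly until no new facts appear:
    #   ancestor(X, Y) :- parent(X, Y).
    #   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
    def forward_chain(facts):
        derived = set(facts)
        while True:
            new = set()
            for rel, x, y in derived:
                if rel == "parent":
                    new.add(("ancestor", x, y))            # base rule
            for rel1, x, y in derived:
                for rel2, y2, z in derived:
                    if rel1 == "parent" and rel2 == "ancestor" and y == y2:
                        new.add(("ancestor", x, z))        # recursive rule
            if new <= derived:                             # fixed point reached
                return derived
            derived |= new

    print(sorted(forward_chain(facts)))
    # Every derived fact traces back to one of the two rules above,
    # which is the reliability/observability argument for symbolic systems.

The point of the sketch is that every conclusion is the result of an explicit, inspectable rule application, in contrast to the opaque statistics of an LLM's forward pass.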


Sure, reliability is a problem for the current state of LLMs. But I see no reason to think that's an in-principle limitation.


There are so many papers now showing that LLM "reasoning" is fragile and based on pattern-matching heuristics that I think it's worth considering that, while it may not be an in-principle limitation (in the sense that if you gave an autoregressive predictor infinite data and compute, it would have to learn to simulate the universe to predict perfectly), in practice we're not going to build Laplace's LLM, and we might need a more direct architecture as a shortcut!


Slicing high dimensional concepts like 'reasoning' into discrete categories of 'will' and 'will not' ... will not work :P


how do you falsify "an llm will never reason"?

I asked GPT to compute some hard multiplications and the reasoning trace seems valid and gets the answer right.

https://chatgpt.com/share/6999b72a-3a18-800b-856a-0d5da45b94...


i don't need to. llms are probabilistic systems; they are not designed to reason, and it's actually the opposite: nobody can explain some of the emergent behaviour they exhibit. will you let one of those control air traffic based on "black magic"? sometimes i have the feeling that we have forgotten what the scientific method is...


You trust humans, yet the human brain is a black box.


i trust my kind, yes. i don't know how it works, but i have one.


They can do some sort of reasoning, but not the way humans can


are people still participating in this charade of pretending llms cannot reason?


Google cannot ignore LLMs; I don't remember the last time I used Google to search instead of ChatGPT. It's probably a combination of factors: one being Google destroying their own search engine in order to milk advertisers' money, the other being LLMs getting so good at synthesizing information. Anyway, either they improve or surf the AI wave; otherwise they are doomed.


Why? Google has killed so many, many other working and successful ideas. Why are LLMs to be considered safe from the graveyard? I bet they can ignore them just as soon as they become yesterday's technology.


Yeah, this really hits home. Everything’s turning into a subscription lately, even simple tools that should just run on your own machine.

That’s why we built ChannelVault (https://mestr.io/channelvault.html), a desktop app (made with Wails + Go) to archive and search Slack workspaces locally for eDiscovery and backups. No SaaS, no recurring fees, no cloud dependency. It just runs on your computer and keeps your data with you. We're trying to defy that general trend.

I miss when software felt like something you actually owned, not rented month to month.


I don't get this…


Or simply stop tracking and selling user data… sell real services or native ads


Obsessively writing motivational pieces is the new therapy… what a world…


More like indulging an obsession than therapy; he seems to keenly want to be like the people he is studying.


Writing as therapy is great, but I doubt he enjoys writing; he seems more worried about the outcome, about being epic and famous. Productivity mania is messing with our brains…


Yeah and using people who write as inspiration is really weird to me. I’d rather look up to people who are slightly too busy to write 2k words a day because they’re actually doing things.


Why is that weird? The author is obviously impressed by writers, given they have an interest in writing themselves, so it makes sense to use writers as an example.

And why is writing a less valuable profession than any other job? Writing is also "doing a thing" - it just so happens to be a profession for some, a great one for those who are skilled and gifted at it.


I will never trust a chief trust officer…


But what if his lips stop moving?


This AI madness is getting more stupid every day…


Looking at the sample apps… I think I'll pass; I don't much like what I see, whether it was generated by an LLM or by an intern.

