
We've had better technology in NLP and problem-solving since the mid-1970s -- and many methods that were then deemed intractable are trivial now, given that compute is on the order of a million times more powerful. Systems that used partial-order hierarchical planning, abductive logic programming, bidirectional search, constraint satisfaction, multi-modal interaction, and multi-agent conversational modeling based on planning applicable speech acts have been replaced with simple state machines -- in "advanced" cases, simple slot-filling -- and more recently with relatively shallow, black-box stochastic methods based on deep-learning models.
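For anyone who hasn't built one of these, here is a minimal sketch (mine, not from any particular product) of what "simple slot-filling" amounts to: extract values for a fixed set of slots with regexes and ask for whatever is still missing. The slot names and patterns are hypothetical; the point is how little reasoning is left compared to a planning- or constraint-based dialogue manager.

    import re

    # Hypothetical slots for a toy travel-booking dialogue.
    SLOTS = {
        "destination": re.compile(r"\bto\s+([A-Z][a-z]+)"),
        "date": re.compile(r"\bon\s+(\w+day)"),
    }

    def update_slots(state, utterance):
        # Fill any empty slot whose pattern matches the user's utterance.
        for name, pattern in SLOTS.items():
            if state.get(name) is None:
                match = pattern.search(utterance)
                if match:
                    state[name] = match.group(1)
        return state

    def next_prompt(state):
        # Dialogue "management" reduces to asking for the first unfilled slot.
        for name in SLOTS:
            if state.get(name) is None:
                return "What is the " + name + "?"
        return "Booking a trip to " + state["destination"] + " on " + state["date"] + "."

    state = {}
    state = update_slots(state, "I want to fly to Boston")
    print(next_prompt(state))   # -> What is the date?
    state = update_slots(state, "on Friday")
    print(next_prompt(state))   # -> Booking a trip to Boston on Friday.

That is essentially the whole dialogue model: no goals, no plan repair, no constraint propagation, just pattern matching and a fixed prompt order.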

Having worked at two companies that offered such systems, I can say this is partly due to not understanding or appreciating what has been done in the past, but also because companies treat these systems the way car companies treat features -- they know they can deliver rich features, so the "logic" is to wait to do so, in the hope that they can make more money in the long term by dribbling out new features.

To be fair, there is also the issue of reproducibility and predictability -- companies expect everybody's agent to respond the same way given the same input, rather than allow variability across agents due to non-determinism, context, and learning -- but we can't ask for flexible "human-like" AI on one hand and, on the other, not expect the variation (and occasional misunderstandings) that humans would also exhibit.


