
> Partly due to the recognized difficulty of the problem, in the 1970s-1980s mainstream AI gradually moved away from general-purpose intelligent systems, and turned to domain-specific problems and special-purpose solutions...

I think there's little evidence for this. What happened in the 1980s was the introduction and overselling of expert systems. These systems applied AI techniques to specific problems, but the techniques themselves were still pretty foundational. This is like saying that because electricity was used for custom applications, we started inventing custom electricity.

> Consequently, the field currently called "AI" consists of many loosely related subfields without a common foundation or framework, and suffers from an identity crisis:

Nonsense. AI of course consists of loosely related subfields with no common foundation. But even back in the 1960s, when a fair chunk of (Soft) AI had something approaching a foundation (search), the identity of the field was defined not by that foundation but by a common goal: to create algorithms that, generally speaking, can perform tasks we as humans believe we alone are capable of because we possess Big Brains. This identity-by-common-goal hasn't changed.

So this web page has a fair bit of apologetics and mild shade applied to soft AI. What it doesn't do is provide any real criticism of the AGI field, and there's plenty to criticize. AGI has a reasonable number of serious researchers, but it is also replete with snake oil, armchair philosophers, and fanboy hobbyists. Indeed the very name (AGI) is a rebranding: the original, long-accepted term was Hard AI, but it accumulated so much contempt that its practitioners changed the word itself. This isn't uncommon for ultrasoft areas of AI: ALife has long had the same issue (minus the snake oil). But at least they're honest about it.



