
First of all "this, then that" does not imply causality.

The way that I heard it, it was the fact that Lisp environments on Sun workstations were able to outperform Lisp machines at a much better price point. And just like that, a significant AI-specific industry collapsed, and its other promises came into question.

That said, all three versions are consistent. The fact that researchers thought that they were closer than they were caused them to overpromise and underdeliver. Then when the visible bleeding edge of their efforts publicly lost to a far cheaper architecture, their failure became very visible.

Which we call cause versus effect almost doesn't matter. All of these things happened, and led to an AI winter. And we continued to get incremental progress until the unexpected success of Google Translate, which was not welcomed by people who had been trying to get rule-based AI systems to work.




Google Translate got a lot worse after the AI version was introduced, maybe not for English-centric translations but for all others. The previous deductive translator was much better. Same with Siri and Google Assistant: they are really bad at languages other than English.


> Google Translate got a lot worse after the AI version was introduced,

Jesus. I remember when statistical translation was considered "AI".

Fun fact: One time I put "trompe le monde" into Google Translate, and it came back with the inspired mistranslation, "doolittle"


IMO Doolittle was a better album. So, while the translation was bad, I wouldn't say google was wrong.


artificial intelligence (countable and uncountable, plural artificial intelligences)

(computer science) Anything that performs better than whatever we called “artificial intelligence” a few years ago.


This mislabeling is quite common in pop-sci outside the AI field, so sorry for the rant, but I've got to rant as this is my pet peeve. Like, I get the joke you're making, but it's based on a horrible misuse of the word. All the major dictionaries (https://www.merriam-webster.com/dictionary/artificial%20inte... or https://www.oed.com/viewdictionaryentry/Entry/271625 etc.) have only the uncountable or adjective meanings, and none have the plural "artificial intelligences" as a valid option referring to anything, ever; that's simply not a word in English.

M-W has (IMHO rightly) these two senses:

artificial intelligence

noun

1 : a branch of computer science dealing with the simulation of intelligent behavior in computers

2 : the capability of a machine to imitate intelligent human behavior

And that's it. There's no plural "artificial intelligences" or singular "an AI" because this term never refers to a specific system; it may refer to the field or the property, but not to the specific machines which (perhaps) possess some artificial intelligence as an attribute/capability. Even if you had a system with fully superhuman capabilities, it wouldn't be "an artificial intelligence", because you simply don't (or at least shouldn't) call things or systems "artificial intelligences", just as you don't call people "natural intelligences".


I read it like "Artificial Intelligence" could mean either of two things:

a) Intelligence that is produced "artificially", meaning by computer programming

b) Intelligence which is not really intelligence but only artificial, thus not "real" intelligence.


Emphatic disagreement, at least when it comes to Indo-European languages. The previous translator was effectively unusable. Then suddenly Google Translate became something that would work most of the time, at least for average users dealing with English, Spanish, French, German, Russian, etc.

Despite the documented failure modes (and they were many), suddenly it was possible to read articles in other languages, and it was likewise possible to make yourself understood in other languages using it. I personally know a lot of people who speak several of those languages. And they all agreed that it was a giant improvement. And the fact that it WAS a giant improvement was why they got rid of the previous translator.

I understand that it was terrible with Chinese. But I never used it for that.


Chinese user here. I read this page with Google Translate; at least English to Chinese is good for daily use.


I know that they focused on Chinese as a specific problem and have improved. I would expect it to be much better today than it was in 2006.

Part of the problem was that there is a lot less grammar in Chinese than in Indo-European languages. So there are many ways to translate a given Indo-European sentence into Chinese, and you need to understand the context of a Chinese sentence to properly translate it into an Indo-European one.

The many ways to translate into Chinese are a problem because Chinese flexibility in word order means that there are many reasonable choices of next word, and they didn't have enough data to tell the difference between a reasonable next word and an unreasonable one.

Going the other way, Chinese may not care whether you have one apple or 10 apples, or whether Xi is a man or a woman. But Indo-European languages generally do care. So Google Translate has to guess, and often gets it wrong.
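
To make the data-sparsity point concrete, here is a toy bigram-counting sketch in Python (my own illustration, not how Google Translate actually worked): a statistical model can only rank next words it has actually seen following the previous word in its training text, so with flexible word order and limited data, many perfectly reasonable continuations look no better than unreasonable ones.

    # Toy illustration (an assumption for discussion, not Google's system):
    # a bigram model ranks next words purely by observed counts.
    from collections import defaultdict

    def train_bigrams(corpus):
        """Count how often each word follows each other word."""
        counts = defaultdict(lambda: defaultdict(int))
        for sentence in corpus:
            words = sentence.split()
            for prev, nxt in zip(words, words[1:]):
                counts[prev][nxt] += 1
        return counts

    def rank_next_words(counts, prev_word):
        """Rank candidate next words by how often they were observed."""
        return sorted(counts.get(prev_word, {}).items(), key=lambda kv: -kv[1])

    # Tiny "training" corpus: many perfectly reasonable continuations never
    # appear, so the model has no way to prefer them over unreasonable ones.
    corpus = ["I ate one apple", "I ate ten apples", "she ate one apple"]
    counts = train_bigrams(corpus)
    print(rank_next_words(counts, "ate"))    # [('one', 2), ('ten', 1)]
    print(rank_next_words(counts, "apple"))  # [] -- unseen context, no signal

A real system would smooth the counts and condition on more context, but the basic limitation is the same: no data, no ranking.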


Thanks, this makes sense. This is why students still can't write English reports with it.


> Lisp environments on Sun workstations were able to outperform Lisp machines

I think it was just that it became clear the projects didn't deliver anything very useful. You can't keep the hype up very long if it can't be backed up by real applications.

But some good stuff that got started then prevailed, like speech understanding and language translation. It just didn't become usable overnight.

Classic AI was a reasonable research program, but research takes time. Think of nuclear fusion.


It took a while to get there. One would think of the late 80s / early 90s. Remember, there were probably only around 10,000 (ten thousand, not tens of thousands) Lisp Machines ever produced. A 40-bit Ivory 3 processor from Symbolics was basically slightly faster than a Motorola 68030 processor, but with larger memory capabilities. Memory was expensive on stock hardware too, but not as expensive as the 48-bit-wide ECC memory on a Lisp Machine. Add to that a megapixel screen, a large disk, a tape drive, a faster graphics card, ...

There was little point in investing money in a hardware market which, given its small size, did not produce cheaper and/or faster machines.

There was a lot of interesting application development on Lisp Machines, but there was no point in delivering those applications on such expensive hardware and software. Development environments on other platforms were catching up. Common Lisp was actually designed to be able to deliver applications on many different platforms, even though its main influence was Lisp Machine Lisp.

So a $50k ART expert-system development system was replaced by the low-cost CLIPS on machines with lower hardware/software costs. Development also moved away from Lisp, as Lisp was extremely unpopular (and had almost no funding left) in the 90s.

Nowadays a native Lisp on an Apple M2 processor is 1000 times faster than on a Lisp Machine from 1990. That's just a single CPU core; we are not even talking about GPU or neural-network functionality. The expensive 40 MB of main memory from back then is now 8 GB at entry level.



