
That reads like the successor to Prof. John McCarthy's "Epistemological Problems in Artificial Intelligence" class, which I once took, back in the days of logical inference and expert systems. AI people thought back then that if you thought about thought enough, you could figure out how to mechanize it. That turned out to be a dead end, and the "AI winter" (roughly 1985-2005) followed.

That class was known informally as "Dr. John's Mystery Hour".

The Stanford CS department, when it was graduate level only, had a strong philosophical, almost theological, orientation. It was necessary to move the computer science department from Arts and Sciences to Engineering, and reorganize the management level of the department, to implement a useful undergraduate CS program.



“To implement a useful CS program”

That’s too bad. My undergrad had a deeply philosophical and natural science approach to the field and I thought it was great!


Sounds great.

My CS degree got me a string of software engineering jobs. Which is fine I suppose, but what I was after was CS.


Also fair, but a bit heartbreaking for me. Is this the product of mass academization? The motto of Stanford is "The wind of freedom blows", not "mass produce uncritical engineers". A freedom to learn, to develop freely, to learn obscure math and think about the philosophical limits of computer science.

I study in Tübingen, Germany, and I love that the top machine learning faculty here is located at a holistic, full university. There's constant reflection on the work, and the professors engage in a lot of dialog. Multidisciplinary seminars think about fairness and discrimination, and a "broader" look is found even in the basic lectures. It is really an enrichment of your university education and develops you as a person. Better than my last university.

In the end, one is studying computer science, and the science should stand for something; it's not just web-dev training.


I'm glad you ended up somewhere you can respect.

I won't say I'm stuck with the local university, not exactly, but let's just say that switching costs are high. And I'm mad as hell about the way the place has been gutted of any academic integrity and turned into a job training program.

If the employers want particular skills, they should train people in those skills, not offload it onto the universities.


People can choose the tone of their education. Some people may want brass-tacks engineering and a focus on getting a job... some people may want a philosophical approach... It's a good thing that there are lots of diverse mindsets out there.


While this is true for the individual, I was still thinking a bit in the Stanford context. It is a world-leading, exceptional university where academic ideals should be lived and passed on. It's fine if you just want to pass some courses to get a job, but I don't think those universities are really made for that or should be used in this way. You will never use the proof of the undecidability of the halting problem in a usual job, but it's elementary for CS.
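(For anyone who hasn't seen it, here is a minimal sketch of the diagonalization argument behind that result, written as Python-flavored pseudocode; the `halts` oracle is the assumption that gets contradicted, so nothing here is meant to actually run usefully.)

    # Suppose, hypothetically, that a function halts(program, arg) existed
    # which always answers correctly whether program(arg) halts.
    def halts(program, arg):
        raise NotImplementedError  # the assumed oracle; the argument shows it cannot exist

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:
                pass        # loop forever
        return              # halt immediately

    # Does diagonal(diagonal) halt?
    #   halts(diagonal, diagonal) == True  => diagonal(diagonal) loops forever
    #   halts(diagonal, diagonal) == False => diagonal(diagonal) halts
    # Either answer contradicts the oracle, so no such halts() can exist.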


And instead, now we don't think about thought at all, but call whatever the AI outputs "thought".


AFAIK the AI winter was more related to the Minsky paper, and to the industry's hype and failure to deliver on its over-promises.


First of all "this, then that" does not imply causality.

The way that I heard it, it was the fact that Lisp environments on Sun workstations were able to outperform Lisp machines at a much better price point. And just like that, a significant AI-specific industry collapsed, and its other promises came into question.

That said, all three versions are consistent. The fact that researchers thought that they were closer than they were caused them to overpromise and underdeliver. Then when the visible bleeding edge of their efforts publicly lost to a far cheaper architecture, their failure became very visible.

Which we call cause versus effect almost doesn't matter. All of these things happened, and led to an AI winter. And we continued to get incremental progress until the unexpected success of Google Translate, whose success was not welcomed by people who had been trying to get rule-based AI systems to work.


Google Translate got a lot worse after the AI version was introduced, maybe not for English-centric translations but for all the others. The previous deductive translator was much better. Same with Siri and Google Assistant: they are really bad at languages other than English.


> Google Translate got a lot worse after the AI version was introduced,

Jesus. I remember when statistical translation was considered "AI".

Fun fact: One time I put "trompe le monde" into Google Translate, and it came back with the inspired mistranslation, "doolittle".


IMO Doolittle was a better album. So, while the translation was bad, I wouldn't say Google was wrong.


artificial intelligence (countable and uncountable, plural artificial intelligences)

(computer science) Anything that performs better than whatever we called “artificial intelligence” a few years ago.


This mislabeling is quite common in popsci outside the AI field, so sorry for the rant, but I've got to rant as this is my pet peeve. Like, I get the joke you're making, but it's based on a horrible misuse of the word. All the major dictionaries (https://www.merriam-webster.com/dictionary/artificial%20inte... or https://www.oed.com/viewdictionaryentry/Entry/271625 etc) have only the uncountable or adjective meanings, and none have plural "artificial intelligences" as a valid option referring to anything ever, that's simply not a word in English.

M-W has (IMHO rightly) these two senses:

artificial intelligence

noun

1 : a branch of computer science dealing with the simulation of intelligent behavior in computers

2 : the capability of a machine to imitate intelligent human behavior

And that's it. There's no plural "artificial intelligences" or singular "an AI" because this term never refers to a specific system; it may refer to the field or the property, but not to the specific machines which (perhaps) possess some artificial intelligence as an attribute/capability. Even if you had a system with fully superhuman capabilities, it wouldn't be "an artificial intelligence", because you simply don't (or at least shouldn't) call things or systems "artificial intelligences", just as you don't call people "natural intelligences".


I read it like "Artificial Intelligence" could mean either of two things:

a) Intelligence that is produced "artificially", meaning by computer programming

b) Intelligence which is not intelligence but artificial, and thus not "real" intelligence.


Emphatic disagreement, at least when it comes to Indo-European languages. The previous translator was effectively unusable. Then suddenly Google Translate became something that would work most of the time. At least for average users who were dealing with English, Spanish, French, German, Russian, etc.

Despite the documented failure modes (and they were many), suddenly it was possible to read articles in other languages, and it was likewise possible to make yourself understood in other languages using it. I personally know a lot of people who speak more than one of those languages. And they all agreed that it was a giant improvement. And the fact that it WAS a giant improvement was why they got rid of the previous translator.

I understand that it was terrible with Chinese. But I never used it for that.


Chinese user here. I read this page with Google Translate; at least English to Chinese is good for daily use.


I know that they focused on Chinese as a specific problem and have improved. I would expect it to be much better today than it was in 2006.

Part of the problem was that there is a lot less grammar in Chinese than in Indo-European languages. So there are many ways to translate a given Indo-European sentence into Chinese, and you need to understand the context of a Chinese sentence to properly translate it into an Indo-European one.

The many ways to translate into Chinese are a problem because Chinese flexibility in word order means there are many choices of reasonable next word, and they didn't have enough data to tell a reasonable next word from an unreasonable one.

Going the other way, Chinese may not care whether you have one apple or 10 apples, or whether Xi is a man or a woman. But Indo-European languages generally do care. So Google Translate has to guess, and often gets it wrong.


Thanks, this makes sense. This is why students still can't write English reports with it.


> Lisp environments on Sun workstations were able to outperform Lisp machines

I think it was just that it became clear the projects didn't deliver anything very useful. You can't keep the hype up very long if it can't be backed up by real applications.

But some good stuff that got started then prevailed, like speech understanding and language translation. It just didn't become usable overnight.

Classic AI was a reasonable research program, but research takes time. Think of nuclear fusion.


It took a while to get there. One would think of the end of the '80s / early '90s. Remember, there were probably only around 10,000 (ten thousand, not tens of thousands) Lisp Machines ever produced. A 40-bit Ivory 3 processor from Symbolics was basically slightly faster than a Motorola 68030 processor, but with larger memory capabilities. Memory was expensive on stock hardware, too - but not as expensive as the 48-bit-wide ECC memory on a Lisp Machine. Add to that a megapixel screen, a large disk, a tape drive, a faster graphics card, ...

There was little point investing money into a hardware market which did not produce cheaper and/or faster machines, given the small market.

There was a lot of interesting application development on Lisp Machines, but there was no point in delivering those applications on such expensive hardware and software. Development environments on stock hardware were catching up. Common Lisp was actually designed to be able to deliver applications on many different platforms, even though its main influence was Lisp Machine Lisp.

So a $50k ART expert-system development system was replaced by a low-cost CLIPS on machines with lower hardware/software costs. The field also moved away from Lisp, as Lisp was extremely unpopular (and had almost no funding left) in the '90s.

Nowadays a native Lisp on an Apple M2 processor is 1000 times faster than on the Lisp Machine from 1990. That's just a single CPU core; we are not even talking about GPU or neural-network functionality. The then-expensive 40 MB of main memory is now 8 GB at entry level.


No, it was bigger than that. There were AI startups in the 1980s. They all went bust. Expert systems were just not very useful. Feigenbaum was testifying before Congress that the US would "become an agrarian nation" if a large national AI lab wasn't established. Japan had a "Fifth Generation" project, trying to do AI with Prolog. All that stuff hit the upper limit of what you can do with that technology, and it wasn't a very high upper limit.

AI was a tiny field in those days. Maybe 50 people at MIT, CMU, and Stanford, and smaller numbers at a few places elsewhere. No commercial products that were any good.


Doesn't the post office still use handwriting recognition to automatically route mail? Isn't that from the '80s? That's pretty much all before my time.

It seems like AI research produced some fantastic results, but those systems were quickly relabeled as not being AI. Like winning at chess.

Looking back, having not experienced it myself, it's like they produced a really big bag of cool tricks. But you're not going to do much searching in 640K of RAM. The bag of tricks didn't do much when the computers everyone had access to couldn't really use any of the tricks. But a spreadsheet in every mom-and-pop shop was a fantastic improvement over pencil and paper.


> systems were quickly relabeled to not be AI. Like, win at chess.

Right, some things came out of it, but progress was slow in the more general areas. Still, there was progress, but the hype had to die; it was time to come back to the planet and do some database stuff.


> Japan had a "Fifth Generation" project, trying to do AI with Prolog. All that stuff hit the upper limit of what you can do with that technology, and it wasn't a very high upper limit.

What was the limiting factor of the expert system approach? Something like the size of the search space or that the number of rules you’d have to write was just infeasibly large?
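For concreteness, I picture the approach roughly like this minimal forward-chaining sketch (Python, with made-up facts and rules); as I understand it, the system can only ever conclude what some hand-written rule anticipates:

    # Facts are plain strings; each rule maps a set of premises to one conclusion.
    facts = {"has_fever", "has_cough"}
    rules = [
        ({"has_fever", "has_cough"}, "possible_flu"),
        ({"possible_flu"}, "recommend_rest"),
    ]

    # Forward chaining: keep firing rules until no new facts are derived.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # any case the rule authors did not anticipate is simply never derived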


>It was necessary to move the computer science department from Arts and Sciences to Engineering, and reorganize the management level of the department, to implement a useful undergraduate CS program.

Why was it necessary? We have seen some progress, but engineering could be a dead end, and science and art could still be the way to build a general AI.


Organization. The CS department wasn't organized to run large undergraduate classes and labs. They just had a rotating chair. Engineering had deans and structure.


John, any other thoughts to share on changes in the department? I’m curious to hear your perspective.


I'm too out of date. I haven't even been on the Stanford campus since the pandemic started.


> AI people thought back then that if you thought about thought enough, you could figure out how to mechanize it. That turned out to be a dead end, and the "AI winter" (roughly 1985-2005) followed.

To be fair, maybe we just haven't thought about it enough.


It may merely be beyond the power of the human mind. That's becoming less of a limitation.


Any cool sources on the AI Winter?


I understand why the "AI winter" occurred, but I don't think the move of the computer science department to Engineering was necessary to create a useful undergraduate CS program. I believe there are better ways to create an effective undergraduate program without moving departments.



Conway's Law is frequently referenced, but the claimed connection is rarely explained.

From Wikipedia:

> The law is, in a strict sense, only about correspondence; it does not state that communication structure is the cause of system structure, merely describes the connection. Different commentators have taken various positions on the direction of causality; that technical design causes the organization to restructure to fit, that the organizational structure dictates the technical design, or both. Conway's law was intended originally as a sociological observation[citation needed], but many other interpretations are possible.

My take: Conway's Law is vague, squishy, non-causal, misunderstood, and can mean a great many things, some of which are mutually inconsistent.


Yeah I think the main idea is that people who like clean organization, boundaries, tidiness, clear responsibilities, etc, will tend to congregate together. And their organizations will be that way, and so will the things they create together.

People who like experimentation, chaos, prototyping, flat hierarchies, etc, will also tend to congregate together, and the systems that they build will also have those values.

Same for lots of different qualities. It isn’t A->B or B->A, it’s more like A<->B<->C.


I like the idea of Conway's Law, but I also like to be critical of vague theories. Maybe it is asking too much, but I don't see people using it in a testable way. This isn't just selection or survivorship bias; I think the "law" itself is too vague to admit a true experiment, even an associative one. I'm happy to be shown to be wrong... or that I'm missing the point.

Another way of making this point is: what could make Conway's Law totally wrong? What evidence could do that? It seems like shifting sands -- there is always something that fits the "pattern"; the problem is the pattern is not defined a priori: it feels like getting your organization's palm read.


Totally agree. I guess I just like it as a concept because it’s almost a tautology.

Organized people are organized.

People who value short-term experiments will make short-term experiments.

The only part that is palm-reader level is the translation across contexts. And I agree that it’s a terrible law, but probably a decent rule of thumb, that someone who is tidy in one area of their life is likely tidy in others. But yeah, we shouldn’t really look into it any more than that.

It’s like other vague assumptions that our brain makes. Probably correct to some degree sometimes, but not a hard rule by any means.


I never took it to be about organization vs chaos.

For example, if a team is responsible for two modules of an application, they are less likely to create an API because the modules can communicate directly inside the application (i.e., a monolith).

If the same two modules are cared for by separate teams, then maybe they end up being two separate API services.

In both cases there is organization, just different types of organization.
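A toy sketch of what I mean (Python; the function names and the service URL are made up): the same interaction is a plain in-process call when one team owns both modules, and tends to harden into a network API once the module boundary is also a team boundary.

    import requests  # only needed for the two-team version

    # One team, one codebase: billing calls inventory directly, in-process.
    def reserve_stock(item_id, qty):
        # ...inventory logic owned by the same team...
        return True

    def create_order_monolith(item_id, qty):
        if reserve_stock(item_id, qty):   # plain function call, no API needed
            return "order created"
        return "out of stock"

    # Two teams: the same boundary becomes an HTTP API between services.
    def create_order_services(item_id, qty):
        resp = requests.post(
            "http://inventory.internal/reserve",  # hypothetical endpoint owned by the other team
            json={"item_id": item_id, "qty": qty},
            timeout=2,
        )
        if resp.ok and resp.json().get("reserved"):
            return "order created"
        return "out of stock"

Either way the code works; the shape of the interface just ends up mirroring who owns what.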



