That reads like the successor to Prof. John McCarthy's "Epistemological Problems in Artificial Intelligence" class, which I once took, back in the days of logical inference and expert systems. AI people thought back then that if you thought about thought enough, you could figure out how to mechanize it. That turned out to be a dead end, and the "AI winter" (roughly 1985-2005) followed.
That class was known informally as "Dr. John's Mystery Hour".
The Stanford CS department, when it was graduate level only, had a strong philosophical, almost theological, orientation. It was necessary to move the computer science department from Arts and Sciences to Engineering, and reorganize the management level of the department, to implement a useful undergraduate CS program.
Also fair but a bit heartbreaking for me. Is this the product of mass academization?
The motto of Stanford is "The wind of freedom blows", not "mass-produce uncritical engineers". A freedom to learn, to develop freely, to study obscure math and think about the philosophical limits of computer science.
I study in Tübingen, Germany, and I love that the top machine learning faculty here is located at a holistic, full university. There's constant reflection on the work, and the professors engage in a lot of dialog. Multidisciplinary seminars think about fairness and discrimination, and even the basic lectures take a "broader" look. It is really an enrichment of your university education and develops you as a person. Better than my last university.
In the end, one is studying computer science, and the science should stand for something; it's not web-dev training.
I won't say I'm stuck with the local university, not exactly, but let's just say that switching costs are high. And I'm mad as hell about the way the place has been gutted of any academic integrity and turned into a job training program.
If the employers want particular skills, they should train people in those skills, not offload it onto the universities.
People can choose the tone of their education. Some people may want brass-tacks engineering and a focus on getting a job; some people may want a philosophical approach. It's a good thing that there are lots of diverse mindsets out there.
While this is true for the individual, I was still thinking a bit in the Stanford context. It is a world-leading, exceptional university where academic ideals should be lived and passed on. It's fine if you just want to pass some courses to get a job, but I don't think those universities are really made for this or should be used in this way. You will never use the proof of the non-computability of the halting problem in a usual job, but it's elementary for CS.
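(For anyone who hasn't seen that proof, the core idea fits in a few lines. Here's a rough Python-flavored sketch of the classic diagonal argument; the `halts` oracle is hypothetical, which is exactly the point.)

```python
# Sketch of the halting-problem diagonal argument. The oracle `halts`
# is hypothetical -- the whole point is that it cannot exist.

def halts(program, inp):
    # Hypothetical oracle: True iff program(inp) eventually terminates.
    raise NotImplementedError("no such oracle can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:      # loop forever if the oracle says "halts"
            pass
    return               # halt immediately if the oracle says "loops"

# Ask the oracle about paradox(paradox):
#  - if it answers True ("halts"), paradox(paradox) loops forever;
#  - if it answers False ("loops"), paradox(paradox) halts.
# Either answer is wrong, so a general `halts` cannot be written.
```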
First of all "this, then that" does not imply causality.
The way that I heard it, it was the fact that Lisp environments on Sun workstations were able to outperform Lisp machines at a much better price point. And just like that, a significant AI specific industry collapsed, and its other promises came into question.
That said, all three versions are consistent. The fact that researchers thought that they were closer than they were caused them to overpromise and underdeliver. Then when the visible bleeding edge of their efforts publicly lost to a far cheaper architecture, their failure became very visible.
Which we call cause versus which effect almost doesn't matter. All of these things happened, and led to an AI winter. And we continued to get incremental progress until the unexpected success of Google Translate, whose success was not welcomed by people who had been trying to get rule-based AI systems to work.
Google Translate got a lot worse after the AI version was introduced, maybe not for English-centric translations but for all others. The previous deductive translator was much better. Same with Siri and Google Assistant: they are really bad at languages other than English.
This mislabeling is quite common in popsci outside the AI field, so sorry for the rant, but I've got to rant as this is my pet peeve. Like, I get the joke you're making, but it's based on a horrible misuse of the word. All the major dictionaries (https://www.merriam-webster.com/dictionary/artificial%20inte... or https://www.oed.com/viewdictionaryentry/Entry/271625 etc) have only the uncountable or adjective meanings, and none have plural "artificial intelligences" as a valid option referring to anything ever, that's simply not a word in English.
M-W has (IMHO rightly) these two senses:
artificial intelligence
noun
1 : a branch of computer science dealing with the simulation of intelligent behavior in computers
2 : the capability of a machine to imitate intelligent human behavior
And that's it. There's no plural "artificial intelligences" or singular "an AI" because this term never refers to a specific system; it may refer to the field or the property, but not to the specific machines which (perhaps) possess some artificial intelligence as an attribute/capability. Even if you had a system with fully superhuman capabilities, it wouldn't be "an artificial intelligence", because you simply don't (or at least shouldn't) call things or systems "artificial intelligences", just as you don't call people "natural intelligences".
Emphatic disagreement, at least when it comes to Indo-European languages. The previous translator was effectively unusable. Then suddenly Google Translate became something that would work most of the time, at least for average users who were dealing with English, Spanish, French, German, Russian, etc.
Despite the documented failure modes (and there were many), suddenly it was possible to read articles in other languages, and likewise possible to make yourself understood in other languages using it. I personally know a lot of people who speak several of those languages, and they all agreed that it was a giant improvement. And the fact that it WAS a giant improvement was why they got rid of the previous translator.
I understand that it was terrible with Chinese. But I never used it for that.
I know that they focused on Chinese as a specific problem and have improved. I would expect it to be much better today than it was in 2006.
Part of the problem was that there is a lot less grammar in Chinese than in Indo-European languages. So there are many ways to translate a given Indo-European sentence into Chinese, and you need to understand the context of a Chinese sentence to properly translate it into an Indo-European one.
The many ways to translate into Chinese are a problem because Chinese flexibility in word order means that there are many choices of reasonable next word, and they didn't have enough data to tell the difference between a reasonable next word and an unreasonable one.
Going the other way, Chinese may not care whether you have one apple or ten apples, or whether Xi is a man or a woman. But Indo-European languages generally do care. So Google Translate has to guess, and often gets it wrong.
> Lisp environments on Sun workstations were able to outperform Lisp machines
I think it was just that it became clear the projects didn't deliver anything very useful. You can't keep the hype up very long if it can't be backed up by real applications.
But some good stuff that got started then prevailed, like speech understanding and language translation. It just didn't become usable overnight.
Classic AI was a reasonable research program, but research takes time. Think of nuclear fusion.
It took a while to get there. Think end of the 80s / early 90s. Remember, there were probably only around 10,000 (ten thousand) Lisp Machines ever produced. A 40-bit Ivory 3 processor from Symbolics was basically slightly faster than a Motorola 68030 processor, but with larger memory capabilities. Memory was expensive on stock hardware, too, but not as expensive as the 48-bit-wide ECC memory on a Lisp Machine. Add to that a megapixel screen, a large disk, a tape drive, a faster graphics card, ...
There was little point investing money into a hardware market which did not produce cheaper and/or faster machines, given the small market.
There were a lot of interesting applications development on Lisp Machines, but there was no point to deliver them on that expensive hard- and software. Development environments were catching up. Common Lisp was actually designed to be able to deliver applications on many different platforms, even though its main influence was Lisp Machine Lisp.
So a $50k ART expert system development system was replaced by a low-cost CLIPS on machines with less hardware/software costs. It also was moved away from Lisp, as Lisp was extremely unpopular (and with almost no funding left) in the 90s.
Nowadays a native Lisp on a M2 processor from Apple is 1000 times faster than on the Lisp Machine from 1990. That's just a single CPU core, we are not even talking about GPU or Neural network functionality. Expensive 40 MB main memory from then is now 8 GB entry level.
No, it was bigger than that. There were AI startups in the 1980s. They all went bust. Expert systems were just not very useful. Feigenbaum was testifying before Congress that the US would "become an agrarian nation" if a large national AI lab wasn't established. Japan had a "Fifth Generation" project, trying to do AI with Prolog. All that stuff hit the upper limit of what you can do with that technology, and it wasn't a very high upper limit.
AI was a tiny field in those days. Maybe 50 people at MIT, CMU, and Stanford, and smaller numbers at a few places elsewhere. No commercial products that were any good.
Doesn't the post office still use handwriting recognition to automatically route mail? Isn't that from the '80s? That's pretty much all before my time.
It seems like AI research produced some fantastic results, but those systems were quickly relabeled to not be AI. Like, win at chess.
Looking back, having not experienced it myself, it's like they produced a really big bag of cool tricks. But you're not going to be doing much searching in 640K of RAM. The bag of tricks didn't do much when the computers everyone had access to couldn't really use any of them. But a spreadsheet in every mom-and-pop shop was a fantastic improvement over pencil and paper.
> systems were quickly relabeled to not be AI. Like, win at chess.
Right, some things came out of it, but progress was slow in the more general areas. There was still progress, but the hype had to die; it was time to come back to the planet and do some database stuff.
> Japan had a "Fifth Generation" project, trying to do AI with Prolog. All that stuff hit the upper limit of what you can do with that technology, and it wasn't a very high upper limit.
What was the limiting factor of the expert system approach? Something like the size of the search space or that the number of rules you’d have to write was just infeasibly large?
>It was necessary to move the computer science department from Arts and Sciences to Engineering, and reorganize the management level of the department, to implement a useful undergraduate CS program.
Why was it necessary? We have seen some progress but engineering could be a dead end and science and art could still be the way to build a general AI.
Organization. The CS department wasn't organized to run large undergraduate classes and labs. They just had a rotating chair. Engineering had deans and structure.
> AI people thought back then that if you thought about thought enough, you could figure out how to mechanize it. That turned out to be a dead end, and the "AI winter" (roughly 1985-2005) followed.
To be fair, maybe we just haven't thought about it enough.
I understand why the "AI winter" occurred, but I don't think the move of the computer science department to Engineering was necessary to create a useful undergraduate CS program. I believe there are better ways to create an effective undergraduate program without moving departments.
Conway's Law is frequently referenced, but the claimed connection is rarely explained.
From Wikipedia:
> The law is, in a strict sense, only about correspondence; it does not state that communication structure is the cause of system structure, merely describes the connection. Different commentators have taken various positions on the direction of causality; that technical design causes the organization to restructure to fit, that the organizational structure dictates the technical design, or both. Conway's law was intended originally as a sociological observation, but many other interpretations are possible.
My take: Conway's Law is vague, squishy, non-causal, misunderstood, and can mean a great many things, some of which are mutually inconsistent.
Yeah I think the main idea is that people who like clean organization, boundaries, tidiness, clear responsibilities, etc, will tend to congregate together. And their organizations will be that way, and so will the things they create together.
People who like experimentation, chaos, prototyping, flat hierarchies, etc, will also tend to congregate together, and the systems that they build will also have those values.
Same for lots of different qualities. It isn’t A->B or B->A, it’s more like A<->B<->C.
I like the idea of Conway's Law, but I also like to be critical of vague theories. Maybe it is asking too much, but I don't see people using it in a testable way. This isn't just selection or survivorship bias; I think the "law" itself is too vague to admit a true experiment, even an associative one. I'm happy to be shown to be wrong... or that I'm missing the point.
Another way of making this point is: what could make Conway's Law totally wrong? What evidence could do that? It seems like shifting sands -- there is always something that fits the "pattern"; the problem is the pattern is not defined a priori: it feels like getting your organization's palm read.
Totally agree. I guess I just like it as a concept because it’s almost a tautology.
Organized people are organized.
People who value short-term experiments will make short-term experiments.
The only part that is palm-reader level is the translation across contexts. And I agree that it’s a terrible law, but probably a decent rule of thumb, that someone who is tidy in one area of their life is likely tidy in others. But yeah, we shouldn’t really look into it any more than that.
It’s like other vague assumptions that our brain makes. Probably correct to some degree sometimes, but not a hard rule by any means.
I never took it to be about organization vs chaos.
For example, if a team is responsible for two modules of an application, they are less likely to create an API, because the modules can communicate directly inside the application (i.e., a monolith).
If the same two modules are cared for by separate teams, then maybe they end up being two separate API services.
In both cases there is organization, just different types of organization.
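As a toy sketch of that difference (all module, function, and endpoint names here are made up for illustration), the same "pricing" capability ends up as an in-process call in the one-team case and as a network hop in the two-team case:

```python
# Hypothetical example: an "orders" module that needs prices.
import json
import urllib.request

def price_of(item_id: str) -> float:
    # One team owns both modules: pricing is just a function next door.
    return {"apple": 1.0, "pen": 2.5}.get(item_id, 0.0)

def place_order_monolith(item_id: str) -> dict:
    # Direct in-process call; no serialization, no network boundary.
    return {"item": item_id, "price": price_of(item_id)}

def place_order_across_teams(item_id: str, pricing_url: str) -> dict:
    # Separate team owns pricing: now it's an HTTP service with JSON,
    # versioned endpoints, and all the ceremony that implies.
    with urllib.request.urlopen(f"{pricing_url}/price/{item_id}") as resp:
        price = json.loads(resp.read())["price"]
    return {"item": item_id, "price": price}
```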
Just to be clear, this content isn't from Stanford per se, the Stanford Encyclopedia of Philosophy is an academic publication that's hosted by Stanford (and was started by and continues to be run by Stanford faculty, but is mostly written by academics elsewhere and has some form of peer-review).
Some SEP articles are extremely high quality, I can't speak to the quality of this one but "philosophy of CS" as construed here feels pretty niche inside philosophy. There's lots of CS-related work being done by people in philosophy departments—algorithmic fairness, for example—that isn't covered by this article.
I would add that independent of quality, any article on the SEP is closer to a literature review than a comprehensive, rigorous treatment of the topic at hand.
While "computer science has as much to do with computers as astronomy does with telescopes", I fear that it's become too inexorably bound to computers and too dependent on its foundation in the maths. As a result CS is seen as sort of a fancier software developer/engineering field and not a field that's about theories of computation.
I think CS has the potential to reframe a huge number of fields and offer a different or unique take on how those fields might approach problems. e.g. a chemical reaction could be reframed as a computation that takes inputs and produces outputs. Biology, economics, finance, psychology, etc. could take on entirely new aspects if viewed through a computational lens. I also think that such a lens could enable higher-order multidisciplinary, interdisciplinary, and ultimately convergent efforts [1] by simplifying the reasoning across disciplines -- defeating barriers while allowing disparate experts to work better together. CS has also developed a number of very powerful tools for encoding, aggregating, abstracting, executing, and validating the collective thoughts of massive numbers of individual contributors -- I'm not sure any other discipline is able to organize millions of atomic computational steps, all provided by people who have never met, test them, build them, and execute them.
Imagine needing to develop a complex system, one that might draw from dozens of disciplines and have millions of atomic steps to achieve, and the entire effort is organized under atomic instructions that each drive some step, but they could come from any discipline. We (Computer Scientists) have already figured out how to manage that complexity fairly reliably, version it, change it, etc. "Build a car" could be source code somewhere in a repository, and when you need to make one you "execute it" and suddenly mines in Brazil start extracting ore, fabs in Korea start making touch screens, and in a little while you get a car. Somebody could decide to adjust how the blinker works, push the change, it gets reviewed, unit tests "run" in some blinker factory somewhere, and now the future versions of those cars get the change.
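Purely as a whimsical sketch of that picture (every supplier, part, and step here is invented for illustration), the "car as source code" idea might look like a versioned spec plus an execute step:

```python
# Whimsical sketch of the "manufacturing as source code" thought experiment.
# All supplier and part names are invented; nothing here is a real system.

CAR_SPEC = {
    "chassis": {"supplier": "ore-to-steel-pipeline", "quantity": 1},
    "touchscreen": {"supplier": "display-fab", "quantity": 1},
    "blinker": {"supplier": "blinker-factory", "version": "2.1"},
}

def execute(spec: dict) -> None:
    # In the thought experiment, "executing" the spec dispatches work
    # orders to real-world facilities instead of calling functions.
    for part, details in spec.items():
        print(f"dispatching order for {part!r} to {details['supplier']}")

if __name__ == "__main__":
    execute(CAR_SPEC)  # bump the blinker version, re-run, get a new car
```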
> I think CS has the potential to reframe a huge number of fields and offer a different or unique take on how those fields might approach problems. e.g. a chemical reaction could be reframed as a computation that takes inputs and produces outputs.
Very well put. I had this sort of dream going into my studies. Unfortunately the prevailing ideology (at my school) seems to be that CS is applied mathematical induction and discrete logic.
Though I do think the dark horse of the 21st century is going to be computational biology, and that will really kick things off in the direction you describe. Imagine git fetch && make-ing an orange tree, where the source is some amniotic goo.
What content is that, beyond what is actually written in the article? Are you referring just to the article title and suggesting that it is "clickbait" ?
> and from the practice of software development and its commercial and industrial deployment. More specifically, the philosophy of computer science considers the ontology and epistemology of computational systems, focusing on problems associated with their specification, programming, implementation, verification and testing.
Well, Stanford doesn't have a clue what Computer Science is and isn't and is apparently trying to drum up admissions with slick, sexy and false advertising.
CS has zero to do with computers (with the exception of the computer scientist themselves, they are the computer), and it certainly isn't programming. It's math, you fools!
This is extreme. Computer architecture is undoubtedly a field of study within CS.
Many parts of computer science are mathematical, but many parts are closer to physics or chemistry than mathematics. You can run experiments and form hypotheses in computer science.
Computer architecture used to be and imho was properly a part of electrical engineering.
“Computer science” was never really scientific. Algorithms, symbolic logic, natural languages, computational complexity, finite model theory and compilers, after that it’s what, grinding gears?
Can you name one significant experimental result in computer science? Nothing comes to my mind.
> Experimentally, we comprehensively compare the behavior of ICL and explicit finetuning based on real tasks to provide empirical evidence that supports our understanding
Well you could say that physics is also math. But conversely, would geometry for example be considered computer science? If not, then computer science is not identical with mathematics.
Something that occurs to me is that often, when creating programs to run on a computer system, we don't know exactly how they will perform in advance, because of the complexity of the hardware and software interrelations within the system. So we run them and measure the results. That's definitely science.
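A minimal sketch of what that looks like in practice, using only the standard library: two ways of doing the same job whose relative speed you'd probably measure rather than try to derive from first principles.

```python
# Minimal sketch: empirically comparing two ways of building a string,
# rather than predicting their performance in advance.
import timeit

def concat_with_plus(n: int) -> str:
    s = ""
    for i in range(n):
        s += str(i)          # repeated concatenation
    return s

def concat_with_join(n: int) -> str:
    return "".join(str(i) for i in range(n))  # single join

if __name__ == "__main__":
    for fn in (concat_with_plus, concat_with_join):
        t = timeit.timeit(lambda: fn(10_000), number=100)
        print(f"{fn.__name__}: {t:.3f}s for 100 runs")
    # The winner (and by how much) depends on the interpreter, allocator,
    # and hardware -- which is exactly why we measure.
```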
CS would not exist without computers. If it were not possible to have a working computer, current CS would be no different from a science fiction novel.
Knowledge isn't some kind of abstract idea that comes from inside our minds and only makes sense within itself. All knowledge is developed alongside, and intertwined with, the physical objects we use to operate on the world. The same goes for math. It is only because of our past and future operations in the real world that math makes any sense.
If we can only describe them in terms of other fields, I'd say math is like law and CS is like physics. In math you can kind of say whatever you want, but in CS it's meaningless unless it's in the context of some sort of computer. I'd say there's also a lot more at stake in the organizational principles you have in CS, whereas in math it really is symbol shunting at the end of the day.
In Germany it's called "Informatics" (Informatik in German). I would have loved a name without "computer" in it, to set it more apart from things like computer/software engineering.
I’m going to presume that downvoters aren’t perceiving the page as complete nonsense.
Which concerns me greatly. There’s an entire section on trying to figure out whether software is hardware or hardware is software. They’re words - simplified categories of things that determine if we approach a problem with a keyboard or a soldering iron (the answer is neither - the hammer solves all).
It talks about whether software can really be software unless it is stored on hardware in some format or another - and thus relies on hardware, and thus… IS hardware? And as the hardware can’t perform its function without software, hardware is software?
No. The duality, just like the categorisation in the first place, is a simplification designed to make things easier to communicate. That is all.
That's why I see this as nonsense. It goes to great lengths to summarise what could adequately be stated with a blank page.
Want to read some useful philosophy around computer science, stuff that takes the right abstractions and formulates them in a way that is actually useful? Read David Deutsch.
> I’m going to presume that downvoters aren’t perceiving the page as complete nonsense.
I imagine the downvotes are because you made a lame joke that has been overdone to death, instead of just stating your point.
> Which concerns me greatly. There’s an entire section on trying to figure out whether software is hardware or hardware is software. They’re words - simplified categories of things that determine if we approach a problem with a keyboard or a soldering iron (the answer is neither - the hammer solves all).
Philosophers (or at least some of them) love ontological questions and putting things into categories, so is it really a surprise that the philosophy of computer science touches on that?
I mean, if your criticism is that philosophy is too much like counting angels on the head of a pin, I think that is a dig at (parts of) philosophy in general, not at this article specifically.
Ah, point taken. I’m new here so I didn’t realise that was done to death.
On the philosophy point - there is a great deal of use in philosophy within computer science and any other field I can think of. But categorising things for the sake of categorising things isn’t philosophy of the subject at hand, it’s a meandering path through nomenclature that ends up going nowhere in particular.
That’s the reason I recommend people like Deutsch for anyone interested in philosophy of computing. He gets it. It’s all about choosing an abstraction for the purpose of exploring an idea, allowing people to push their imaginations to - but not beyond - the theoretical or hypothetical limitations within a framework.
Perhaps it’s because he is a scientist, perhaps the concept of “model a thing for a problem then look at the problem” is inherent to his world view and “model a problem for the sake of modelling it and explore whether or not other people might have thought of this model before” is the philosophical world view, but if that is the case then it turns out I don’t have respect for philosophy.
Ya know, if nothing else, the whole Bing AI debacle has generated the phrase “I have been a good Bing.” Probably getting their name in such a meme-able phrase is the most exposure Bing has gotten… ever? What’s a more memorable Bing event?