Philosophy is the birthplace of the sciences, which is why most philosophers are dealing with some kind of metaphysics. Yes, there are some philosophers who continue their work after developing it from a metaphysics into a physics, but that's sort of beside the point. The point of philosophy is to create the framework for empirical research.
That you would deride Wittgenstein on a math/CS forum, when he is literally the person who thought up the concept of truth tables, seems quite egregious.
Yes, Wittgenstein is one of the most frustrating philosophers to read (I know, I took a class on his work), but his impact on the development of computer science, as one of the main people trying to harness the logic of thought/language, seems obvious to me.
> That you would deride Wittgenstein on a math/CS forum, when he is literally the person who thought up the concept of truth tables, seems quite egregious.
For extra lulz you should ask him about which of the many meanings of the word "is" he is using here, for each instance of the word:
> That's because a lot of philosophy is eminently pooh-pooh-able.
Of all places, you'd think people on a hacker forum would know the difference between is and equals.
Wittgenstein's discussion of what _all_ games have in common (nothing, really) led him to the notion of "family resemblances".
Margaret Masterman, who was Wittgenstein's student in Cambridge, may have passed some of that on to her student Karen Spärck Jones -- later of TF-IDF fame (Spärck Jones, 1972; [1]) -- and Karen's Ph.D. was on semantic clustering, which was published years later as a book [2]. Her husband Roger Needham published a paper about the notion of a "clump" theory of meaning [3]. So it seems Wittgenstein put some precursor ideas to clustering (linkage?) out in the Cambridge air for others to pick up...
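For anyone unfamiliar with the TF-IDF weighting mentioned above, here's a minimal illustrative sketch in the spirit of Spärck Jones (1972): terms that appear in fewer documents get a higher weight. (The toy corpus and function name are mine, not from any of the cited works.)

```python
import math
from collections import Counter

# Toy corpus: each document is a list of tokens.
docs = [
    ["game", "language", "meaning"],
    ["game", "board", "pieces"],
    ["meaning", "use", "language"],
]

def tf_idf(term: str, doc: list[str], docs: list[list[str]]) -> float:
    tf = Counter(doc)[term]            # raw term frequency in this document
    df = sum(term in d for d in docs)  # number of documents containing the term
    idf = math.log(len(docs) / df)     # rarer terms get a larger idf
    return tf * idf

# "board" appears in only one document, so it outweighs "game",
# which appears in two.
print(tf_idf("board", docs[1], docs))
print(tf_idf("game", docs[1], docs))
```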
> That you would deride Wittgenstein on a math/CS forum, when he is literally the person who thought up the concept of truth tables, seems quite egregious.
That would be Charles Peirce, in the nineteenth century, not Wittgenstein.
Also, his philosophical works might be bad even if he had invented truth tables: it's not as if the truth table was hard to discover in the way that, e.g., Newtonian mechanics was.
Peirce, apparently, did develop an equivalent form of the truth table earlier, but it would be to misunderstand the history of computer science to attribute truth tables to Peirce. Just because someone had the idea first doesn't mean that work is the source of the idea going forward.
I think it's pretty clear that Wittgenstein's truth tables are those that guided the development of computer science.
>In a manuscript of 1893, in the context of his study of the truth-functional analysis of propositions and proofs and his continuing efforts at defining and understanding the nature of logical inference, and against the background of his mathematical work in matrix theory in algebra, Charles Peirce presented a truth table which displayed in matrix form the definition of his most fundamental connective, that of illation, which is equivalent to the truth-functional definition of material implication. Peirce’s matrix is exactly equivalent to that for material implication discovered by Shosky that is attributable to Bertrand Russell and has been dated as originating in 1912. Thus, Peirce’s table of 1893 may be considered to be the earliest known instance of a truth table device in the familiar form which is attributable to an identifiable author, and antedates not only the tables of Post, Wittgenstein, and Łukasiewicz of 1920-22, but Russell’s table of 1912 and also Peirce’s previously identified tables for trivalent logic traceable to 1902.
>But even if that conclusion is challenged, it is now clear that Russell understood and used the truth-table technique and the truth-table device. By 1910, Russell had already demonstrated a well-documented understanding of the truth-table technique in his work on Principia Mathematica. Now, it would seem that by 1912, and surely by 1914, Russell understood, and used, the truth-table device. Of course, the combination of logical conception and logical engineering by Russell in his use of truth tables is the culmination of work by Boole and Frege, who were closely studied by Russell. Wittgenstein and Post still deserve recognition for realizing the value and power of the truth-table device. But Russell also deserves some recognition on this topic, as part of this pantheon of logicians.
>In this paper I have shown that neither the truth-table technique nor the truth-table device was "invented" by Wittgenstein or Post in 1921-22. The truth-table technique may originally be a product of Philo's mind, but it was clearly in use by Boole, Frege, and Whitehead and Russell. The truth-table device is found in use by Wittgenstein in 1912, perhaps with some collaboration from Russell. Russell used the truth-table technique at Harvard in 1914 and in London in 1918. So the truth-table technique and the truth-table device both predate the early 1920s.
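Incidentally, the "truth-table device" under dispute is small enough to enumerate directly. A minimal Python sketch of the table for material implication (Peirce's "illation"); the function name is mine, not from any of the historical sources:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Enumerate all assignments of two propositional variables.
rows = [(p, q, implies(p, q)) for p, q in product([True, False], repeat=2)]

for p, q, r in rows:
    print(f"{p!s:5}  {q!s:5}  ->  {r!s}")
```

Running it prints the four rows of the familiar device, with a single False in the row where the antecedent holds and the consequent fails.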
Sure, just as alchemy is the birthplace of chemistry. That doesn't mean we should still be studying alchemy for anything other than its historical significance.
No, this is an incorrect assessment. You have framed it as post hoc, but the point is that the philosophy is the development of ideas like, say, atomic theory in chemistry, or germ theory in medicine. Theories that define the framework for study.
New sciences happen very infrequently, but they happen, and when they do, they are typically created in philosophy departments. Computer science is the most recent example, and it came in large part from philosophy departments. Before that it was psychology.
Alchemy is exactly a framework-free type of empiricism. The point of philosophy, and philosophy that happens in other sciences, is that we live inside of a model, and we interact with that model, and change the model while we are doing empirical research using the rules of the model. This is a type of reflexive framework development, where metaphysical ideas become obvious physics as people propose changes to the standard model we use.
This dance between induction and deduction is exactly the field of philosophy.
Incorrect. Philosophers write about, "organize", and "codify" what people are doing in the trenches through trial and error. To say that the philosophers created the science is like saying that by dressing a man in a suit you have created life.
Philosophers of science have given us some axiomatic statements about the nature of deduction and the limits of inquiry, but research is a trade far more nuanced than a philosophical framework, with cultural transmission of domain specific problem solving strategies and learned intuitive guides. Those things are inherently a-philosophical, and also probably the most important of all.
I still don't understand your point. If you have some disagreement with certain philosophy of science writers, I can certainly understand, but my point is that philosophy is exactly the interplay of the deductive frameworks and the induction "in the trenches", as you say. The process of philosophy is the interplay between the two, as neither aspect of that dichotomy can be demonstrated on its own.
Empiricism cannot be justified without a reason-based framework, and that deductive framework is by definition arbitrary, and needs to map to empirical findings.
This is the basis of all superstition. Our crops were doing poorly, but then I burned a chicken and now they're doing well. Let's keep burning chickens.
You have to have an explanation of what counts as a valid empirical relation. And that depends on a whole lot of things worked out by philosophers and scientists.
Including, for example, that you can safely ignore correlations that can't have a basis in physical laws. Or that you can write certain symbols on a blackboard and compute the probability that your correlation is worth spending more time on.
I think we have different definitions of empiricism. Empiricism is the belief that sensory experience is necessary for knowledge, not that it is sufficient. It stands in contrast with the belief that knowledge can be justified by pure logic without reference to any sensory experience.
Your claim was that empiricism justifies itself, I was explaining why that's not the case.
Your definition is overly broad IMO. Empiricism is usually taken to be the belief that the sole source and justification of knowledge is ultimately sensory data. See e.g. Wikipedia or Britannica.
It stands in contrast with rationalism, sure. But more importantly in the context of your comment and the parent, empiricism also stands in contrast to the only sane POV (IMO), which is that knowledge is a combination of empirical and rational sources, as pointed out for example by Kant.
"empiricism is an epistemological view which holds that true knowledge or justification comes only or primarily from sensory experience and empirical evidence."
Note the phrase "or primarily".
> knowledge is combination of empirical and rational sources
The "or primarily" hedge is all I need to refute you (or at least to show that we are not actually in disagreement). However, rationality can itself be justified by empiricism if the Church-Turing thesis is correct, which I believe it is.
I suspect that you're using empiricism somewhat more broadly than philosophers do, and that you may include things like science as a whole. Science is more traditionally (IMO) considered to be a recursive process with rational (i.e. theory) and empirical (i.e. sense data) steps. In science, sense data provides a constraint on theory, and theory (together with psychology) provides a constraint on the raw sense data (in the same way, for example, that an AI provides structure to the raw input vectors).
If that's something along the lines of what you believe, then sure I'd say we're on the same page and we're not actually in disagreement.
But I wouldn't describe that belief system as saying empiricism is self-justifying. Empiricism in practice goes off the rails pretty quickly with things like subjective idealism and the belief that there are no external entities, only sense data, etc.
The thing we have now, where theory and experience mutually constrain each other, seems obviously correct to me in the academic sense (e.g. I think that Kant was right that the mind provides structure to raw sensory data). I also think it's true in the broader sense (e.g. that the applied sciences tend to move more slowly on their own and advance much more rapidly when combined with theory).
> Science is more traditionally IMO considered to be a recursive process with rational (i.e. theory) and empirical (i.e. sense data) steps.
That's true, but like so many traditions, it's wrong. Science can be justified entirely in terms of empiricism.
To be clear, it is not at all obvious that this is possible (it relies on the Church-Turing thesis), and so people can be forgiven for making this mistake. But it's still a mistake, and it's grounded in ignorance.
> Empiricism in practice goes off the rails pretty quickly with things like subjective idealism and the belief that there are no external entities, only sense data etc.
Unless your sensory experiences are radically different from mine, then you have to concede that those experiences behave, to a very close approximation, as if external entities exist, and a good explanation for that is that external entities do in fact exist.
Ironically, that explanation turns out to be wrong, but to see that you need to dive into quantum mechanics. And the justification for quantum mechanics grounds out in the sensory experience of perceiving the results of experiments. There is no escape from empiricism in the world we live in.
I do think we must have pretty different sensory experiences then, because my awareness of sensory data is entirely of processed data and nothing at all like the raw output of my retina (which I personally don't have access to, although we can access it in animals if we cut their heads open). My ears have a high-pitched ring constantly, and although it's indistinguishable from a sensory perspective from hearing a high-pitched noise out in the world, my brain has somehow never attributed it to an outside object. Similarly with ocular migraines.
I think a reasonable test would be whether you could build a useful embedded device out of only a sensor and no logic to interpret the output of that sensor.
The thing that lets us even talk meaningfully about computable functions is that they have a logical description. We wouldn't be able to say anything meaningful about the entire class of such functions if we were restricted to those we have experience of. It's not even clear what it would mean to universally quantify over a class of logical structures in a purely empirical world. We don't even need to get as fancy as computable functions. We couldn't even meaningfully talk about the class of natural numbers.
And the thing that lets us translate little dots on some quantum mechanical detector is that we have a mathematical theory that predicts that dots will look one way if such and such is the case. And they will look another way if such and such is the case. If sense data are all you need, then you have to give an account of why a trained experimental physicist can see different things in those dots than a toddler does.
> my awareness of sensory data is entirely of processed data and nothing at all like the raw output of my retina
Yes, obviously.
> I think a reasonable test would be whether you could build a useful embedded device out of only a sensor and no logic to interpret the output of that sensor.
A more useful test would be if you could build a useful embedded device with no sensors.
> The thing that lets us even talk meaningfully about computable functions is that they have a logical description.
But that logical description is a description of a physical process. The whole notion of computability is inherently physical. Even the lambda calculus is a description of a process of symbol-manipulation, and symbols are physical things.
> If sense data are all you need
That's a straw man. Empiricism does not claim that sense data is all you need. It is the claim that sense data are the primary source of knowledge. All knowledge -- even mathematical knowledge -- starts with sense data, is ultimately grounded in sense data, but is obviously not just raw sense data.
Nobody is disputing that sense data are necessary. The standard rationalist position is, to quote Leibniz since it's easily accessible on Wikipedia
> The senses, although they are necessary for all our actual knowledge, are not sufficient to give us the whole of it, since the senses never give anything but instances, that is to say particular or individual truths. Now all the instances which confirm a general truth, however numerous they may be, are not sufficient to establish the universal necessity of this same truth, for it does not follow that what happened before will happen in the same way again.
So sense data are necessary but not sufficient, especially for logic. If you believe that empiricism is self-justifying, then it does seem like you have to start with the manifold of sense data and somehow build up logic from it. Otherwise you're going to end up #include-ing logic somewhere that may not be obvious, and you've landed in the position you were originally arguing against: that you need a reason-based framework to justify empiricism.
A lot of smart people have tried to work out a version of empiricism that attempts to build up logic from sense data -- for example, Russell's work and early Wittgenstein. But it always ends up getting pretty crazy.
The easiest way, IMO, to see why it won't work to build up human knowledge from sense data is to realize that the human knowledge system is a distributed system. Empiricists are working with the wrong unit of analysis. The words we have for thought are things like "logic" from logos or "word" and "rational" from ratio or "account" or "reckoning". They're both words that are fundamentally about multiple parties (in CS we can think of them as nodes) doing computation together. Some of the oldest ideas we have about reasoning are dialogs, and that basic idea persists to this day with things like game theoretic semantics. There seems to be something fundamentally "distributed systems" about thought and knowledge.
The individual nodes and their sensors aren't the whole story or even the bulk of the story. That's even more true when you consider that how we interpret sense data depends on billions of years of evolution. Yes we operate on sense data, but we do so using a logical structure that heavily constrains what we see, hear, taste and so forth.
Sense data plays a role, as Leibniz says, just as network cards play an important role in the Google data center. But I don't think anyone would really argue that the value of any large knowledge base is ultimately grounded in network packets.
> So sense data are necessary but not sufficient, especially for logic.
But they are sufficient.
> it does not follow that what happened before will happen in the same way again.
Yes, that's true. It does not follow that sense data are insufficient. All that follows is that induction is not a valid mode of reasoning (which is true -- it isn't).
> The easiest way, IMO, to see why it won't work to build up human knowledge from sense data is to realize that the human knowledge system is a distributed system.
Nonsense. Distributed systems can do logic. Being distributed is completely irrelevant.
> Mathematical knowledge can be justified by pure logic without reference to any sensory experience.
No, I don't think it can. I challenge you to give me an example of mathematical knowledge that you can justify without reference to any sensory experience, keeping in mind that reading and hearing people talk are both sensory experiences.
It doesn't even have to be math. I'll bet you can't even define the distinction between "true" and "false" without reference to sensory experience.
If you try burning a chicken repeatedly when crops are doing poorly, and it works every time, that hypothesis is supported by the evidence. There's obviously no chain of direct causality, but that is immaterial.
This reminds me of scientists who completely shit on astrology for lacking any predictive power. At the very least it has some predictive power because people believe in it and subtly conform to its predictions. Beyond that, there are obviously cycles in the universe, and we know biological cycles synchronize with natural ones (melatonin is an easy example, but animals synchronize to year-level cycles as well). Are the stars causing the correlations, and are all the correlations astrology talks about present? Clearly not, but there is a mountain of poorly controlled empirical observations hinting that there is more to human character than we've laid out, which science is content to shit on because it can't do anything else without looking bad.
The Black Swan problem (the problem of induction) prevents empiricism (induction) from being justified on its own. You end up with solipsism (turkey problem, grue problem, etc.)
You can call someone's reasoning circular all you want, but at the end of the day there's what you've accomplished, and the score is philosopher 0, engineer 1.
True, it's basically indisputable that engineers are better at engineering than philosophers are. But that seems orthogonal to the issues raised in the problem of induction.
My thrust was more that people are out doing stuff in the world, and for the most part philosophers don't do anything other than say things about what people are already doing. Engineering was an empirical science long before it was a deductive and analytic one.
Philosophers make arguments for/against claims, I don't see why that doesn't count as doing something. I mean, maybe you're complaining that they're not building rockets or feeding the poor, but philosophers are far from the only ones who don't do these things.
Making arguments for/against claims can be a noble pursuit, and mathematicians have done it to great benefit for humanity. I suspect the sum total of the benefit from philosophers' claims is much lower.
Maybe so, but I don't see why every discipline needs to be evaluated purely on "benefit for humanity" in the sense of scientific or technological progress, if that's what you're implying. There's more to humanity than just scientific/technological progress.
I mean, people love musicians for making interesting "what if" statements to music, and I'm not shitting on that. The difference is that music makes people happy and makes the time go faster, while most philosophy makes people confused for no good reason, is boring and even when "understood" doesn't provide any tangible benefit to people's lives.
Don't get me wrong, there's a lot of good "philosophy" out there, but it absolutely doesn't need to be its own academic discipline. It could just be a genre of nonfiction: "fun thought experiments that will blow your mind".
>philosophy makes people confused for no good reason, is boring and even when "understood" doesn't provide any tangible benefit to people's lives.
I mean, maybe this is true for some people, but there are a lot of people who don't get confused and who find it interesting and enjoyable.
>Don't get me wrong, there's a lot of good "philosophy" out there, but it absolutely doesn't need to be its own academic discipline, it could just be a genre of nonfiction - "fun thought experiments that will blow your mind"
Philosophers are particularly interested in reasoning about whether certain claims are true or false though, not just saying "what if". I mean, if you want the literature and philosophy departments to nominally merge together and for philosophers to continue doing what they're doing, that's fine I suppose, though there are institutional reasons why that's probably not going to happen.
Except that there are practically journals just for arguing about what one particular German guy who has been dead for 150 years meant when he said a thing. That is not a sign of clear writing.
I'm not sure how that contradicts what I've said or why this means we should abolish philosophy departments. And for what it's worth, philosophy today tends to be clearer (to us at least), e.g. Dennett's work.
But Popper wasn't saying that empiricism could be justified empirically, was he?
In his own words, in the section on the problem of induction in The Logic of Scientific Discovery:
"My own view is that the various difficulties of inductive logic here sketched are insurmountable. So also, I fear, are those inherent in the doctrine, so widely current today, that inductive inference, although not ‘strictly valid’, can attain some degree of ‘reliability’ or of ‘probability’."
He then goes on to provide the (now contentious) falsification-based view of science after conceding that inductivism can't work.
Because the only reason you have to believe anything at all is that you perceive things. And the things that you perceive probably lead you to believe things like that you are a human being, that you exist in a particular subset of three-dimensional space, that there are other humans that exist in other subsets of that same three-dimensional space, that these other humans move around and do things that can reasonably be described as "saying things" and "writing things", and that the things that these other humans say and write correspond to circumstances in this three-dimensional space that you occupy so that it makes sense, at least in some circumstances, to label these sayings and writings with labels like "true" and "false" to indicate whether the way they correspond with circumstances is a positive or negative correlation, and if you get these labels right it can help you survive and flourish. Likewise, if you get them wrong (and that includes denying what I have just told you) it will greatly diminish your prospects of survival, and evolution will take care of the rest. In short, it isn't circular because if you try to pick a fight with reality, reality will win.
I see where you're coming from, but none of this really means that justifying inductive reasoning through inductive reasoning isn't circular.
Hume himself thinks that inductive reasoning is grounded in "custom or habit", and thinks it's rational to proceed this way---a solution you'd probably agree with.
I suppose the confusion still remains about how empiricism can be self-justifying. You've laid out a case for why empirical reasoning is pragmatic, fine, but that doesn't mean that empirical reasoning is grounded in empirical reasoning, even if empirical reasoning is in fact rational. Whether you go with a Humean-style solution or a Popperian solution, it's still the case that justifying empirical reasoning through empirical reasoning is circular.
I'm disputing something very specific. I'm not disputing that empirical reasoning is rational. What I'm disputing is that empirical reasoning is justified by empirical reasoning. Whether the reasoning is circular is a logical question, not one settled by appeal to actual reality. Like, I'm just saying that this doesn't make sense:
1: If you try to pick a fight with reality, reality will win. (Empirical reasoning is evolutionary useful, etc.)
2: Thus, empirical reasoning is justified by empirical reasoning.
2 doesn't follow from 1. I accept 1, and I accept the rationality of empirical reasoning, but I don't accept 2.
You've actually moved the goal posts here. The original claim was: empiricism can be justified empirically. But "empiricism" and "empirical reasoning" are not synonyms.
(You also threw in induction at some point, which is just a red herring.)
So let me try this again: to quote Wikipedia, empiricism is an epistemological view which holds that true knowledge or justification comes only or primarily from sensory experience and empirical evidence. This can be justified empirically (I claim) by observing (empirically!) that people who do not base their actions on sensory experience will do stupid things like walk into walls or fall off cliffs.
If you want to dispute this, tell me how you would define the words "true" and "false" without making any reference to sensory experience.
No, I'm not confusing them. At worst I'm using evolutionary epistemology to justify empiricism. And I'm only doing that because I'm presenting an informal argument. I can justify empiricism without resorting to evolution. But invoking evolution has more emotional appeal to entities that have evolved and so presumably don't have to be persuaded of the value of survival.