First: did they even find some kind of correlation, or is it just a statistical aberration? Second: if they found a real correlation, is their explanation of why that correlation exists correct?
IMHO, their evidence does not support their claims. Where are the controls? What are the sampling biases? What are possible alternative explanations, and why are those not considered? What is the relationship between a person's response to a hypothetical situation and his/her actions in a real one?
They greatly overstate their findings in the discussion section. It's almost painful to read:
> We have shown that people’s moral judgments and decisions depend on the native-ness of the language in which a dilemma is presented, becoming more utilitarian in a foreign language.
> Most likely, a foreign language reduces emotional reactivity, promoting cost-benefit considerations, leading to an increase in utilitarian judgments.
> This discovery has important consequences for our globalized world as many individuals make moral judgments in both native and foreign languages. Immigrants face personal moral dilemmas in a foreign language on a daily basis, sometimes dilemmas with even larger stakes such as when serving as a jury member in a trial.
> Given that what we have discovered is surprising and unintuitive, increasing awareness of the impact of using a foreign language may help us check our decision-making context and make choices that are based on the things that should really matter.
> Foreign languages are used in international, multilingual forums such as the United Nations, the European Union, large investment firms and international corporations in general. Moral choices within these domains can be explained better, and are made more predictable by our discovery.
Wait, what? When was that covered in the rest of the paper?
The premise had more potential. One set of individuals, evaluating a scenario presented in their native tongue, was found to react in a statistically significantly different way from a second set of individuals evaluating the same scenario in a language foreign to them.
The problem is that the people in group one are not the same as those in group two. Statistical controls can mitigate the risk of randomness crashing the party, but there is also an experimental solution: have the same people evaluate similar situations in both a foreign language and their native tongue. That said, something is gained if this inspires a more rigorous study.
The paper (...) is not an example of good science.
Very true, good point. It's quite common to see a study that makes some observations, even rigorously controlled ones (this paper isn't an example of that), and then offers a conclusion that isn't supported by those observations. People untrained in science notice the quality of the observations and think that this supports the conclusion.
In this case, a correlation is established, then, when evidence for a cause-effect relationship is needed, the authors instead begin waving their arms.
That's right. And it's not even clear that they found a correlation that would hold up in different experimental conditions.
The paper is at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjourna... , and unfortunately has the provocative title "Your Morals Depend on Language", instead of a more accurate title like "There's a possibility that some people's morals are correlated with language, but we're not sure and we're also not sure if that's true for everybody because our sampling is insufficient to allow us to draw such a conclusion".
This is my understanding of homoiconicity. There's a good chance that I'm wrong. Comments and clarifications welcome!
-----
Is homoiconicity about concrete syntax? I think the answer is no, but I also think that this confuses lots of people because it's not explicitly stated.
Then is homoiconicity about abstract syntax? Let's say we have some arbitrary language, for which there's an eval function (input: abstract syntax tree, output: some data structure) whose job is to execute/evaluate a program in that language. Homoiconicity, then, is the case where the output data structure is also an abstract syntax tree, or conversely, that an abstract syntax tree is expressed in data structures of the language.
So for homoiconicity, the input and output of your eval function are of the same type (of course the type may be a complicated algebraic sum type etc.). Thus the input and output have the same domain. This means that you can take the output from eval, and throw it back into eval again without getting a type error.
If the input and output types of the eval function don't match, then the language isn't homoiconic.
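A rough sketch of what I mean, using made-up Haskell types (the names and definitions are mine, purely for illustration, not anyone's actual formalization):

    -- Toy types, only to illustrate the "same input/output type" idea.
    data SExpr = Atom String | List [SExpr] deriving Show

    -- Homoiconic case: programs are ordinary language data, and eval's
    -- output is the same kind of data, so it can be fed straight back in.
    evalHomo :: SExpr -> SExpr
    evalHomo e = e  -- trivially the identity; only the type matters here

    -- Non-homoiconic case: the evaluator produces values of a different
    -- type, so feeding its output back into the evaluator is a type error.
    data Ast = Lit Int | Add Ast Ast
    data Value = IntVal Int deriving Show

    evalOther :: Ast -> Value
    evalOther (Lit n)   = IntVal n
    evalOther (Add a b) = case (evalOther a, evalOther b) of
                            (IntVal x, IntVal y) -> IntVal (x + y)

    main :: IO ()
    main = do
      print (evalHomo (evalHomo (List [Atom "+", Atom "1", Atom "2"])))  -- type-checks
      print (evalOther (Add (Lit 1) (Lit 2)))
      -- print (evalOther (evalOther (Lit 1)))  -- would not type-check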
Is this correct?
What I don't get is whether the eval function is available from inside or outside the language, or both.
I'm also unsure of whether homoiconicity is a property of a language, or of an implementation of a language.
I'm no authority on this, but I think you are right and gave the best explanation, even definition, of homoiconicity I have read so far: you can take the output of eval and pass it back as further input to eval. In other words, the input and output of eval must have the same data type.
A good example is JavaScript, often called a "lisp-like" language. In JavaScript you have the function Function(), which takes a String as input and produces a Function. Thus a JavaScript program can create another program and execute it within itself. But it is not homoiconic: the input of Function() is a String, and its output is a Function. You cannot pass the output of Function() as the input to another call of Function().
The most coherent definition I can come up with is that homoiconicity describes the relationship between language syntax and a data representation, such that all valid programs can be parsed by a conforming data parser, and the parser output contains all the information required to compile or execute the program.
Thus, all programs are homoiconic to some data format (if only flat text files). The interesting thing about Lisp is that it's homoiconic to s-expressions, which it also provides powerful tools to manipulate. This paves the way for its macro system which provides compile-time support for processing s-expressions and treating the output as input to the compiler.
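Hand-waving the details, here's that idea as a Haskell-flavored sketch (the types and names are my own, not Lisp's actual machinery): a macro is just a function over the parsed s-expression data, and its output is handed back to the compiler as more program.

    -- Toy s-expression type and a "macro"-style pass over it (illustration only).
    data SExpr = Atom String | List [SExpr] deriving Show

    -- Expand (when test body...) into (if test (progn body...) nil):
    -- plain data manipulation, nothing more.
    expandWhen :: SExpr -> SExpr
    expandWhen (List (Atom "when" : test : body)) =
      List [Atom "if", expandWhen test, List (Atom "progn" : map expandWhen body), Atom "nil"]
    expandWhen (List xs) = List (map expandWhen xs)
    expandWhen atom      = atom

    main :: IO ()
    main = print (expandWhen (List [Atom "when", Atom "ready", List [Atom "launch", Atom "rocket"]]))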
Your definition (already rather loose) covers only one direction. The point of homoiconicity is that code and data are the same, so they must be easily interchangeable both ways, and also at runtime.
Even if you would consider flat text an acceptable data format for homoiconicity, most languages don't meet that criterion. Can a random program in a random language (not expressly written as a quine) easily obtain itself in this representation? Can I easily write a function that, when passed an arbitrary function as an argument, obtains the flat text representation of that function, modifies it, and executes the modified version?
I think the word syntax is ambiguous. Is homoiconicity about concrete syntax? Abstract syntax? Both? Some other kind of syntax? Does parsing have something to do with homoiconicity?
What is your definition of "effects"? I ask because this article[0] and many others[1] seem[2] to use a contradictory definition, in which failure, nondeterminism, state etc. are viewed as effects and monads are a means of implementing those effects.
"Effect" is a more general idea. It's a lens for viewing any kind of impure computation[0]. Monads are a model for effects. You can view them as pure[1] computation in lambda calculus or as defining a region of code which, internally, has impure effects.
So to summarize: effects are a general concept, monads are a particular technology for implementing that concept.
Further complexity ensues when people start talking about the general concept of a monad, which is interesting in its own right but has a more sophisticated relationship with the concept of effects.
[0] Purity is a property of, say, functions. By definition, a function `f` is pure if and only if
const () . f = const ()
which usually means that non-termination is impure as well. The notion of equality you use above can finesse this definition a lot.
[1] As stated in [0], non-termination is an effect, so Haskell monads are impure in that sense. Haskell typically ignores non-termination effects, though, and monads would work more or less just fine even without non-termination. Externally you can think of them as pure.
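To make the "monads model effects purely" point concrete, here's a tiny Haskell sketch (my own toy example, not from the article): the failure effect modeled as ordinary pure values via Maybe.

    import Text.Read (readMaybe)

    -- The "failure" effect, modeled purely: each step returns a plain value
    -- (Nothing or Just x), and the Maybe monad threads the possibility of
    -- failure through the region of code inside the do-block.
    parseAndDivide :: String -> String -> Maybe Int
    parseAndDivide a b = do
      x <- readMaybe a          -- may "fail", but it's just a pure Maybe value
      y <- readMaybe b
      if y == 0 then Nothing else Just (x `div` y)

    main :: IO ()
    main = do
      print (parseAndDivide "10" "2")   -- Just 5
      print (parseAndDivide "10" "0")   -- Nothing
      print (parseAndDivide "ten" "2")  -- Nothing

Nothing impure happens inside parseAndDivide; the "effect" is just a way of describing how the Maybe values compose.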
Ironically, I added that just as I was linking it here, despite the date on that post.
In addition to tel's discussion, what I was really trying to get at is that monads aren't "about" IO effects in particular, they're not about "impurity". In this case the whole thing is pure.
Defining effects at a deep programming language research level can bring a different understanding, where all monads are about effects, but "effects" has a different meaning than most people understand.
An alternative viewpoint: everything's easy after it's been done, but it's hard up until then.
> after centuries of science as a profession, pretty much all the easy stuff has already been done.
But with the benefit of all that has already been done, shouldn't we be able to do things now that were previously impossible? In other words, "easy in 2014" != "easy in 1200".
> It's actually an argument for more cojones in science-- being willing to do bold stuff and explore "crazy" hypotheses.
Maybe doing whatever's easy is the most efficient (by time, money) way to make progress?
I'm not sure... seems like what you say may hold for a while, but eventually you start hitting a more objective sort of hard: things that are hard for human beings to comprehend due to the limitations of our intelligence itself. Beyond that there's probably an even harder hard-- when you start actually running out of new things to discover. Can there really be an infinite number of physical laws, principles, and useful relations? Or at some point have you actually found most of basic physics?
Once you start hitting those, you've either entered a permanent era of diminishing returns or one where you can only really make progress by radically redefining problems, making leaps, or trying wild and crazy ideas in the hopes of unlocking some isolated seam of high-value research that isn't connected to the others in the fitness/value state space graph.
> If I say that an airplane functions on fairy dust and you make an argument about propulsion and lift, and you claim that my assertion is wrong because ...
The only reason "science" would claim that your assertion is wrong is if your explanation doesn't agree with reality.
However, if you collect a bunch of data that you say supports your theory, but your experimental technique or data analysis is not good, then it's perfectly reasonable to point out the problems and say that your result is not supported by your procedure. This is not the same as saying that your assertion is wrong.
Another important concept that you're missing is usefulness: your assertion is unlikely to be useful (this is totally different from whether it's correct). Let's say you want to ensure that planes don't fall out of the air: how would you take advantage of your assertion to do this? (My proxy for usefulness is falsifiability -- un-falsifiable theories often are useless)
"The only reason "science" would claim that your assertion is wrong is if your explanation doesn't agree with reality."
Your statement is a circular reference to the problem of induction. You arbitrarily claim to be able to make accurate assertions of reality while at the same time agreeing that the essence of reality is unknowable.
This reply of yours is full of circular reasoning. Stop using the word good without making your case for righteousness. You can't substitute it with practical or useful either; both imply benefit at a personal level.
Even if an expectation proved to be useful once, remember that the past does not predict the future.
I do agree with you that science doesn't show causation, but I think your interpretation of science is incorrect:
> Therefore, all scientific analysis is unverifiable.
I disagree. It's verified by experiment. (Here I'm using "verify" to mean that contradictions have not (yet) been found by experimental observation).
> Knowledge of the world is completely unjustified.
I disagree. Again, it's justified by experiment. If that's not enough for you, too bad, because that's all we have.
> Our immense presumption of consistency in world phenomena when we really have no basis for asserting uniformity of nature.
I disagree. This is only asserted insofar as it's 1) useful, and 2) justified by observation.
I don't think you understand the significance of scientific results. They don't say "at a fundamental, irreducible level, this is how <some system> operates." Rather, they're saying something more along the lines of "as far as we can tell, this is a description of how <some system> operates, but it's entirely possible that we're wrong. However, we don't have any data that shows that we're wrong (yet), and because this description is useful for understanding <some system>, we're going to use it."
We don't need to know how a system works at a fundamental level, if understanding it at a less-detailed level is useful. I do agree with you that we can't determine "actual causality", but we don't need to, if we can instead find a bunch of really good correlations. The problem is that people find lots of crappy correlations, and/or can't tell the difference between good and crappy correlations.
Your post is simply applying a different definition to the terms I used.
Your reply is a case for righteousness.
I don't think you have considered the implications of the use of such words as "good". I agree with you that experimentation is futile in the absence of ethics.
Why are you presuming common application of the words "good" and "useful"? For example, the science behind the atomic bomb and its usage: was it useful? To whom was it useful, those devastated by the blast or those who set it off?
I urge you to reevaluate your basis for righteousness.
Don't you understand that morality requires certainty?
Philosophically speaking, you can never verify anything through experimentation: in a controlled experiment you use instruments and tools you need to trust (how do you know the microscope isn't lying?), and if you keep asking whether anything is truly confirmed, you eventually end up at the axioms of the scientific field, which are never verified, just assumed true.
You can prove negatives though, so that's something :).
> Philosophically speaking you can never verify anything through experimentation
This depends on what you mean by "verify". You seem to mean "prove that something is absolutely true".
That is not what I meant. I should have clarified that I meant something along the lines of "we've collected a bunch of data that don't contradict <something>".
I've edited my meaning into the post because I don't want my stupid equivocation to distract from more significant issues.
Go further: reproducible experiment can tease out details to many decimal places of likelihood. Something can be known so thoroughly that we can make statements like "this is certain to behave as predicted millions or billions of times more often than not", which is pretty close to certain knowledge.
I'm not sure that I agree. I think that once you get into the realm of "absolute truth" (which is what I'm interpreting your post as saying -- apologies if I'm mistaken), you've left science behind. IMHO, science cannot (and does not aim to) deliver certain knowledge. Instead, it produces useful approximations.
There is no quantity of correlation that promotes one iota of certainty or probability.
The issue, Matt, is that many people, like this gentleman here, actually believe that scientific experimentation offers explanations.
How many times have we heard "there must be a rational explanation", when in fact a rational explanation has never been provided for any phenomenon?
We can't invoke degrees when the extreme principles imply that no such claims can be made. There is an unknown amount of probability given any proposition.
The only form of falsifiability we're capable of is in whether or not a person is conforming to the traditional use of language. If I say that 2 + 2 = 5, then I am wrong, since the rules for mathematical language are understood with certainty based on our tradition.
If I claimed that a plane IS powered by fairy dust and yet it is NOT powered by fairy dust, then I am technically wrong since I abused the use of language.
> it's fair to say that experiment shows causation
No, that experiment doesn't show causation.
> I do think 'perhaps' we could use correlation more to assume causation. I think we are too cautious
Why do you think that? Is it because you think that science is moving too slowly (thus wasting time and money)? Is it because you think that good results are mistakenly discarded?
"using correlation more to assume causation" should produce results more quickly, both correct and incorrect. Do you understand the costs of incorrect results? First, incorrect results can have negative consequences for the consumers: for example, drugs that have harmful impacts and no beneficial ones for the people taking them (in addition to the cost of the drug, and the opportunity cost of not doing something else known to be beneficial). Second, time is wasted following up incorrect results and work based on incorrect results would be worthless. Third, the incorrect results would have to be overturned (and this communicated to everybody who had taken them as correct).
I think deciding whether to "use correlation more to assume causation" needs to take into account the costs and benefits of incorrect results, as well as the costs and benefits of correct results. Do you have any data on this? (unfortunately, I do not)
> certainly the haters always bring in correlation to stop science articles they think don't follow their beliefs
Can you support this accusation with examples of it happening to articles that actually do a good job of showing their result is something more than a correlation? Or are you saying that this happened to articles that just present a correlation?
Wow, I had no idea that such a thing existed. That is far more feature-rich. Thanks for posting it.
However, one advantage (or disadvantage, depending on your P.O.V. :) ) of the Clojalyzer is that it's not tied to Clojars. So, for instance, if you have a Java project with a single Clojure script in it, you can Clojalyze that. While it is tied to GitHub, that's not fundamental to how it works, and I plan on improving it soon to also allow copy/paste and fetching from other URLs (at the very least).