
"what works for beginners is not necessarily what’s best for experts.... this study by itself isn’t proof that it’s objectively bad for the people who use it heavily."

More than that. Three languages were chosen. One was based on what previous studies had shown works, and one sounds like it was basically a ROT13'ed Perl (only with symbols); from what some were saying on Reddit, it was an overcomplicated dialect of Perl (though IMHO that would change little). Then, in a study in which they pitted Perl against a language designed to do well in studies, the language designed to do well in studies prevailed.

Even though I do question whether there may be a wee whiff of tautology about this whole exercise, this is interesting because it shows that studies may be able to distinguish between the features of two languages w.r.t. how easily they can be taught and learned. This is valuable stuff, worth millions upon millions of dollars to society. What it says about Perl is simply that it was not designed to do well in such studies, quelle surprise, and in particular it doesn't say anything at all about Perl relative to any other real language. $YOUR_FAVORITE_LANGUAGE may well have fared worse, and $YOUR_MOST_HATED_LANGUAGE may well have fared much better.

It is a true shame that the underlying valuable message of the study is being missed by so many people in a tearing hurry to go "hurr hurr, perl t3h suckz, hurr hurr".



"Then, in a study in which they pitted Perl against a language designed to do well in studies, the language designed to do well in studies prevailed."

No, it's not surprising that Perl performed worse than Quorum. The surprise is that Perl did as poorly as Randomo, which shouldn't make sense given that Randomo's operators and keywords are gibberish.

I generally agree with what you're saying, but the point of the study is not that the well-designed language did better. Who wouldn't expect that? There is no tautology once you realize that the result they are focusing on is the near-even performance with Randomo, not the failure to match or exceed Quorum.

It's pretty clear that they aren't engaging in perl bashing, but trying to provide data that backs up their experiential findings as teachers.


From what I've understood, Randomo and Quorum share the same syntactic structure; only the keywords and operators were changed. Therefore, Randomo is much better structured than what we could expect from a truly randomly generated language.

So, while it still looks pretty bad for Perl, I'd say it's not that bad.
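To make the "same structure, different keywords" idea concrete, here is a sketch of my own (the Quorum-like and Randomo-like spellings are hypothetical illustrations, not quoted from the paper); the Perl block at the end is runnable:

    # Illustration: the same assignment in three spellings.
    #
    #   Quorum-like:   integer x = 5
    #   Randomo-like:  blorp x @ 5    (identical shape, random keyword/operator)
    #
    # Runnable Perl equivalent:
    use strict;
    use warnings;
    my $x = 5;
    print "$x\n";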


It doesn't look bad for Perl at all, because the entire setup of this study is flawed to the point of making the conclusion a non sequitur.


It always amazes me when people can dismiss academic papers -- the result of (at least) several weeks of work -- with a glib comment uttered anonymously, and no support given.

Have you read the study? If not, you can find it here: http://www.cs.siue.edu/~astefik/papers/StefikPlateau2011.pdf


I have read the study, which is why I made that comment. In fact, I've even contacted the author to get more details about how they selected the code samples that served as learning aids and what all the other code samples looked like.

Additionally, I am not anonymous. Google my name and you will find out all about me, including, if you look for longer than a minute, my full name and address.

Lastly, there was no need to give supporting arguments in that comment by repeating what others and I have said copiously all over the comments here. This was my main post on the subject. Feel free to argue it: http://news.ycombinator.com/item?id=3153249


"What it says about Perl is simply that it was not designed to do well in such studies..."

I disagree. If it says anything, it's that if you give bad code to people who don't know anything about programming, they'll have trouble writing good code.


If you give a noob the task of writing a program in a language the noob has never seen before, it makes sense that the noob would write code that looks like random gibberish to a Perl programmer. Therefore, it's not surprising that the random gibberish code would resemble "a language designed by chance" more than it would resemble Perl.


The point he was making wasn't about the output, but about the fact that the learning material given to the test subjects was the worst perl imaginable.
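For what it's worth, here is a hedged illustration of the gap he's pointing at (my own example, not the study's actual teaching material): the same task written in clunky, C-style Perl and then in idiomatic Perl.

    use strict;
    use warnings;

    my @words = qw(foo bar baz);

    # Clunky: index arithmetic and manual concatenation.
    my $joined1 = '';
    for (my $i = 0; $i < scalar(@words); $i++) {
        $joined1 .= $words[$i];
        $joined1 .= ',' if $i < $#words;
    }

    # Idiomatic: join does the same thing in one line.
    my $joined2 = join ',', @words;

    print "$joined1\n$joined2\n";

A learner shown only the first style could easily come away thinking that's what Perl is.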


There were 18 frigging participants in the study, for chrissake! You can't get any meaningful statistical data from such a ridiculously small sample!

The methodology may be sound, and they may be good students who deserve a good grade, but there are absolutely zero conclusions to draw from this paper, except that it may be valuable to run a larger study.
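As a back-of-the-envelope sketch (my own arithmetic, not anything from the paper), here is how wide a 95% confidence interval is for an accuracy rate measured on 18 participants, using the usual normal approximation 1.96 * sqrt(p(1-p)/n):

    # Rough half-width of a 95% CI for a proportion at n = 18.
    use strict;
    use warnings;

    my $n = 18;
    for my $p (0.5, 0.7, 0.9) {
        my $half_width = 1.96 * sqrt($p * (1 - $p) / $n);
        printf "p = %.1f -> 95%% CI roughly +/- %.0f percentage points\n",
            $p, 100 * $half_width;
    }

That comes out to roughly +/- 14 to 23 percentage points, which is exactly the kind of noise that makes fine distinctions between languages hard to trust.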



