I believe that a classical generated parser generally has better performance potential: there is less function calling, less string passing (though this can be avoided in combinators too) and more opportunity for optimization when you have the whole grammar AST in hand.
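To make the contrast concrete, here is a rough, hypothetical sketch in plain JavaScript (neither PEG.js output nor any real combinator library):

    // Combinator style: the grammar is a tree of closures, so matching
    // a number costs one function call per character.
    function range(lo, hi) {
      return function (input, pos) {
        var c = input[pos];
        return c !== undefined && c >= lo && c <= hi ? pos + 1 : -1;
      };
    }
    function many1(p) {
      return function (input, pos) {
        var next = p(input, pos);
        if (next === -1) return -1;
        while (next !== -1) { pos = next; next = p(input, pos); }
        return pos;
      };
    }
    var number = many1(range("0", "9"));

    // Generated style: having seen the whole rule, a generator can emit
    // one specialized function with a plain loop and no extra calls.
    function parse_number(input, pos) {
      var start = pos;
      while (input[pos] >= "0" && input[pos] <= "9") pos++;
      return pos > start ? pos : -1;
    }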
(Hmm, I must definitely try to port it to PEG.js :-)
As for Ruby, I am not sure it is even possible to create a PEG grammar for it. Its lexer and parser are heavily interconnected and there is a lot of state involved. Even if such a grammar could be created, I don't think it would be pretty.
I knew somebody would raise this point :-) You are right that the evaluation order would be wrong for "-" and "/": with right-recursive rules, "5-2-1" parses as 5-(2-1) = 4 instead of (5-2)-1 = 2.
I will probably implement support for left recursion in PEG.js - it is possible (see e.g. http://www.vpri.org/pdf/tr2007002_packrat.pdf). After that, the grammar could be rewritten to evaluate in the correct order.
(Another alternative - which works right now - is to change the parsing expressions to something like "additive ([+-] additive)*" and deal with the whole chain of operations with the same priority at once. I didn't use this in the example as I wanted it to be as simple as possible.)
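For instance, a rule along these lines handles the whole chain at once and folds it left-to-right. This is a hedged sketch in PEG.js-style syntax with a JavaScript action (the exact action syntax may differ between versions, and a multiplicative rule returning a number is assumed):

    additive
      = head:multiplicative tail:([+-] multiplicative)* {
          // Fold the flat list of operations from the left,
          // so that "5-2-1" evaluates as (5-2)-1 = 2.
          var result = head;
          for (var i = 0; i < tail.length; i++) {
            if (tail[i][0] === "+") result += tail[i][1];
            else                    result -= tail[i][1];
          }
          return result;
        }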
Don't use that paper! I tried to use its parsing technique while working on a parser for use at the CME, and it caused me weeks of headaches.
First, you'll note the algorithm is extremely complicated--nothing like the simple top-down algorithm that makes PEGs so attractive. Not only is it complicated, it also misses basic refactoring opportunities--some logic is duplicated across functions, and the functions interact in ugly ways.
Second, it doesn't even handle left recursion correctly. Throw a ruleset like this at their parser:

    A -> B "a"
    B -> C "b"
    C -> B / A / "c"

and it will explode into a million little pieces, because the authors did not account for a recursive rule having multiple recursion points. Don't even try something like
Interesting, thanks for the warning. I only skimmed the paper today and noted that the algorithm seems complex, but I didn't attempt to understand it in detail.
What was your final result? Did you implement left recursion in the way the paper describes, invent or find some other way, or abandon the whole idea?
The approach I'm working on uses the same "growing the seed" idea, but in a different way.
The idea is that memo entries remember which left-recursive results they depend on. That way, when a left-recursive rule produces a result that depends on itself, the parser knows this match can possibly be "grown" through repeated iterations. That's a basic sketch of the idea. Performance properties remain the same for left-recursive rules that are not interdependent. I don't really know what they are like for large numbers of interdependent left-recursive rules--but if you have a language like that, you are better off with an Earley or GLR parser.
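For reference, the basic seed-growing step for direct left recursion (without the dependency tracking described above) looks roughly like this--a hypothetical sketch, with all names (memo, evalRule, endPos) made up for illustration:

    var memo = {};  // keyed by rule name and input position

    function applyRule(rule, pos) {
      var key = rule.name + "@" + pos;
      if (key in memo) return memo[key];

      // Plant a failure as the "seed", so the recursive call to this
      // rule fails and the non-recursive alternative gets to match.
      memo[key] = { failed: true };
      var result = evalRule(rule, pos);  // assumed: runs the rule body

      // Grow the seed: re-run the rule against the memoized result,
      // keeping the new result as long as it consumes more input.
      while (!result.failed) {
        memo[key] = result;
        var next = evalRule(rule, pos);
        if (next.failed || next.endPos <= result.endPos) break;
        result = next;
      }
      return memo[key];
    }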
I'm still working on it. It passes a battery of test cases, two of which I posted above, but I'm not 100% confident in it just yet. Also, as posted above, I'm trying to get permission from the higher-ups to release the code into the wild.
I began working on the master's thesis when there was no mod_rails; Rails hosting was less common and more expensive than PHP hosting. So the other commenters in this thread are right about the "deployability" reason.
By the time the work was finished, I already saw that the real-world usefulness of the compiler was minimal. This is why I didn't develop it further after finishing the thesis.
The PCNTL extension is about process control, not threading. I don't see how it could help solve the limitations mentioned in section 7.1 of the thesis.