As one might guess, there is a lot wrong with this list even within its stated goals. My examples are drawn from mathematics, since that's what I know. The list appears to use the journal to classify papers by category, which doesn't work very well since many of the best results are published in general journals. Additionally, since citation counts vary so widely between sub-fields, there is a strong pull towards selecting misclassified work from higher-citation fields. For example, the paper "High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension" is listed in geometry but belongs elsewhere, and there are no probability papers in the category "Probability and Statistics with Applications". Also, the "Pure & Applied" category is meaningless; that list seems to just be the most cited papers from five arbitrary journals. I guess it's a reminder that these problems are hard to automate, and that your work doesn't have to be perfect to share.
Cognitive Science suffers from the same problem of misclassifications from higher-citation fields (neuroscience).
Agreed that projects don't have to be perfect, but they do need some functionality to ship... Given these problems, I don't see how I could use this to construct a course reading list or to improve my understanding of my academic field.
Also, were you able to find any papers in number theory? That's a huge gap, as it is one of mathematics' primary subfields. Analysis seems to be represented, as well as topology (via "geometry").