Not to criticize the hard work that went into this feature (I worked on a project using Wikipedia/Wiktionary data), but everything required to ship a "simple" preview feature is made hard because the data in MediaWiki is not machine friendly. Things like working out the priority order of fields, and the bizarre templates one has to implement just to parse the data, make the job unbelievably hard in the first place.
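To give a feel for why those templates are painful: even pulling key/value parameters out of a single `{{Infobox ...}}` block is non-trivial. Here is a deliberately naive Python sketch (the function name and sample text are illustrative, not from any real pipeline); it works on trivial input but breaks the moment templates nest or values contain pipes, which real wikitext does constantly. Libraries like mwparserfromhell exist precisely because this approach falls apart.

```python
import re

def extract_template_params(wikitext, template_name):
    """Naively pull |key=value pairs out of the first {{template ...}} block.

    Real wikitext allows nested templates, piped links, and HTML-ish
    markup inside values, so this toy regex fails on anything
    non-trivial -- which is exactly the point being made above.
    """
    pattern = r"\{\{\s*" + re.escape(template_name) + r"\s*\|(.*?)\}\}"
    match = re.search(pattern, wikitext, re.DOTALL)
    if not match:
        return {}
    params = {}
    for part in match.group(1).split("|"):
        if "=" in part:
            key, _, value = part.partition("=")
            params[key.strip()] = value.strip()
    return params

sample = "{{Infobox person|name=Ada Lovelace|birth_date=1815}}"
print(extract_template_params(sample, "Infobox person"))
```

A nested template such as `{{Infobox person|birth_date={{Birth date|1815|12|10}}}}` already defeats the regex, since the inner `}}` terminates the match early; handling that correctly requires a real parser, not pattern matching.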
In UGC gardens, as with data structure and algorithm design, there are trade-offs among retrieval difficulty (friction for humans, time complexity for machines), update difficulty, the centralization and skill set of contributors, the centralization and skill set of editors, and the complexity of the structures themselves.
IMDB, CYC, Wolfram, and various RDF data sets sample this space differently, and probably end up with different amounts of data and richness as a result.
Yeah, one of my first web scraping projects used Wikipedia because I figured it would be easy to parse and have a fairly standard format, right? Well, at least it was a good and sobering first lesson in cleaning data.