I used to work with Wiki markup for a living. The time you think you'll save with regex hackery is quickly chewed up by the time wasted eternally tweaking your regexes to catch yet another corner case. It's much better to just parse for real from the get-go, just like it's much better to use a real HTML/XML parser than to do the same job badly with regexes.
I used to make scrapers for a living, and trust me, it all depends on the particular situation and your requirements. Real HTML parsers are easier and safer for general work, but they quickly get very heavy on memory when building big DOM trees. If all you need from a page is a few strings, e.g. just a product price (a very common task), regexes are the far superior approach performance-wise. They're faster and use less memory (so you can run more parallel workers), and if you write them well they're immune to many small HTML/design changes, as long as the pattern you look for stays the same.
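To illustrate the price-extraction case, here's a minimal sketch. The HTML snippet and the `price` class name are invented for the example; the point is that the regex targets only the local pattern around the price, so it keeps working even if the rest of the page's markup changes.

```python
import re

# Hypothetical product page fragment; markup and class name are made up.
html = '<div class="price"><span>$19.99</span></div>'

# Anchor on the "price" marker, skip ahead to the dollar sign, and
# capture the number. No DOM tree is ever built.
match = re.search(r'class="price"[^$]*\$(\d+\.\d{2})', html)
price = float(match.group(1)) if match else None
print(price)  # 19.99
```

The same extraction with a full parser would load the whole document into memory first; for one field per page across many parallel workers, that overhead adds up.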
I'm sure that works well on product pages, which are just the same template reused for every single product. I'm afraid it will fail spectacularly on wiki pages, which are handcrafted by humans with all the completely unpredictable randomness that entails.