I can accept Postel's Law as a great idea for fairly low-impact things like markup languages. XHTML is a good example here: strictness turned out not to be an awesome idea, because if the author of an HTML file forgets to close a tag, I'd rather the browser make a best effort at displaying a document that might be a little janky than show me nothing at all.
But if we're talking configuration files for applications? No. Absolutely not. If I get anything even slightly off, do not under any circumstances respond by launching the application into an unpredictable state. Fail immediately and tell me why. Same principle applies for RPC messages.
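As a minimal sketch of that fail-fast stance (the file name, the use of JSON, and the host/port keys are all my assumptions, not anything from the comment above): parse the config strictly and refuse to start with a clear message if anything is off.

```typescript
import { readFileSync } from "node:fs";

// Hypothetical config shape; the keys are made up for illustration.
interface Config {
  host: string;
  port: number;
}

function loadConfig(path: string): Config {
  let raw: unknown;
  try {
    // JSON.parse throws on malformed input instead of guessing at intent.
    raw = JSON.parse(readFileSync(path, "utf8"));
  } catch (err) {
    throw new Error(`${path}: not valid JSON: ${(err as Error).message}`);
  }
  if (typeof raw !== "object" || raw === null) {
    throw new Error(`${path}: expected a JSON object at the top level`);
  }
  const cfg = raw as Record<string, unknown>;
  if (typeof cfg.host !== "string") {
    throw new Error(`${path}: "host" must be a string`);
  }
  if (typeof cfg.port !== "number" || !Number.isInteger(cfg.port)) {
    throw new Error(`${path}: "port" must be an integer`);
  }
  return { host: cfg.host, port: cfg.port };
}

// Fail immediately at startup rather than limping on with a half-read config.
const config = loadConfig("app.config.json");
console.log(`listening on ${config.host}:${config.port}`);
```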
The reductio ad absurdum here is weak typing. If Postel's Law were actually a generally applicable law, then PHP4 would be widely considered to be the pinnacle of language design. I think most people would agree that it's closer to the nadir.
But still... context matters: XHTML was a mistake, which implies that Postel's Law holds in at least some contexts.
There are still a few nice things about XHTML that I miss.
It's really helpful in debugging and catching mistakes. I'll actually force it on for dev and test systems just to identify errors quickly, and I've caught hundreds of template issues that way. Sure, there are markup validators, but the always-on strictness was nice, and it's still usable with XHTML5. So long as the web page is being generated by your own code, I think the strictness is a win. And you can turn strict mode off in browsers by serving the same XHTML5 as HTML5 with a text/html content type.
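A rough sketch of how that switch can work (assuming an Express server and a NODE_ENV check, neither of which is from the comment above): serve the same markup with an XML MIME type in dev so the browser's strict XML parser rejects malformed output, and as plain HTML in production.

```typescript
import express from "express";

const app = express();

// Hypothetical page renderer; stands in for whatever templating you use.
function renderPage(): string {
  return `<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>demo</title></head>
<body><p>Hello</p></body>
</html>`;
}

app.get("/", (_req, res) => {
  const strict = process.env.NODE_ENV !== "production";
  // application/xhtml+xml makes the browser use its strict XML parser, so any
  // unclosed tag fails loudly in dev; text/html gets the forgiving HTML parser.
  res.type(strict ? "application/xhtml+xml" : "text/html");
  res.send(renderPage());
});

app.listen(3000);
```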
The responseXML in XHR is really nice with XHTML, and it's still available, though mostly useless now. I wish that, when XHTML was abandoned, something like a responseParsedDOM had been offered instead, to avoid some of the exploitable hacks people came up with.
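For context, a minimal sketch of what that looks like (the URL and the "section" selector are placeholders, not from the comment): responseXML hands you an already-parsed Document when the response is served as XML.

```typescript
const xhr = new XMLHttpRequest();
xhr.open("GET", "/fragment.xhtml");
xhr.onload = () => {
  // responseXML is only populated when the response has an XML MIME type
  // (e.g. application/xhtml+xml); with the default responseType, a text/html
  // response yields null instead of a parsed Document.
  const doc = xhr.responseXML;
  const node = doc?.querySelector("section");
  if (node) {
    document.body.appendChild(document.importNode(node, true));
  }
};
xhr.send();
```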
XML transforms using XSL could do some pretty nifty tricks with static docs and no processor other than your browser.
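In the purely static case that's just an <?xml-stylesheet ... ?> processing instruction at the top of the XML file; as a hedged sketch, the same transform can also be driven from script with the browser's built-in XSLTProcessor (both URLs below are placeholders).

```typescript
// Apply an XSL stylesheet to a static XML document entirely in the browser.
async function renderFromXml(xmlUrl: string, xslUrl: string): Promise<void> {
  const [xmlText, xslText] = await Promise.all(
    [xmlUrl, xslUrl].map(async (u) => (await fetch(u)).text()),
  );
  const parser = new DOMParser();
  const xmlDoc = parser.parseFromString(xmlText, "application/xml");
  const xslDoc = parser.parseFromString(xslText, "application/xml");

  const processor = new XSLTProcessor();
  processor.importStylesheet(xslDoc);
  // transformToFragment returns a DocumentFragment owned by `document`.
  const fragment = processor.transformToFragment(xmlDoc, document);
  document.body.replaceChildren(fragment);
}

renderFromXml("/data.xml", "/render.xsl").catch(console.error);
```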
So, yeah, I don't feel it was wholly a mistake. Sure, strictness isn't a good idea for random or user-generated content, and it'd be nice if there were clean ways to handle that (not iframes), but saying that your app shouldn't render strictly is like saying JSON should be forgiving of misplaced braces... If you're feeding bad JSON to your modern JS-driven app, well, that's your fault; there should be errors and it should be fixed. The same goes for the XHTML your server-side app generates, IMO.
Good news: XHTML was never abandoned. It still exists today as an optional serialization format for HTML5. I use it in practice on my website and have described it in great detail: https://www.nayuki.io/page/practical-guide-to-xhtml
The main thing you lose (no idea why XHTML5 doesn't add support for this) is <noscript>: it's simply ignored. Obviously, if you do any other form of JS detection in a session, you can just use that to offer alternate content.
Eh, even without <noscript>, you could achieve those things by declaring them in the HTML code and then writing a script on the page that hides them immediately.
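A minimal sketch of that idea (the class name is made up, not from the comment): put the fallback content in ordinary markup and hide it with a script that runs immediately, so it only stays visible when scripting is off.

```typescript
// Inline this right after the fallback markup so it runs before first paint.
// ".no-js-fallback" is a hypothetical marker class for the fallback elements;
// when JS is disabled this script never runs and the fallback stays visible.
for (const el of document.querySelectorAll<HTMLElement>(".no-js-fallback")) {
  el.hidden = true;
}
```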
Interesting. It isn't officially supported, but browsers do seem to honour it without parsing errors, even as a tag in the <head> (say, for a redirect fallback instead of JS in a handler). Tested Firefox and Chromium - good to know, I'd had issues there in the past. Thanks!
Postel's law is a way to aid in adoption, not a way to increase correctness.
If product X accepts malformed input I, but product Y does not, then product X appears to "work better" than product Y, and people will adopt X more. (The other half of the law also helps adoption: if you emit very conservative output, then your output works with everybody else as well, also making your product look better.)
If authors of webpages had only had access to browsers that implemented strict XHTML, there would be a lot fewer missing close tags out there. Things have largely been sorted out now, but for a while it was a case of "I have to be just as good at rendering complete garbage as IE is, or nobody will use my browser", which I hesitate to label as "positive" in any meaningful sense.
Because it's a user agent and as the user I want it to degrade as gracefully as possible. It doesn't serve my interests to refuse to render anything just because the author of the website forgot a </b> tag somewhere. I'd rather read the text just with formatting other than what the author intended, than not read the text at all. Don't punish me for someone else's typo.
By that logic, broken SVGs and the like should also be rendered leniently. That doesn’t make any sense.
If HTML had been strictly schema-validated from the start, nobody would be arguing for this.
It’s certainly true that HTML being parsed leniently helped in it being picked up by amateur website authors in the early days, because they weren’t confronted with error messages (though they were confronted with “why doesn’t this render as I expect” instead). But that has little to do with user expectations by browser users.
Well it seems to work for TCP at least, which is where it comes from. Of course it's not the correct approach for everything, but calling it "one of the crappiest ideas in CS" might be a tad harsh.
EDIT: Of course there are better ways to be robust than just accepting whatever garbage is thrown your way in the name of "be liberal in what you accept." For example, since this is about config files, you could simply tell the user that their config is wrong and how to fix it.
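A rough sketch of the "tell them how to fix it" part (the key list and the Levenshtein helper are made up for illustration): when a key isn't recognized, name it and suggest the closest valid one rather than silently ignoring it.

```typescript
// Hypothetical list of keys your config schema accepts.
const VALID_KEYS = ["host", "port", "logLevel"] as const;

// Plain Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,
        dp[i][j - 1] + 1,
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1),
      );
    }
  }
  return dp[a.length][b.length];
}

// Collect actionable error messages for unknown keys.
function checkKeys(config: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of Object.keys(config)) {
    if (!(VALID_KEYS as readonly string[]).includes(key)) {
      const closest = [...VALID_KEYS].sort(
        (x, y) => editDistance(key, x) - editDistance(key, y),
      )[0];
      errors.push(`unknown key "${key}"; did you mean "${closest}"?`);
    }
  }
  return errors;
}

console.log(checkKeys({ host: "localhost", prot: 8080 }));
// -> [ 'unknown key "prot"; did you mean "port"?' ]
```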
Postel's law is for implementations of underspecified standards and for backwards compatibility; the problem is the misguided attempts to somehow use it in new standards.
IMO that's long been proven to have been a very bad idea in retrospect. So good riddance.