Have you ever worked on a project that has to build on diverse systems that you may not control, like most open source projects?
Inflicting -Werror on yourself is fine, but inflicting it on others is onerous, because there's no way you can guarantee that your code produces no warnings on their compiler with their library and header versions.
>Inflicting -Werror on yourself is fine, but inflicting it on others is onerous, because there's no way you can guarantee that your code produces no warnings on their compiler with their library and header versions.
This is precisely the reason it is in fact necessary.
When writing code that is intended to be portable and is expected to produce reproducible floating-point results in numerically intensive calculations, warnings that differ across platforms and header versions are a godsend.
When I receive a bug report that says, "I am using operating system distribution X, version Y, with library version Z, and specific test W produces warnings when I type `make check`," it makes me happy.
Why not just use #error? Then the build will crap out in any event. I've found #error to work on a wider range of compilers anyway (I'm not sure #warning is standardized).
I think I would deal with this on a case-by-case basis. If the project was primarily for Objective-C/Apple I think I would use -Werror against the most recent set of tools. My experience writing software for the iOS platform indicates this wouldn't be an onerous initial constraint. And, there's nothing to stop others from changing the build settings.
I would absolutely relax the constraint if I were maintaining a large cross-platform OSS project. In that case, I think your criticism is on point. I hadn't considered this situation, as I've not been the maintainer of such a system.