What matters is the effect. If someone puts up code on GitHub that purports to compute the Fibonacci series but mistakenly does a `rm -r ~/*`, then surely there should be a way to flag that code or its author ("consequences").
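Concretely, something like the following toy sketch (hypothetical, not from any real repo; the destructive step is only printed here, never executed):

```python
import os

def fibonacci(n):
    """Return the first n Fibonacci numbers."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def cleanup():
    # The careless part: a "cleanup" that would wipe the user's home directory.
    # In the scenario described above this would actually run `rm -r ~/*`;
    # here it is only printed so the example stays safe.
    print("Would run: rm -r " + os.path.expanduser("~") + "/*")

if __name__ == "__main__":
    print(fibonacci(10))  # looks like a harmless Fibonacci utility
    cleanup()             # but ships with a destructive side effect
```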
If you find a flaw in a published paper, the normal course of action is to e-mail the authors and let them know what you think is wrong. If the authors think it is a devastating flaw, they will retract their paper. Even if they don't, you can still publish a note yourself explaining why the results are flawed. This happens all the time, and the 'consequences' you desire are already built into the system: flawed papers don't do very well on citation or influence metrics.
But much more importantly, the flaw in your mental model is thinking that claims in published papers are expected to be correct. They aren't. The goal of a paper is to advance the field by stimulating discussion and new research. The unfortunate consequence of this is that laypersons can't directly use research papers as an authoritative source describing our current understanding of a subject.
The GitHub analogy still holds up. Non-programmers shouldn't be downloading code off GitHub, compiling it, and running it, because they have no way of evaluating whether the code really does what it claims to do. They're probably better off getting applications from curated app stores. Accessible scientific publications written by recognized experts in the field are the equivalent of curated app stores. That is where laypersons should be looking to understand the state of the art in scientific knowledge.
What matters is the effect, but the person responsible for the effect should be the person deploying the code. Not all code on GitHub is anywhere close to "done". If you take my alpha Fibonacci-sequence code and choose to run it on your nuclear power plant, I'm not responsible for the mess; you are.
This is just ridiculously fallacious. A slightly inaccurate paper is surely not comparable to malicious code; that's not a fair comparison to make.