If only. Ethics are reached via consensus. Two calculators can indeed produce different results if the axioms supporting them differ.
And good luck calculating some of these axioms, such as "Why is it my duty not to kill someone?" You could argue, "Well, in the end, a society enabling such behavior at scale would be no society at all," to which one might reply, "I have no interest in letting others do as I do," and you can't calculate away violent sociopaths. The rest of us derive our principles from functioning mammalian emotional circuits, but at some level we rest our case on subjective axioms.
Based on whether they take you to places you don't want to end up, which is an incomplete measure but quite a pragmatic one. E.g. if your set of axioms ends at "erase half of the population by force", then perhaps revisit your axioms.
That's what soulofmischief is saying. If your reasoning ends somewhere you don't like emotionally, then your axioms are bad, i.e. your actual axioms are emotional. Which is fine!
This is probably too big a topic for a side-branch, but modern meta-ethics teaches a range of possible approaches. Some notions of ethics are relativist and are about the fact that moral norms are produced by a given society. But under some constructions that's just a procedural truism rather than a position on the content or nature of morality itself.
Then you have moral realism, a perfectly respected position, which can encompass things like utilitarianism and other isms. This might seem like a silly derail, and I'm trying not to make it one, but it matters at the end of the day, because "ethics is reached via consensus" can mean a lot of things that cash out with completely different practical implications. It's the difference between, for instance, deciding we need to be consensus-oriented and vote, or be research-oriented and concerned with deepening our scientific understanding of things like insect consciousness and whether the physical effects of sleep deprivation fall under the traditional definition of torture.
> And good luck calculating some of these axioms
Not wrong, they can easily get computationally intractable. So I think one has to account to some degree for uncertainty. Here again, I worry that the intended upshot is supposed to be that we simply give up or treat the project of moral understanding like a cosmically impossible non-starter. I like to think there's a middle ground between where we presently stand and the hypothetical future where we've got perfect knowledge.
Absolutely not! This is cultural relativism, and frankly, it would be circular: how exactly are we converging on a consensus if not from some preexisting sense of the good?
The only defensible objective basis for the good is the nature of a thing and what actualizes the potentials determined by that nature, thus actualizing the thing as the kind of thing it is. Morality, only possible for things that have the capacity to comprehend their options for action (intellect) and choose freely among them (will) on the basis of that understanding, therefore concerns the question of whether an act performed by a thing furthers or frustrates the actualization of that thing.
By cutting off my arm for no proportionate reason, I do an immoral thing, because it is my nature to have that arm. But if I have gangrene in that arm that threatens my life, then removing the gangrene, with the undesirable side effect of losing the arm, is morally justifiable, even if the loss of the arm is not good per se.
Murdering a human being is gravely immoral, because it directly contradicts my nature as a social human being in a very profound and profoundly self-destructive way. However, killing a would-be murderer in defense of my life or that of another is a morally very good deed; it is in accord with my social nature, and indeed can be said to actualize it more fully in some respect.
> The rest of us derive our principles from functioning mammalian emotional circuits
Please refrain from making such silly pseudoscientific and pseudophilosophical statements.
That being said, calculation is insufficient, because such calculation is formal: it explicitly excludes the conceptual content of propositions. But concepts are the material "carriers" of our comprehension of what things are. We can also analyze concepts. Now, we can calculate a formal deduction according to formal rules, but we cannot calculate a concept or its analytical products. These are the products of abstraction from concreta. Formal systems abstract from these; they are blind to conceptual content, on purpose. And having used a formalism to derive a conclusion, we must interpret the result, that is, reassign concepts to the symbols that stand in for them. So formal systems are useful tools, but they are only tools.
> how exactly are we converging on a consensus if not from some preexisting sense of the good?
Well, there is this mechanism of imprinting our current moral settings (both declared and actually demonstrated) onto the mostly blank-slate minds of children, so that the next generation has mostly the same morals as the current one but with minor differences. Ethics can thus "evolve" over time, but that doesn't mean there is any end-state "consensus" it is trying to reach.
I've never thought that cultural relativism is supposed to be bad or wrong. I thought that kind of thinking is superstitious, a bit racist, and such an undesirably strong basis for many kinds of hostilities in the world that it shouldn't be a formal majority point of view.
One cannot realistically construct ethics procedurally and reproducibly from a blank slate, so holding the false belief that one can, or that one already has, such a set of "scientific" ethical standards only serves to justify genociding the opposition.
Ethics is just a half-broken, loose set of heuristics developed and optimized evolutionarily. It probably can't even be properly quantized into text, and it's nothing that stands up to scientific or computational scrutiny. And there we step into cultural relativism as a principle: there are lots of behaviors we humans present as "ethical" acts that sometimes seem random and not universal, that seem to work where they are practiced and maybe not where they aren't, so you can't say which is right.