staticassertion's comments | Hacker News

https://www.richardcarrier.info/archives/14879

Richard Carrier takes an extremely similar position overall (i.e., both on the "is-ought" problem and on biological grounding). It engages with Hume by providing a way to sidestep the problem.


You can be a physicalist and still a moral realist. James Fodor has some videos on this, if you're interested.

Granted, if humans had utility functions, and we could avoid utility monsters (maybe average utilitarianism is enough) and the child in the basement (say, by somehow fairly normalizing utility functions across individuals so that it's well-defined to choose the state where the minimum of everyone's utilities is maximized [argmax_s min_x U_x(s), over states s and people x]), then I'd be a moral realist.
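A toy sketch of that maximin rule (names are mine; it assumes the cross-person normalization above is already solved, which is the hard part):

    // Maximin chooser: argmax over states s of min over people x of U_x(s).
    // utilities[s][x] = person x's (already normalized) utility in state s.
    fn maximin_state(utilities: &[Vec<f64>]) -> Option<usize> {
        utilities
            .iter()
            .enumerate()
            .map(|(s, us)| (s, us.iter().copied().fold(f64::INFINITY, f64::min)))
            .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap()) // NaN utilities would panic here
            .map(|(s, _)| s)
    }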

I think human moral intuitions will keep disagreeing with formal moral frameworks in several edge cases.

There's also the whole question of anthropics: how much moral weight do exact clones and potentially existing people contribute? I haven't seen a solid solution to those questions under consequentialism yet; we don't have the (meta)philosophy to address them. I'm 50/50 on whether we'll find a formal solution, and that's also required for full moral realism.


Who cares if we all agree? That has nothing to do with whether something is objectively true. That's a subjective claim.

I can think of multiple cases.

1. Adversarial models. For example, you might want a model that generates "bad" scenarios to validate that your other model rejects them. The first model obviously can't be morally constrained.

2. Models used in an "offensive" way that is "good". I write exploits (often classified as weapons by LLMs) so that I can prove security issues so that I can fix them properly. It's already quite a pain in the ass to use LLMs that are censored for this, but I'm a good guy.


They say they’re developing products where the constitution doesn’t work. That means they’re not talking about your case 1, although case 2 is still possible.

It will be interesting to watch the products they release publicly, to see if any jump out as "oh THAT'S the one without the constitution". If none do, then they either decided not to release it at all, or not to release it to the public.


There are hard constraints in the constitution (https://www.anthropic.com/constitution#hard-constraints) that would at least potentially apply in case 1. This would make it impossible to do case 1 with the public model.

(1) could be a product, I think. But yeah, fair point.

> A well written book on such a topic would likely make you rich indeed.

Ha. Not really. Moral philosophers write those books all the time, they're not exactly rolling in cash.

Anyone interested in this can read the SEP.


Or Isaac Asimov’s Foundation series, with what the “psychologists,” a.k.a. psychohistorians, do.

The key being "well written", which in this instance needs to be interpreted as being convincing.

People do indeed write contradictory books like this all the time and fail to get traction, because they are not convincing.


"I disagree with this point of view so it's objectively wrong"

Or Ayn Rand. Really no shortage of people who thought they had the answers on this.

The SEP is not really something I'd put next to Ayn Rand. The SEP is the Stanford Encyclopedia of Philosophy; it's an actual resource, not just pop-culture stuff.

I recommend the Principia Discordia.

Or, if you really want it spelled out, Quantum Psychology.

Don’t just read one person’s worldview; see what Aristotle, Kant, Rawls, Bentham, and Nietzsche had to say about morality.

You're making a lot of assertions here that are really easy to dismiss.

> It tells us that (large-scale) existence is a requirement to have morality.

That seems to rule out moral realism.

> That implies that the highest good are those decisions that improve the long-term survival odds of a) humanity, and b) the biosphere.

Woah, that's quite a jump. Why?

> So yes, I think you can derive an ought from an is. But this belief is of my own invention and to my knowledge, novel. Happy to find out someone else believes this.

Deriving an ought from an is is very easy. "A good bridge is one that does not collapse. If you want to build a good bridge, you ought to build one that does not collapse". This is easy because I've smuggled in a condition, which I think is fine, but it's important to note that that's what you've done (and others have too, I'm blanking on the name of the last person I saw do this).


> (and others have too, I'm blanking on the name of the last person I saw do this).

Richard Carrier. This is the "hypothetical imperative", which I think traces back to Kant originally.


Even if we make the metaphysical claim that objective morality exists, that doesn't help with the epistemic issue of knowing those goods. Moral realism can be true, but that does not necessarily help us do "good". That is exactly where ethical frameworks seek to provide answers. If moral truth were directly accessible, moral philosophy would not be necessary.

Nothing about objective morality precludes "ethical motivation" or "practical wisdom" - those are epistemic concerns. I could, for example, say that we have epistemic access to objective morality through ethical frameworks grounded in a specific virtue. Or I could deny that!

As an example, I can state that human flourishing is explicitly virtuous. But obviously I need to build a framework that maximizes human flourishing, which means making judgments about how best to achieve that.

Beyond that, I frankly don't see the big deal of "subjective" vs "objective" morality.

Let's say that I think that murder is objectively morally wrong. Let's say someone disagrees with me. I would think they're objectively incorrect. I would then try to motivate them to change their mind. Now imagine that murder is not objectively morally wrong - the situation plays out identically. I have to make the same exact case to ground why it is wrong, whether objectively or subjectively.

What Anthropic is doing in the Claude constitution is explicitly addressing the epistemic and application layer, not making a metaphysical claim about whether objective morality exists. They are not rejecting moral realism anywhere in their post; they are rejecting the idea that moral truths can be encoded as a set of explicit propositions - whether that is because such propositions don't exist, because we don't have access to them, or because they are not encodable is irrelevant.

No human being, even a moral realist, sits down and lists out the potentially infinite set of "good" propositions. Humans typically (at their best!) do exactly what's proposed - they have some specific virtues, hard constraints, and normative anchors, but actual behaviors are underdetermined by them, and so they make judgments based on some sort of framework that is otherwise informed.


And in Rust (yes, safe Rust can have memory safety vulnerabilities). Who cares? They basically don't happen in practice.

I strongly suspect the same thing - that they weren't using agents at all in the reports we've seen, let alone agents with instructions on how to verify a viable attack, a threat model, etc.

Presumably they are saying that you'd end up using a lot of `unsafe`. Of course, that's still much better than C, but I assume their point isn't "you can't do it in Rust", it's "you can't translate directly from C to safe Rust".
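To make that concrete (my own toy example, not theirs): a line-for-line port of C pointer code keeps the raw pointers and therefore the `unsafe`, while the idiomatic rewrite doesn't need it.

    // Literal port of `int sum(const int *p, size_t n)`: raw pointers force unsafe.
    unsafe fn sum_literal(p: *const i32, n: usize) -> i32 {
        let mut total = 0;
        for i in 0..n {
            // SAFETY: caller must guarantee p..p+n is valid, aligned, and initialized.
            total += unsafe { *p.add(i) };
        }
        total
    }

    // Idiomatic safe Rust: the slice carries its length, so no unsafe is needed.
    fn sum_idiomatic(xs: &[i32]) -> i32 {
        xs.iter().sum()
    }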

> Of course, that's still much better than C

Exactly. "can't translate to safe Rust" is not a good faith argument.


If anything, writing unsafe code in Rust is fun too; it has many primitives, like `MaybeUninit`, that make it so.
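For example, a minimal sketch of the usual fill-then-assume-init pattern (`squares` is a made-up name):

    use std::mem::MaybeUninit;

    // Fill an array without zero-initializing it first; MaybeUninit makes the
    // "not yet initialized" state explicit in the type system.
    fn squares() -> [u32; 8] {
        let mut buf = [MaybeUninit::<u32>::uninit(); 8];
        for (i, slot) in buf.iter_mut().enumerate() {
            slot.write((i * i) as u32);
        }
        // SAFETY: every element was initialized by the loop above.
        unsafe { std::mem::transmute::<[MaybeUninit<u32>; 8], [u32; 8]>(buf) }
    }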
