Abortion discussion

General philosophy message board for discussion and debate on other philosophical issues not directly related to veganism: metaphysics, religion, theist vs. atheist debates, politics, general science discussion, etc.
User avatar
brimstoneSalad
neither stone nor salad
Posts: 10273
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Abortion discussion

Post by brimstoneSalad »

Volenta wrote: But I'm still not convinced that utilitarianism is flawed. Utilitarianism is indeed not anti-exploitation by definition, but if you would make it possible in society to exploit when the benefits outweigh the costs of the action itself[...]
Of course, and there are many things that can mitigate the problem for most common applications, but they are not really solutions to the fundamental weakness.
Volenta wrote: The utility monster is very hypothetical, so it's hard to imagine whether it really is wrong to favor the utility monster over the others.
Let's say we evaluate based on raw intelligence approximating moral value (which it does, since it parallels sentience).

A super intelligent evil being comes to Earth. It enjoys seeing us suffer. It's so intelligent that, by comparative magnitude, its joy at seeing us suffer should be considered greater than our misery at suffering.

Is this acceptable?

Why or why not?

The issue of opportunity cost may come up, but this is a red herring; it doesn't present any real, functional solutions for practical application of utilitarianism (is doing something less than ideal wrong, even if it comes out in the positive?).

Utilitarians disregard this thought experiment as unrealistic (which shouldn't matter if we're talking about theoretical correctness), but it isn't really unrealistic; it can apply to ordinary humans as the Utility monsters of our own world.
Volenta wrote: I'm not saying utilitarianism is perfect as it is right now, but I'm not aware of any better form of consequentialism.
Take the first person, the acting agent, out of the Utility equation with respect to his or her own actions, so that the goodness or badness of a personal action is determined only by its effects on the world around him/her; then calculate moral relevance relative to capacity and personal benefit/sacrifice (normalizing accordingly).
Globally, maximize moral action, not utility.

Bye bye Utility monster.
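
To make the difference concrete, here's a rough sketch in Python with made-up numbers (the values and names are purely illustrative assumptions, not a worked-out theory): a straight utilitarian sum counts the monster's own joy, while the agent-excluded calculation scores its action only by what it does to everyone else.

```python
# Toy numbers and hypothetical names, for illustration only.

def utilitarian_sum(action):
    """Classic aggregation: everyone's utility counts, including the acting agent's."""
    return action["agent_utility"] + sum(action["effects_on_others"])

def moral_value(action):
    """Agent-excluded scoring: only the action's effects on the world around the agent count."""
    return sum(action["effects_on_others"])

# The utility monster's action: enormous personal joy, real harm to others.
monster_tormenting_us = {
    "agent_utility": +1000,                    # the monster's joy at our suffering
    "effects_on_others": [-100, -150, -200],   # our misery
}

print(utilitarian_sum(monster_tormenting_us))  # +550: the straight sum endorses it
print(moral_value(monster_tormenting_us))      # -450: the action itself is plainly bad
```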
User avatar
Volenta
Master in Training
Posts: 696
Joined: Tue May 20, 2014 5:13 pm
Diet: Vegan

Re: Abortion discussion

Post by Volenta »

brimstoneSalad wrote:Let's say we evaluate based on raw intelligence approximating moral value (which it does, since it parallels sentience).

A super intelligent evil being comes to Earth. It enjoys seeing us suffer. It's so intelligent that, by comparative magnitude, its joy at seeing us suffer should be considered greater than our misery at suffering.

Is this acceptable?

Why or why not?

The issue of opportunity cost may come up, but this is a red herring; it doesn't present any real, functional solutions for practical application of utilitarianism (is doing something less than ideal wrong, even if it comes out in the positive?).

Utilitarians disregard this thought experiment as unrealistic (which shouldn't matter if we're talking about theoretical correctness), but it isn't really unrealistic; it can apply to ordinary humans as the Utility monsters of our own world.
I think the only problem lies in the fact that the intelligent being takes joy for the wrong reasons, which I already said utilitarianism doesn't deal with very well. And I think this is easier to see in the context of building an artificial, highly sentient robot that is programmed to enjoy the suffering of others. You could even go so far as to say that you have a moral obligation to build that robot, because it increases utility. The problem is, of course, how you determine what a good and a wrong reason is. You're basically trying to do this below (↓), so let me address that one.
brimstoneSalad wrote:Take the first person, the acting agent, out of the Utility equation with respect to his or her own actions, so that the goodness or badness of a personal action is determined only by its effects on the world around him/her; then calculate moral relevance relative to capacity and personal benefit/sacrifice (normalizing accordingly).
Globally, maximize moral action, not utility.

Bye bye Utility monster.
I think you're doing something very useful by separating moral action and moral relevance, but I think you're still not quite there yet. Now you can objectively say whether some action is moral, and objectively say how relevant the action is. This is a great way of presenting and thinking about it. But you still have to balance those values to determine whether you have a moral obligation to execute the action. If the moral action and relevance are both positive (so the action is moral and it is relevant to do it), you could say that you ought to do it, but this is harder when the acting agent has to sacrifice in the process.

It is easy to see that moral relevance should be based on more than just the benefit/sacrifice of the acting agent, otherwise some very moral actions aren't considered relevant when they should be. It somehow should bear a relationship to the moral action itself, but again, how to balance/normalize it? You said 'capacity', but I'm not sure that I understand what you mean by that. Could you go a little deeper on that for me?
User avatar
brimstoneSalad
neither stone nor salad
Posts: 10273
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Abortion discussion

Post by brimstoneSalad »

Volenta wrote:But you still have to balance those values to determine whether you have a moral obligation to execute the action.
I see obligation as another matter entirely, which is based on commitment or goal.

If a person has made a commitment to be evil, they have an immoral obligation to do evil things and avoid good.
If a person has made a commitment to be good, they then have a moral obligation to do good things and avoid evil.

The extent depends on the commitment itself.

Obligation can only be determined with respect to what kind of commitment or moral goal - or if you like, existential self-identity - you adopt.

Moral calculus can only give us the moral value of an action. Only an individual (or society) can decide what moral obligations should be accepted based on what kind of person/people they want to be.

The most primitive obligation is simply not to be a bad person - that is, to at least break even, in every possible respect, and not do more harm than good in the world.
Socially, though, you may also be obligated to put forth a certain amount of effort regardless of effect (e.g. from each according to his/her ability).
Volenta wrote:If the moral action and relevance are both positive (so the action is moral and it is relevant to do it), you could say that you ought to do it, but this is harder when the acting agent has to sacrifice in the process.
Which is why it depends on the extent and kind of obligation you accept, or create, for yourself. Merely breaking even is a pretty easy pill to swallow. Aiming for perfection will almost undoubtedly see you martyred. Morality isn't a slippery slope toward the latter extreme; it depends on the nature of the person - and decides the nature of the person.
Volenta wrote:It is easy to see that moral relevance should be based on more than just the benefit/sacrifice of the acting agent, otherwise some very moral actions aren't considered relevant when they should be.
Should they be?

I'm not so sure we get as much credit when we do something primarily for our own benefit. If you would have done something anyway for personal benefit, the good effects are nice, but they're more of a side effect.
Volenta wrote:You said 'capacity', but I'm not sure that I understand what you mean by that. Could you go a little deeper on that for me?
The ability to freely do it, and the intelligence to understand it.

A tornado may harm people, but it has no intelligence, and so cannot act immorally - it has no capacity for moral behavior.
The Utility monster (or robot, if you like) is a profoundly intelligent being, so it has immense capacity; this magnifies the relevance of its moral responsibility.
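
If it helps make "capacity" concrete, here's a toy continuation of the earlier sketch; the multiplicative weighting is purely an assumption I'm making up for illustration, not a precise formula. Zero capacity (the tornado) means no moral agency at all, great capacity (the monster) magnifies responsibility, and good done mainly as a side effect of personal benefit earns proportionally less credit.

```python
# Toy illustration only: the weighting scheme is an assumption for the sake of example.

def moral_relevance(effects_on_others, capacity, personal_benefit=0):
    """Weight the agent-excluded moral value of an action by the agent's capacity
    (freedom to act and intelligence to understand the act), and discount credit
    for good done mostly as a side effect of personal benefit."""
    value = sum(effects_on_others)
    if capacity <= 0:
        return 0                      # a tornado harms, but cannot act immorally
    weighted = value * capacity       # greater capacity magnifies responsibility
    if value > 0 and personal_benefit > 0:
        # credit shrinks as the act looks more like self-interest
        weighted *= value / (value + personal_benefit)
    return weighted

print(moral_relevance([-100, -150, -200], capacity=0))            # tornado: 0
print(moral_relevance([-100, -150, -200], capacity=10))           # monster: -4500
print(moral_relevance([+50], capacity=1, personal_benefit=200))   # self-serving good: 10.0
```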