(LONG RANT) Can negative utilitarianism solve the "utility monster" problem?

General philosophy message board for discussion and debate on other philosophical issues not directly related to veganism: metaphysics, religion, theist vs. atheist debates, politics, general science discussion, etc.
ole_92
Newbie
Posts: 8
Joined: Tue Nov 03, 2015 2:02 am
Diet: Vegan

(LONG RANT) Can negative utilitarianism solve the "utility monster" problem?

Post by ole_92 »

Before I begin, I just want to say that I'm new to morality, so I apologize if I mischaracterize a certain idea or philosophy. If I do, it's because of honest ignorance. I'm very willing to learn.

I've been reading the threads on this forum and educating myself about various forms of consequentialism (which is certainly what I subscribe to in regards to morality). There are minor objections I have to the mainstream views here, such as the dichotomy between hedonism and preferences. I would argue that there is no situation where a person may knowingly wish for something that would cause them more suffering than the alternative. Even in cases of volunteering to be tortured, the person chooses to do so because if they didn't, they would feel guilty, and that would presumably feel worse than being tortured. They may be mistaken and change their mind during the torture, but that would simply mean they had an uninformed preference and lacked the required knowledge; at the time of their decision, they necessarily do a little calculation in their head and choose whichever option they think would cause them the least suffering. With that logic in mind, I came to the conclusion that preference consequentialism collapses down to hedonistic consequentialism.

I reject the idea that the pleasure machine is a valid objection to hedonism. Even if 100% of people said they preferred to live their real lives and not use the machine, that wouldn't mean they are right, or that their preference was well informed. That would be a bandwagon fallacy. There are plenty of reasons why they might choose not to use the machine, such as ignorance in case they never tried it, a desire to fulfill some obligations in real life, and a fear of the unknown. The reason they want to fulfill obligations is to feel better about themselves, and the reason they fear the unknown is because they feel better when they have the comfort of the known. It's all about feeling better in the end.

Another small disagreement I had with the views in this forum is the argument in favor of altruism as opposed to utilitarianism. It seems rather arbitrary to exclude the wellbeing of the agent who is making moral decisions. My wellbeing is just as important as anyone else's, from the objective point of view of the universe. The only reason presented in favor of altruism as opposed to utilitarianism that I saw, was the "utility monster" problem. But even if it was true that the problem poses a serious issue for utilitarianism that goes against our intuitions, it would still be incorrect to choose a different ideology and sacrifice rationality for it. It would be more honest to bite the bullet and admit that if the utility monster was real, we would be morally obliged to feed it. Or come up with a counter argument. But I don't believe that arbitrarily invalidating the wellbeing/preferences of the moral agent is a coherent strategy.

So now I will briefly explain why I'm a negative utilitarian, and how I think that might help solve the problem. If you're still reading, that is.

Almost all of the pleasures that we experience are basically a relief/reduction from some form of discomfort/suffering. Things such as hunger, thirst, horniness, information "hunger", and loneliness need to exist first, before we can feel pleasure when we satisfy them. In most cases there must be a negative state first, in order to feel good by eliminating that negative state. We may or may not be able to eliminate those states, which is why some people say it's a zero-sum game - you could only strive to reduce suffering and the most you can reach is bliss (complete neutrality and peace). But essentially your wellbeing ends up negative overall, unless you're insentient, in which case you're at 0.

There are a few more complicated pleasures, such as orgasms, awe, love, drugs, etc. But even those could be explained by having a need for them first (needs are negative). A feeling of orgasm can be explained by acknowledging the sequence of events prior to orgasm: a number of tensions in the body, a gradual accumulation of both mental and physical frustration, and so on. When all of that has been accumulating for many hours and is suddenly relieved in a matter of seconds, no wonder it feels fantastic. The faster the relief and the bigger the discomfort, the better it feels. Drugs that make us what we call "high" allow us to stop worrying about everything, and feel a fraction of bliss. Now I don't deny that there are pleasure centers in our brains that may get triggered, as well as dopamine and other hormones, but I still think that the overwhelming majority of our pleasures could not exist without a prior discomfort, or at least would not be felt so strongly.

Trigger warning: rather depressive couple of paragraphs below.
If you're thinking of making the common objection to negative utilitarianism, the "red button" that would painlessly eliminate all life (or all sentient life at least), I'll have to disappoint you. I don't think it's an objection at all. It would indeed be the right thing to do to press it. I hope I would have the guts for it - but I doubt it since I'm addicted to life and have optimism bias just like everyone else. But when you look at it rationally, and consider all the suffering that is inflicted on humans, on domesticated animals, on wild animals, you will have to concede that in this world, the amount of suffering is ridiculously disproportionate to pleasure (even if you think that pleasure is more than just a reduction of suffering). There are a handful of winners, and a ton of losers (and winners lose in the end anyways). Evolution and nature are brutal and grotesque and meaningless. There's nothing intrinsically good about life (only wellbeing has intrinsic value). Life can be described in 4 words: consumption, reproduction, cannibalism, addiction. It serves no purpose to the universe, and unless you believe in God, you accept that it was designed unintentionally, unintelligently, and by chance. No harm is done if life ceases to exist, just like there's no harm in the fact that Martians don't exist. Our importance is in our heads. The only thing that has value in this universe is sentient experience. And sentience evolved because it was an evolutionary advantage to feel negative triggers, so the organism could move away from pain and danger and have a better chance of survival.

I'm sorry, I'm digressing into the territory of antinatalism and efilism. The bottom line is, it would not be an immoral thing to do to end it. We're doomed either way, and life will have to end sooner or later. The question is, do we allow all the tremendous suffering to go on for billions of years, or do we rip the band-aid off quickly and get it over with? The zebra getting its internal organs eaten while it's still alive will agree with me. And so would the child who's suffering from leukemia this very second. No amount of pleasure that I receive can possibly balance that out.

So anyways. After all that, I will finally propose my solution to the problem. For a strong negative utilitarian, the scale of wellbeing starts at 0 and goes into the negative. The morality of an action is determined by how much suffering it produces (or alleviates), taking everyone affected into account. So let's see if we can solve the utility monster problem if we adhere to this interpretation of morality.

Let's say we have the monster, and 10 people who live with it. Presumably the monster has to cause 100 units of suffering to everyone else to feed itself. Everyone is very hungry, barely staying alive, and suffers greatly. The impact of the monster is -1000. If the monster isn't fed, the "poor" monster feels very unhappy and suffers -10000 points. That tells us it is better for it to eat, than for it not to eat. But what if we could euthanize the monster? Then the total impact of the monster would be 0, since dead monsters can't be deprived of anything! Since we don't take into account the positive pleasures that the monster would otherwise enjoy, we're under no obligation to keep the monster alive at everybody else's expense! And if the monster is a negative utilitarian itself, it would be morally obliged to commit suicide.
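The tally above can be sketched as a toy calculation. This is only an illustration of the strong-NU bookkeeping described in the post; the `nu_impact` helper and the rule of clamping positive values to zero are my assumptions, not part of any formal model:

```python
# Toy negative-utilitarian tally: only suffering (negative values) counts.
# Any positive (pleasure) value is clamped to zero, per the strong-NU scale
# described above, where wellbeing runs from 0 downward.

def nu_impact(wellbeing_values):
    """Sum only the negative (suffering) terms; pleasure counts as 0."""
    return sum(min(0, w) for w in wellbeing_values)

fed        = nu_impact([-100] * 10 + [0])        # 10 people suffer; monster fine
unfed      = nu_impact([0] * 10 + [-10000])      # people fine; monster starves
euthanized = nu_impact([0] * 10 + [0])           # dead monster: no deprivation

print(fed, unfed, euthanized)  # -1000 -10000 0
```

On this accounting, feeding the monster (-1000) beats starving it (-10000), but euthanizing it (0) beats both, which is exactly the conclusion the post draws.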

I won't be surprised if I messed something up in this post, but at this point I'm pretty sure it all works out. Hope this wasn't a complete waste of time to read!
brimstoneSalad
neither stone nor salad
Posts: 10367
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: (LONG RANT) Can negative utilitarianism solve the "utility monster" problem?

Post by brimstoneSalad »

Hi ole, welcome,
ole_92 wrote:I would argue that there is no situation where a person may knowingly wish for something that would cause them more suffering than the alternative.
This is incorrect; people do this all of the time. It's even a trope: http://tvtropes.org/pmwiki/pmwiki.php/M ... owMuchIBeg

This is the distinction between primitive cognition, and more advanced metacognition.

Like your example:
ole_92 wrote:Even in cases of volunteering to be tortured, the person chooses to do that because if they didn't, they would feel guilty and that would presumably feel worse off than being tortured. They may be mistaken and change their mind during the torture
Here's the disproof: They may also know they will change their minds during torture, which in itself proves that they weren't misinformed, and knew it would cause them more net pain than pleasure.

Our ability to be metacognitive and make choices for ourselves that we know we will later disagree with and regret under duress proves this.

This is particularly true when the person will die either way, so an extended experience of guilt is out of the question and does not explain the choice.

So, your conclusions just don't follow:
ole_92 wrote:(and thus they have had an uninformed preference), but at the time of their decision, they necessarily do a little calculation in their head, and choose which option they think would cause them the least suffering. They may change their mind, but that would simply mean they had an uninformed preference and lacked the required knowledge.
https://en.wikipedia.org/wiki/Non_sequitur_(logic)
This is not necessarily true. This can be true sometimes, but not all of the time. Changing one's mind under duress doesn't mean one was uninformed of the nature and experience of that duress.

If you water board me long enough, I'll beg you to kill me, but in advance I can say please ignore me when I say that because in my right mind (not under said duress) I do not want to die and would like to come out of the other side of that experience alive.
I say this not in ignorance of the experience, but in full knowledge of it and my psychology (I won't go into how I know what it feels like, but if you want to experience it at least do so under professional supervision -- note that it's not something I recommend).
ole_92 wrote:With that logic in mind, I came to the conclusion that preference consequentialism collapses down to hedonistic consequentialism.
This conclusion is false, so, none of the rest of what you wrote that follows from this is sound.
ole_92 wrote:It seems rather arbitrary to exclude the wellbeing of the agent who is making moral decisions.
It's not arbitrary, it's intrinsic to the concept. Morality is consideration for the interests of others. Being self interested just has nothing to do with morality; it isn't necessarily immoral, it's just amoral. It's something aside from morality, and only becomes a problem for morality when it interferes with it.
ole_92 wrote:My wellbeing is just as important as anyone else's, from the objective point of view of the universe.
The universe is neither sentient nor intelligent in any way. It is not capable of being moral. Only sentient beings with enough intelligence to comprehend moral agency are. So, this supposed "objective" POV is irrelevant.
Altruism is a process for individuals to engage in.
ole_92 wrote:The only reason presented in favor of altruism as opposed to utilitarianism that I saw, was the "utility monster" problem.
Not sure what you've read. Can you reference the posts?
This is not why we favor altruism inherently, but an additional practical argument that's made now and then.
ole_92 wrote:But even if it was true that the problem poses a serious issue for utilitarianism that goes against our intuitions, it would still be incorrect to choose a different ideology and sacrifice rationality for it.
Correct. And there is perhaps an altruistic form of the utility monster that is more sympathetic. At core, it's not necessarily soluble, because the problem can be framed to make the utility monster an entire society (living in the monster) whose survival depends on another smaller society's (humanity's) destruction.
ole_92 wrote:Almost all of the pleasures that we experience are basically a relief/reduction from some form of discomfort/suffering.
Or the opposite could be arbitrarily asserted: That all discomfort and suffering are from absence of pleasure.
If you look into physiology, though, the dual nature of carrot and stick really couldn't be much more apparent.
We respond to deprivation of either as the opposite, just as something colder than your skin temperature feels cold, or something warmer feels warm regardless of your core body temperature -- this, however, goes both ways. So, your argument kind of falls apart there (for the third time).
ole_92 wrote:A feeling of orgasm can be explained by acknowledging the sequence of events prior to orgasm.
You can create ever more convoluted ad hoc models to try to explain any sensation, but it's really irrelevant when you can't negate the validity of the opposite assertion.
Or: That which can be asserted without evidence can be dismissed without evidence.
Like any religion (say, Islam or Christianity), they can make up equally plausible-sounding nonsense, but without any evidence to distinguish their models as correct over another, it's meaningless and unscientific.

In terms of the evidence, the only thing we have to go on is a notion of context between negative and positive stimuli.
There is no more reason to believe one (of pleasure or pain) predominates over and negates the significance of the other than there is to accept the common Christian claim that evil is only the absence of good because a person used free will to not allow 'God' into his or her heart. Why isn't good just the absence of evil?
ole_92 wrote:Trigger warning: rather depressive couple of paragraphs below.
I hope migrating from a negative utilitarian outlook to an altruistic outlook based on preferences will help you overcome this quite disturbing apocalyptic predisposition. You realize you're basically talking like a cartoon villain, right?
ole_92 wrote: Let's say we have the monster, and 10 people who live with it. Presumably the monster has to cause 100 units of suffering to everyone else to feed itself. Everyone is very hungry, barely staying alive, and suffers greatly. The impact of the monster is -1000. If the monster isn't fed, the "poor" monster feels very unhappy and suffers -10000 points. That tells us it is better for it to eat, than for it not to eat. But what if we could euthanize the monster? Then the total impact of the monster would be 0, since dead monsters can't be deprived of anything! Since we don't take into account the positive pleasures that the monster would otherwise enjoy, we're under no obligation to keep the monster alive at everybody else's expense! And if the monster is a negative utilitarian itself, it would be morally obliged to commit suicide.
Sure. But all of this follows from a broken system with no semblance to reality or the definition of morality. Also, it's important to remember that in said system it would be even better to just kill everybody with this utility monster slaying super power.

The reason the utility monster is a problem is not because it's illogical, but because it's off putting, and makes people reject the ethical systems for emotional reasons. We don't reject systems using a litmus test of whether they can support a utility monster.
The red button is even more off putting than the utility monster. If you've solved one issue, it's only by creating a much bigger one.
ole_92
Newbie
Posts: 8
Joined: Tue Nov 03, 2015 2:02 am
Diet: Vegan

Re: (LONG RANT) Can negative utilitarianism solve the "utility monster" problem?

Post by ole_92 »

Hello brimstoneSalad. Alright, you made good points, but I'll try to counter them anyway.
brimstoneSalad wrote:Here's the disproof: They may also know they will change their minds during torture, which in itself proves that they weren't misinformed, and knew it would cause more net pain than personal pleasure experience.
They also know they will change their minds if they don't choose the torture. Both options seem bad. But the emotional torment of being responsible for some greater harm would be worse than physical torture (or so they think). So in the end, they choose the lesser of the two evils, given the information they have.
brimstoneSalad wrote:This is particularly true when the person will die either way, so an extended experience of guilt is out of the question and does not explain the choice.
Well, the experience right before death (or at the time they make the decision) would feel like guilt too. And apparently it would be worse for them to feel guilty, than to be tortured.
brimstoneSalad wrote: This is not necessarily true. This can be true sometimes, but not all of the time. Changing one's mind under duress doesn't mean one was uninformed of the nature and experience of that duress.
If you water board me long enough, I'll beg you to kill me, but in advance I can say please ignore me when I say that because in my right mind (not under said duress) I do not want to die and would like to come out of the other side of that experience alive.
And why do you think you want to come out alive? That's right, because you think your life will result in a net positive after that. You think that the torture is worth it to continue living. It's like going to the gym to get some benefit later, or working to make money. It's always a trade off. I'll defend the idea that your preference to be tortured would be uninformed below.
brimstoneSalad wrote:I say this not in ignorance of the experience, but in full knowledge of it and my psychology
Even if you have been tortured before, you can't reliably say you know how it feels, due to an inaccuracy of the remembering self, as opposed to the experiencing self. We forget how suffering feels relatively soon after it ends. The experiencing self provides the only accurate information about wellbeing, so unless they are currently experiencing it, they can't be 100% informed. They rely solely on memory and predictions. This is why an informed preference for net suffering is impossible, and this is why you would change your mind, given the intolerable level of suffering during torture.
brimstoneSalad wrote:but if you want to experience it at least do so under professional supervision -- note that it's not something I recommend
I am curious about this. I wonder if there's a way to organize that.
brimstoneSalad wrote:It's not arbitrary, it's intrinsic to the concept. Morality is consideration for the interests of others. Being self interested just has nothing to do with morality; it isn't necessarily immoral, it's just amoral. It's something aside from morality, and only becomes a problem for morality when it interferes with it.
But your definition of morality is also arbitrary. Morality is about what's good and bad. And good and bad can be experienced by all sentient beings, including the moral agent. What you're describing is preference altruism. Of course if your premise is "preference altruism=morality", then it follows, but you're just asserting it.
There is no self interest in morality per se. There's interest for everyone, without the arbitrary distinction between "me" and "not me". Everybody to count for one, nobody for more than one (at least generally).
brimstoneSalad wrote:The universe is neither sentient nor intelligent in any way. It is not capable of being moral. Only sentient beings with enough intelligence to comprehend moral agency are. So, this supposed "objective" POV is irrelevant.
Altruism is a process for individuals to engage in.
I wasn't trying to assign moral agency to the universe. All I was saying is that objectively everyone's wellbeing matters, and wellbeing of equally sentient beings matters equally. There is no rational reason to exclude the moral agent from the calculation. I got the phrase "POV of the universe" from Peter Singer and Henry Sidgwick. It's a good defense of morality being objective, at the very least.
brimstoneSalad wrote:Or the opposite could be arbitrarily asserted: That all discomfort and suffering are from absence of pleasure.
Do you really think that hunger, thirst, desire for sex, and loneliness are just absences of pleasure? Surely they are negative feelings. And what about torture, is that some kind of extreme lack of pleasure? No, suffering is a real, negative value. They are deprivations. We need something to happen to relieve them and get back to neutral. The feeling of relief feels good. Maybe I'm an alien and not a good example of how humans function. :?
brimstoneSalad wrote:If you look into physiology, though, the dual nature of carrot and stick really couldn't be much more apparent.
Well, you wouldn't feel good eating a carrot unless you were hungry prior to that. You need a negative stimulus first. The dual nature of carrots and sticks is true - carrots relieve the discomfort, and sticks cause it. It's consistent with my argument.
brimstoneSalad wrote:In terms of the evidence, the only thing we have to go on is a notion of context between negative and positive stimuli.
There is no more reason to believe one (of pleasure or pain) predominates over and negates the significance of the other, any more than the common Christian claim that evil is only the absence of good because a person used free will to not allow 'God' into his or her heart: Why isn't good just the absence of evil?.
I think you're misunderstanding my position. There are certainly negative and positive stimuli. I'm saying that the reason most positive stimuli are positive is because there was a prior negative stimulus before them. When the need/want/preference/deprivation has been fulfilled, the previously positive stimulus stops being positive. Imagine you really enjoyed getting a massage. After a few hours it would stop feeling good, because it did its job - it relieved you from your tensions and gave you the relaxation that you craved. You will have to accumulate tension over some time to be able to enjoy it again. It's true for every positive feeling I can think of. There are gray areas such as certain drugs and pleasure machines, but again, would it really feel good to be constantly high? I may be wrong, but I think you'd get bored, and you'd need another negative stimulus so you could feel good negating that. Perhaps a pleasure machine could simulate this roller coaster of need-fulfillment, or at least give us a memory of a prior discomfort, so we can feel good having it reduced/eliminated.
brimstoneSalad wrote:I hope migrating from a negative utilitarian outlook to an altruistic outlook based on preferences will help you overcome this quite disturbing apocalyptic predisposition. You realize you're basically talking like a cartoon villain, right?
Yeah, I'm aware my views are still unpopular. It's really just a giant thought experiment. But I hope that people don't dismiss them right away. I'm still convinced that NU is correct. I may not be able to argue for it very well since I'm only just starting and feel undereducated, but there are some respectable minds who subscribe to negative utilitarianism, such as David Pearce. I should read more of his work and see if I can use some of his points. His approach is less depressing than mine - he advocates for technology to phase out involuntary suffering. Obviously that's an option, but it's painfully slow and I have doubts that we could ever reach it, given our nature.
brimstoneSalad wrote:Sure. But all of this follows from a broken system with no semblance to reality or the definition of morality. Also, it's important to remember that in said system it would be even better to just kill everybody with this utility monster slaying super power.
Well, not necessarily. The 10 people's actions could have been eliminating more suffering than they were causing - they may have been vegans and altruists, so it's wrong to kill them ;) Some people's footprint is positive.
And monsters tend to feed on other people's wellbeing (that's the whole point), so the monster would need to find another group of people, and use them instead (otherwise it wouldn't be a utility monster). It's better to kill the monster, than not kill the monster.
The only thing better than killing the monster would be to press the red button - that would achieve the goal instantly.
But if that's not an option, you can kill the utility monster, and that would be the second best moral thing to do.
brimstoneSalad wrote:The reason the utility monster is a problem is not because it's illogical, but because it's off putting, and makes people reject the ethical systems for emotional reasons. We don't reject systems using a litmus test of whether they can support a utility monster.
The red button is even more off putting than the utility monster. If you've solved one issue, it's only by creating a much bigger one.
Whether it's off putting or not should be irrelevant. We shouldn't reject ethical systems just because they are repugnant. Anyone who does that is just dishonest.

I'm curious, can you answer these questions?
1. Do you agree that the amount of suffering is greater than the amount of happiness in today's world? Think about all the wars, famines, torture, rape, poverty, animal agriculture, and nature. Do you agree that the "bad" is not outweighed by the "good"?
2. If yes, doesn't it follow logically that it's better to press the red button?
3. If not, how bad does it have to get before it's too much? There must be a limit when you admit this system is dysfunctional and produces more harm than good.

Edit: I just thought of a potential argument for altruism and against utilitarianism. If the moral agent's wellbeing matters, then if the agent is very unhappy they would be morally obliged to commit suicide. Obviously it feels intuitively wrong to demand that.
On the other hand, if the agent's actions result in a net reduction of suffering of others, then their existence is definitely justified.
But still, it just feels wrong to demand already miserable people to either be super altruistic to justify their existence, or to die...
So I don't know, perhaps I'm wrong, and altruism is right. Although I am reluctant to trust intuitions.
brimstoneSalad
neither stone nor salad
Posts: 10367
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: (LONG RANT) Can negative utilitarianism solve the "utility monster" problem?

Post by brimstoneSalad »

As I explained to Mr. Purple in another thread, these are models, and in particular ad hoc models: your model is not predictive, and you do not allow for it to be falsifiable. The point is that you cannot disprove pleasure-based models which say pain does not exist, since they are equally 'explanatory'.

The way the 'executive' function of a brain works is as a difference engine: It selects that activity with the most positive weight (accumulated through all influence from interests and other predispositions -- some of these are projections for future pleasure/pain, some are not) and least negative. It's a weighted vote from all cooperating and competing parts of the mind which together (or in part, depending on existential identity) make up the self.

It works whether everything is positive (each part votes FOR the propositions with varying weight), or everything is negative (each part votes AGAINST propositions with varying weight), or whether you have a mix of the two (parts vote FOR and AGAINST propositions with varying weight).
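The "difference engine" described above can be sketched as a weighted vote. The data structure, option names, and weights here are illustrative assumptions, not a claim about actual neural implementation; the point is only that the selection rule is indifferent to the sign of the votes:

```python
# Each "part" of the mind casts a signed, weighted vote on each option.
# The executive simply picks the option with the highest net score; the
# arithmetic is identical whether the weights are all positive, all
# negative, or mixed.

def select_action(options, votes):
    """votes: dict mapping option -> list of signed weights from each 'part'."""
    return max(options, key=lambda o: sum(votes[o]))

# Hypothetical weights for the torture example from earlier in the thread:
votes = {
    "endure_torture": [-50, +80],  # projected pain vs. interest in surviving
    "give_in":        [+50, -80],  # relief vs. betraying one's own interests
}
print(select_action(list(votes), votes))  # endure_torture
```

The same `select_action` works unchanged on an all-negative vote table (a pure "least bad" NU-style choice) or an all-positive one, which is the sign-invariance point the paragraph makes.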

The reason to believe we operate on pleasure and pain is empirical evidence of brain function: various pleasure centers and pain centers which manage these stimuli, and are generated by different experiences. See Occam's razor.
ole_92 wrote:And why do you think you want to come out alive? That's right, because you think your life will result in a net positive after that.
This needs to stop. If you want to ask a question, then ask a question. Please don't assume you know me and answer in such a patronizing way for me.

This is the prediction of YOUR unfalsifiable ad hoc hedonistic model (or rather, your model of the hedonist model you assume others hold out of a delusion that pleasure meaningfully exists as an impetus); this is not the ultimate answer from every functional model, and it's not why I don't want to die.

I don't want to die not because I'm seeking greater pleasure: I'm not a hedonist, and my life is not pleasure oriented. I don't want to die because I have interests, and things I want to do and accomplish for their own sake.
It is a side effect that accomplishment also yields pleasure, but I'd rather accomplish them and then die before realizing it (losing that ability to experience the pleasure from the accomplishment, which I assure you is nominal), than to merely think I've accomplished it before dying (thus experiencing the pleasure) but fail to actually accomplish what I wanted.
In addition to that stronger motivation based on my other interests, I also don't want to die simply because I have an innate interest in living itself which is unrelated to future or present sense experience.

I'm sure you'll continue with your condescending defense that I'm just ignorant and I don't know my own mind (even though you're foisting a model you don't even accept upon me and assuming I'm also delusional in that regard), but that is what I choose, and you have no basis to contradict that other than your pet ad hoc models, which are not based on the reality of cognition, and can only explain behavior by accusing others of ignorance and/or delusion.

You're no different in that regard from a Christian who asserts that the only reason atheists can be moral or love others is because deep down inside they secretly know 'God' and are inspired by him, but just deny him because of their egos.
It's insulting, and I'd ask you to stop doing it and realize you're proposing an unfalsifiable ad hoc model and explaining all behavior with reference to that, not revealing some underlying truth of reality and human cognition.
ole_92 wrote: You think that the torture is worth it to continue living. It's like going to the gym to get some benefit later, or working to make money. It's always a trade off.
According to YOUR unfalsifiable ad hoc model (of what I assume you think is delusional cognition anyway, since you reject the substantive nature of pleasure).

According to the Christian model, it's because you secretly know you will go to hell and infinite torture is greater than any amount you could experience on Earth.
Did that just convince you to become a Christian? Because I can construct a bullshit model with no evidence of its reality that happens to conveniently explain everything just like every other model because I created the model specifically to do that?
How is that persuasive?

You find yourself in the same predicament as a Christian who is unable to explain why Islam is wrong in objective terms. These are just different models, all at least apparently (or superficially) internally consistent.
ole_92 wrote:
brimstoneSalad wrote:Or the opposite could be arbitrarily asserted: That all discomfort and suffering are from absence of pleasure.
Do you really think that hunger, thirst, desire for sex, and loneliness are just absences of pleasure?
I am not asserting that model, I am saying it can be asserted just as yours. Just like the Christian model above. I'm attempting to get you to understand what models are, and why they are an exercise of mental masturbation, not an exploration of the deep nature of human cognition.
By the same mental gymnastics you use to explain away pleasure, we could explain away pain as just being the absence of pleasure.
Detailed unfalsifiable ad hoc models like this with no connection to reality are the hallmark of pseudoscience.
ole_92 wrote: Surely they are negative feelings. And what about torture, is that some kind of extreme lack of pleasure?
No, a negative feeling is just a lack of positive feeling. A withdrawal that feels negative, but really it's just missing positive.
Torture is just total lack of pleasure, obviously. Any moron can see that. :roll:

Are you starting to get it? This is what you're doing. It's not science, it's not neuroscience, it's not psychology, it's not philosophy. It's a crude metaphysics, and it carries no weight since it presents no benefit over any other equally consistent model -- and it's insulting to people to tell them how they think like that (particularly when done without evidence).
ole_92 wrote:They are deprivations.
Right, so you admit it! They are deprivations -- of what? Pleasure. When you are deprived of pleasure, it feels like suffering; that's how we define suffering: just lacking pleasure. It's actually totally neutral.

Checkmate!
(Are you getting the point?)
ole_92 wrote:We need something to happen to relieve them and get back to neutral.
No, suffering is neutral, we desperately crave to get back to positive because that's what we're accustomed to. This is why, once we're back into positive this feeling of relief wears off as we get used to being in the positive again and take it for granted. Suffering is an illusion created by the lack of pleasure, just like darkness is a lack of light, and then your eyes adjust to the sunlight. Duh. :lol:
(Anything?)

I can only hope the point has not been lost on you like it was on Mr. Purple in the other thread. There are a limited number of ways that I can explain this, and if you don't get it now, you might never manage. Maybe somebody else can find another way to explain it.
ole_92 wrote:Maybe I'm an alien and not a good example of how humans function. :?
No, you have been subject to cognitive bias, because you have viewed your actions and feelings through the lens of confirming this model which you believe in.
If you were a Christian, or you believed in the positive only model, you would be viewing your actions and feelings through that lens and would be using every life experience you have as proof of one of those models instead.

That's very, very normal. But it's not critical thinking.

In an ad hoc model, whether it is the pleasure only model, the pain only model, or the "you secretly believe in god" model, they all work out just fine. Why? Because it's all ad hoc bullshit. It's easy to arbitrarily create a model to retroactively "explain" actions and feelings in whatever way you want.

The only model that actual observation from physiology supports is one that includes negative and positive feedback, and which (based on behavioral science) is guided by multiple conflicting interests (values) in the mind that provide intrinsic motivation based on the strength of those values (not some secret subconscious nonsense where every choice is guided by carefully considered foresight into future emotional pain).

Any model based on only positive feedback, or only negative feedback, or that is purely hedonistic, may seem possible, but they are obtuse and overly complicated. In terms of informatics, they're sub-optimal, and not solutions that would evolve naturally.
This veers into the science of information processing and evolutionary neuroscience, and I don't think you have the prerequisite knowledge yet to engage in that kind of discussion, but I want to encourage you to read up on it, since it may help disabuse you of these mistaken assumptions in a way I can not do on an internet forum.
ole_92 wrote:Well, you wouldn't feel good eating a carrot unless you were hungry prior to that. You need a negative stimuli first. The dual nature of carrots and sticks is true - carrots relieve the discomfort, and sticks cause it. It's consistent with my argument.
That is some grade-A bullshit. Delicious food doesn't stimulate pleasure?

Apparently consistent with your argument? Yes, and so is the Christian model apparently consistent with itself, and the pleasure only model where hunger is just the withdrawal of the pleasure of satiation. Eating a carrot if you're already satiated (in a state of pleasure) has diminishing returns, and may even reduce the pleasure from comfort because you could become too full.
Whatever, it doesn't mean anything, and it doesn't mean it's true.

Apparent internal consistency does not mean something is correct.
Go look at the flat Earth society -- they have ad hoc explanations for everything too. Does that mean the Earth is flat?

I could pull a model based on beauty out of my ass too, and do the same elaborate mental gymnastics to make it consistent.

Anything anybody does is based on maximizing beauty, pleasure and pain don't exist, there's only the canvas of beauty in the world. When somebody is hungry, they are driven to eat in order to continue creating beauty, they only think it's for other reasons based on delusion and ignorance, otherwise beauty would be lessened if they became too thin. Over eating would reduce beauty, thus why they stop when full. But some people must be fat, or starving, in order to show by contrast how beautiful others are. People avoid death to create more beauty. People initiate conflict and war to create beauty -- dull and boring is not beautiful.

Is this helping at all? Any realization of how absurd ad hoc models are?
Any notion of how insulting yours is, when you tell other people how they think, or that they're ignorant and delusional for not realizing all of their behavior is dictated by your model?

ole_92 wrote: I think you're misunderstanding my position. There are certainly negative and positive stimuli. I'm saying that the reason most of positive stimuli are positive, is because there was a prior negative stimuli before that.
I'm not misunderstanding.

But now you've said "most", which is not all, thus you can't really be a true negative utilitarian. You just weigh negative experiences more. Which means, since there are positive experiences, the utility monster can still exist in its classical form (being a unique case of positive gain). You have solved nothing.
ole_92 wrote: I'm saying that the reason most of positive stimuli are positive, is because there was a prior negative stimuli before that.
And the reason most negative stimuli are negative is because there was a prior positive stimuli before that. So what? They work together, and they play off each other to create balance which guides behavior.
There's plenty of research on this, if you'd do some reading in behavioral psychology.
This even happens for things that aren't pleasure and pain, like illusory colour perception. Google search images for "color illusion" or something like that. Relative to one colour, we perceive its opposite more keenly even in a hue neutral tone.

ole_92 wrote: Imagine you really enjoyed getting a massage. After a few hours it would stop feeling good, because it did its job - it relieved you from your tensions and gave you the relaxation that you craved.
Imagine you really hated spicy food. After a few days it would stop being as irritating because you got used to it. All sensory stimuli, negative and positive, have diminishing returns.
A constant negative stimulus is eventually ignored or normalized and loses its effect.

Variable rate, whether it's pleasure or pain, is more keenly felt.
ole_92 wrote: There are gray areas such as certain drugs and pleasure machines, but again, would it really feel good to be constantly high?
The pleasure stimulating your brain directly? Yes.
You would, however, stop being sentient after some time, but that's another issue, and one that's problematic for the hedonistic model: it places the "self" in the wrong mechanism, so sentience becomes irrelevant. Hedonists only care about the actual nerve firing that is arbitrarily being called pleasure.

I created an image showing this in the other thread, about the two models.

As to this:
ole_92 wrote: They also know they will change their minds if they don't choose the torture. Both options seem bad.
What are you talking about? Torture is clearly more unpleasant. You may want to read what I said again. This sounds like more assumption on your part.

I know the torture will yield more net suffering than the potential grief. Particularly if I'm going to be killed either way. Look at the thought experiments I laid out for Mr. Purple in the other thread.

1. Your family will be killed, causing you some limited grief, but then so will you; the grief will be less than maximal pain and will last only a second.
2. You will be tortured for maximal pain for a day, then killed.

You choose #1, right?
We know the pain of torture must outweigh the grief or guilt.
ole_92 wrote: But the emotional torment of being responsible for some greater harm would be worse than physical torture (or so they think). So in the end, they choose the lesser of the two evils, given the information they have.
No, they don't think that. They know very clearly otherwise. And they will be killed, so they won't have to suffer for more than a moment from the guilt.
Your whole argument hinges on unfalsifiable assertions of inherent ignorance and delusion. You basically have to claim people aren't capable of any real decisions or reasonable knowledge. You are insultingly removing human capacity for agency to support your ad hoc hypothesis.
ole_92 wrote: Well, the experience right before death (or at the time they make the decision) would feel like guilt too. And apparently it would be worse for them to feel guilty, than to be tortured.
No, as in the example above, the torture is clearly worse.
ole_92 wrote: Even if you have been tortured before, you can't reliably say you know how it feels, due to an inaccuracy of the remembering self, as opposed to the experiencing self.
There you go again.
Ever moving goal posts.

Tell me how your hypothesis could be falsifiable. Give me an experiment that could be done to prove you wrong.

You'll never admit it's wrong, with every bit of evidence, you'll just move the goal posts and claim ever deeper subconscious levels of knowledge or ignorance and delusion -- whatever suits your hypothesis at the time.
This is what makes these positions so intellectually dishonest.
ole_92 wrote: But your definition of morality is also arbitrary.
No, it's a semantic argument based on common usage and consistency.
What threads have you read?
ole_92 wrote: Morality is about what's good and bad.
Here you're assuming the conclusion of hedonism, which is wrong. Try again.
I'm going by something like the golden rule, talking about moral action -- not a global system.
ole_92 wrote: But if that's not an option, you can kill the utility monster, and that would be the second best moral thing to do.
Killing the monster isn't usually an option either. It is a monster, after all. It is to sacrifice or not to sacrifice.
ole_92 wrote: Whether it's off putting or not should be irrelevant. We shouldn't reject ethical systems just because they are repugnant.
I didn't say that. I said it was an additional practical argument. It is an argument for not spreading your system, since people will reject it and opt for something with even worse outcomes (even measured by your system).
ole_92 wrote: 1. Do you agree that the amount of suffering is greater than the amount of happiness in today's world? Think about all the wars, famines, torture, rape, poverty, animal agriculture, and nature. Do you agree that the "bad" is not outweighed by the "good"?
2. If yes, doesn't it follow logically that it's better to press the red button?
3. If not, how bad does it have to get before it's too much? There must be a limit when you admit this system is dysfunctional and produces more harm than good.
No, no, and no.
ole_92 wrote: Edit: I just thought of a potential argument for altruism and against utilitarianism. If the moral agent's wellbeing matters, then if the agent is very unhappy they would be morally obliged to commit suicide. Obviously it feels intuitively wrong to demand that.
On the other hand, if the agent's actions result in a net reduction of suffering of others, then their existence is definitely justified.
But still, it just feels wrong to demand already miserable people to either be super altruistic to justify their existence, or to die...
So I don't know, perhaps I'm wrong, and altruism is right. Although I am reluctant to trust intuitions.
This is just a practical argument, not an argument against the truth of the system itself.
I do wonder why you wouldn't kill yourself if you thought life was nothing but suffering, though, and you believed your mind functioned hedonistically. Not as a moral prerogative, but just as a matter of cognitive function if you seriously believed there was only suffering ahead, how could you choose otherwise unless the mind is not actually hedonistic?
Seems that's enough of a proof against the hedonistic framework. Or it proves you're delusional, or that you don't actually believe your own assertions about the degree of suffering in life.
Post by ole_92 »

Okay, it seems like you know much more about psychology and physiology than I do, so I don't think I'm equipped to keep arguing. Perhaps my model is unfalsifiable, and humans are something more than selfish animals. I'll have to do more research and see whether that can be proved or disproved. But I'll address a few things you said.
brimstoneSalad wrote:I don't want to die not because I'm seeking greater pleasure: I'm not a hedonist, and my life is not pleasure oriented. I don't want to die because I have interests, and things I want to do and accomplish for their own sake.
It is a side effect that accomplishment also yields pleasure, but I'd rather accomplish them and then die before realizing it (losing that ability to experience the pleasure from the accomplishment, which I assure you is nominal), than to merely think I've accomplished it before dying (thus experiencing the pleasure) but fail to actually accomplish what I wanted.
In addition to that stronger motivation based on my other interests, I also don't want to die simply because I have an innate interest in living itself which is unrelated to future or present sense experience.
I want to clarify that trivial pleasure isn't the only thing I was talking about. My definition of hedonism is more broad. There is a whole range of things, like feeling of accomplishment, fulfillment, duty, peace, reward etc. I don't believe we can take any action independent of those, much like I don't believe in free will. But again, I'll have to read more about it to come up with a better hypothesis. I apologize for sounding insulting, it wasn't my intention. I was just trying to explain something that feels ridiculously obvious to me.
brimstoneSalad wrote:Right, so you admit it! They are deprivations -- of what? Pleasure. When you are deprived pleasure, it feels like suffering, that's how we define suffering: just lacking pleasure. It's actually totally neutral.
No, deprivations are negative in and of themselves. They are desires, wants, needs, preferences. They are a "stick" of nature, a mechanism to make us do what we do. They feel exclusively negative to me. But I guess I'm just suffering from cognitive bias and a lack of critical thinking.
brimstoneSalad wrote:But now you've said "most", which is not all, thus you can't really be a true negative utilitarian. You just weigh negative experiences more. Which means, since there are positive experiences, the utility monster can still exist in its classical form (being a unique case of positive gain). You have solved nothing.
Firstly there are a few different types of negative utilitarianism. There are weaker and stronger versions.
Secondly, there may be some positive experiences, but I place zero value on them. They are irrelevant and there's no need for them. I'm exclusively oriented to relieve suffering. That is the only thing that matters and that is needed. I am a true negative utilitarian, and the utility monster problem is still solved.
brimstoneSalad wrote:I know the torture will yield more net suffering than the potential grief. Particularly if I'm going to be killed either way. Look at the thought experiments I laid out for Mr. Purple in the other thread.

1. Your family will be killed, causing you some limited grief, but then so will you; the grief will be less than maximal pain and will last only a second.
2. You will be tortured for maximal pain for a day, then killed.

You choose #1, right?
We know the pain of torture must outweigh the grief or guilt.
Well, it depends. I'll tell you what I would do. At the time of my calculation, if I chose #2 I would feel good about myself and get to live another day! But that would be quickly replaced by regret. :D And a realization that I didn't know what I was getting into, and an admission of my mistake. It would have been an uninformed preference. So I would choose #1. I don't like to make mistakes.
ole_92 wrote:Even if you have been tortured before, you can't reliably say you know how it feels, due to an inaccuracy of the remembering self, as opposed to the experiencing self.
brimstoneSalad wrote:There you go again.
Ever moving goal posts.

Tell me how your hypothesis could be falsifiable. Give me an experiment that could be done to prove you wrong.
Okay. Here's an experiment. Instead of asking them in advance, how about you torture someone in real time, and ask them during the torture about their choice to be tortured. Ask them if this is really what they want.
If they continue insisting that it is their informed preference, it would be true that you can have an informed preference for net suffering. And I will admit I was wrong. Get them to experience it at the time of making the decision, so they actually know what they're talking about.
brimstoneSalad wrote:I didn't say that. I said it was an additional practical argument. It is an argument for not spreading your system, since people will reject it and opt for something with even worse outcomes (even measured by your system).
Most people already opted for something with bad outcomes. I can't think of any other species that has such a negative impact on wellbeing of others. Humans are the utility monster.
brimstoneSalad wrote:I do wonder why you wouldn't kill yourself if you thought life was nothing but suffering, though, and you believed your mind functioned hedonistically. Not as a moral prerogative, but just as a matter of cognitive function if you seriously believed there was only suffering ahead, how could you choose otherwise unless the mind is not actually hedonistic?
Seems that's enough of a proof against the hedonistic framework. Or it proves you're delusional, or that you don't actually believe your own assertions about the degree of suffering in life.
I am addicted to life. It's a lot like being addicted to cigarettes. You have little choice in it, and it's not easy to just quit. When I'm no longer hooked, I would just kill myself. Hopefully it's legalized - I don't wanna make a mess :|
Much like smokers feel cravings, I feel duties and obligations, preferences and desires. I am very certain there is only suffering ahead, but I have an optimism bias, which makes it feel less intolerable. By fulfilling these duties and doing the right thing, my suffering is reduced.
I'm hoping to use my situation to make a positive net impact on the world. I was already harmed, but I can still try to prevent others from being harmed.
If I advocate for veganism and antinatalism, I could hope to prevent some new lives from being born. It's pretty shitty to bring them into existence into a world like this without any consent.
brimstoneSalad
neither stone nor salad
Posts: 10367
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan


Post by brimstoneSalad »

ole_92 wrote: I want to clarify that trivial pleasure isn't the only thing I was talking about. My definition of hedonism is more broad. There is a whole range of things, like feeling of accomplishment, fulfillment, duty, peace, reward etc.
I know. Which is why I gave the example of dying without experiencing those.
I would rather achieve and not know about it than fail and think I've achieved. I value the achievement in and of itself, detached from my knowledge of it, much more than the pleasure it produces for me to have achieved.
ole_92 wrote: I don't believe we can take any action independent of those
Well, you're very wrong. We act based on values far more than on foresight and expectation of reward; we do the latter too, of course, but it's only part of the story.
I recommended this book to Mr. Purple, but I think you may be more inclined to read it:
http://www.amazon.com/Thinking-Fast-Slo ... 0374533555
ole_92 wrote: much like I don't believe in free will.
Our values and beliefs dictate our action and inform habit. If we value protecting our children more than life, we will protect our children at the cost of our lives; if we don't, then we won't. It's very easy to predict, I'm not talking about something magical.
ole_92 wrote: I was just trying to explain something that feels ridiculously obvious to me.
That's the problem; if something feels that obvious, you probably haven't thought about it enough or adequately challenged yourself on the topic.
Next time you think something is obvious, spend more time reading arguments against it instead of for it. It will serve you much better.

Somebody like David Pearce is the last person you should ever read at this point, since that's only going to confirm your beliefs. That's something I would read, but you should avoid like the plague.
ole_92 wrote: They feel exclusively negative to me. But I guess I'm just suffering from just cognitive bias and lack of critical thinking.
I was just giving an example of a pleasure-only model, and the rationalizations that would come with it. But sure, that is true to some substantial extent. You need to read a lot more that you disagree with, and avoid confirmation biases.
Look for evidence of other models in your feelings and actions.
ole_92 wrote: Secondly, there may be some positive experiences, but I place zero value on them.
You can't substantiate why you do this. It's as arbitrary as choosing Christianity over Islam when both are substantiated only by "faith", or preferring positive experiences only and ignoring the negative. Either would be an intellectually wrong position to hold.
Arbitrary frameworks are only useful if you can convince people to agree with them for other reasons; we can't convince people they are correct, because they have no claim to it.

If you admit they exist, you've functionally defeated your position (which is a good thing, because if you don't admit it, you look like a crazy person and probably do worse than defeat your position).

ole_92 wrote: Well, it depends. I'll tell you what I would do. At the time of my calculation, if I chose the #2 I would feel good about myself and get to live another day! But that would be quickly replaced by regret. :D And a realization that I didn't know what I was getting into, and admitting my mistake. It would have been an uninformed preference.
It would clearly NOT have been an uninformed preference, because you already know now that under that duress you will change your mind.
It may not be perfectly informed, but you wouldn't be perfectly informed when trying to change your mind either; it is surely not uninformed though.

Do you deny that it's possible to make a decision in the present knowing you will regret it in the future and why, but still be committed to that decision when it's made and knowing (even when you regret it due to being under duress) that it's the morally right decision?
ole_92 wrote: So I would choose #1. I don't like to make mistakes.
It wouldn't have been a mistake, it would have been the right thing to do. But let's adjust #1 to control for your bizarre indifference to murdering people:
1. You will be killed right away (less than a second to grieve), and your family will all be tortured to death over the next ten years; maximal torture, with variable rate and type so they can't be desensitized to it.

There is no doubt that #2 is the more moral choice, but #1 is the hedonistic one (for somebody who only values morality to the extent it makes him or her feel good/bad).
ole_92 wrote:Okay. Here's an experiment. Instead of asking them in advance, how about you torture someone in real time, and ask them during the torture about their choice to be tortured. Ask they if this is really what they want.
That's a flawed experiment: how about you get them drunk or high, then ask?
People are not in a rational state of mind when under duress from torture. The panic that is triggered means they would do anything, even agree to a hundred times more torture at a later date, to make it stop right then (does that sound rational to you?).

You're irrationally favoring the decision made under duress, which would surely be regretted after the torture is over if they live even an instant to do so.
If you're using a standard of regret, this test fails.
ole_92 wrote: If they continue insisting that is their informed preference, it would be true that you can have an informed preference for net suffering. And I will admit I was wrong. Get them to experience it at the time of making the decision, so they actually know what they're talking about.
This is not falsifiable, because it's not a condition of rational choice. You might as well say:

"Set it up with two buttons, right and left hand, the right twitches for yes and the left for no. Now paralyze the subject and hook up pulsing electrodes to the right hand so it twitched uncontrollably and the left is impossible to move. If the subject answers no under these conditions, I will admit I'm wrong."
ole_92 wrote: Most people already opted for something with bad outcomes. I can't think of any other species that has such a negative impact on wellbeing of others. Humans are the utility monster.
So, your goal is to attempt to spread your ideas, thus making more people reject veganism (where they might have otherwise accepted it if it weren't being promoted by cartoon super villains)?
How is this outcome better than more people going vegan but not becoming negative utilitarian hedonists, and using some other moral framework instead which is actually acceptable to normal human beings?
ole_92 wrote: I am addicted to life. It's a lot like being addicted to cigarettes. You have little choice in it. When I'm no longer hooked, I would just kill myself. Hopefully it's legalized - I don't wanna make a mess :|
Sure...
ole_92 wrote: Much like smokers feel cravings, I feel duties and obligations, preferences and desires.
If you were dead, you would not feel any of those.
ole_92 wrote: I'm hoping to use my situation to make a positive net impact on the world. I was already harmed, but I can still try to prevent others from being harmed.
Or you might just be making things a lot worse by discouraging more people from becoming vegan. The more widespread and better known OOS and antinatalism are, the more hostile people will be to veganism. You can't spread intuitively abhorrent ideas like that and expect people to be receptive to them.

I'm not saying you should kill yourself. Once you spend more time examining the flaws in your philosophical beliefs you may come to very different conclusions. Maybe put a pin in the whole idea of suicide until you understand these ideas a bit better.

Post by ole_92 »

Hi again. Okay, I learned something. Let me respond to some things you said.
brimstoneSalad wrote:I would rather achieve and not know about it than fail and think I've achieved. I value the achievement in and of itself, detached from my knowledge of it, much more than the pleasure it produces for me to have achieved.
Honestly, I feel the same intuitively. But how is it possible to value something that I don't know about? The fact that I value it (whatever it is) presupposes that I have prior knowledge of it.
For example, I can say I value convincing someone to go vegan, regardless of my knowledge. But until I know I succeeded, how can I possibly value it? I would value it if I knew, so it's a hypothetical statement. But I can't actually value it (and be aware of valuing it) until I find out about it (whether it's true or not). To me it seems that knowledge is essential. I can't disagree with you, so I'm not saying that you're wrong. I just can't get my head around it.
brimstoneSalad wrote:Well, you're very wrong. We act based on values more more than foresight and expectation of reward; we do the latter too, of course, but it's only part of the story.
I recommended this book to Mr. Purple, but I think you may be more inclined to read it:
http://www.amazon.com/Thinking-Fast-Slo ... 0374533555
Added to my "to read" list, thanks!
brimstoneSalad wrote:Our values and beliefs dictate our action and inform habit. If we value protecting our children more than life, we will protect our children at the cost of our lives; if we don't, then we won't. It's very easy to predict, I'm not talking about something magical.
Yeah, I can definitely agree that people will sacrifice their lives for their values. I'm not so convinced that they could choose net suffering, but I will keep thinking about it.
brimstoneSalad wrote:Somebody like David Pearce is the last person you should ever read at this point, since that's only going to confirm your beliefs. That's something I would read, but you should avoid like the plague.
Good point. One of the most important things I learned from this conversation. I keep falling into the trap of reading things that affirm my current understanding. Not the smartest approach.
brimstoneSalad wrote:If you admit they exist, you've functionally defeated your position (which is a good thing, because if you don't admit it, you look like a crazy person and probably do worse than defeat your position).
I admit they may exist (although I will have to look into it and see if I can find clear evidence of that), but if they do, I don't believe they have any intrinsic value. Okay, let me ask you this. Don't you think that reducing negative feelings is more urgent than improving the positives (or in your case, ending extreme violations of will as opposed to minor violations of will)?
If person-1 suffers torture at -10, and person-2 is quite happy at +6, and we have 4 wellbeing points to spend, wouldn't it be better to alleviate 4 points of person-1's suffering, than to please an already happy person-2 with those points?

I personally think that wellbeing points should be spent solely on reducing suffering, starting with the most extreme forms and gradually moving up as the worst forms are alleviated. Only if everyone is at 0, and nobody is in the negative any longer, does it make some sense to talk about improving happiness. That's got nothing to do with morality the way I see it though. That's just pleasing people - a subjective, preference-based approach (a preference to please, not a moral obligation to please).

But yes, that may sound arbitrary to you (just like your definition of morality sounds arbitrary to me), but at this point I can't prove why I think it is true.
brimstoneSalad wrote:It would clearly NOT have been an uninformed preference, because you already know now that under that duress you will change your mind.
It may not be perfectly informed, but you wouldn't be perfectly informed when trying to change your mind either; it is surely not uninformed though.

Do you deny that it's possible to make a decision in the present knowing you will regret it in the future and why, but still be committed to that decision when it's made and knowing (even when you regret it due to being under duress) that it's the morally right decision?
No, of course I don't deny it. I do question whether the decision is informed. There is a huge gap between the remembering and experiencing selves. The person (unless currently experiencing torture) is just making a prediction. I imagine they would think something along the lines of "yes, I will change my mind, but it will be worth it, because it's the right thing to do, so I'm going to do it". To me that seems fully consistent with current predictions of net wellbeing.

I was thinking about values and beliefs, and realized that I don't even know what we're talking about. Doing things in accordance with one's values/beliefs makes them feel good, and vice versa. You might say that's just a side effect, but I say that's their inherent function, a basic requirement. Why would we strive to act in accordance with our values if they didn't have a profound effect on how we feel? How can values even function without altering our conscious state? To my best understanding, we evolved to feel good when we see and do things that live up to our standards, and to feel bad when things go against them. Maybe I'm missing something. I really don't get it. What is a "value" other than a mechanism that makes us do things by giving us positive and negative stimuli, carrots and sticks? What is the difference between a value and a preference? Value = preference, but stronger? How can it ever work independently of the positive and negative stimuli that drive all the other preferences we hold? What makes it qualitatively different?

If a value is merely a preference, then I can just say that we prefer/value not being tortured more than we prefer/value being moral. Valuing our wellbeing is stronger than valuing morality. What makes moral values/preferences qualitatively different from other values/preferences? If it is true that moral values are stronger, then their drive - the stimuli that affect our wellbeing - must be stronger, and that is why one might choose to be tortured for their values. So in the end, it always and necessarily comes down to how it affects our wellbeing. If you argue that it's not the case, can you provide an alternative explanation for that phenomenon?
This is why I drew the analogy to free will. It seems like a magical claim that a preference can function without any stimulus acting on our wellbeing.

Imagine if I said that one of my values is that "rape is bad". And then I went on to rape someone, or watch people get raped. If I felt completely indifferent, if it didn't have any impact on my wellbeing (I wouldn't get righteously angry/sad/judgmental), would it still be considered my value? I would say no. I would be a very confused individual.
I will read the book you suggested, but at this point I'm unconvinced (or rather the more I think about it, the more confused I become).
brimstoneSalad wrote:It wouldn't have been a mistake, it would have been the right thing to do. But let's adjust #1 to control for your bizarre indifference to murdering people:
1. You will be killed right away (less than a second to grieve), and your family will all be tortured to death over the next ten years; maximal torture, with variable rate and type so they can't be desensitized to it.

There is no doubt that #2 is the more moral choice, but #1 is the hedonistic one (for somebody who only values morality to the extent it makes him or her feel good/bad).
I'm not proposing egoist hedonism as the basis of morality. I completely agree with you that #2 is the right thing to do morally.
My point is that we can not *always* act morally, regardless of our values and beliefs. Given enough suffering, most of us would change our minds, and we would have a real, undisputed preference to act immorally, just to make it aligned with our perception of net wellbeing. A preference which is indisputably stronger than a preference to act morally.
When I said "mistake", I meant something that one regrets later, and that's not the same as an immoral action.
brimstoneSalad wrote:That's a flawed experiment: how about you get them drunk or high, then ask?
If you want to ask them whether the drug/alcohol experience is worth it, then yes, you should ask them during those states. Not after, not before, but during.
If you want to ask them whether the torture experience is worth it, then ask them during the torture. I don't understand why you would focus on people's memories/predictions, when clearly the experience state is the only accurate criterion.
brimstoneSalad wrote:People are not in a rational state of mind when under duress from torture. The panic that is triggered means they would do anything, even agree to a hundred times more torture at a later date, to make it stop right then (does that sound rational to you?).

You're irrationally favoring the decision made under duress, which would surely be regretted after the torture is over if they live even an instant to do so.
If you're using a standard of regret, this test fails.
If they sign up for one hundred times more torture at a later date, they will regret it during the next torture. Just like they regret volunteering for this torture. Both decisions were mistakes, in the way I use that term.
But it is so obvious that they think it would be worth it to make the second mistake. They want to experience at least some form of relief, and stop the current suffering. They think it's the lesser of the two evils, even though they are completely wrong about it.
brimstoneSalad wrote:This is not falsifiable, because it's not a condition of rational choice. You might as well say:

"Set it up with two buttons, right and left hand, the right twitches for yes and the left for no. Now paralyze the subject and hook up pulsing electrodes to the right hand so it twitched uncontrollably and the left is impossible to move. If the subject answers no under these conditions, I will admit I'm wrong."
This is an unfair analogy, because the subject has no choice (not even an irrational choice) in it. My experiment allows for choices, and choices would be made by the subject, given their current understanding of their future prospects (which won't always be informed). There can be examples of people who do not regret being tortured - either the level of torture is too low, and/or they are immune to it.

Also, the line isn't so black and white between rational and irrational. Who says that we are always rational when we're not getting tortured? Who is to say that we are rational when we volunteer to be tortured? And rationality according to what standard? A moral nihilist or an egoist would say that the only rational choice is the one that benefits the agent, so according to them, sacrificing oneself for others is profoundly irrational.
But why does it even matter? Whether the calculation is made with a clear mind or not, its result dictates what we will do. It isn't necessarily what we ought to do morally, no matter how hard we try. And it doesn't necessarily result in a net benefit, like in your example of signing up for 100 times more torture.
brimstoneSalad wrote:So, your goal is to attempt to spread your ideas, thus making more people reject veganism (where they might have otherwise accepted it if it weren't being promoted by cartoon super villains)?
How is this outcome better than more people going vegan but not becoming negative utilitarian hedonists, and using some other moral framework instead which is actually acceptable to normal human beings?
I don't remember making anyone reject veganism due to my negative utilitarian hedonism. I don't usually get this deep into my theories when I talk about veganism. Maybe utilitarianism is as much as I would mention. When people reject it, it's because they don't care.

But even advocacy of people who don't hold those ideas, such as TVA, results in rejection. Haven't you seen comments that say "I will eat twice as much meat because of you"? Having said that, he did say many times that eating animals is immoral because it causes unnecessary suffering, and that's pretty damn close to my views.

People go vegan for a variety of reasons, and not all of them we consider rational (deontology and religion are some of the examples). I'm reluctant to accept that one approach is the best, "one size fits all". For example, there are Christian vegans, and it's possible to find verses in the Bible that support that decision. Inclusiveness is not a bad thing, given our current situation.
I don't mention the red button when I talk to non-vegans about veganism, if that's what you're worried about. :lol:
brimstoneSalad wrote:Or you might just be making things a lot worse by discouraging more people from becoming vegan. The more widespread and better known OOS and antinatalism are, the more hostile people will be to veganism. You can't spread intuitively abhorrent ideas like that and expect people to be receptive to them.
What does OOS stand for?
Antinatalism and veganism are pretty independent of one another. I don't need to mention one when I'm advocating for another. Most of the time when I talk about them together is when I'm trying to convert an antinatalist to go vegan. 8-) I noticed that they are more receptive to it than your average carnist, due to their concern for suffering. Many of them already accept the premise that it's wrong to impose life (and a risk of extreme suffering as a consequence) on a sentient being.

But at the same time I think it's important to spread AN, independently from vegan advocacy. I wouldn't say that this is intuitively abhorrent. Tell me what's abhorrent about this:
https://www.reddit.com/r/antinatalism/c ... al_choice/
I don't know, it's pretty common for people to have some form of compassion. It obviously doesn't work with people who have extreme parental instincts, but I don't waste my time trying to convince them.

P.S. I realize that I typed a lot, and I don't want to be a burden, so I don't expect that you respond to all of it, or any of it. But you've given me food for thought, so I thank you for that.
brimstoneSalad
neither stone nor salad
Posts: 10367
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: (LONG RANT) Can negative utilitarianism solve the "utility monster" problem?

Post by brimstoneSalad »

ole_92 wrote: Honestly I feel the same intuitively. But how is it possible to value something that I don't know about? The fact that I value it (whatever it is) presupposes that I have prior knowledge about it.
It doesn't presuppose you have specific knowledge about it, only general knowledge about the subject sufficient to come to a coherent value.

There may or may not be a kitten in this box. If there is a kitten in this box, I hope it is not suffering, as I hope beings in general are not suffering.
Values are conceptual, and general.
It doesn't have to be about this kitten specifically that I have some kind of certain knowledge about and familiarity with.

Now if you had no idea what a kitten was, and no idea even generally what a sentient being was, and no idea what suffering was, then you could not hold such a value.

Values are ideas, like mathematical concepts. Even if there was nobody in the world to have a notion of it, pi would still be pi, and sentience would still be sentience -- even if there were no words for those things anymore.
ole_92 wrote: But until I know I succeeded, how can I possibly value it?
I knew 1 + 1 = 2 in the past, but until I actually put these individual apples together, two groups of one apple, into one group and count them to find that they are now one group of two, how can I possibly know that math still applies to this application?
ole_92 wrote: I would value it if I knew, so it's a hypothetical statement.
It's not hypothetical, it's logically true unless by definition you only value things you know about. As if you changed the definition of "two" to mean "The nature of a thing or group of things which I have personally experienced and counted to be two, and not the conceptual number itself".

Just because you don't personally experience a value being realized doesn't mean it doesn't still exist as such. Just because you don't personally count the apples as adding up to two when one and one are combined does not mean there aren't now two apples.
ole_92 wrote: But I can't actually value it (and be aware of valuing it) until I find out about it (whether it's true or not).
You can both value it, and be aware of valuing it, just as I'm aware that all instances of 1+1 in the universe =2, seen or unseen, and I'm aware that I would prefer all kittens known or unknown to me to not be suffering.
ole_92 wrote: I'm not so convinced that they could choose net suffering, but I will keep thinking about it.
Every day you are alive, you choose net suffering (with respect to what you think you know), because you are compelled by something else.

Values themselves, when they influence actions, are not entirely unlike fears or addictions: All of these are factors that influence our behavior, along with pleasure and pain, and the expectation of those experiences. Psychological forces are in a tug of war with each other. Purely hedonistic elements of avoiding suffering don't always win, and that's probably a good thing.
ole_92 wrote: I admit they may exist (although I will have to look up and see if I can find clear evidence of that), but if they do, I don't believe that they have any intrinsic value.
You're arbitrarily choosing not to value them, despite the fact that behavioral evidence proves that the beings experiencing them value them.
An animal will experience pain in order to experience slightly greater pleasure.
Both pain and pleasure are valued, and have clear exchange rates. We can quantify exactly how much pain an animal will go through for exactly how much pleasure, and at what point the animal considers it no longer worth it.

You dismiss this clear behavioral evidence without cause, or with an unsubstantiated assertion that pleasure doesn't exist, and that the animal is only experiencing a small pain to reduce a larger pain.
ole_92 wrote:Don't you think that reducing negative feelings is more urgent than improving the positives
No.
ole_92 wrote:(or in your case, ending extreme violations of will as opposed to minor violations of will)?
This is a completely different question. How did you think this was the same?
The correct comparison would be "Ending violations of will as opposed to enabling realizations of will"

Both are important, but it depends on degree.
I would ask the individual which he or she prefers in that situation, since there are clear exchange rates between the two.
I could choose to accept a violation of will in order to realize my will in another way, or I could surrender the ability to realize some aspect of my will to avoid a violation of my will. Whichever choice I make (if it's informed and of sound mind) demonstrates my preference for one over the other in this particular case, and shows which was larger (the violation, or the realization).
ole_92 wrote:If person-1 suffers torture at -10, and person-2 is quite happy at +6, and we have 4 wellbeing points to spend, wouldn't it be better to alleviate 4 points of person-1's suffering, than to please an already happy person-2 with those points?
No, because you're using a linear scale, and your example does not map to reality.
In reality, experience itself tends to be non-linear. We experience diminishing returns with resource expenditure.

Spending the same resources on each person may, in reality, increase the happy person to +7, but reduce the suffering of the unhappy person to -1. This is what we see from our use of resources. It's much easier to drastically decrease suffering with the same resources than to make a happy person happier. This is a question of application of resources and effective altruism.

If you use an unrealistic scenario like that, where there are magical wellbeing points that are unconnected to resource expenditure, and the subjects do not experience diminishing returns but have a linear relationship with experience, then it doesn't matter who you allocate the points to, assuming all other consequences are equal.
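The diminishing-returns point above can be made concrete with a toy calculation. To be clear, the concave curve and every number below are invented purely for illustration (nothing in this thread specifies a real wellbeing function); the sketch only shows how, under diminishing returns, the same resources buy a large reduction in suffering but only a small boost for someone already happy.

```python
import math

def wellbeing(resources):
    """Toy concave mapping from resources to wellbeing on a -10..+10 scale.

    The shape (a shifted square root) is an assumption chosen only to
    exhibit diminishing returns; it is not a claim about real psychology.
    """
    return 20 * math.sqrt(resources / 100) - 10

def resources_for(wb):
    """Inverse of wellbeing(): resources needed to sit at a given level."""
    return 100 * ((wb + 10) / 20) ** 2

# Spend the same 10 units of resources on each of two people.
sufferer_resources = resources_for(-8)  # person currently suffering at -8
happy_resources = resources_for(+6)     # person currently happy at +6

gain_sufferer = wellbeing(sufferer_resources + 10) - (-8)
gain_happy = wellbeing(happy_resources + 10) - 6

# The identical expenditure moves the sufferer far more than the happy person.
print(round(gain_sufferer, 2))  # about 4.63
print(round(gain_happy, 2))     # about 1.2
```

This is the practical (not philosophical) sense in which alleviating suffering is the better buy: the asymmetry comes from the curve relating resources to experience, not from suffering counting extra on the scale itself.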

If you are imagining -10 to be more bad than +10 is good, then you are imagining it incorrectly.
ole_92 wrote:I personally think that wellbeing points should be spent solely on reducing suffering, starting at extreme forms and gradually moving up when the worst forms have been alleviated.
This is a personal bias you have. This is not philosophically true in any way.
However, because in reality there is no such thing as wellbeing points, and we're dealing with resources instead, it is empirically true that we should devote attention to alleviating suffering because this is simply easier and more effective (this is an issue of practicality).
ole_92 wrote:But yes, that may sound arbitrary for you (just like your definition of morality sounds arbitrary for me), but at this point I can't prove why I think that is true.
It is arbitrary, and if you can't prove why it isn't, that's a serious problem.

As to the definition of morality: Why is a circle round? Why is a circle not a closed shape with four straight sides and corners instead?
This is a semantic question which I would be glad to address in more depth if you can come to terms with the rest.
ole_92 wrote:I do question whether the decision is informed.
If you know you will change your mind under duress and even beg to be killed, how is that uninformed in any meaningful way?
ole_92 wrote:There is a huge gap between remembering and experiencing selves.
Sure, and if you want to, you can consider them different people. Two colleagues who take turns controlling the body. The experiencing self is amoral and hedonistic, operating with the "reptilian brain" for the most part, the remembering self is contemplative and moral, ruled by the prefrontal cortex ( https://en.wikipedia.org/wiki/Prefrontal_cortex ).

So, as the remembering self making a rational decision about what to do, you can sentence ten people to ten years of maximal variable torture, or you can sentence your colleague to a day of such torture (who will take over the executive role the moment the torture begins).

The important thing to remember is that we consider these states of extreme duress to be "not ourselves", and for good reason. Our sense of self is connected to our values and moral being -- to what we want to be -- when we're overcome with impulse or animal drive, our sense of humanity and what we regard as our selves is lost.
ole_92 wrote:How can values even function, without altering our conscious state?
How can math? If an apple rolls down a hill and joins another apple at the bottom, are they not two apples if there's nobody around to count them?
ole_92 wrote:What is a difference between a value and a preference?
They're basically synonyms. We can have preferences about things conceptually, that we do not experience (like the state of the unknown world).
ole_92 wrote:Why would we strive to act in accordance to our values if they didn't have a profound effect on how we feel?[...] What is a "value" other than a mechanism that makes us do things, by giving us positive and negative stimuli, carrots and sticks?
We wouldn't, but that's like asking why we would strive to do math if it wasn't useful to us in a particular situation or pleasurable -- we don't, but that doesn't mean the concepts behind mathematics aren't still valid.
Values, when held, express themselves by compelling behavior, and they do that through immediate feedback.

I'm going to post this image again, because you may find it more useful to understand what's going on:
interestvsegomodels.gif
Like I've said in the past, interests are like bidders in an auction, or voters in a political context.
They're all trying to "win" the decision, and they have limited money (or votes) to bid with (this is the influence they have in terms of affecting the difference engine).
The interest to avoid pain and seek pleasure is one of those bidders/voters. The auctioneer, or politician, is selling to the highest bidders, or pandering to the most voters -- that's where the decision comes in. Sometimes it will even lie to achieve those ends (this is a rationalizing mind, not a rational one).

The important thing to remember is that when they compel choices (through our feelings about those choices), that does not mean that we expect the ultimate consequences of those choices to yield more pleasure than pain: the only way that would always happen is if the interest to avoid pain and seek pleasure was the only interest present.
Compelling choices only means the executive 'feels' slightly better about making one choice rather than another in that moment -- the difference engine has more influence toward that choice than against it -- expectation of pleasure or pain in the future is only one factor of many that influence that choice. Fears and/or values and/or goals and/or preferences, or what have you, also influence that instant of decision.

You may decide to do something moral that will result in a lot of pain. The idea that it will result in a lot of pain makes the decision feel bad, but the idea that it's moral makes the decision feel good. If the good feeling outweighs the bad in that instance, the decision will be made. It comes down to a contest between the interests. In a normal and sound state of mind which is considered the self, that decision is rational.

The hedonistic experience of net pleasure from making that decision is really nominal. It's not a hit of coke or an orgasm. It's just something one feels ever so slightly better about than bad about. Once it comes time to pay the price of that decision -- the known price in terms of pain -- the hedonistic experience can be extremely negative (in obvious excess of any nominal pleasure from making the decision).
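The bidders/voters model a few paragraphs up can be sketched as a toy decision procedure. The interest names and bid sizes here are invented solely for illustration; the point is just that the hedonistic interest is one bidder among several, and the executive takes whichever side has the larger total backing in that moment.

```python
# Toy version of the "interests as bidders" model described above.
# Positive bids push toward making the choice; negative bids push against it.
# All names and numbers are hypothetical, chosen only to illustrate the idea.

def decide(bids):
    """Return the executive's choice given each interest's signed bid."""
    total = sum(bids.values())
    return "do it" if total > 0 else "don't"

# Choosing to be tortured for a moral cause, in a calm and sound state of mind:
calm = {
    "avoid pain (hedonism)": -6.0,
    "moral value": +7.0,
    "fear of regret": -0.5,
}
print(decide(calm))  # moral value narrowly outbids the hedonistic interest

# Under duress, the "reptilian brain" stuffs the ballot box for one interest:
duress = dict(calm, **{"avoid pain (hedonism)": -100.0})
print(decide(duress))  # the same person now recants
```

Note that in the calm case the winning margin is tiny, which matches the point above: the net "good feeling" of making the moral choice is nominal, nothing like the pain later paid for it.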
ole_92 wrote:If value is merely a preference, then I can just say that we prefer/value not to be tortured more than we prefer/value being moral. Valuing our wellbeing is stronger than valuing morality.
Sometimes that is the case, but the proof of the pudding is in the eating. If you make the choice in your current rational state of mind (as your self) to be tortured in order to do the right thing, obviously you value morality more.

When it comes to the actual instance of torture, the rules of the auction/election have been changed: the reptilian brain has barred the door, and won't let morality into the auction room to bid, or has had its bank account infused with an inordinate amount of money/stuffed the ballot boxes.
When we come into a state of mind like this, we consider that not being ourselves, because all of the things we existentially identify with have been overridden.

The case of the red button that yields maximal pleasure is an even better metaphor.

Currently, I have no interest in pushing the red button. It has been wired into my brain to stimulate pleasure directly to such an extreme degree that once I feel it I can not help but slam the button until I die. I know this. All of the interests present know this.

Pressing the red button would be an act of inviting that new interest (an interest in pressing the button) into the auction room. But because it has deeper pockets than anybody else present, nobody there wants to let this thing into the auction room (nobody else will ever win a bid again). They'll do anything to stop it from getting in, and they should. They'll work together to bar the door. The hedonistic interest may be fighting them on it, but it will lose since everybody else is in complete agreement that this thing can not get in.
The executive responds to this by electing never to press the button, and to avoid the situation.

This is NOT because they do NOT know what the button does, it is because they DO know what it does. It's a decision made because of the knowledge of the consequences, not in ignorance of it.

I'll put it another way:

Try it, you'll like it! (assume this statement is true)

There are two responses to this that are extremely different:

A. I don't believe that I will like it, therefore I will not try it because I expect not to like it.
-This is ignorance.

B. I know I will like it, but I do not want to like it, therefore I will not try it because this is not something I want to like.
-This is insight, and informed consent, not ignorance.
ole_92 wrote:What makes moral values/preferences qualitatively different from other values/preferences?
This is the wrong question to be asking.

What makes values/preferences that we associate with our sense of self and what we want to be (like morality) different from those that we associate with unwanted qualities of environment or vice (like addiction)?

What makes values/preferences we currently hold (like not being straight edge) different from those that we could potentially hold if they are initially forced upon us (like being addicted to coke, or the pleasure button)?
ole_92 wrote: If it is true that moral values are stronger, then their drive - the stimuli that affects our wellbeing must be stronger, and that is why one might choose to be tortured for their values.
Right there. That's why your hypothesis is unfalsifiable by the experiment you proposed. Because you will only assert this at the outcome if the person is asked while being tortured and elects to continue, rather than accepting that it proves you wrong.

E.g.
ole_92 wrote:There can be examples of people who do not regret being tortured - either the level of torture is too low, and/or they are immune to it.
That's all you will assert in those cases, and your ad hoc hypothesis is never falsifiable.
Why didn't you think of this when you proposed the experiment? Is it because you were in "prove I'm right" mode, rather than "challenge my own assumptions" mode?

You still have yet to come up with any experiment that would falsify your hypothesis. In any case where a person does not break down and recant during torture, you would just say the person is just delusional and/or expecting more pain to come from breaking.

And, by the way, some people shut down or endure under torture rather than breaking, although since research on it is unethical, there are typically only anecdotal accounts: http://boards.straightdope.com/sdmb/sho ... p?t=740635
ole_92 wrote: I'm not proposing egoist hedonism as the basis of morality. I completely agree with you that #2 is the right thing to do morally.
My point is that we can not *always* act morally, regardless of our values and beliefs.
We can not always act. We can not always act rationally. We can not always think. That's irrelevant. Can you choose not to pull your hand away from a hot stove?
In the context that you're being given that choice, you are not currently being tortured, and you are given the choice: #1 or #2. You can clearly choose the right choice at that point if you want.
ole_92 wrote: Given enough suffering, most of us would change our minds, and we would have a real, undisputed preference to act immorally, just to make it aligned with our perception of net wellbeing. A preference which is indisputably stronger than a preference to act morally.
And if you were lobotomized, the same might apply. This is irrelevant: you are changing the person through an extreme experience of duress, cutting off the effect of parts of the brain, creating an altered state of 'consciousness', and destroying the original sense of self that person had.

This does not prove your point any more than the ability of a lobotomy to affect behavior, or being connected to electrodes that control your hand movements to answer questions automatically without the physical ability to choose would.
ole_92 wrote: When I said "mistake", I meant something that one regrets later, and that's not the same as an immoral action.
Why do you care about regretting something later? Why does it matter to you now, while making the choice between #1 and #2?
Is the idea that you will regret the choice for a day more unpleasant to you in this moment than the idea of your family being tortured to death with maximal suffering (at variable rate if need be) for ten years?
ole_92 wrote: If you want to ask them whether the drug/alcohol experience is worth it, then yes, you should ask them during those states. Not after, not before, but during.
Sorry, that's nothing short of an idiotic notion. It's transparently and universally accepted as incorrect.

Let's imagine another pharmaceutical intervention:
I dose you up with something that makes you very angry and sadistic, and I disable your empathy with another targeted drug -- turning you into a completely different person -- and you spend all day flaying your family to death, and you agree that you're having a great day and that it's a good idea for you while under this influence.

Given knowledge of that fact, you should (in your current sane state of mind) choose to be released when so dosed (rather than be locked up) and set loose on your family, otherwise you're being irrational, because clearly this is a pleasurable experience to have -- even if afterwards and ever since you regret it.
It doesn't matter how you feel about it before or after, only during -- no matter how out of your right state of mind (or not yourself) you are. You must choose to experience that day of flaying your family, or you're just being ignorant and making an uninformed choice based on irrationality. You couldn't choose to be locked up during that day, because you know during that day you will regret having been locked up and being unable to flay your family to death.
And you can't decide after that day is over that your pleasure of flaying your family was less than the regret you have now, because your memory may be clouded now. Even if you've had the experience before and deeply regret flaying your last family, you must do it again if given the choice since you know you'll enjoy it while doing it and while so dosed regret not being able to do it if you're locked up.

Really? What?

ole_92 wrote: I don't understand why you would focus on people's memory/predictions, when clearly the experience state is the only accurate criteria.
Surely, once you're dosed up to make you enraged and remove your empathy, you want me to obediently release you upon your family when you ask. Only your current state could be relevant, your memory is unreliable, the past and future mean nothing.

So, you're going to choose now, before being dosed, for me to release you upon your family (or do whatever you want me to do later, in that state) when you later ask me, right?
Or will you ask me now to NOT release you upon your family while you're dosed up even if at that time you change your mind and want me to?
ole_92 wrote: If they sign up for one hundred times more torture at a later date, they will regret it during the next torture. Just like they regret volunteering for this torture. Both decisions were mistakes, in the way I use that term.
During the current torture, they will regret it if they elect to continue being tortured instead of taking the deal to get 100 times more later. During the later torture, they will regret having chosen to get immediate relief in the prior torture in exchange for the 100 times more later.

They're going to regret it either way, so according to you, both decisions are "mistakes" for these torture victims. These decisions are obviously not the same, of course; one involves much more pain.
But since regretting the decision is the only metric you use to define it being a mistake, all decisions are wrong (since every decision is imperfect and will result in some regret), and nothing we do matters. Right?
ole_92 wrote: But it is so obvious that they think it would be worth it to make the second mistake. They want to experience at least some form of relief, and stop the current suffering. They think it's the lesser of the two evils, even though they are completely wrong about it.
Really? You think they're being that introspective while they're being tortured?

I could easily trap you in an endless torture loop you agree to. You agree to be tortured three times for a million dollars.
During the first torture, I offer you a thirty-second break for a million dollars; you accept. During the second, I offer you a thirty-second break in exchange for adding on three more tortures; you accept. During the third, I do the same as the second. Infinite torture that you chose. It will never end, because you will never say no and elect to just finish the current torture right now.

And because saying yes and saying no will both be regretted, it's equally a mistake either way -- so it's perfectly sensible to agree to more torture, since it all comes out the same and saying no would be a mistake too. You're so clearly in your right mind and able to make rational decisions during torture. :roll:
ole_92 wrote: Also, the line isn't so black and white between rational and irrational.
It is, actually, but it's relative to how we define the self.
ole_92 wrote: Who says that we are always rational when we're not getting tortured?
Not I. But being tortured is a pretty good way for most people to lose touch with their sense of self and their values, and to become incapable of making decisions in line with them.
ole_92 wrote: Who is to say that we are rational when we volunteer to be tortured?
You'd have to look at whether the decisions made are in line with the self, and whether they advance its interests rationally.
ole_92 wrote: And rationality according to what standard? A moral nihilist or an egoist would say that the only rational choice is the one that benefits the agent, so according to them, sacrificing oneself for others is profoundly irrational.
What is the agent?

Agents have different goals and motivations. If an agent defines its goals relative to altruistic interests, then sacrificing oneself may be the most rational way to further those interests (or it may not, if those interests are better furthered in other ways, such as by staying alive).

This is a problem with egoism: when egoists assume all goals are hedonistic, they err. Sometimes self-destruction is rational in furtherance of one's goals.
ole_92 wrote: But why does it even matter? Whether the calculation is made with a clear mind or not, its result dictates what we will do.
No, it doesn't, as every thought experiment I presented indicates. The result may indicate what we try to do, or ask to do, but with knowledge and foresight we can choose to sabotage those attempts.

The man who will become a monster can lock himself in a timed vault that will not release him until his period of monstrosity is over. Our past selves can affect our present and future selves, inhibiting our ability to choose or limiting our practical options in full knowledge of what those future preferences will be.
ole_92 wrote: It isn't necessarily what we ought to do morally, no matter how hard we try.
We can force ourselves to, by limiting our choices in those circumstances and accepting the regret as a cost of doing business.
ole_92 wrote: And it doesn't necessarily result in a net benefit, like in your example of signing up for 100 times more torture.
To hedonism? No, it doesn't, because we are not inherently hedonistic when we're rational, and we're not always rational when we are inherently hedonistic.
I don't know what your point here is.
ole_92 wrote: I don't remember making anyone reject veganism due to my negative utilitarian hedonism.
You, as part of the sum total of misanthropic veganism, affect perceptions of veganism. If you avoid these things in public, that's good. But if you really want to influence people, you need a better framework for when you get into these discussions.
ole_92 wrote: But even advocacy of people who don't hold those ideas, such as TVA, results in rejection.
No mechanism is perfect, but some work much better than others.
ole_92 wrote: I'm reluctant to accept that one approach is the best, "one size fits all". For example, there are Christian vegans, and it's possible to find verses in the Bible that support that decision. Inclusiveness is not a bad thing, given our current situation.
It's important to have Christian vegans, because there are many people who have irrational religious world views and it's very difficult to get them to use reason. However, when dealing with otherwise rational and secular circles, it's more important not to mix the message or taint it with misanthropy.
ole_92 wrote: What does OOS stand for?
It's a disturbing cult that seeks to eradicate all life on Earth. They're cartoon villains, basically.
ole_92 wrote: I don't know, it's pretty common for people to have some form of compassion. It obviously doesn't work with people who have extreme parental instincts, but I don't waste my time trying to convince them.
What happens if you convince all of the people with weak parental instincts not to reproduce, and all of those with strong ones reproduce anyway?
What kind of correlations can we expect in these people, and what kind of results from this behavior?

Voluntary human extinction is just a dysgenic breeding program for human stupidity and congenital apathy.

So yeah, good job on convincing all of the intelligent and compassionate people not to reproduce while doing nothing about the rest. :roll:
The road to hell is paved with uncritical confirmation bias.

Re: (LONG RANT) Can negative utilitarianism solve the "utility monster" problem?

Post by ole_92 »

brimstoneSalad wrote:...
That's a whole lot of information to take in. I will reread it later and respond if I have any questions. Thank you!