An open invitation to stop your misinformed fad and start making an actual difference in the world.

Post by brimstoneSalad »

Mr. Purple wrote:Certainly. If you really have that view, then like I said before, I'm amazed you continued for as long as you did. I'm having that situation with a kid I'm tutoring in math right now. :\ It can be rough.
And what if he told you that, intuitively, 1 + 1 = 3 to him, and he can't comprehend the value of adding 1 + 1 to equal 2, regardless of how much logic you use to support it, since you can't compel somebody who doesn't already value it to believe it?
Math is math, his intuition about it, and even its utility at "convincing" dumb people is irrelevant.
You'd probably tell him to fuck off too.

The difference being, you're not even paying me to teach you this stuff.

I was being nice and trying to help you understand. You were being disrespectful in return, and making my job harder.
If this were a debate of some kind, that might be different. It isn't, wasn't, and hasn't been. All you've been doing this whole time is straw-manning and failing to understand my multiple explanations, even with illustrations.
Meanwhile, I have repeatedly demonstrated that I understand your position better than you do -- often by correcting you on what egoism actually holds. You have not once accurately summarized back to me any of the concepts or thought experiments I have been trying to teach you; you understand not one of them.
You don't even fully understand your own position. How are you supposed to understand anything else?

As I said, you need to study Classical Utilitarianism and Randian Objectivism -- two popular systems that are about equidistant from the position you have been maintaining, and that (while it pains me to admit it about Objectivism) are at least consistent enough to engage in a discussion about, unlike your position, which is not coherent.
Mr. Purple wrote:I'm specifically referring to you telling me that if I were to read more, then I would obviously agree with you in this argument.
I was being generous, and giving you the benefit of the doubt. It's not an argument, it's an assurance and a vote of confidence.
Richard Dawkins wrote:It is absolutely safe to say that if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid or insane (or wicked, but I'd rather not consider that).
The same is the case here: I am giving you the benefit of the doubt in assuming the problem is your ignorance.

It's perfectly true that you may just be stupid, insane, or wicked (by which I think he meant a liar who is being manipulative).

Would you have preferred that I qualified the statement like that?

And no, before you even try to argue it, this is an issue of logic, and you could not simply reach a different conclusion without one of those being true, just as your math student can't reach the conclusion that 1 + 1 = 3 without ignorance, stupidity, insanity, or overt dishonesty.

Even in that statement I was being generous.
WHEN and IF you ever educate yourself on these issues and understand this argument, you will realize that this is true -- unless (once ignorance is eliminated) you are stupid, insane, or wicked.

Happy?

Again, it's not an argument against your position. It's encouraging you to go read, and then you'll understand -- because I'm assuming you aren't stupid, insane, or wicked.
Mr. Purple wrote:Something like " I can't spend any more time, you will have to rely on books from here on" would be the mature way of handling that. That isn't even polite, but at least it's not a bad argument.
But I very much CAN spend more time on this. You're just being a bad student, and you're making me feel like I'm wasting my time by continuing.

As far as I know, everybody else reading this thread is face-palming that you still don't understand even the most basic concepts I was explaining, all of which I have explained two or three times.
There is ample extant text on these basic notions that you can study from books, which you will find harder to annoy.

Like I said, if you want to apologize for your behavior, I'll keep trying for a little while longer. Otherwise, unless somebody else steps in, you're on your own.

Anybody else who has any confusion on this topic, I'm happy to address.

Post by Jaywalker »

Mr Purple, I think the issue here is somewhere along the line your position changed from ethical altruism (in conjunction with psychological egoism) to ethical egoism. Your latest posts do clarify where you stand now, but I feel like you got to this point by constantly updating your view on morality throughout the discussion without due consideration of the points raised.

brimstoneSalad touched extensively upon this already, and I'm risking being redundant here, but I just want to make sure you're not arguing for a position you don't actually hold or want to hold. Do correct me if I'm wrong.
Mr. Purple wrote:As a society, we should encourage right or wrong as being whatever maximizes happiness and minimizes suffering overall, because that is what has the greatest chance of coming back to benefit us, and generates bonus points for rewarding our empathy and other social biological reward systems.
The phrasing you employed here makes it seem like you're advocating for ethical egoism: that moral obligations are actions which ultimately benefit the moral agent. I suspect you don't actually think this is the foundation of morality, since this is more like the social contract, which may be a good reason not to act selfishly but has no implications for a moral system. Surely you're a vegan not because it makes you happy through empathy, but because it reduces suffering for others even in the absence of your empathy, right?

You can't imagine anyone having preferences unrelated to their happiness and suffering (neither could I until recently), which is fine in the context of morality. It doesn't matter if people's preferences are partly or wholly influenced by biological impulses, but problems arise when you try to build your moral foundations on selfishness - an action doesn't become moral just because it is prompted by an individual's biology. What kind of a moral system is it that allows any action to be moral as long as it provides pleasure/joy to the individual performing the action?

Even IF psychological egoism is real, subjective morality devolves into uselessness just the same, as demonstrated by the examples in this thread. I'll reiterate: psychological egoism being real would not mean it is the basis of morality.
Mr. Purple wrote:Yes, their moral "obligation" would be to cause suffering. I would still say that what they do is bad from the perspective of my moral system, and assuming most people, like you and I, have similar values regarding pain and suffering, we can be morally justified in acting against them because of our own moral obligations. Once again, just because the moral obligations (output) change depending on the biology (input) doesn't mean the moral framework (function) is relative and up for interpretation. Understand? Einstein's special relativity makes objective claims relative to its observer, but the theory isn't just personal opinion. (Not a perfect analogy...)
You can't say what they do is bad, period. You can justify acting against them since this gives you pleasure somehow, but not because they did anything bad or morally wrong. Under this system, you have no grounds for claiming any action is immoral as long as it benefits the acting individual.
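
As an aside, the function analogy in the quote can be made concrete. Here is a minimal sketch (Python; the rule and the example inputs are invented purely for illustration, not a real moral theory):

Code:

def moral_obligation(biology: str) -> str:
    # One fixed framework (function): promote whatever the given
    # biology is wired to experience as positive. Same rule for every input.
    rewards = {
        "typical human": "others' happiness",    # empathy-rewarded wiring
        "inverted wiring": "others' suffering",  # the hypothetical sadist
    }
    return "promote " + rewards.get(biology, "unknown")

print(moral_obligation("typical human"))    # promote others' happiness
print(moral_obligation("inverted wiring"))  # promote others' suffering

The outputs (obligations) vary with the inputs (biology) while the function (framework) stays fixed. The dispute above is over whether a rule keyed to the agent's own reward wiring counts as morality at all.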
Mr. Purple wrote:Yeah, I addressed this a little in my previous post with the word hamster. Assuming the text reader can only comprehend valuing experiences involving joy and suffering, what argument could ever be made that would convince him that what he was doing was wrong? [...]
This is irrelevant. People can fail to understand what is moral or choose to be immoral, this doesn't mean objective morality ceases to be as a concept. I'd suggest giving more consideration to the parts of your conversation about this being a model. You seem to think that if you can't comprehend these "interests" after limited introspection, then the model cannot include you or have any indications for you. This is just not true.
brimstoneSalad wrote:Lacking an apology from Mr. Purple, if anybody else has any questions, I'm glad to answer them. :)
I haven't had much time to use the forum or think on it but I currently find this framework counter-intuitive in a few places. I may ask a few questions later.

Post by Mr. Purple »

Jaywalker wrote:Mr Purple, I think the issue here is somewhere along the line your position changed from ethical altruism (in conjunction with psychological egoism) to ethical egoism. Your latest posts do clarify where you stand now, but I feel like you got to this point by constantly updating your view on morality throughout the discussion without due consideration of the points raised.
I don't see value in altruism intrinsically, though it definitely feels like a value, since I'm rewarded when I'm altruistic. I assume that means I'm not an altruist. I can totally see a conflict in that I seem to have been believing in both ethical egoism and psychological egoism in some of my previously stated positions, which doesn't make sense. Oh well, hopefully I can fix it. I clearly don't have a very firm grasp on this stuff yet. I sort of want to try to think in terms of the interest model just to simplify things, or at least to speak your language. In the example on the previous page, wouldn't a hedonist be doing something wrong by saying an offensive word that people don't want said, like the Lord's name in vain, even if nobody heard it? How does the interest model deal with judgement of a lion? Is the lion bad for violating the sheep's interests?
Jaywalker wrote:You seem to think that if you can't comprehend these "interests" after limited introspection, then the model cannot include you or have any indications for you. This is just not true.
It seems like a hedonist accepting the interest model is just accepting that he will be doing good/bad things blindly, based on values he can't perceive.

I know you are new to it, so no worries if you can't answer these.

Post by Jaywalker »

You could believe in psychological egoism and still use the interest framework or any other framework of morality. This is what I've always done and still do to an extent, which I'll try to explain later in this post.
Mr. Purple wrote:In the example on the previous page, wouldn't a hedonist be doing something wrong by saying an offensive word that people don't want said, like the lord's name in vain, even if nobody heard it?
No, because the moral theory being advocated here is consequentialist. This would be wrong if we for some reason only considered the interests of those religious people, but in reality those interests are outweighed by other interests (the hedonist wants to say the word; other people want the hedonist to be free to say the word, or want free speech). If only two people existed in the world and one of them wanted the word uttered while the other didn't, then the moral consequence of saying the word would have been neutral/amoral.

In our world, people usually don't care if offensive words are spoken in private. They may not even care if they are spoken publicly, if convinced there is no reason to oppose their usage - you may want to refer back to the parts of this thread where idealised interests were mentioned.
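
To make the weighing concrete, here is a toy tally (Python; the numeric weights are invented purely for illustration and are not part of the theory):

Code:

def net_interest(served, violated):
    # Consequentialist tally: interests an act fulfills minus interests
    # it violates. >0 -> moral, <0 -> immoral, 0 -> neutral/amoral.
    return sum(served) - sum(violated)

# Two-person world: one wants the word uttered, one doesn't.
print(net_interest(served=[1.0], violated=[1.0]))      # 0.0 -> neutral/amoral

# A world where many people hold a (non-idealised) interest against the word:
print(net_interest(served=[1.0], violated=[1.0] * 5))  # -4.0 -> immoral

Whether those opposing interests would survive idealisation is the separate question discussed above.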
Mr. Purple wrote:How does the interest model deal with judgement of a lion? is the lion bad for violating the sheep's interests?
Just like a lion can't be a mathematician if it doesn't understand mathematics, it also can't be moral or immoral. The lion is amoral, it's not a moral agent - it's incapable of making moral judgements because it doesn't have a concept of right or wrong. This doesn't mean you shouldn't interfere with the lion under any circumstances. We may for instance be morally justified in preventing the lion from eating the sheep depending on the consequences.

I think many people have the wrong idea about morality. Morality isn't a property of nature, it's a concept that can be explored through rational systems with descriptive abilities - it's something we came up with (which doesn't devalue or invalidate it), just like math or logic. As mentioned: "You can do anything and call it right because there is no morality" is the same as "you can add 1 and 1 and get 3 because there is no math".
Mr. Purple wrote:It seems like a hedonist accepting the interest model is just accepting that he will be doing good\bad things blindly from the values he can't perceive.
He wouldn't be a hedonist in the ethical sense anymore if he actually adopted the model. He could still believe in psychological egoism - this is actually my current position, sort of. I have trouble grasping the idea that people aren't motivated solely by happiness or suffering. I've always thought people do good because it feels good, essentially. This is the result that introspection has provided me. But we're very bad at understanding counter-intuitive things, at least without extensive study, so I know that I can't come to a justified conclusion without researching the subject diligently. However, even without knowledge in this area, I can see interests provide a better foundation than happiness/suffering for a moral theory.

But consider this: people can forego happiness and choose suffering -- self-immolating monks endure torment in order to satisfy another desire. They may experience limited happiness, but they clearly make a choice in favor of suffering. Why? What is the mechanism here that has persuaded the monk to do this?

This may simply be a semantic issue. Is happiness the word used to describe the state of consciousness that fulfilled interests provide? In that case, the monk's self-immolation may have provided him with happiness even though he was in agony. You and I both need to study the works of experts and have a clearer understanding of what these words entail to have a coherent discussion in this area. :)

The bottom line is, morality is not determined by what feels good to do because that would render morality nonsensical. Morality is not something intuitive, it's a philosophical subject that requires study to understand and develop.

Post by Mr. Purple »

Jaywalker wrote:No, because the moral theory being advocated here is consequentialist. This would be wrong if we for some reason only considered the interests of those religious people, but they are in reality outweighed by other interests (the hedonist wants to say the word, other people want the hedonist to say the word or want free speech). If only two people existed in the world and one of them wanted the word uttered while the other didn't, then the moral consequence of saying the word would have been neutral/amoral.
Then lets say there are more religious people in the world or that everyone else is a free speech hating religious person, but none of them hear you say "oh my god" casually. Would you agree then that it could be a very bad thing to say in that case because it violates so many people's interests? I understand that this is technically a violation of their interest, but it would be meaningless for me to call that morally wrong.
It has to track facts about experience in some way to matter for me. If I don't agree that a meaningful violation is happening in that case, why would I incorporate a moral view that would call this bad? If you can identify what I'm not seeing, I would appreciate it.

What if all the dead people throughout history didn't want you to do something, like take the Lord's name in vain, either? This seems like a massive and legitimate concern in the interest model, since a meaningful violation of interests doesn't need to be tied to experiences, right? (Referencing the example about you wanting your dog taken care of after you die.) It seems so silly.
Jaywalker wrote: He could still believe in psychological egoism - this is actually my current position, sort of. I have trouble grasping the idea that people aren't motivated solely by happiness or suffering. I've always thought people do good because it feels good, essentially. This is the result that introspection has provided me
This is the same way I feel.
Jaywalker wrote: This may simply be a semantic issue. Is happiness the word used to describe the state of consciousness that fulfilled interests provide? In that case, the monk's self immolation may have provided him with happiness even though he was in agony. You and I both need to study the works of experts and have a clearer understanding of what these words entail to have a coherent discussion on this area.
I think that the positive experience from the changes the monk thought he could bring about through burning himself outweighed the negative experience of pain he thought he would receive. When I say suffering and pleasure, I'm just talking about any positive or negative experience. This is an extremely varied set of experiences. Not being satisfied with your life accomplishments is generally just as much a negative experience as being punched in the face, even though they obviously feel very different. Believing you're going to get ice cream next week can be a positive experience that will determine behavior too, even if eating the ice cream is gross when you actually receive it (the monk may not have ended up changing anything). Is this how you are using the words too? I bet some amount of the confusion is just semantics, but semantics wouldn't explain the valuing of interests that can affect nobody's experiences.

I appreciate that you haven't called me an idiot yet. It's a refreshing change of pace. :)

Post by Jaywalker »

Mr. Purple wrote:Then lets say there are more religious people in the world or that everyone else is a free speech hating religious person, but none of them hear you say "oh my god" casually. Would you agree then that it could be a very bad thing to say in that case because it violates so many people's interests?
That could be immoral (depending on how you construct the moral ought) under an interest based system if those interests exist independently, but I thought the idealised interests approach covered this pretty well. Why do they have an interest in those words not being said? Most people have a more fundamental interest in knowing the truth. Wouldn't they stop caring about these words if they knew religion is make-believe?

This is also a utility monster issue, and the reason why altruism is being advocated as the ethical basis here, rather than egoism. See brimstoneSalad's post on page 4, search for "interest based utility monster".

I'm not sure why you don't find a consistent moral theory meaningful. It holds where others break down, and it's demonstrably better, especially compared to ridiculous ones like hedonism. We may have, in our historical past, first gained an interest in helping others for egoistic reasons, but morality as a concept transcends that, and the only way it can be substantiated is by adhering to a non-arbitrary system. Maybe you are just attached to the is-ought mindset and aren't interested in morality substantiated beyond that.
Mr. Purple wrote:I think that the positive experience from the changes the monk thought he could bring about through burning himself outweighed the negative experience of pain he thought he would receive. When i say suffering and pleasure, I'm just talking about any positive or negative experience. This is an extremely varied set of experiences. Not being satisfied with your life accomplishments generally is just as much a negative experience as being punched in the face even though they obviously feel very different. Believing you're going to get ice cream next week can be a positive experience that will determine behavior too, even if eating the ice cream is gross when you actually receive it(the monk may not have ended up changing anything). Is this how you are using the words too?
I use those words similarly in everyday conversation, but when we're discussing the actual nature and mechanism of these feelings, I have no idea if I even understand what a feeling or thought is, other than that I prefer some and avoid others; it's all very vague and abstract. So I sadly can't contribute to the conversation on this part.
Mr. Purple wrote:I bet some amount of the confusion is just semantics, but semantics wouldn't explain the valuing of interests that can affect nobody's experiences.
You can value it because it provides a better, more consistent moral theory, and that's all there is to it.
Mr. Purple wrote:I appreciate that you haven't called me an idiot yet. It's a refreshing change of pace.
I don't think you're an idiot! You seem to have above average intelligence. :D

Post by Mr. Purple »

Jaywalker wrote:I'm not sure why you don't find a consistent moral theory meaningful. It holds where others break down, and it's demonstrably better, especially compared to ridiculous ones like hedonism. We may have, in our historical past, first gained an interest in helping others for egoistic reasons, but morality as a concept transcends that, and the only way it can be substantiated is by adhering to a non-arbitrary system.
I do see consistent moral theories as important, but I don't see consistency (between people?) as the only important element to have in a moral theory. If a framework somehow led to the conclusion that stabbing my family was always best, I would have to assume I had a bad framework, regardless of consistency or rational coherency.

I'll try to sum up my current understanding of the interest framework: What has intrinsic value to me is suffering and joy, but different people possibly have a different range of things that could make up their set of intrinsic values, and even if they don't, it would be too hard to tell the difference between what people think are their intrinsic values (they might just be looking as deep as their instrumental values) and what their intrinsic values actually are. We can't really make a 100% objective/universal framework from this. What the interest framework seems to do is make the language and definitions vague enough that instrumental values, intrinsic values, and irrational beliefs are all indistinguishable from each other. It just blurs everything together into "reasons for doing something" and then calls that whole category an intrinsic value (which seems sketchy). That loss of resolution gives everyone the ability to participate in the same system, but there has to be a price paid somewhere (being controlled by dead people, for one).

Since the only advantage this system has is universality, the extent to which its prescriptions have any pull is the difference between the value of universality and the intuition the system is telling people to violate. This would have a lot more value in a world where people are very different from each other, but I don't think that is actually the case.

Jaywalker wrote:That could be immoral (depending on how you construct the moral ought) under an interest based system if those interests exist independently, but I thought the idealised interests approach covered this pretty well. Why do they have an interest in those words not being said? Most people have a more fundamental interest in knowing the truth. Wouldn't they stop caring about these words if they knew religion is make-believe?
Ok, I see what you are saying, but try to understand the specific point I'm attempting to make (I don't always explain myself well enough). This argument sounds like when you ask a religious person if they would kill their own baby if God told them to, and they say "God would never order that". If that's the argument, then it can get around anything. I'm asking you to assume a scenario where the idealized interests of the past beings don't line up with your moral intuitions at all. Let's say we found a verified ancient lost record describing that millions of all-knowing people wanted desperately for you to not do something you find pleasurable or good, like enjoying a sunset or providing for your family. Would you actually be willing to call yourself a terrible person for doing these things? Maybe you are fine with this, but I can't fathom this being the good moral outcome.
Jaywalker wrote:This is also a utility monster issue, and the reason why altruism is being advocated as the ethical basis here, rather than egoism. See brimstoneSalad's post on page 4, search for "interest based utility monster".
I see this talked about on page four, but I don't see how it is solved by altruism. The interest model does much worse in this scenario for me. Dying to satisfy the interests of a non-existent utility monster seems like a much harder sell in the interest model than in the hedonistic, experience-based alternative. I'm looking through the deontological thread now.

Edit: I guess it might rely on the utility monster being an altruist himself and having the choice to kill himself to solve the problem. But it still seems like the "good" outcome would be an altruistic human trying his best to sacrifice himself for this being's pleasure. For this to be solved in the interest framework, wouldn't you have to assume that everyone becomes altruistic in their idealized form?

-
Some other questions I'm curious about:

Wouldn't the experience machine thought experiment also be a problem for the interest model? Just make a machine that aligns all your interests in such a way as to make them satisfied. Would you plug into that? Or are you not allowed to biologically change your interests, since the interest would still remain in concept form? (Like what natural selection really intended for you, or something silly like that?)

I was thinking about what brimstone said a bit more, and I thought of an example that might help illustrate one of my main problems with the interest model: Imagine that in the near future a scientist wants to create a robot that is designed with the appropriate cognitive capabilities for the sole purpose of enduring the deepest and most agonizing suffering imaginable. A living hell machine. Since the only real interest here is the scientist wanting to build the robot, this robot's unimaginable pain would be a morally good outcome in the interest model, right?

Post by Jaywalker »

Sorry for the late reply, busy week.
Mr. Purple wrote:I do see consistent moral theories as important, but I don't see consistency(between people?) as the only important element to have in a moral theory. If a framework somehow led to the conclusion that stabbing my family was always best, I would have to assume I had a bad framework regardless of consistency or rational coherency.
It has to be internally consistent, and it has to represent morality, not some other arbitrary concept. For instance, hedonism can actually be a consistent system if you're willing to admit you can do whatever you want if it benefits you, but this wouldn't be morality. Words have meanings, and morality is memetically tied to regard for others.

Not sure about the family part. What sort of framework would lead to that conclusion?
Mr. Purple wrote:Ill try to sum up my recent understanding the interest framework: What has intrinsic value to me is suffering and joy, but different people possibly have a different range of things that could make up their set of intrinsic values, and even if they don't, it would be too hard to tell the difference between what people think are their intrinsic values(they might just be looking as deep as their instrumental values) and what actually are their intrinsic values. We can't really make a 100% fully objective\universal framework from this. What the interest framework seems to do is make the language and definitions vague enough that, instrumental values, intrinsic values, and irrational beliefs, are all indistinguishable from each other. It just blurs everything together into "reasons for doing something" and then calls that whole category an intrinsic value(seems sketchy). That loss of resolution gives everyone the ability to participate in the same system, but there has to be a price paid somewhere(being controlled by dead people for one).
Suffering/happiness has the same problem. How do we compare the suffering of different people? It seems to boil down to the values individuals assign to their personal experience, but we don't have a way of directly comparing them. This is mostly about insufficient understanding of how sentience works, and why it's important to study this subject. Knowledge is crucial to morality.

An advantage provided by the interest framework is that individual interests are weighed equally against all other affected interests, not based on values assigned to them.
Mr. Purple wrote:Since the only advantage this system has is universality, to the extent that it's prescriptions would have any pull is the difference between the value of universality and the intuition the system is telling them to violate. This would have a lot more value in a world where people are very different from each other, but I don't think that is actually the case.
I'm not sure if pandering to irrational people is a metric of morality. We should substantiate morality first and come up with ways to convince others afterwards. In any case, I think the interest framework represents the real world well. Most people have the intuition that doing something others don't want, even if they don't know it's done (stealing from a wallet, spitting in their drink, etc.), is bad.
Mr. Purple wrote:This argument sounds like when you ask a religious person if they would kill their own baby if god told them too and they say "god would never order that". If that's the argument, then it can get around anything. Im asking you to assume a scenario where the idealized interests of the past beings don't line up with your moral intuitions at all. Let's say we found a verified ancient lost record describing that millions of all knowing people wanted desperately for you to not do something you find pleasurable or good like enjoying a sunset, or providing for your family. Would you actually be willing to call yourself a terrible person for doing these things? Maybe you are fine with this, but I can't fathom this being the good moral outcome.
Cursing would be immoral if people's interest in that was legitimate, yes; I did address that. I just wanted to further explain why we may find that example counter-intuitive. It's because it doesn't represent the mindset of humans as we know it -- fulfilling that interest would work against the fulfillment of other and more fundamental interests held by everyone in the reality of our world.

If most people did happen to have that interest independent of other interests, then it would be immoral to violate it. This applies to every situation with idealised interests. If people's interest in torturing each other for eternity outweighed all other affected interests, it would be immoral to stop them. If everyone's sole interest was to see me starve, it would be immoral to feed myself (brimstoneSalad said somewhere that survival situations don't factor into morality, but not sure how he came to that conclusion without assuming a non-altruistic position). This is the result when you frame it in suffering/happiness terms as well. However, interests aren't gained in a vacuum. These interests would require a mind alien to us (at least to me). How would people gain these idealised interests in the first place without conflicting with their other interests?
Mr. Purple wrote:I see this talked about on page four, but i don't see how it is solved by altruism. The interest model does much worse in this scenario for me. Dying to satisfy the interests of a non existent utility monster seems like a much harder sell in the interest model than the hedonistic experience based alternative. I'm looking through the deontological thread now.
Now that I think about it, it doesn't seem to solve it. I don't think it does worse than regular consequentialism, though. I'll have to think some more on this and may add more later. That said, a hedonist wouldn't be required to sacrifice himself/herself, but that comes at the price of relinquishing any claim to morality.

By the way, I think I made a mistake in my previous post: "If only two people existed in the world and one of them wanted the word uttered while the other didn't, then the moral consequence of saying the word would have been neutral/amoral." - Actually, that's only true in agent-neutral consequentialism. In this altruistic, interest-based consequentialism (I don't know what it's called), it could be moral or immoral to say the word depending on which of the two says it. It's amoral if a third person says it.
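
Concretely, a rough sketch of that correction (Python; the weights and the way the agent-exclusion is coded are an invented illustration):

Code:

def altruistic_value(agent, interests):
    # Altruistic variant: tally everyone's signed interest in the act
    # except the acting agent's own.
    return sum(w for person, w in interests.items() if person != agent)

world = {"pro": +1.0, "anti": -1.0}  # one wants the word said, one doesn't

print(altruistic_value("pro", world))        # -1.0 -> immoral if "pro" says it
print(altruistic_value("anti", world))       # +1.0 -> moral if "anti" says it
print(altruistic_value("bystander", world))  #  0.0 -> amoral for a third person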
Mr. Purple wrote:Wouldn't the experience machine thought experiment also be a problem for the interest model as well? Just make a machine that aligns all your interests in such a way as to make them satisfied. Would you plug into that? Or are you not allowed to biologically change your interests since the interest would still remain in concept form?( Like what natural selection really intended for you or something silly like that?)
I'm not interested in plugging into the machine in the first place, I have an interest in maintaining my sense of self. Are you saying everyone's idealised interest would be to plug into the machine?
Mr. Purple wrote:Imagine in the near future a scientist wants to create a robot that is designed with the appropriate cognitive capabilities for the sole purpose of enduring the deepest and most agonizing suffering imaginable. A living hell machine. Since the only real interest here is the scientist wanting to build the robot, this robot's unimaginable pain would be a morally good outcome in the interest model right?
Yes, but isn't pain tied to sentience? How is the robot able to be sentient without having interests (or developing them, in your view)? If the robot doesn't care, it would be morally good.

Post by Mr. Purple »

Jaywalker wrote:Sorry for the late reply, busy week.
No worries. Take as long as you like. :)
Jaywalker wrote:It has to be internally consistent, and it has to represent morality, not some other arbitrary concept. For instance, hedonism can actually be a consistent system if you're willing to admit you can do whatever you want if it benefits you, but this wouldn't be morality. Words have meanings, and morality is memetically tied to regard for others.
Well, what branch of philosophy would it be that discusses actions within egoism being right or wrong, if not morality? That sounds like what morality is about, and it fits all the definitions I can see. I don't see why you and brim are forcing such a restricted definition. I know the colloquial definition simply means regard for others, but I don't think that is how it is talked about in philosophy or in most definitions of the word.

I don't see how the interest model is any more internally consistent than other systems; I only see that it will fit a hypothetically wider range of people, at the cost of persuasive power.

Internal consistency doesn't seem that difficult, and most systems achieve it while retaining more persuasive power than the interest model. Simple as this: look for a common thread between as many of your important beliefs as possible, and then make a consistent rule that captures that common thread. The point of this is to try to deduce what your intrinsic values are. You may need to bite the bullet and reject a few expendable moral intuitions you have that don't fit this rule, to ensure consistency. Then you extrapolate that rule out to find other moral truths that you don't have intuitions or beliefs about. This could naturally lead to something like the interest model if you weren't willing to make any cuts at all, or if all of your intuitions were in such conflict with each other that you couldn't possibly form any consistent rule between any of them except for the super-vague "they are reasons for doing something".

These extrapolated judgments carry weight in situations that you don't have an intuition for only because they are created from intuitions you already have. Sacrificing myself to a utility monster is counter-intuitive, but if you are drawing a line directly from my intuitions, then I can be reasoned with and convinced. If I can't see that line, then it has no persuasive power.

Jaywalker wrote:Suffering/happiness has the same problem. How do we compare the suffering of different people? It seems to boil down to the values individuals assign to their personal experience, but we don't have a way of directly comparing them. This is mostly about insufficient understanding of how sentience works, and why it's important to study this subject. Knowledge is crucial to morality.

An advantage provided by the interest framework is that individual interests are weighed equally against all other affected interests, not based on values assigned to them.
Suffering and pleasure are something we could actually measure scientifically in the near future, and there are a lot of aspects of suffering/pleasure that we can directly measure now. That honestly seems like one of the hedonistic framework's strong suits. All interests being weighed equally seems like a major disadvantage if it means the system can't accurately describe the value variations that actually exist. It would be yet another loss of resolution.
Jaywalker wrote:I'm not sure if pandering to irrational people is a metric of morality. We should substantiate morality first and come up with ways to convince others afterwards.
I don't see where irrationality fits into this, honestly. Hedonistic frameworks aren't inherently irrational. The optimal/good outcome is defined by the framework, and we can only tell whether a person is irrational by looking at the way they try to achieve the good/bad outcome the framework defines. The use of a belief in reasoning doesn't automatically define you as an irrational person. To be rational, you just have to have your actions logically lead to the optimal outcome defined by your framework. If someone sees death as bad, and they know jumping off a cliff will kill them, then jumping off the cliff is an irrational action. But having the belief that jumping off a cliff is bad doesn't by itself make a person irrational. And what does "substantiating morality" mean?

Even the interest framework seems to use intuition in choosing what to assign value to. Why pick "reason for action" as its intrinsic value unless you already have the intuition/belief that good and bad have something to do with sentient beings, and know this will include those sentient beings? If what you want is a moral system that doesn't include any sort of intuition/belief, then it seems like we could just make a moral system that says good actions are actions that involve movement above 5 MPH, and bad actions are movement slower than this. This seems consistent and rational. It's not subject to any of those oh-so-terrible intuitions. It's also completely useless and misses the point. People's intuitions/beliefs about what is moral have to play some role. (Unless we get scientifically advanced enough to describe people's intrinsic values biologically, and I would definitely lean towards a framework that has this in mind.)
Jaywalker wrote: In any case, I think the interest framework represents the real world well. Most people have the intuition that doing something others don't want, even if they don't know it's done (stealing from a wallet, spitting in their drink, etc.), is bad.
But that could easily be because they are only looking as deep as their instrumental values. I don't want someone to steal my wallet or spit in my drink either. People will have negative responses to things like these, obviously, but that doesn't mean those things are what people intrinsically value. If you didn't get grossed out (a negative mental state) at all by the thought of people spitting in your drink, would you still define it as an intrinsically bad thing? Probably not, I would guess, but it's hard to determine. People's intrinsic values aren't always apparent to them, but with enough introspection and experimenting this can become more accurate.
Jaywalker wrote:I'm not interested in plugging into the machine in the first place, I have an interest in maintaining my sense of self. Are you saying everyone's idealised interest would be to plug into the machine?
Realizing your interests is good, though, right? You would just have a millisecond of having your interest violated, followed by all your interests being realized fully forever. If realizing interests is what your framework says is good, then aren't you being irrational for refusing it, in the same way I was called irrational for refusing when my goal was to maximize positive experience?
I don't know if everyone's idealised interest would be to plug into the machine, but either way, we don't find this out simply by asking them. For example, a lot of people hearing the Experience Machine thought experiment don't know exactly why they don't like the idea of plugging into a machine to get a more pleasurable life; they will usually just say something about it not being real, or that they would prefer to live in the real world. We can tweak the variables of the experience machine to test whether living in reality is really their intrinsic value. For example: if we flip the experiment around and assume you are already in the machine.
http://www.danweijers.com/pdf/The%20Exp ... eijers.pdf
The following thought experiment, the Trip to Reality, holds constant the realness of experiences inside and outside of the machine, while changing a few other purportedly irrelevant factors. Imagine that you leave your family for a weekend to attend a conference on the Experience Machine thought experiment. While you are there, someone informs you that you are actually in an experience machine. She offers you a red and a blue pill. She explains that taking the blue pill will take you back to reality and taking the red pill will return you to the machine and totally wipe any memories of having being in reality. Being a curious philosopher you swallow the blue pill. It turns out that reality is fairly similar to the world you have been experiencing inside the machine, except that your experiences are a little mundane and do not feel quite as enjoyable as before. Some things are different, of course. You discover that nearly all of your friends and family are either in experience machines or do not exist in reality! Your father is there, so you spend time with him. But, a few conversations reveals that he is not really the person you know as "Dad". It is time to make the choice. Will you take the red pill so that you can go back to your life, family and friends with no idea that it is not in fact real?
When it's asked this way, people are fine with living in the fake world, so now we know that wanting reality wasn't intrinsic after all. It turns out it's probably status quo bias that makes people choose the way they do. This is the kind of work we should be doing to find what morally matters to humans, not just throwing up our hands and accepting the first thing a person tells us. That would just be lazy and inaccurate.
Jaywalker wrote:
Mr Purple wrote:Imagine in the near future a scientist wants to create a robot that is designed with the appropriate cognitive capabilities for the sole purpose of enduring the deepest and most agonizing suffering imaginable. A living hell machine. Since the only real interest here is the scientist wanting to build the robot, this robot's unimaginable pain would be a morally good outcome in the interest model right?
Yes, but isn't pain tied to sentience? How is the robot able to be sentient without having interests (or developing them, in your view)? If the robot doesn't care, it would be morally good.
The robot wasn't made with the ability to act, so by the definition of "reasons for an action", it can't have interests. Unless you say the suffering is the act; but then the reason for the robot's suffering is that the scientist wanted it, so the suffering becomes a good thing. This is one of the worst outcomes I've seen from a moral system. This situation doesn't seem improbable or far off, either.

Post by brimstoneSalad »

Jaywalker wrote: An advantage provided by the interest framework is that individual interests are weighed equally against all other affected interests, not based on values assigned to them.
We can compare relative values of interests based on behavior, or by simply asking.
If your behavior shows you'd rather be kicked yourself than see your dog kicked, and you answer clearly you'd also rather be kicked than have your dog kicked without you knowing about it, then we can establish comparative value within an individual.

Between individuals, value can be seen as relative to degree of sentience.

This is where the utility monster usually comes in -- it is so highly sentient that its values are more important than everybody else's. And hypothetically, they are: we're talking about a being of god-like sentience. To it, we are ants.

The issue I take with such a thought experiment is that I don't necessarily believe such a being is possible.
But let's say, instead, we're talking about a being within which lives (and relies upon it) a civilization of trillions of highly sentient beings (like humans). Now it's really just a trolley problem.

For the altruist, however, if the Utility monster is serving hedonistic ends, it can be seen as evil, so we can question whether we want to help an evil being achieve such evil ends.
If the Utility monster is actually justified in its needs (like as a guardian protecting its inhabitants who have an interest in surviving), then it becomes a much more sympathetic character in a tragic story.
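
A rough sketch of the two comparisons described above (Python; the ranking method, interest strengths, and sentience figures are all invented for illustration):

Code:

# Within one individual: behavior and answers reveal an ordering of interests.
my_interests = {"dog not kicked": 0.9, "avoid being kicked myself": 0.6}
print(max(my_interests, key=my_interests.get))  # dog not kicked

# Between individuals: scale an interest's strength by the holder's
# degree of sentience (both scales are arbitrary here).
def weighted(strength, sentience):
    return strength * sentience

print(weighted(0.9, sentience=1.0))     # 0.9: a human's strong interest
print(weighted(0.9, sentience=1000.0))  # 900.0: a god-like monster's same
                                        # interest swamps it

On this picture the utility monster "wins" only if such extreme sentience is actually possible, which is the point disputed above.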
Jaywalker wrote:Most people have the intuition that doing something others don't want, even if they don't know it's done (stealing from a wallet, spitting in their drink, etc.), is bad.
This speaks most strongly to the semantic point, but also comes down to Occam's razor, in terms of simplifying morality down to a single core provision and using reason to extrapolate from that.

Of course, there is also justification:
Jaywalker wrote: If everyone's sole interest was to see me starve, it would be immoral to feed myself (brimstoneSalad said somewhere that survival situations don't factor into morality, but not sure how he came to that conclusion without assuming a non-altruistic position).
Practical morality is an artifact of choice and justification.
As with a wild animal, when you don't have a choice, the action becomes amoral -- it is practically justified.
Likewise, when you are acting neither harmfully nor helpfully, actions can be seen as morally neutral.

A tornado has harmful consequences, but it is not in any practical respect immoral -- it is amoral. We could call it evil, but only in the most general sense due to its consequences. It's not useful to call actions that are not reasonably seen as choices "immoral".

This is more an issue of application.
As in "Yes, it was bad that you did that" followed by "but it was not your fault because you didn't have a choice".

We have to be clear when we're talking about judgement: whether of the action and its consequences alone, or of the individual.
Judgement of individuals is much more complicated than judgement of actions/events.
Jaywalker wrote: Yes, but isn't pain tied to sentience? How is the robot able to be sentient without having interests (or developing them, in your view)? If the robot doesn't care, it would be morally good.
If the robot were not sentient, then it could not really suffer, and sentience only exists in the context of interests.
It would just be activation of a nerve pathway that you arbitrarily labeled "suffering", but could have just as easily labeled "pleasure" or "spoon".

An interesting thing happens in animal behavior when animals are exposed to intense pleasure or pain, though (that is, even if you started out with something sentient): in either case, they go catatonic and stop acting, because the sensation overwhelms all other feedback, and without any correlation between acting and response, they lose their sense of self and anything else connected to the function of a mind.

Infinite suffering is not really a coherent notion. Hell AND heaven are both functionally identical to oblivion.