Mr. Purple wrote: Well, What branch of philosophy would it be to talk about actions within egoism being right and wrong actions if not morality? That sounds like what morality is about, and fits all the definitions I can see. I don't see why you and brim are forcing such a restricted definition. I know the colloquial definition simply means regard for others, but i don't think that is how it is talked about in philosophy or in most definitions of the word.

Just because there has been philosophical discourse on it doesn't mean it has the capacity to become a valid moral system. I can oppose egoism not only because it doesn't fit the definition of morality but also because it yields a completely useless system. It has already been said several times: if you truly adopt the egoist position, morality becomes meaningless. It's consistent in what it says, but it doesn't actually provide a moral standard. It offers no way to resolve conflicts of interest, and depending on how its principles are framed, it may make moral judgements impossible. Is it right to do what one perceives to be in their best interest (deontological - as you first argued when addressing the experience machine), or is it right to do what actually is in one's best interest (consequentialist)? If it's the former, there is no wrong action to take, since according to you people always choose what feels good ("I think that the positive experience from the changes the monk thought he could bring about through burning himself outweighed the negative experience of pain he thought he would receive"). If it's the latter, there is no objective morally right outcome, only a subjective one - a sadistic serial killer who manages to evade capture is a good person.
After everything that has been said on this, I really thought you'd at least consider utilitarianism, or at the very least reject egoism. I can't see why you'd advocate such a useless system.
Mr. Purple wrote: I don't see how the interest model is any more internally consistant then other systems, I only see that it will fit a hypothetically wider range of people at the cost of persuasive power.

Sure, but what you're proposing isn't consistent. When fixed to be consistent, your system doesn't seem to have persuasive power, since even you are not convinced by its logical implications. Either you agree it is right to force you into the machine, to kill you painlessly for my pleasure, to steal from you if you don't notice the difference, etc, or you are being inconsistent.
Internal consistency doesn't seem that difficult, and most systems achieve it while retaining more persuasive power than the interest model.
Mr. Purple wrote: Suffering and Pleasure are something we could actually measure scientifically in the near future, and there are a lot of aspects of suffering\pleasure that we can directly measure now. That honestly seems like one of the hedonistic framework's strong suits. All interests being weighed equally seems like a major disadvantage if it means the system can't accurately describe the value variations that actually exist. It would be yet another loss of resolution.

Do you mean pain and pleasure? How do we directly measure them now? Aren't they inferred from behavior and physical signs? As for interests being weighed equally, it works because they aren't arbitrarily given value. Instead, what matters is how many interests are violated. For example, if someone had to choose between killing a person who only cared about pain and a person who cared about both pain and continued existence, it would be better to kill the former. You could say the latter "valued" life more. But that is only if you imagine these interests in a bubble - in reality they're accompanied by many other interests, some more fundamental than others because they enable the rest to exist.
Mr. Purple wrote: I don't see where irrationality fits into this honestly. Hedonistic frameworks aren't inherently irrational.

People are irrational: they distort the rational form of hedonistic/egoistic frameworks and try (and fail) to rationalise away the contradictions between the framework and what they think is right. They mostly adopt an illogical version of these moral theories. Hedonistic frameworks can be rational, sure, but what pull do their actual prescriptions have on people? Their intuitions lead either to an illogical and invalid moral theory, or to something that can't be called morality.
I mentioned this because you argued the interest framework wouldn't convince people. While that says nothing about the framework's validity, I actually think it'd be convincing for most people. You're focusing on undesirable hypothetical situations (which classical utilitarianism has even more problems with) and ignoring the areas where it largely matches people's intuitions in real life.
Mr. Purple wrote: And what does "Substantiating morality" mean?

Providing a consistent, objective, non-arbitrary framework for it, like people have been trying to do on this forum and elsewhere.
Mr. Purple wrote: Even the interest framework seems to use intuition in choosing what process to assign value. Why pick "reason for action" as it's intrinsic value unless you already have the intuition\belief that good and bad has something to do with sentient beings and know this will include those sentient beings

I lost you here. It doesn't use intuition (though some parts of it may be intuitive); it starts from the accepted definition of morality (the only way morality is objective and makes sense) and uses logic to construct a coherent framework.
Mr. Purple wrote: Realizing your interests is good though right? You will just have a millisecond of having your interest violated followed by all your interests being realized fully forever. If realizing interests is what your framework says is good, then aren't you being irrational for refusing it in the same way i was called irrational for refusing since my goal was to maximize positive experience?

I don't want to get into the machine, and I don't want a different set of interests. I want to maintain who I am, and I want to experience what I perceive to be the real world. You'd basically be destroying me and creating a new person with different interests. This is more a question of what the "self" is, and of to what extent future interests (those that don't yet exist) matter in the framework.
Mr. Purple wrote: For example: If we flip the experience around and assume you are already in the machine. [...] When it's asked this way people are fine with living in the fake world, so now we know that wanting reality wasn't intrinsic after all. Turns out it's probably status quo bias that makes people choose the way they do. This is the kind of work we should be doing to find what morally matters to humans, not just throwing up our hands and accepting the first thing a person tells us. It's just lazy and inaccurate.

I'd already considered different variations of the machine before answering. Other people might prefer the machine, but I still prefer the real world. I'd only prefer the machine world if I were in severe pain or distress, which would be a case where my interest in experiencing the real world was outweighed by my interest in avoiding suffering. If I valued only suffering/happiness, then in order to be rational I'd have to choose the machine no matter what.
I agree that we should be better equipped to understand idealised interests OR suffering and happiness. Knowledge is crucial to morality.