Mr. Purple wrote: What is the justification for applying all these criteria to filter moral systems? Are they all actually necessary to call something moral?

Logical consistency is necessary to call it a system and discuss it in a philosophical context.
Empirical consistency is necessary to be relevant to the reality we live in. We're not in Narnia. A very different universe may call for or substantiate different rules, assuming morally relevant differences are even possible (they may not be, and probably aren't).
Objective and non-arbitrary deal with the context of the philosophical question.
I've already answered these questions. You need to ask something more specific if you don't understand, or maybe somebody else (Cirion? Miniboes?) can take a crack at explaining them in a different way.
Mr. Purple wrote: By objective do you just mean the normative version of morality (http://rationalwiki.org/wiki/Objective_morality)? Or does this list need to apply to things like interests\reasons as well?

Rationalwiki is fine for stuff like anti-vaccination and anti-evolution, and specific creationists, but it's not going to be a good source on philosophy or anything relating to social justice. They make little attempt to be unbiased, and they don't (ironically perhaps, given the name) hold rational positions on these topics.
See: https://en.wikipedia.org/wiki/Moral_objectivism

ob·jec·tive
/əbˈjektiv/
adjective
1. (of a person or their judgment) not influenced by personal feelings or opinions in considering and representing facts.
More specifically:
https://en.wikipedia.org/wiki/Moral_universalism
https://en.wikipedia.org/wiki/Moral_realism

Moral universalism (also called moral objectivism or universal morality) is the meta-ethical position that some system of ethics, or a universal ethic, applies universally, that is, for "all similarly situated individuals",[1] regardless of culture, race, sex, religion, nationality, sexual orientation, or any other distinguishing feature.[2] Moral universalism is opposed to moral nihilism and moral relativism.

This is a sensible prerequisite for having a rational and substantive discussion about what morality is. If you are not interested in using logic to discuss morality, or in finding common ground and being able to convince others of a particular moral position (instead preferring the conflict-averse "everybody is right" approach), this isn't the place for that.

Moral realism (also ethical realism or moral Platonism) is the position that ethical sentences express propositions that refer to objective features of the world (that is, features independent of subjective opinion), some of which may be true to the extent that they report those features accurately. This makes moral realism a non-nihilist form of ethical cognitivism with an ontological orientation, standing in opposition to all forms of moral anti-realism and moral skepticism, including ethical subjectivism (which denies that moral propositions refer to objective facts), error theory (which denies that any moral propositions are true); and non-cognitivism (which denies that moral sentences express propositions at all). Within moral realism, the two main subdivisions are ethical naturalism and ethical non-naturalism.
[...]
Moral realism allows the ordinary rules of logic (modus ponens, etc.) to be applied straightforwardly to moral statements. We can say that a moral belief is false or unjustified or contradictory in the same way we would about a factual belief. This is a problem for expressivism, as shown by the Frege–Geach problem.
Another advantage of moral realism is its capacity to resolve moral disagreements: If two moral beliefs contradict one another, realism says that they cannot both be right, and therefore everyone involved ought to be seeking out the right answer to resolve the disagreement. Contrary theories of meta-ethics have trouble even formulating the statement "this moral belief is wrong," and so they cannot resolve disagreements in this way.
I'm not talking about Randian Objectivism, which is a particular failed attempt at establishing an objective framework. Rand was asking the right kinds of questions, her answers were just wrong. We can talk about why they are wrong if you want to make a thread on that.
There are a number of proposed ethical theories that attempt to be objective answers to these important questions, and they are worth discussing. Relativism and nihilism don't qualify for serious consideration because they aren't relevant.
I don't know what you mean by the second question.
Mr. Purple wrote: As far as I can tell though, you were attacking an argument I never put forward.

You put it forward by advancing relativistic definitions and defending egoism ("ethical" egoism, and psychological egoism).
Mr. Purple wrote: If you aren't making any fallacies, then maybe show me why it's not as it appears.

That's not how it works. The burden of proof is on you to make an argument, not to just make some vague claims about feeling like I made fallacies or to appeal to your personal incredulity as you have been doing.
If you do not understand the discussion well enough to make an argument, then you're asking me to teach you the subject matter. I prefer debate with somebody who understands the subject well enough TO debate it, but I can teach you too: doing so has some value since I can learn new ways of explaining things. But you have to do your part and pay attention, which I don't think you've been doing, and stop making assumptions that I'm constantly straw manning positions because my explanations differ from your preconceptions.
Like I told Teo in the flat Earth thread, you need to at least tentatively assume that I know what I'm talking about and stop arguing long enough to ask specific questions with the simple goal of understanding.
Mr. Purple wrote: He and others that have it as their best interest would be responsible for changing him.

In the context of non-hedonistic egoism, it is not in your interests to change your interests to those that are more easily satisfied; egoism is about satisfying your interests as they are. If it were, then you should go the Zen or Taoist route and just abandon all interests, or align all of your interests to things that are already happening, thus avoiding ever being dissatisfied. Become indifferent to all suffering, and have no goals or ambitions other than inner peace. That's very nearly anti-egoism; it's letting go of the self, not elevating and gratifying it.
http://buddhajourney.net/letting-go-of-ego/

Our ego is the failure and success of our lives. It creates and feeds our desires and greed, which ultimately leads us to suffering (dissatisfaction). To eradicate ego, we practice non-attachment, to things, people and ideas.

The idea of Egoism as you are advocating it is incoherent. There are so many concepts that we could actually talk about and have meaningful discussions on if you were not stubbornly blind to this simple fact.
You want to talk about Zen? Fine, let's do that.
You want to talk about Randian Objectivism? Great, that's something we can discuss too.
You want to talk about nihilism? It's not much of a discussion, but it's at least a sort of coherent idea (the rejection of the discussion).
When we talk about Egoism, like when we talk about god, we run into a number of serious logical contradictions, and you have to take a left or right turn, not idle in the intersection of contradiction like you're doing. When you reach the ends of those roads, at their logical conclusions, depending on how you change or diverge from egoism, you find one of several conclusions.
The conclusion most consistent with egoism is simply "do whatever you want"; that comes from interest-driven egoism.
If you want to push the hedonistic worship of the pleasure center "egoism", then I explained the outcome of that too (and why it's arbitrary).
Earlier you said:
Mr. Purple wrote: You are the only one here using the mindless hedonist caricature.

NO. That is the logical conclusion of what you're advocating.
You cannot maximize pleasure stimulation and have a mind. The two are mutually exclusive. Maximal pleasure IS mindless.

Mr. Purple wrote: I thought it would be clear i'm using the hedonistic version.

When you keep rejecting the logical conclusion and calling it a caricature, it's obvious that you're in no way consistent about what you believe or are referencing.
Either you're trolling, or you haven't understood anything I've been explaining because of some profound lack of understanding on your part.
I asked you to re-read what I had written before given new information and additional explanation: have you?
Mr. Purple wrote: Have you been assuming some sort of preference egoism this whole time? Your arguments will be perceived as a straw man if you do that.

If you had read my posts carefully, you would have seen that I was clearly dealing with both, and the problems with each.
This is yet another clear indication you are not paying attention or making a sincere effort at understanding what I'm writing.
Mr. Purple wrote: I don’t see a justification for this. Feel free to explain.

I already did. Many times. Please re-read my posts, and then ask more specific questions if you have any.
Mr. Purple wrote: I don't think consequences are the determining factor of what is definitionally moral though.

If you don't even accept consequentialism, that's an entirely different issue.
Mr. Purple wrote: Egoism isn't doing anything fundamentally different from a definitional standpoint either it seems.

Your hedonistic version -- which is necessarily the pursuit of mindless pleasure stimulation -- is not necessarily different. That is potentially consequentialist (if you reject psychological egoism). It's also arbitrary.
Interest-based egoism is different: if evaluated relative to consequences, it deals only in efficacy based on factors beyond one's control, making the term meaningless. And if we look only at intent, then every intent becomes a moral intent because it's what the agent wanted.
Mr. Purple wrote: If that definition is too simplistic, and you think it should be changed, maybe egoism isn't counted as moral anymore, but the definition I posted seems better in describing the way people use morality normatively than your method of listing a bunch of rational principles.

That was DERIVATION, not definition. I was very clear about that. Either you weren't paying attention, or you're deliberately straw manning me.
brimstoneSalad wrote: In no way are we starting from what we think the definition should be, or what we feel the word means.

If you didn't understand what I meant by that, you should have asked.
You're essentially asking me to teach you this, and then you don't pay attention when I explain something. There's no way you could have missed that if you had carefully read my post.
This thread is about how morality is properly defined; how good and evil are logically determined, etc.
The discussion is about the means of derivation, and why those aspects are important.
I'm not just telling you "what" morality is, I'm telling you WHY it is what it is, and the spectrum of ideas that can be regarded as contenders and which are worthy of debate and discussion (or able to be debated and discussed). There are certain prerequisites for an idea for it to even be possible to coherently discuss it.
Mr. Purple wrote: The reason i chose that definition is that something like it was specified as the definition of morality in the normative section of the stanford philosophy encyclopedia: “Those who use “morality” normatively hold that morality is (or would be) the code that meets the following condition: all rational persons, under certain specified conditions, would endorse it.” Is this incorrect to you?

Please try to include a link when referencing something like that:
https://plato.stanford.edu/entries/morality-definition/
All rational persons? What is rational? Pure rationality could be seen as being without emotion, without will; it's not even innately rational to want to exist. Rationality alone does not compel action, so I have a problem with the suggestion that a fully and exclusively rational person would necessarily DO anything.
Certain specified conditions? What conditions?
Assuming that rational person had moral motivations would just yield a case of circular reasoning.
Endorse? What does that mean? Does it imply practice, desire that others practice it, or just a recognition that it's acceptable as a definition?
(You said more than endorse.) And does it mean without lying?
https://plato.stanford.edu/entries/morality-definition/ (again)
Consequentialist views might not seem to fit the basic schema for definitions of “morality”, since they do not appear to make reference to the notions of endorsement or rationality. But this appearance is deceptive. Mill (1861: 12) himself explicitly defines morality as
the rules and precepts for human conduct, by the observance of which [a happy existence] might be, to the greatest extent possible, secured.
And he thinks that the mind is not in a “right state” unless it is in “the state most conducive to the general happiness”—in which case it would certainly favor morality.

A uniquely poor argument. If we cherry pick these definitions, we can probably wedge just about anything in there.
It becomes a failure of a definition by being overly broad and accommodating (beyond relying too much on assuming things about rational agents). A definition that fails to define is a failure of a definition.
I'm not prepared to argue that that's even necessarily true for rational agents and morality.
I have made a case for it: http://philosophicalvegan.com/viewtopic.php?t=1932#p19543
But it's far from ironclad.
We can conceive of a rational person choosing to be evil instead of good. It can be argued that rationality always leads to morality, but only weakly, and it's a poor definition.
The Stanford article even goes into this:
Unless one thinks that rational people would endorse the moral system one is defending, one will have to admit that, having been shown that a certain behavior is morally required, a rational person might simply shrug and say “So what? What is that to me?” And, though some exceptions are mentioned below, very few moral realists think that their arguments leave this option open. Even fewer think this option remains open if we are allowed to add some additional conditions beyond mere rationality: a restriction on beliefs, for example (similar to Rawls’ (1971: 118) veil of ignorance), or impartiality.
Definitions of morality in the normative sense—and, consequently, moral theories—differ in their accounts of rationality, and in their specifications of the conditions under which all rational persons would necessarily endorse the code of conduct that therefore would count as morality. These definitions and theories also differ in how they understand what it is to endorse a code in the relevant way.
[...]
it is common to hold that no one should ever violate a moral prohibition or requirement for non-moral reasons. This claim is trivial if “should” is taken to mean “morally should”. So the claim about moral overridingness is typically understood with “should” meaning “rationally should”, with the result that moral requirements are asserted to be rational requirements. Though common, this view is by no means always taken as definitional. Sidgwick (1874) despaired of showing that rationality required us to choose morality over egoism, though he certainly did not think rationality required egoism either. More explicitly, Gert (2005) held that though moral behavior is always rationally permissible, it is not always rationally required. Foot (1972) seems to have held that any reason—and therefore any rational requirement—to act morally would have to stem from a contingent commitment or an objective interest. And she also seems to have held that sometimes neither of these sorts of reasons might be available.

It admits exceptions, and then touches on how weak the definition is. Like I said, it's even potentially circular (see the last bolded part; basically, a condition to be morally motivated; Stanford fails to recognize this though).
The article is otherwise fairly good at identifying the differences in the discussion between descriptive morality in anthropology and morality in the context of philosophical discussions. Bad definition though.
I assume you didn't read more than a paragraph?
Mr. Purple wrote: It would help a lot if you specified the actual definition you are using, and why this seemingly accepted one should be ignored.

If you read the article, it demonstrates how useless the definition they give is by going into all of the exceptions and interpretations that stretch each of those words to the breaking point.
If you'd like to discuss that definition more, and that article, please read the rest of it (or at least most of that section down to where they start talking about law and religion) and start a thread on it.
It's a poor definition because it fails at its purpose: defining. It's as bad as or worse than the typical theists' definition of god.
Mr. Purple wrote: From what i can tell, in order for something to follow the main definition of morality, the person putting it forward just has to believe all rational people would agree to it, not that everyone actually would.

If you read the article, that's not quite what is discussed; rational people will not necessarily follow it (if that's what you mean by agree), and it comes with a number of conditions which, as I said, could be circular.
But that subjective "if you believe it it's true" interpretation would just make it even less useful.
That's like saying Christianity should qualify as science if a Christian believes certain things. You believing something doesn't make it true. That's what is supposed to give definitions value. You believing a cat is a dog doesn't make it one.
BrimstoneSalad wrote: Value is emergent. It's a part of a whole system. Your monomaniacal focus on some intrinsic value to be found in biology is blinding you.

Mr. Purple wrote: I never said it wasn't part of the whole system. I would still want science to answer which brain states constitute this emergent property of positive\negative experience.

Value is not positive/negative experience. It's not a brain state.
It's as if you're saying you want science to determine the exact position of each hand on the clock that constitutes TIME and its passage.
It's as if you expect scientists to reveal that the ultimate answer to the question "what is time?" is 3:15 on the first Monday in June, 1257 C.E.
The mechanisms of a clock represent time only by moving together as a whole toward a purpose.
Pleasure and pain are gears:
https://en.wikipedia.org/wiki/Reward_system#Pleasure_centers
We know where most of them are. If that's all you want, then jam electrodes into your brain and be done with it.
It probably wouldn't cost that much to have done at some shady hospital in a poorer country.
That's like saying you want to maximize TIME so you hook your clock up to a power drill and crank it. If you're after "positive experience", that's what it is.
Euphoria is inherently mindless. Just like you can't have your cake and eat it too, you can't have your euphoria and remain a meaningfully sentient being or have any kind of awareness of the world.
BrimstoneSalad wrote: Luckily for utilitarianism, there's no evidence of any such thing existing now. (Utility monster)

Mr. Purple wrote: A group of psychopaths getting more pleasure than the pain caused to the kidnapped victim they did terrible things to has almost certainly occurred and may be occurring frequently. That's great news under utilitarianism. I don't see how the consequences of Egoism are any worse than that.

1. It's ridiculous that you think there are certainly groups of sadists out there getting more pleasure from torture than their victims suffer in pain. Meat eaters argue this sometimes about animal suffering, and even ignoring human harm, that's bullshit.
2. Utilitarianism, unlike actual egoism, DOES prescribe a change in interests, like therapy for the psychopaths and them learning to enjoy something else instead for a win-win outcome. The rape gang isn't the greatest good. Likewise, meat eaters are advised to learn to enjoy the vegan option instead.
3. Egoism recommends raping and torturing people even if it causes the victims MORE harm than it provides pleasure for the perpetrators as long as the perpetrators can avoid the consequences.
If you don't see how that's worse, I don't know what to tell you. You don't understand Egoism anyway, so I don't know why I'm trying to explain this. We need a head bashing into a wall icon.
Mr. Purple wrote: A good majority of the time you go on long tangents disproving concepts that I would never say I agree with in the first place.

If you think it's a tangent, you need to try harder to understand why it's relevant and read more carefully. I don't just add an extra paragraph in for fun.
Stop skimming over 90% of my posts and ignoring most of what I say (including the most critical statements) because you don't understand how it's relevant.
I'm getting the impression that if you just read what I wrote, maybe twice, you wouldn't be so confused.
If you think something's not relevant to you, read it again. If you still don't understand the relevance, then ask. It's relevant.