An open invitation to stop your misinformed fad and start making an actual difference in the world.

Post by Jaywalker »

Mr. Purple wrote:Well, what branch of philosophy would it be to talk about actions within egoism being right or wrong, if not morality? That sounds like what morality is about, and fits all the definitions I can see. I don't see why you and brim are forcing such a restricted definition. I know the colloquial definition simply means regard for others, but I don't think that is how it is talked about in philosophy or in most definitions of the word.
Just because there has been philosophical discourse on it doesn't mean it has the capacity to become a valid moral system. I can oppose egoism not only because it doesn't fit the definition of morality but also because it yields a completely useless system. It has already been said several times: if you truly adopt that egoist position, morality becomes meaningless. It's consistent in what it says, but it doesn't actually provide a moral standard. It offers no way to resolve conflicts of interest and, depending on its moral principles, may make moral judgements impossible. Is it right to do what one perceives to be in one's best interest (deontological - as you first argued when addressing the experience machine), or is it right to do what actually is in one's best interest (consequentialist)? If it's the former, there is no wrong action to take, since according to you people always choose what feels good ("I think that the positive experience from the changes the monk thought he could bring about through burning himself outweighed the negative experience of pain he thought he would receive"). If it's the latter, there is no objective morally right outcome, only a subjective one - a sadistic serial killer who manages to evade capture is a good person.

I really thought you'd at least consider utilitarianism after everything said on this, or at least reject egoism. I can't see why you'd advocate this useless system.
Mr. Purple wrote:I don't see how the interest model is any more internally consistent than other systems; I only see that it will fit a hypothetically wider range of people at the cost of persuasive power.

Internal consistency doesn't seem that difficult, and most systems achieve it while retaining more persuasive power than the interest model.
Sure, but what you're proposing isn't consistent. When fixed to be consistent, your system doesn't seem to have persuasive power, since even you are not convinced by its logical implications. Either you agree it is right to force you into the machine, to kill you painlessly for my pleasure, to steal from you if you don't notice the difference, etc., or you are being inconsistent.
Mr. Purple wrote:Suffering and pleasure are something we could actually measure scientifically in the near future, and there are a lot of aspects of suffering/pleasure that we can directly measure now. That honestly seems like one of the hedonistic framework's strong suits. All interests being weighed equally seems like a major disadvantage if it means the system can't accurately describe the value variations that actually exist. It would be yet another loss of resolution.
Do you mean pain and pleasure? How do we directly measure them now? Aren't they inferred from behavior and physical signs? As for interests being weighed equally, it works because they aren't arbitrarily given value. Instead, what matters is how many interests are violated. For example, if someone had to choose between killing a person who only cared about pain and a person who cared about both pain and continued existence, it would be better to kill the former. You could say the latter "valued" life more. That is, if you imagine these interests in a bubble - in reality they're accompanied by many other interests, some more fundamental than others because they enable the rest to exist.
Mr. Purple wrote:I don't see where irrationality fits into this honestly. Hedonistic frameworks aren't inherently irrational.
People are irrational: they distort the rational form of hedonistic/egoistic frameworks and try (and fail) to rationalise away the contradictions between the framework and what they think is right. They mostly adopt an illogical version of these moral theories. Hedonistic frameworks can be rational, sure, but what pull do their actual prescriptions have on people? People's intuitions either lead to an illogical and invalid moral theory, or to something that can't be called morality.

I mentioned this because you argued the interest framework wouldn't convince people. While that doesn't say anything about its validity, I actually think it'd be convincing for most people. You're focusing on undesirable hypothetical situations (which classical utilitarianism has even more problems with) and ignoring where it largely matches people's intuitions in real life.
Mr. Purple wrote:And what does "Substantiating morality" mean?
Providing a consistent, objective, non-arbitrary framework for it, like people have been trying to do on this forum and elsewhere.
Mr. Purple wrote:Even the interest framework seems to use intuition in choosing what process to assign value. Why pick "reason for action" as its intrinsic value unless you already have the intuition/belief that good and bad have something to do with sentient beings and know this will include those sentient beings?
I lost you here. It doesn't use intuition (though some parts of it may be intuitive); it uses the accepted definition of morality (the only way morality is objective and makes sense) and uses logic to construct a coherent framework.
Mr. Purple wrote:Realizing your interests is good though, right? You will just have a millisecond of having your interest violated, followed by all your interests being realized fully forever. If realizing interests is what your framework says is good, then aren't you being irrational for refusing it, in the same way I was called irrational for refusing, since my goal was to maximize positive experience?
I don't want to get into the machine and I don't want a different set of interests. I want to maintain who I am and I want to experience what I perceive to be the real world. You're basically destroying me and creating a new person with different interests. This is more about what the "self" is, and to what extent future interests (those that don't yet exist) matter in the framework.
Mr. Purple wrote:For example: If we flip the experience around and assume you are already in the machine. [...] When it's asked this way people are fine with living in the fake world, so now we know that wanting reality wasn't intrinsic after all. Turns out it's probably status quo bias that makes people choose the way they do. This is the kind of work we should be doing to find what morally matters to humans, not just throwing up our hands and accepting the first thing a person tells us. It's just lazy and inaccurate.
I'd already considered different variations of the machine before answering. Other people might prefer the machine but I still prefer the real world. I'd only prefer the machine world if I was in severe pain or distress, which would be a case where my interest in experiencing the real world was outweighed by my interest in avoiding suffering. If I valued only suffering/happiness, in order to be rational, I'd have to choose the machine no matter what.

I agree that we should be better equipped to understand idealised interests OR suffering and happiness. Knowledge is crucial to morality.
Post by Jaywalker »

brimstoneSalad wrote:We can compare relative values of interests based on behavior, or by simply asking.
If your behavior shows you'd rather be kicked yourself than see your dog kicked, and you answer clearly you'd also rather be kicked than have your dog kicked without you knowing about it, then we can establish comparative value within an individual.

Between individuals, value can be seen as relative to degree of sentience.
Those values actually depend on the number of interests working in conjunction within an individual, no? From the outside, it appears as though the individual is arbitrarily valuing one interest over another, while the affected interests are different. Maybe I got this wrong, though. I haven't read the books you recommended.
brimstoneSalad wrote:This is more an issue of application.
As in "Yes, it was bad that you did that" followed by "but it was not your fault because you didn't have a choice".
But every moral agent always has a choice, and sacrificing yourself to satisfy another individual's interest seems like the ultimate altruistic act. Otherwise, I'm completely fine with taking survival situations out of the equation. I'm not even sure if I ought to accept ethical altruism.
Post by brimstoneSalad »

Jaywalker wrote: Those values actually depend on the number of interests working in conjunction within an individual, no?
Interests can't really be strictly quantified.
Is it an interest in eating (one interest), or an interest in eating breakfast, lunch, dinner, and dessert (four interests)?
There's no clear or evident place to draw lines between interests, since they are categorically indiscrete and kind of bleed into each other.

You can only really weigh them based on behavior.
Jaywalker wrote: I haven't read the books you recommended.
Those are mainly about cognition; interests are more conceptual.
Jaywalker wrote: But every moral agent always has a choice,
Free will is a bit of a can of worms.
Judgement is more important in terms of consequence. We judge people as responsible when it is useful to do so.
Jaywalker wrote: and sacrificing yourself to satisfy another individual's interest seems like the ultimate altruistic act.
Sure, that's still viewed as good (assuming you aren't failing to do more good in the long run by dying), but failing to do so is not bad.

Altruism and preference utilitarianism have the same results most of the time, so most of the time it doesn't matter which you follow.
Post by Mr. Purple »

If you truly adopt that egoist position, morality becomes meaningless.
Can you explain the process of how a moral system becomes meaningless for you? I don't understand the reasoning behind excluding non-altruistic moral systems like you are doing.
It offers no way to resolve conflicts of interest, and depending on its moral principles, may render it impossible to make moral judgements.
An egoistic moral system would use the same sort of moral reasoning that the other systems would use to convince people that things are good or bad.

I really thought you'd at least consider utilitarianism after everything said on this, or at least reject egoism. I can't see why you'd advocate this useless system.
I have learned a ton since the start of this thread, and I wouldn't commit to any specific framework at this point. The interest system seems less strange now that I understand it, but it doesn't seem to have any clear advantage over the other systems. I know that any system I would pick up would be hedonistic, but I haven't worked out the details yet.

Once again, you need to make a strong case for why you are ignoring egoism completely as a possible candidate. Just saying that you aren't going to count it as a true moral system doesn't do much for me.
Sure, but what you're proposing isn't consistent. When fixed to be consistent, your system doesn't seem to have persuasive power, since even you are not convinced by its logical implications. Either you agree it is right to force you into the machine, to kill you painlessly for my pleasure, to steal from you if you don't notice the difference, etc., or you are being inconsistent.
Right now I'm just talking to you guys about the interest framework; I'm not proposing anything. The interest model would need to prove itself regardless of how inconsistent some other system was anyway.
Either you agree it is right to force you into the machine, to kill you painlessly for my pleasure, to steal from you if you don't notice the difference, etc., or you are being inconsistent.
To recap: I'm a normal human, so I would go to great lengths to ensure these things didn't happen, since I view them negatively, but none of these things in isolation, separated from suffering and pleasure, would be wrong to me. If we were the only two humans and you managed to kill me without me noticing, I wouldn't feel anything, so it would be fine. But if you asked me if I wanted to be killed, I would obviously say no, since I have an interest in staying alive to maximize positive experiences. I just don't view things separated from experience to be bad. The experience machine is even acceptable to me if we use the trip-to-reality version.
People are irrational: they distort the rational form of hedonistic/egoistic frameworks and try (and fail) to rationalise away the contradictions between the framework and what they think is right. They mostly adopt an illogical version of these moral theories. Hedonistic frameworks can be rational, sure, but what pull do their actual prescriptions have on people?
I'm unclear on what you mean here. Hedonistic frameworks seem almost exactly as rational and consistent as the interest framework to me. I don't see the justification in calling them irrational. Hedonistic frameworks also seem significantly more persuasive to me, since that matches my actual intuitions. I gave the example before of how I could be convinced that being eaten by the utility monster was good using a hedonistic model, but I probably couldn't if the utility monster was dead people of the past. Persuasiveness of one model over another seems inextricably tied to a person's intuition. I would like to hear an example where this isn't the case.

I think the interest model does a bit worse on persuasiveness because you could have completely conflicting interests with the other person you are talking to. There isn't the common grounding of pain/pleasure that a hedonistic model would have. The only persuasion tool the interest model has to work with is a shared belief in a quality of the interest model itself (universality). But then it still depends on how much the person you are talking to has the intuition that universality is more important than their other beliefs/intuitions.
Providing a consistent, objective, non-arbitrary framework for it
Most moral systems seem to do this. I explained with the 5mph framework why consistent, objective, and non-arbitrary isn't enough to make a system that anyone would care about. Intuition/belief has to guide some aspect of any framework, short of fully understanding our biology.
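
To make that concrete, here's a minimal sketch of the 5mph framework (Python, purely illustrative - the function name and threshold are just stand-ins for the made-up rule):

def is_good(action_speed_mph):
    """Toy "5mph framework": any action at or below 5 mph is good,
    anything faster is evil."""
    return action_speed_mph <= 5.0

# Perfectly objective and consistent, yet morally arbitrary:
# walking (~3 mph) comes out good, driving (~30 mph) comes out evil.
print(is_good(3.0))   # True
print(is_good(30.0))  # False
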
I lost you here. It doesn't use intuition (though some parts of it may be intuitive); it uses the accepted definition of morality (the only way morality is objective and makes sense) and uses logic to construct a coherent framework.
It would use intuition to choose the interest model over my made-up 5mph framework from my previous post, if all you are looking for is consistency and objectivity. Making any action that goes above 5 mph evil, and any action below 5 mph good, would be objective and consistent, right? If your objection is that it doesn't count as your definition of morality, then I have no idea what you are talking about when you say morality. The definition you and brimstone are using isn't one I've ever seen. Here are the top 3 definitions when I googled it:

"Principles concerning the distinction between right and wrong or good and bad behavior."
"A particular system of values and principles of conduct, especially one held by a specified person or society."
"The extent to which an action is right or wrong."

None of these would even hint at excluding egoism or my 5mph framework. If you aren't using the definitions used in philosophy, and you aren't using dictionary definitions, what definition are you using? What are the factors that make a system not count as real morality for you?
I don't want to get into the machine and I don't want a different set of interests. I want to maintain who I am and I want to experience what I perceive to be the real world. You're basically destroying me and creating a new person with different interests. This is more about what the "self" is, and to what extent future interests (those that don't yet exist) matter in the framework.
I agree with all of that. I don't actually expect you to answer that you will get into the machine. I was just trying to show why it's not illogical for me to reject the machine, in the same way you are not illogical for rejecting having all your interests fulfilled (in the context of being asked the question).
Post by Mr. Purple »

I just found this talk by Peter Singer. It seems very relevant to the discussion we have been having. Let me know what you think:

https://www.youtube.com/watch?v=SR-tzgU07XY

He talks about preference vs. hedonistic frameworks, and even talks about biases in thought experiments and our inability to answer them correctly. Maybe I'm not crazy after all :P
Post by Jaywalker »

Interests can't really be strictly quantified.
I see. I thought my version made perfect sense as I was typing; I guess it's unfalsifiable.
Mr. Purple wrote:Can you explain the process of how a moral system becomes meaningless for you?
I gave my best explanation in that paragraph, and I thought the explanations brimstoneSalad gave were pretty good too. I don't know what else I can say. :D

Regarding your other points, I do understand to some extent why you remain unconvinced, but again, I can't think of a way to address them without being redundant, sorry. I'd be interested to see what sort of hedonistic morality you settle on, though.
Mr. Purple wrote:I just found this talk by Peter Singer. It seems very relevant to the discussion we have been having. Let me know what you think.
That's a good talk, thanks!
Post by inator »

Mr. Purple wrote:I just found this talk by Peter Singer. It seems very relevant to the discussion we have been having. Let me know what you think:

https://www.youtube.com/watch?v=SR-tzgU07XY

He talks about preference vs. hedonistic frameworks, and even talks about biases in thought experiments and our inability to answer them correctly. Maybe I'm not crazy after all :P
I'm late to the party, but here goes...

The talk does support the hedonistic framework to some extent, but it certainly doesn't support your combination of a hedonistic framework and an egoistic one.

In Singer's view, all hedonistic pleasure carries weight, no matter who experiences it. So there may be cases when a person is obligated to act against her pleasure to better satisfy the pleasure of others - not because it gives her more satisfaction to know that she's been altruistic, but because the amount of pleasure experienced in total is greater.
It's still a utilitarian point of view - just closer to the classical one than to the preference utilitarian one.

I lean towards the preference utilitarian framework rather than the classical hedonistic one, because yes, I do think that there are interests that concern not just the personal state of mind, but also events outside of the subjective experience.
There are also many preferences that I'd want to have satisfied, even if experiencing them made me unhappy. Wanting (having desires) and liking (hedonistic enjoyment) are different things. Even though the two do correlate very strongly in most people, they correspond to separate neurological pathways.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2813042/
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2756052/

Whether we ought to be aiming for happiness or preference satisfaction (especially when they are in conflict) is another topic. But this is a way more nuanced discussion than the one about the validity of egoism versus utilitarianism.


*Also, it seems to me that Singer kind of dodged the last question about the morality of painlessly killing a being under the hedonistic framework. I would have liked to hear his perspective on it. But as he said, he's just exploring the hedonistic framework, and isn't quite convinced one way or the other yet.
Post by Mr. Purple »

inator wrote:There are also many preferences that I'd want to have satisfied, even if experiencing them made me unhappy.
Is it possible you are just not looking at the full range of what could count as pleasure or suffering? The feeling of overcoming great odds, gaining greater social standing, or defending your family's honor could all lead to a more positive mental state, knowing you accomplished them or that you are the type of person who does these things. These could even cause great pain to accomplish, but still be worth it by comparison. If you are using this broad definition of pleasure/suffering and still believe you would want your preferences satisfied even if it causes net suffering, then I just can't relate to your values at all. We would probably have to operate under separate moral systems.
inator wrote:Even though the two do correlate very strongly in most people, they correspond to separate neurological pathways.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2813042/
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2756052/
Those are actually very interesting, thanks! I've been wanting stuff like that from the beginning. I'm not sure how those examples would lend support to something like preference over hedonism, though. You would have to explain more about how you reached that conclusion.
It seems like the wanting mechanisms and pleasure/suffering mechanisms being completely separate from each other would make it more plausible that the living hell machine I brought up on the previous page would be an issue for preference models. You couldn't as easily say that suffering necessarily implies a want to avoid that suffering if the mechanisms can be identified to occur independently of each other. What's your take on that? (Would it be wrong to build a machine that would feel immense suffering but wouldn't have the ability to prefer that the suffering stop?)
inator wrote: The talk does support the hedonistic framework to some extent, but it certainly doesn't support your combination of a hedonistic framework and an egoistic one.
Hedonism vs. preference was the main topic there. It didn't seem to have much to do with utilitarianism vs. egoism as far as I remember. Like I said in a post above, I'm not advocating for egoism right now; I'm simply saying any moral system I would take on would have to be hedonistic, since that's all I can understand.

On the topic of egoism vs utilitarianism, I honestly can't tell what label fits me.
I have two intuitions in my head. First is that utilitarianism feels correct and that questions of good and bad generally boil down to overall negative vs positive experience. The second intuition is that I wouldn't believe that a good utilitarian outcome was actually good in the first place, if I didn't have some form of positive experience while believing or imagining the world to be in that state. Does that put me more in line with egoism or utilitarianism?
Post by inator »

Mr. Purple wrote:I'm not sure how those examples would lend support to something like preference over hedonism, though.
They don't lend support to preference over hedonism, but they do lend support to the existence of preferences lacking the hedonistic component - which you didn't seem to acknowledge as possible.

Whether a preference or hedonistic framework is more relevant for morality is another question.
But the two don't represent the same thing - as would be implied by your speculation that, by having preferences that appear to go against hedonism, "you are just not looking at the full range of what could count as pleasure or suffering". That seems to be a reductive understanding of how human (and possibly non-human) motivation works.
Mr. Purple wrote:It seems like the wanting mechanisms and pleasure/suffering mechanisms being completely separate from each other would make it more plausible that the living hell machine I brought up on the previous page would be an issue for preference models. You couldn't as easily say that suffering necessarily implies a want to avoid that suffering if the mechanisms can be identified to occur independently of each other. What's your take on that? (Would it be wrong to build a machine that would feel immense suffering but wouldn't have the ability to prefer that the suffering stop?)
It's an interesting thought, but I don't think it's an example that makes sense evolutionarily.
Pleasure and pain signals exist precisely in order to create a competing system of preferences/incentives in favor of a certain evolutionarily advantageous behavior.
I don't know if suffering as a concept even makes sense without the accompanying desire to avoid it - the preference is what makes the experience positive or negative. I guess it wouldn't be suffering, just some sort of neutral sensory stimulation.

What would be possible, however, is to be in immense suffering, have the preference for it to stop, but also have a competing, stronger preference for something else that would require the suffering to continue.

The main idea is that hedonistic sensations are always accompanied by preferences, but preferences are not always accompanied by hedonism.
Which seems to be supported by the data.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2813042/
Different from “wanting” is “liking” or the core process of hedonic pleasure. In our hands, brain manipulations that cause “liking” almost always cause “wanting” too. This perhaps mirrors their close association in daily life. But many manipulations that cause “wanting”, as described above, fail to cause a match in “liking”. The brain appears relatively recalcitrant to stimulation of pleasure, unless exactly the right hedonic systems are activated.
Wanting can be separate from liking, whereas the reverse seems less likely.

http://www.wireheading.com/pleasure/liking-wanting.html on a University of Michigan study
It's relatively hard for a brain to generate pleasure, because it needs to activate different opioid sites together to make you like something more. It's easier to activate desire, because a brain has several 'wanting' pathways available for the task. Sometimes a brain will like the rewards it wants. But other times it just wants them.
Problem: large chunks of philosophy and economics are based on wanting and liking being the same thing.

Mr. Purple wrote:If you are using this broad definition of pleasure/suffering and still believe you would want your preferences satisfied even if it causes net suffering, then I just can't relate to your values at all. We would probably have to operate under separate moral systems.
Hedonism is a reductive framework and I'm quite convinced that it can't describe the full range of human motivations (even using the broad definition that you mentioned).
The preference framework includes hedonistic motivations. But there are a lot of layers and complications to work out - like uninformed preferences vs. true preferences, present vs. future preferences etc. - which can make everything excessively complex and reduce the model's explanatory power. People like simple, general theories.

Choosing morally between hedonism and preferences is definitely difficult. Preference utilitarianism extrapolates the heuristic "give people what they want", and eventually hits the question "but what if they want something that's bad for them?" Hedonism extrapolates the heuristic "make people happy", and eventually hits the question "but what if they don't want to be happy?".

Intuitively I'd say that giving people what they want is more important. People generally only don't want to be happier if they have a stronger desire for something else.
Example: I'd rather know a hard truth (even if it doesn't lead to any personal benefit) than believe a comforting lie which leads to higher life satisfaction. Under these circumstances, I think the moral thing for you to do is to tell me the truth and make me unhappy.

Mr. Purple wrote:Hedonism vs. preference was the main topic there. It didn't seem to have much to do with utilitarianism vs. egoism as far as I remember. Like I said in a post above, I'm not advocating for egoism right now; I'm simply saying any moral system I would take on would have to be hedonistic, since that's all I can understand.
Oh I see. I only mentioned that because of stuff like this:
Mr. Purple wrote:As a society, we should encourage right or wrong as being whatever maximizes happiness and minimizes suffering overall, because that is what has the greatest chance of coming back to benefit us, and generates bonus points for rewarding our empathy and other social biological reward systems.
Mr. Purple wrote:On the topic of egoism vs utilitarianism, I honestly can't tell what label fits me.
I have two intuitions in my head. First is that utilitarianism feels correct and that questions of good and bad generally boil down to overall negative vs positive experience. The second intuition is that I wouldn't believe that a good utilitarian outcome was actually good in the first place, if I didn't have some form of positive experience while believing or imagining the world to be in that state. Does that put me more in line with egoism or utilitarianism?
It seems to me that a moral system can only be objective and non-arbitrary if it's agent-neutral (which is one of the problems I have with egoism and altruism).
If you can grasp that an outcome can be positive even if it leads you to be worse off, then you're not in line with egoism.
Post by IslandMorality »

brimstoneSalad wrote:
Where do you think mathematical axioms come from? How about those of logic, where do they come from?

Is math or logic something we just "decide"?
They are deduced, not simply decided (as in something people just made up).

Morality itself isn't just something people randomly decide, it's a principle of behavior, in contrast to principles like selfishness or sadism.
People can decide TO BE moral, as in deciding upon actions that reflect the principle, but they can't just decide what's moral or what isn't. There is a necessary foundation and consistency to any axiomatic system people can't just redefine on a whim.
Sorry for the "late" reply. Had shit going on and was sick of online debating XD
This was the core of our argument, so until this is resolved I'll wait to address the rest of that immeasurably long post :lol:

Not gonna get into a debate about the philosophy of math and logic, so I'm not gonna answer your questions regarding them. I will, however, say that there is a nice parallel that can be drawn between the nature of logic(s) and our inability to come to an agreement.
There is no single "logic" (unless of course you're talking about the combined field of all logics). There are different kinds of logics (e.g. propositional logic, predicate logic), each based on different kinds of axioms.
Same goes for morality. Your type of subjective morality holds interests as the basic axiom; mine holds suffering. I'm arguing that there is no objective standard for choosing one axiom over the other, and that it's a matter of preference.
And as far as I can interpret your posts, the only thing you have done so far is state that your axiom is objective, with your only evidence being one anecdote of something you feel is morally right (not destroying the painting after the painter has died) that isn't covered under my system with the axiom of suffering.
And to that I replied with an anecdote of my own where your system falls short of a large majority of people's feelings (most people would not consider giving Bob the injection against his interests to be morally wrong).

PS: I read the majority of your reply, so I'm aware you tried to divert the argument I was making with that anecdote by including other people (mentioning that Bob could infect others).
However, to avoid that, we can just replace "virus" with "an advanced alien race is holding the whole world hostage and will kill each person who hasn't had a needle stuck in his/her body within 24 hours, using advanced nanotechnology, in a horribly painful way, just for the lulz because they can".
In other words... Bob is the only one that will suffer :)