aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
Though I have a suspicion that having net actions that are largely altruistic is pretty hard, and it's even harder to have an overall positive impact on the world if you aren't a person of influence, given the suffering one causes by existing alone.
Beyond animal agriculture, the suffering humans cause is much more limited. Even the environmental destruction of development is short-lived, lasting only until humans take over that environment -- it's not an ongoing harm.
Humans also produce a large amount of good, collectively, simply by participating in society. Even discounting one's own pleasure, you are still 1/8,200,000,000 responsible for the net happiness of the other 8,199,999,999 people, which is virtually being responsible for one average person's happiness (a not insubstantial amount).
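The arithmetic there checks out as a back-of-the-envelope sketch (the equal 1/N split of responsibility is the simplifying assumption, and the population figure is a round number):

```python
# Back-of-the-envelope sketch of the shared-responsibility arithmetic.
# Assumption (for illustration only): each participant in society bears
# an equal 1/N share of the net happiness of the other N - 1 people.
N = 8_200_000_000  # rough world population

# Your share of everyone else's happiness, in units of "average persons":
share_in_persons = (N - 1) / N

# This is 1 - 1/N, a hair under one full person's worth of happiness.
print(share_in_persons)
```

The point of the sketch is just that (N - 1)/N is indistinguishable from 1 at this scale: being one of 8.2 billion equal contributors still leaves you responsible for roughly one average person's worth of collective happiness.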
Even with animal agriculture, I'm not fully convinced that a human life is a net bad in impact. But without it, the good is pretty hard to deny.
It's not hard to make a pretty good argument for being net positive as a vegetarian or reducetarian, and it's easier still as a vegan.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
My understanding is that Kantianism (at least strict Kantianism) is more of a fringe position even among deontologists, but I'm not sure about that.
Strict originalism, sure; deontologists all have their slight reformulations of the original recipe in an attempt to fix its problems -- which yet remain unsolved.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
It's not just deontology though. According to the PhilPapers survey from 2020, most philosophers lean towards virtue ethics, with 25% being exclusively in favor of it -- more than any other exclusive group -- so there is a divide here too.
Answering "virtue ethics" with no qualifiers is probably the equivalent of physicists answering "just do the math and don't think about the implications" when asked about quantum interpretations. When you look at the substantiation for those virtues, anybody who thinks them through arrives at consequentialist or deontological reasoning. In any sophisticated practice, virtue ethics is subsumed by consequentialism and deontology, which is why most philosophers who reason carefully about morality do not prefer virtue ethics, even though they're divided between deontology and consequentialism.
It's noteworthy that the virtue ethicists in that survey indicated a higher rate of rejecting logic than respondents who want to reason more carefully and have a rational basis for their beliefs.
To see the significance of those responses, we'd have to ask follow up questions and deconstruct those positions more than the surveys do. That data is limited, and in itself would be a huge undertaking to try to figure it out from the responses given.
Because that data is lacking, I would disregard those positions. The respondents were likely one of three types: philosophers who are not interested in the justifications for ethics and arbitrarily chose virtue out of that disinterest and a desire to stay out of the main spat; philosophers who did not see a good answer reflecting the real nuance and so chose the one that read closest, despite believing the underpinnings of the virtues to be consequentialist or deontological; or intuitionists, which amounts to an uncredible response (intuitionism is not uncommon, but it represents the non-rigorous pole of philosophical schools).
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
I think I have problems with this view as a matter of principle though. Why should only the wellbeing of other beings be considered morally? If you take it to the extreme, it could be moral to burn yourself alive in order to prevent another person from breaking their arm.
You have a lot more to offer the world than self immolation to prevent a broken arm. Do not forget opportunity cost.
Taken to the extreme, morality by necessity indicates some kind of sainthood -- that's pretty much the definition of a moral saint. And yes, in a vacuum where there was nothing else one could do for others, sacrificing oneself to prevent a minute harm would be the action of a saint.
You don't have to be a saint to be a good person. Morality is the natural polar opposite of selfishness, so we should expect outcomes like that even if we don't envy them. People will naturally strike balances between being moral and being selfish in their lives, and it would be counterproductive to discourage that.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pmAlso the possibility of other people taking advantage of you, which can feel unjust.
Justice is principally a deontological construct, but it can also be a rule-consequentialist heuristic. I think the problem you're indicating is that you don't want such heuristics to break down in extreme circumstances, which isn't a realistic expectation. To avoid unusual outcomes in extreme situations you need some kind of rigid absolutism, which has its own absurd extremes but, more importantly, has no foundation in reason.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
What axioms and how are they presuppositional in that way? Is there maybe a good resource where I can read more about that?
I'd also be interested in resources that make the case that deontology doesn't work logically.
We have some very old threads here on deontology, but there may be some better newer papers. I'm not sure how much is on the wiki. It's something I'd like to work on but have not had time.
Presuppositionalism is mostly popular as a neo-apologetics movement among fundamentalist Christians. It's faulty there, but it has valid application to secular moral arguments. If you're speaking English, certain words have certain definitions because of the purpose of communication, along with context and logically derived principles like Grice's maxims (look them up). Some meanings have been and must be presupposed, as must the function of the whole endeavor.

If you're in the context of a philosophical debate on objective morality, there are certain productive and unproductive things for "morality" to mean. It is only productive to argue that objective morality doesn't exist if it literally can't exist -- that is, if you have proven logically that there is no possible objective formulation. Otherwise, morality should be understood as a toolset for analysis, and rational discourse around it can occur. It doesn't need to be a physical thing floating around out there that can be denied for lack of evidence. The burden of proof is reversed for conceptual frameworks, which should be assumed to be possible unless demonstrated otherwise -- or else we might as well deny all math, and potentially logic, without external proof, and then what is the point of trying to discuss anything? (Spoiler: none.)
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
How do you determine that it's astronomically unlikely that more harm than good is caused?
It's astronomically unlikely that it causes less harm than good; it's almost certain that it causes more harm than good. Animal behavior in intensive farming operations is an indication of this on a strictly hedonic level alone, which doesn't even account for interest violations the animal doesn't know about or the harms involved in slaughter.
You need very high-welfare farms to muddy the issue at all, and in any case we retain the precautionary principle, which says we shouldn't do it -- and that's before we account for the resource waste, environmental harm, and harms to human health and flourishing.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
Wild animal flourishing, not sure again; some EA people even argue for consuming more pasture-raised meat and things like that in order to reduce wild animal suffering (through destruction of natural habitats). Though at least that's not an argument that normies would hit you with.
I think it's clear where the biases lie for people who are intent on eating meat: they will come up with something. That's why consequentialism so often fails -- people can fudge whatever they want to reach a desired outcome without guardrails like those the physical sciences have. In practice it ends up with the same problems as intuitionism, where people hold whatever moral positions their whims dictate, which is in practice no morality at all (it loses its utility). That's what rule consequentialism is there to help with: heuristics like veganism, etc.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
Human flourishing .. here you easily get into arguments about the health advantages and disadvantages of different diets, which doesn't seem clearcut territory.
It's pretty clear-cut in terms of health beyond any marginal animal product consumption. If people are only erroneously set on taking a couple of bites of fish a day for health reasons (the only thing that's even close to unclear), I think we have other things to worry about.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pmEconomically you can get into arguments about the efficiency of farms with and without animals, also in developing countries. Factory farming in developed countries, where you plant crops specifically for feed or even import them, is of course another matter.
If we're only worried about nomadic people in undeveloped regions, that's a good problem to have.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
You can maybe make precautionary arguments in some cases again. All I'm saying is that I wouldn't be able to argue a lot of these points with authoritative certainty.
With anybody who has internet access I don't think it's hard. We don't need to send missionaries to far off lands beyond the reach of technology to tell them to stop grazing goats on the tundra.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
You could plant crops to feed children in Africa or something like that in theory, but given the constraints of capitalism, not sure what the land would actually be used for instead that provides a big benefit.
It would provide more benefit to simply leave the land fallow and let forest regrow over time. Only a small fraction of current farmland is needed to feed human beings, and the crops needed for that are already grown (sometimes different varieties, but that's just a different seed and a slightly different schedule).
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
I'm confused by your reasoning here. Isn't it all about the consequences in the end, rather than being a genuinely good person?
Most people are worried about not being bad people; this is where the force of moral argument usually comes in. But my point is that somebody who is actually a good person is probably more worried about others than about scoring those points.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
Someone who manages to make 10 people go vegan would have a huge effect, no matter if they go themselves vegan or not.
Issues like double counting just seem like abstract problems of how to define a good calculus, but that doesn't affect the actual state of the world.
The person you describe is playing a numbers game to be a net-good person on paper rather than an actually good person (who would also go vegan themselves). And if this person is playing a numbers game, he or she has to look at the logic behind its rules, which doesn't work when the game leverages the genuine virtue of others to get ahead.
I'm not a player hater in this context, but if they want to play the game and pretend to win they had better know the rules.
Hypothetically, this person could pay a carnivorous, animal-hating psychopath to eat vegan, and continue eating meat without virtue theft. I can tell you said psychopath would cost a lot more than vegan outreach, though it's not clear exactly how much, or how it could be achieved at all (you'd also have to test compliance, because this person would lie).
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
How do you avoid typical conundrums, for example blood games where the audience draws a lot of enjoyment out of the games while the few participants suffer?
Almost all sports are blood games to some degree. You might as well ask if American Football can be justified.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pmI don't think the altruism rebuttal necessarily works here. You could say the audience members are altruistic, because they might have the interest of the other members of the audience in mind.
Or someone outside who doesn't even enjoy the blood games makes sure they continue, because he has the stronger accumulated preference of the audience in mind.
Where we're talking about social psychology, you have to look at the long game. The EA actor would probably work for gradual reforms to make the games less harmful and work toward getting people interested in alternatives like robot fighting instead.
aba4w wrote: ↑Mon Aug 04, 2025 12:17 pm
Another more hypothetical case: Say there's a species that has a very strong preference to make other beings suffer and say they suffer themselves to large extent if they're not able to do so. Should other beings just bend over to that, because the preferences of said species makes up a large amount of all preferences and thus has a lot of weight?
No, the long game says that evil species should be eradicated and replaced by one that gets along better with others.