brimstoneSalad wrote: ↑Tue Feb 06, 2018 6:52 pm
I'm not sure I parsed all of that correctly, but I don't think so. I don't know for certain where his claims of moral virtue come from, but his explanation of saving a child from an oncoming truck seems to suggest it's a matter of personal difficulty and is not distinguished in fundamental type.
Not that I think it matters much but I think that basic moral views 1-3 would handle this straightforwardly as follows:
(a) Empirical / axiological assumption 1: optimific sets of rules won't demand extreme self-sacrifice (e.g. very riskily saving a child from an oncoming truck - in part due to the degree of personal difficulty)
(b) Extreme self-sacrifice is not morally required (given a & basic moral view 3)
(c) Empirical / axiological assumption 2: Extreme self-sacrifice is (in virtue of the very strong moral reasons constituted by the very great expected benefits to those one is attempting to save) more strongly favoured by moral reasons than the minimum that is morally required. (given b & the further assumption of greater expected moral reasons)
(d) Extreme self-sacrifice is morally virtuous (given c & basic moral view 2)
Because that allows him to circumvent all of the issues with killing animals who want to live as an act in and of itself, and it breaks down the rest of his moral claims by generating the same kinds of exceptions that allow him to kill animals -- creating ignorance to avoid anxiety, which is his only cited reason preventing the painless killing of humans. Surely we could talk about lost productivity and other matters, but these are all variable. There would always be people whom it's fine to kill in certain ways for any reason, because the only thing he could appeal to is the consequence of causing anxiety or trouble for others, not any loss for the one being killed.

Margaret Hayek wrote: ↑Mon Feb 05, 2018 4:22 am
I strongly disagree [with your claim that it matters that Dillahunty isn't a desire-fulfillment theorist], and I really don't see why you think that.
Oh, I see what you're thinking now; thanks very much for the clarification. I think that you are making assumptions about population ethics and/or the ethics of what matters in survival / death's harm that are very, very false, in ways that resemble how views proposed by Peter Singer and Helga Kuhse were very false. First, you seem to be assuming either:
(1) that one has to accept the implausible impersonal total view according to which one has most moral reason to cause the existence of well-being, regardless of whether it benefits anyone or not (or that coming into existence is a benefit proportional to the degree of well-being one has while alive), and that somehow this supports the view that one only has moral reasons to omit killing those who have future-directed desires, or
(2) that, if we take an individual / person affecting approach (and hold that our moral reasons to benefit and omit harming are moral reasons to benefit & omit harming individuals, and also hold that coming into existence isn't a benefit proportional to one's degree of life-long well-being), and we don't accept a desire-fulfillment theory of well-being, then there are no moral reasons against painlessly killing individuals who don't have future-directed desires.
Both of these assumptions are very, very false. For a helpful discussion of how to deny the implausible impersonal total view (while, if you like, remaining a consequentialist), see especially the work of Melinda Roberts (e.g. her "A new way of doing the best that we can: Person‐based consequentialism" - just let me know if you can't find it without a pay wall and I can send you a copy) - or more generally her Stanford Encyclopedia entry on The Non-Identity Problem (https://plato.stanford.edu/entries/nonidentity-problem/).
The reason that (2) is false is that, even if you accept some version of the desire-fulfillment theory, the conclusion that there are no moral reasons against killing individuals who lack future-directed desires follows only if you accept the radically implausible CURRENT desire-fulfillment theory, according to which morally relevant benefits and harms to someone at time t are determined only by her desires at time t, regardless of the extent to which death would deprive her of things that she would very much desire in the future / at the time they would come. For criticism of this current desire-fulfillment theory see e.g. Appendix I of Parfit's Reasons and Persons ("What Makes Someone's Life Go Best" - e.g. you can get the full text pdf at http://www.chadpearce.com/Home/BOOKS/161777473-Derek-Parfit-Reasons-and-Persons.pdf), Jeff McMahan's The Ethics of Killing: Problems at the Margins of Life (let me know if you can't find it without a pay wall & I can send it), and Kris McDaniel & Ben Bradley's "Death and Desires" (http://krmcdani.mysite.syr.edu/deadesir.pdf).

Philosophers have developed many variants of the alternative and much more plausible deprivation account of death's harm: that death harms you to the extent that it deprives you of future goods that would have been yours (where what would have been yours can e.g. be a matter of degree determined by the degree of one's psychological continuity with one's future selves) - see e.g. the aforementioned books by Parfit and McMahan. Given such an account of death's harm in terms of deprivation of future goods, it does not much matter for what's at issue here which theory of well-being / future goods one takes to be best: desire-fulfillment, hedonism, objective list, etc.
Moreover, if you are assuming (1), you would be making the egregious mistake of assuming that there are no moral reasons not to painlessly kill someone who lacks future-directed desires on the impersonal total view. But even Singer knew that THIS was false - if you kill someone who lacks future desires and don't immediately replace them with someone else who will be at least as well-off, then you cause there to be less well-being in the world, which is something that you have more moral reason not to do. What would be better is if you made the mistake that Singer made, of assuming that, given the impersonal total view, future-directed desires make those who have them "irreplaceable" (i.e. such that it isn't morally neutral to kill them and replace them with someone else just like them - or at least more irreplaceable in this sense), while those who lack them are "replaceable" (i.e. such that it is morally neutral to kill them and replace them with someone else just like them - or at least more replaceable in this sense). Of course, even if this latter mistake were not a mistake, it would not support your contention that it matters that Dillahunty doesn't accept a desire-fulfillment theory - since here we would already have to admit that there are moral reasons against killing without replacement regardless of our theory of well-being, and since animal agriculture destroys far more wild animals than it replaces with farmed animals, it indeed falls afoul of the moral reasons against killing without replacing. But in any event many, many authors have pointed out how Singer's views about the influence of future-directed desires on irreplaceability fail miserably (as I recall, Joel Feinberg was one of the first to make this point, in a review of Singer's Practical Ethics for a popular publication like the New York Review of Books).
If you kill someone with future-directed desires then (assuming a desire-fulfillment theory of well-being) you deprive the world of the well-being that she could have had by fulfilling those (and the rest of her) desires. But if you replace her with a new individual who has future-directed desires and these get fulfilled, you don't actually make the world worse than you would have had you not killed her and replaced her. In this respect killing and replacing someone with future-directed desires with someone else who has future-directed desires is exactly morally equivalent to killing and replacing someone who lacks future-directed desires with someone else who lacks future-directed desires (you deprive the world of the well-being from the first fulfilling her desires, but then you give the world the well-being from the second fulfilling her desires, and it cancels out).
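To make the accounting behind that "cancels out" point explicit, here is a toy arithmetic sketch of how the impersonal total view tallies well-being under kill-and-replace versus not killing (the specific numbers are invented purely for illustration, not drawn from anything above):

```python
# Toy illustration of the impersonal total view's well-being accounting.
# All numbers are invented for illustration only.

def total_wellbeing(*lives):
    """Sum the remaining lifetime well-being of each life in the world."""
    return sum(lives)

# Suppose the original individual would enjoy 50 units of future well-being.
original_future = 50

# Scenario A: she is not killed and lives out her 50 units.
world_a = total_wellbeing(original_future)

# Scenario B: she is painlessly killed (losing all 50 units) and immediately
# replaced by a new individual who is at least as well-off.
replacement_future = 50
world_b = total_wellbeing(0, replacement_future)

# On the impersonal total view the two worlds contain equal total well-being,
# so killing-with-replacement comes out as neutral - whether or not the
# individuals involved have future-directed desires.
print(world_a == world_b)  # True

# Killing WITHOUT replacement, by contrast, leaves less well-being in the
# world, which the view says you have moral reason not to do.
world_c = total_wellbeing(0)
print(world_c < world_a)  # True
```

The point of the sketch is just that the totals are indifferent to *whose* well-being they sum, which is why the view delivers the same verdict for individuals with and without future-directed desires.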