Getting to the steel-manned version

From Philosophical Vegan Wiki

As discussed in the main entry on #NameTheTrait, the formal, premise-conclusion presentation of #NameTheTrait in English is as follows:

Argument for animal moral value:

P1 - Humans are of moral value.
P2 - There is no trait absent in animals which if absent in humans would cause us to deem ourselves valueless.
C - Therefore without establishing the absence of such a trait in animals, we contradict ourselves by deeming animals valueless.

Argument for veganism from animal moral value:

P1 - Animals are of moral value.
P2 - There is no trait absent in animals which if absent in humans would cause us to consider anything short of non-exploitation to be an adequate expression of respect for human moral value.
C - Therefore without establishing the absence of such a trait in animals, we contradict ourselves by considering anything short of non-exploitation (veganism) to be an adequate expression of respect for animal moral value.


In the section on the summary of issues of the invalidity of #NameTheTrait, we presented the following steel-manned version of the argument in order to discuss its logical form:

(P1) All sentient humans (or even just you) have moral value
(P2) There is no trait absent in sentient non-human animals which is such that, if the trait were absent in sentient humans (or you), then they would not have moral value.
Therefore, (C) All sentient non-human animals have moral value


In the section on displaying the logical form of #NameTheTrait, we presented the following version of the argument in FOL (here in English with the specific meanings of the predicates and relations):

(P1) for all x, if x is a sentient human, then x has moral value
(P2) It is not the case that there exists a thing, t, such that t is a trait; and for all x, if x is a sentient non-human animal, then x does not have t; and for all y, if y is a counterpart of a sentient human, and y does not have t, then it is not the case that y has moral value
Therefore, (C) For all x, if x is a sentient non-human animal, then x has moral value


This entry discusses the considerations that go into this steel-manned version presented in the main entry.

We / Ourselves

The above formalization deals with two issues regarding the use of "us/ourselves" (see Existential Meaning).

  1. If we consider the product of the human without the trait to be a human, then P2 is rendered vacuously true (see proof below). This is due to the fact that P1 implies a human can never be valueless, so that in P2 there can never be a trait that if absent in humans would cause humans to be valueless.
  2. Thus to steel-man P2, we must consider the being that remains after removing the trait to no longer be human. When Ask Yourself uses hypothetical situations (such as transferring your brain to a computer and asking whether it would then be OK to kill you), we are no longer talking about a human. So we can simply admit that "us" refers to whatever being is left after removing the trait.

It's worth noting that there is nothing in the argument that forbids applying completely different moral standards to the set of humans than to the set of (nonhuman) animals. And this is the essence of why NTT fails as a formal argument, and requires additional moral universalist premises in order to be logically valid.

Note: It is up to Ask Yourself or any other supporter of the argument to demonstrate that C follows from the premises, or that the negation of C leads to a contradiction. It is also preferable to have the deduction system clearly specified. Until this has been demonstrated, the argument should not be taken as valid.

Margaret: I disagree with this burden tennis, especially when the individuals in question lack a background in formal logic and those composing this wiki have one. Why not explain things to the public rather than chastise them for not possessing your knowledge?

P2 is vacuously true if "us/ourselves" represents humans

In the above translation we steel-man P2 of the original argument by not requiring 'us/ourselves' to be human, since P2 becomes vacuously true if 'us/ourselves' represents humans (see below).

P2 would have the following form if "us" represents humans:

P2:⇔ ¬ ( ∃t: ( ∀x: A(x) ⇒ t ∉ T(x) ) ∧ ( ∀y: H(y) ⇒ ( t ∈ T(y) ∧ ( ∀q: ( T(q) = T(y) \ { t } ) ∧ H(q) ⇒ ¬ M(q) ) ) ) )

Note the addition of "∧ H(q)" in the last part of the sentence. The sentence has the form ¬(A ∧ (B ⇒ C ∧ (D ⇒ E))) with:

  • A :⇔ ∃t: ( ∀x: A(x) ⇒ t ∉ T(x) )
  • B :⇔ ∀y: H(y)
  • C :⇔ t ∈ T(y)
  • D :⇔ ( T(q) = T(y) \ { t } ) ∧ H(q)
  • E :⇔ ¬ M(q)
  1. In D we can instantiate q as a human whose trait set is T(y) \ { t } for a trait t of our choosing. By P1, M(q) is then true, so E is false while D is true: D ⇒ E takes the form "true ⇒ false" and is therefore false, which makes C ∧ (D ⇒ E) false.
  2. We apply the same trick to B ⇒ C ∧ (D ⇒ E) by instantiating y as a human, making B true and the consequent false, so B ⇒ C ∧ (D ⇒ E) is false.
  3. Then by conjunction, A ∧ (B ⇒ C ∧ (D ⇒ E)) is false.
  4. Applying the outer negation makes the whole sentence true.

Since the choice of trait and human instance is arbitrary, the statement is vacuously true, meaning it cannot be false in any structure that satisfies the mentioned predicates.
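The four steps above can be checked mechanically on a small finite model. The Python sketch below is our added illustration, not part of the original formalization: it identifies possible beings with their trait sets over an invented toy universe {h, a, s} (h for humanity, a for animality, s for an arbitrary sample trait), defines H and A by trait membership, and gives moral value to exactly the humans (the minimal model satisfying P1). It then confirms that for the sample trait s and the human {h, s}, the inner universal over q (steps 1 and 2) is false, and that P2 as a whole comes out true (steps 3 and 4).

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s as frozensets."""
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

TRAITS = {"h", "a", "s"}     # invented toy traits: h = humanity, a = animality, s = sample
BEINGS = powerset(TRAITS)    # a possible being is identified with its trait set
H = lambda b: "h" in b       # human  <=> possesses trait h
A = lambda b: "a" in b       # animal <=> possesses trait a
M = lambda b: "h" in b       # minimal model of P1: exactly the humans have moral value

def inner_forall_q(y, t):
    """forall q: ( T(q) = T(y) \\ {t}  and  H(q) )  =>  not M(q)"""
    return all(not M(q) for q in BEINGS if q == y - {t} and H(q))

def p2_us_is_human():
    """not exists t: (forall x: A(x) => t not in T(x))
                 and (forall y: H(y) => (t in T(y) and inner_forall_q(y, t)))"""
    for t in TRAITS:
        c1 = all(t not in x for x in BEINGS if A(x))
        c2 = all(t in y and inner_forall_q(y, t) for y in BEINGS if H(y))
        if c1 and c2:
            return False  # a witnessing trait would make P2 false
    return True

# Steps 1-2: for t = s and the human y = {h, s}, the remainder q = {h} is still
# human, so P1 forces M(q); "D => E" is "true => false", and the inner forall fails.
assert inner_forall_q(frozenset({"h", "s"}), "s") == False
# Steps 3-4: in this toy universe every candidate trait fails one of the two
# conjuncts, so the negated existential -- P2 -- comes out true.
assert p2_us_is_human() == True
```

Because the toy universe contains every trait combination, each candidate trait fails either the "absent in all animals" conjunct or the "humans would be valueless without it" conjunct, so the negated existential holds.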

And if P2 is vacuously true, then P2 can be removed with no effect on the argument, which simply leaves

P1:⇔ ∀x: H(x) ⇒ M(x)
C:⇔ ∀x: A(x) ⇒ M(x)

i.e.

P1: Humans are of moral value
C: Animals are of moral value

An obvious non sequitur.
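The gap from P1 to C can be exhibited with an explicit countermodel. The sketch below is an added illustration with invented individuals: an interpretation in which every human has moral value (P1 true) while an animal lacks it (C false), so the inference is invalid.

```python
# A toy interpretation with two invented individuals: a human with moral
# value and a nonhuman animal without it.
individuals = {
    "alice": {"human": True,  "animal": False, "moral_value": True},
    "cow":   {"human": False, "animal": True,  "moral_value": False},
}

p1 = all(i["moral_value"] for i in individuals.values() if i["human"])   # forall x: H(x) => M(x)
c  = all(i["moral_value"] for i in individuals.values() if i["animal"])  # forall x: A(x) => M(x)

# P1 holds in this interpretation but C fails, so C does not follow from P1.
assert p1 == True and c == False
```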

P2 is vacuously true if humans are considered to be in the set of animals

If we allow the definition of trait to be so broad that it includes the trait of 'being part of the set of humans', and we allow this 'trait' to not be absent in animals (i.e. to be present in animals), then P2 becomes vacuously true. This is because it implies there can be an animal which is human, i.e. ∃x: (A(x) ∧ H(x)) (there exists an x that is both human and animal). Consequently there is no trait that is both absent in all animals and present in all humans, i.e. there is no 't' that can satisfy both '∀x: A(x) ⇒ t ∉ T(x)' and '∀y: H(y) ⇒ t ∈ T(y)'.

This leaves us with a negated conjunction in which at least one conjunct is false. The conjunction is therefore false and its negation true, hence P2 is vacuously true.

This is not an issue introduced by the FOL translation; it is present in the original argument itself, through the requirement that the trait can be absent in humans, which of course 'being human' cannot be.

To show this more formally, we can define the trait of 'being part of the set of humans' to be 'h' with,

∀x: (H(x) ⇔ h ∈ T(x))

i.e. for all x if x is human then x possesses the trait of being human, and if x possesses the trait of being human then x is human

Now the statement 'h is absent in humans' would be

∀x: (H(x) ⇒ h ∉ T(x))

which of course would be false, by the definition.

Note we could provide a very similar proof that P2 is vacuously true if 'moral value is allowed to be a trait' since by P1, it is also something a human cannot lack.
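Both formal points, that 'h is absent in humans' is false by definition, and that a human animal blocks any trait from being both absent in all animals and present in all humans, can be spot-checked with sets. The sketch below is an added illustration; the individuals and trait letters are invented.

```python
# Toy domain in which one individual ("hominid") is both human and animal,
# i.e. exists x: A(x) and H(x).  Trait letters are invented:
# h = 'being human', a = 'being an animal'.
T = {"hominid": {"h", "a"}, "sponge": {"a"}}
H = lambda x: "h" in T[x]   # definition: forall x, H(x) <=> h in T(x)
A = lambda x: "a" in T[x]
all_traits = {"h", "a"}

# 'h is absent in humans' is false by the very definition of h:
h_absent_in_humans = all("h" not in T[x] for x in T if H(x))
assert h_absent_in_humans == False

# With a human animal in the domain, no trait t is both absent in all
# animals and present in all humans, so P2's negated existential holds:
for t in all_traits:
    absent_in_all_animals = all(t not in T[x] for x in T if A(x))
    present_in_all_humans = all(t in T[x] for x in T if H(x))
    assert not (absent_in_all_animals and present_in_all_humans)
```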

Separating humans and nonhuman animals

To avoid the above scenario we must change P2 such that we are now talking about humans and nonhuman animals (which is arguably implicit from the way Ask Yourself presents his argument).

P2:⇔ ¬ ( ∃t: ( ∀x: (A(x) ∧ ¬H(x)) ⇒ t ∉ T(x) ) ∧ ( ∀y: H(y) ⇒ ( t ∈ T(y) ∧ ( ∀q: ( T(q) = T(y) \ { t } ) ⇒ ¬ M(q) ) ) ) )

Note the addition of '∧ ¬H(x)'. Now by P1, we know that the only trait which, if present in animals (or if not absent in animals), would give animals moral value is the trait of 'being human', which is not possible here. This is because for a nonhuman to possess the trait 'human', the statement

∃x: (¬H(x) ∧ h ∈ T(x))

must be true. But it is of course false, because ∀x: (h ∈ T(x) ⇔ H(x))

Hence there is nothing in P2 to give all animals the trait 'human', nor is there anything to give animals moral value, so the argument is a non sequitur.
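The non sequitur can be made concrete with a countermodel for the separated version; this is an added construction of ours with invented individuals. P1 and the revised P2 both come out true, yet the conclusion that nonhuman animals have moral value is false. The key is a valued nonhuman 'counterpart' whose trait set is the human's minus h, which defeats every candidate trait.

```python
# Each individual: (is_human, is_animal, trait set, has_moral_value).
# 'counterpart' stands in for the product of removing trait h from the human.
model = {
    "human":       (True,  False, frozenset({"h", "s"}), True),
    "cow":         (False, True,  frozenset({"s"}),      False),
    "counterpart": (False, False, frozenset({"s"}),      True),
}
TRAITS = {"h", "s"}

H = lambda x: model[x][0]
A = lambda x: model[x][1]
T = lambda x: model[x][2]
M = lambda x: model[x][3]
D = model  # the domain of quantification

p1 = all(M(x) for x in D if H(x))               # forall x: H(x) => M(x)
c  = all(M(x) for x in D if A(x) and not H(x))  # forall x: (A(x) and not H(x)) => M(x)

def p2_separated():
    """not exists t: (forall x: (A(x) and not H(x)) => t not in T(x))
                 and (forall y: H(y) => (t in T(y)
                      and forall q: T(q) = T(y) \\ {t} => not M(q)))"""
    for t in TRAITS:
        c1 = all(t not in T(x) for x in D if A(x) and not H(x))
        c2 = all(t in T(y) and all(not M(q) for q in D if T(q) == T(y) - {t})
                 for y in D if H(y))
        if c1 and c2:
            return False
    return True

# P1 and P2 both hold, yet C is false: the premises do not entail the conclusion.
assert p1 == True and p2_separated() == True and c == False
```

Because the counterpart retains moral value, no trait's removal can be said to strip value from humans, so P2 is satisfied without conferring anything on the cow.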


All Humans

P1 says that humans have moral value, implicitly all humans rather than merely some.

P1 - Humans are of moral value.

"Human" may range from vegetative states to fertilized eggs, and even among "conservatives" who believe those have intrinsic value, there is common disagreement over whether violent criminals (still human) have moral value.
This in itself seems to contradict the notion of a value-giving trait other than the arbitrary "human" status.
If you want value to be based on another trait, your first premise can't make that impossible.
Ask Yourself has permitted that other premises be substituted in for P1 (such as "I am of moral value") and maintained that the same conclusions can be reached. This premise is easy enough to correct, although the argument still fails even when limited to personal moral value.


P2 Inconsequential

P2 says there is no trait of such description:

P2 - There is no trait absent in animals which if absent in humans would cause us to deem ourselves valueless.

But even if so, there is no premise that says moral value must be based on such an (implicitly natural) trait at all, or that it cannot be an arbitrary one (if one chose to name a trait). Moral value could just be fiat, or the tautological and irreducible non-natural trait "moral value" itself.
P2 can thus be ignored or rejected in many ways.

A perfect logician could accept P1, that humans are of moral value, AND P2, that there is no such trait, but still reject the implicit conclusion: that animals are of moral value.
As such, the conclusion does not follow from the premises; the argument is a non sequitur, chiefly because P2 fails to do what Ask Yourself thinks it does.

Old material on the brick analogy

2. Comparisons with physical reality and contradictions in physics only make sense because forces like gravity apply the same universally, something you cannot just assume for morality or moral standards. If forces like gravity could be regarded as subjective, the whole brick-lifting issue would become much more complicated. By making these physical analogies Ask Yourself is committing the argument to hidden meta-ethical premises of some form of naturalistic moral realism (a good position to commit to, but it must be stated in the premises to make the argument valid).

Margaret: I think that this in some ways misses the point, and in any event is now going to be more thoroughly dealt with in our discussion of invalidity and defending the premises of the first part of NTT. In any event one does not need to commit to Cornell School realism to reject value narcissism; one could just as easily be any of what many regard as the main foundational metaethical options: a non-naturalist, a constructivist, or an expressivist quasi-realist.

3. Also, it's trivial to name the trait in the brick too if you don't require it to be a natural "mind-independent" one, for example:

"I'm unable to lift that brick because I don't want to. I'm unable to lift things I don't want to lift because I lack the motivation and motivation is necessary for lifting things. The trait is that I want/don't want to lift it."

In essence, arbitrary whim is just as adequate an explanation for differentiating Brick A and Brick B as any other, UNLESS, as explained above, you make certain meta-ethical commitments against arbitrary answers, establish some form of moral realism, and reject any subjective factors (i.e. actual moral objectivism, not Ask Yourself's false-dichotomy version). Again, these are good premises to establish and can create strong arguments, but Ask Yourself has consistently rejected the need for them while presenting analogies to justify that omission in #NameTheTrait, analogies which themselves fail without such premises.

Margaret: as Ask Yourself noted in a video, this is clearly not what he had in mind by the analogy and thus a highly uncharitable interpretation of his remarks. As such, I think that including it is at best a nit-pick and at worst a deliberate misinterpretation.


Existential Meaning

"Human" and "us/ourselves" in this argument has no clear meaning:

"P2 - There is no trait absent in animals which if absent in humans would cause us to deem ourselves valueless."

Any example of a trait removal applied to you (like having your consciousness transferred into a biologically non-human body) might simply cause us to change our understanding of what "human" means (e.g. anything with a human consciousness) in order to maintain consistency, so that nothing could cause you to stop being human. Even "once human, always human" (the trait of "having been human at some point") is available, making "human" an unfalsifiable answer for moral value. Alternatively, the change may cause us to reject the application of "we" to the new entities: reduced to the intelligence and capacity of a cow, we may consider what makes us ourselves to be gone and this to be equivalent to death anyway, such that the opponent need take no issue with the loss of moral value, making P2 false.
Ask Yourself is concerned with the "hard problem" of consciousness, so he probably isn't prepared to tackle this issue. The easiest way to fix this problem is to avoid references to "us" by clearly defining the behavioral implications of "lacking moral value", or simply to scrap this confusing wording and substitute in the golden rule directly. Unfortunately, this runs into the next problem, which Ask Yourself is unwilling to address:


All/Some Animals

Given the qualifier "all" for animals ("there is no trait absent in all animals which if absent in humans..."), P2 is trivially true, because humans are by definition included in the group "all animals". Given the qualifier "some", P2 is clearly false when non-sentient animals are included (assuming the premise about human value were corrected). Animals range from sponges to humans and do not broadly share any trait beyond phylogeny (just as with "humans" as a blind category).

Ask Yourself intends the argument to prove moral value for all animals that lack moral value giving traits, but it does not do this. If he wanted to do this, the argument would need to be formed very differently.
This specific problem in the argument could also be corrected by specifying it should be applied to individual animals, or those in like groups based on the evaluative trait.
Like the issue with "human" in P1, this is not a difficult issue, but the lack of rigor here generates ambiguities and contradictions in the argument. Of course, this may seem like a rather minor issue after point #4, but no fallacy is too small. Even correcting for this point, the argument fails as a non sequitur due to the lack of premises to support P2 and other issues as explained at length under other points.