DrSinger wrote: ↑Tue Dec 05, 2017 6:13 am
Re: Margaret,
[Margaret: OK, let me take another run at why I think this section is unhelpful, and let me do it this way. In the closest possible world in which NTT establishes anything about non-human animal moral value, it establishes not only that there is no moral value giving trait absent in all sentient non-human animals, but also that all sentient non-human animals have moral value giving traits (e.g. by adding the relevant suppressed premises needed to make it valid). So it is pointless to consider some really far-off possible world in which it establishes only that there is no moral value giving trait absent in all sentient non-human animals without establishing that all sentient non-human animals have moral value giving traits (how would it even do that? By adding some weird premises that establish the former but not the latter? Why on earth would it ever do that?)]
It's relevant because if the issue is not fixed, the argument is still invalid even with the additional premise. Establishing
that there is no moral value giving trait absent in all sentient non-human animals
only guarantees that each such trait is possessed by at least one animal, not that every animal possesses some such trait. That is why in the alternate version we have changed P2 from
(P2) ¬∃t ( T(t) ∧ ∀x ( A(x) ⇒ ¬P(x,t) ) ∧ ∀y ( ( CP(y) ∧ ¬P(y,t) ) ⇒ ¬M(y) ) )
to
(P2) ∀x ( A(x) ⇒ ( ∃y ( CP(y) ∧ M(y) ∧ ∀t ( T(t) ⇒ ( P(x,t) ⇔ P(y,t) ) ) ) ) )
Without discussing the quantification issue, there is no explanation of why we made this change.
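The quantifier gap can be exhibited with a tiny finite countermodel. Here is a sketch in Python (the domain, names, and encoding are my own, purely illustrative): "no moral value giving trait is absent in ALL animals" (a negated existential over traits) holds in a model where "EVERY animal has some such trait" (a universal over animals) fails.

```python
# A minimal countermodel for the quantifier gap discussed above.
# One trait, two animals; a1 has the trait, a2 has nothing.

traits = {"t"}
animals = {"a1", "a2"}
has = {("a1", "t")}  # the extension of P(x, t), restricted to animals

# "There is no trait absent in all animals" -- the old-P2-style claim
no_trait_absent_in_all = not any(
    all((a, t) not in has for a in animals) for t in traits
)

# "Every animal has some trait" -- what the argument actually needs
every_animal_has_some_trait = all(
    any((a, t) in has for t in traits) for a in animals
)

print(no_trait_absent_in_all)       # True  (a1 witnesses t)
print(every_animal_has_some_trait)  # False (a2 is the counterexample)
```

So the negated-existential reading is strictly weaker: it is compatible with some animal lacking every moral value giving trait.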
Also, regarding the article, I added some material about essential properties and part 2, and changed the conclusion.
Edit: Added a correction; do we think it is valid?
OK, so in your correction you have as your premise 3 (and I've changed SNAx to Ax for continuity with the prior sections):
(P3) ∀x ( Ax ⇒ ¬∃t ( Tt ∧ ( ¬Px,t ⇒ ∀y ( ( Hy ∧ ¬Py,t ) ⇒ ¬Ry ) ) ) )
If that in the relevant sense fixes the quantification issue you mention (and I'm sorry, but I'm still not 100% clear on what that issue is supposed to be - e.g. why we have to interpret the argument as, in some possible world, first establishing that "there is no moral value giving trait absent in all sentient non-human animals" and then still failing to establish the conclusion), then why don't we just make that LF the original LF of P2? E.g. why not, in the first part of NTT, make P2:
(P2) ∀x ( Ax ⇒ ¬∃t ( Tt ∧ ( ¬Px,t ⇒ ∀y ( ( Hy ∧ ¬Py,t ) ⇒ ¬My ) ) ) )?
If your argument from your P1-P3 to your C in the correction is valid, then so would be this version of NTT with analogous suppressed premises added in:
(P1) ∀x ( Hx ⇒ Mx )
(P1.5) ∃t ( Tt ∧ ∀x ( Hx ⇒ ( Mx ⇔ Px,t ) ) )
(P2) ∀x ( Ax ⇒ ¬∃t ( Tt ∧ ( ¬Px,t ⇒ ∀y ( ( Hy ∧ ¬Py,t ) ⇒ ¬My ) ) ) )
(P2.5) ∀t ( ∀x ( ( Tt ∧ Hx ∧ Px,t ) ⇒ Mx ) ⇒ ∀x ( ( Tt ∧ Px,t ) ⇒ Mx ) )
Therefore (C) ∀x ( Ax ⇒ Mx )
But if that's right then there isn't an additional quantification issue with P2; the issue is just that suppressed premises P1.5 and P2.5 are left out.
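One way to probe the P1/P1.5/P2/P2.5 argument is a brute-force countermodel search over tiny finite domains. The sketch below is my own encoding in Python, with all names mine; I use a two-sorted reading (x, y range over individuals, t over traits) and read the ∀y in P2 as binding the whole conditional ( ( Hy ∧ ¬Py,t ) ⇒ ¬My ). Finding a countermodel would refute validity outright; finding none over domains this small is only suggestive, not a proof of validity.

```python
# Brute-force countermodel search: enumerate every interpretation of
# H, A, M over two individuals, T over one trait, and P over their pairs,
# and look for a model where P1, P1.5, P2, P2.5 all hold but C fails.
from itertools import product

def subsets(xs):
    """All subsets of xs, as frozensets."""
    out = [frozenset()]
    for x in xs:
        out += [s | {x} for s in out]
    return out

def imp(a, b):  # material implication
    return (not a) or b

inds = (0, 1)  # individuals (x, y range here)
trs = (0,)     # traits (t ranges here)

found = None
for H, A, M, T, P in product(subsets(inds), subsets(inds), subsets(inds),
                             subsets(trs), subsets(tuple(product(inds, trs)))):
    # (P1) ∀x ( Hx ⇒ Mx )
    p1 = all(imp(x in H, x in M) for x in inds)
    # (P1.5) ∃t ( Tt ∧ ∀x ( Hx ⇒ ( Mx ⇔ Px,t ) ) )
    p1_5 = any(t in T and all(imp(x in H, (x in M) == ((x, t) in P))
                              for x in inds)
               for t in trs)
    # (P2) ∀x ( Ax ⇒ ¬∃t ( Tt ∧ ( ¬Px,t ⇒ ∀y ( (Hy ∧ ¬Py,t) ⇒ ¬My ) ) ) )
    p2 = all(imp(x in A,
                 not any(t in T and
                         imp((x, t) not in P,
                             all(imp(y in H and (y, t) not in P, y not in M)
                                 for y in inds))
                         for t in trs))
             for x in inds)
    # (P2.5) ∀t ( ∀x ( (Tt ∧ Hx ∧ Px,t) ⇒ Mx ) ⇒ ∀x ( (Tt ∧ Px,t) ⇒ Mx ) )
    p2_5 = all(imp(all(imp(t in T and x in H and (x, t) in P, x in M)
                       for x in inds),
                   all(imp(t in T and (x, t) in P, x in M) for x in inds))
               for t in trs)
    # (C) ∀x ( Ax ⇒ Mx )
    c = all(imp(x in A, x in M) for x in inds)
    if p1 and p1_5 and p2 and p2_5 and not c:
        found = (H, A, M, T, P)
        break

print("countermodel:", found)  # None here means: no countermodel this small
```

Note what the search tends to turn up on this reading: whenever some animal exists and T is non-empty, P2 forces a human who is M yet lacks the trait, while P1.5 forces M-humans to have it, so the premises conflict - which is itself a symptom of the quantification worries under discussion.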
Since you are just asking about validity rather than proving it, I assume that you don't remember how to prove things in FOL? One of the many great things about the internet is that everything is online now! When I taught this I used truth trees / tableaux; for an overview see (https://en.wikipedia.org/wiki/Method_of_analytic_tableaux); for what I think is the actual text I used, in full and for free online, see (http://tellerprimer.ucdavis.edu/pdf); you may want to see in particular chapter 7 of vol II (http://tellerprimer.ucdavis.edu/pdf/2ch7.pdf). Just let me know if you have any questions / need any help figuring it out...
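To give a feel for the truth-tree method linked above, here is a minimal propositional tableau sketch in Python (the encoding is my own, and it covers only the propositional connectives, not the quantifier rules you would need for the FOL versions): an argument is valid iff every branch of the tree for the premises plus the negated conclusion closes.

```python
# Formulas are tuples: ("atom", p), ("not", f), ("and", a, b),
# ("or", a, b), ("imp", a, b). A branch closes when it contains
# some formula together with its negation.

def is_literal(f):
    return f[0] == "atom" or (f[0] == "not" and f[1][0] == "atom")

def closes(branch):
    return any(("not", f) in branch for f in branch)

def all_branches_close(branch):
    for i, f in enumerate(branch):
        if is_literal(f):
            continue
        rest = branch[:i] + branch[i + 1:]
        op = f[0]
        if op == "and":   # stacking rule
            return all_branches_close(rest + [f[1], f[2]])
        if op == "or":    # branching rule
            return (all_branches_close(rest + [f[1]]) and
                    all_branches_close(rest + [f[2]]))
        if op == "imp":   # a ⇒ b branches to ¬a | b
            return (all_branches_close(rest + [("not", f[1])]) and
                    all_branches_close(rest + [f[2]]))
        if op == "not":   # push negations inward
            g = f[1]
            if g[0] == "not":
                return all_branches_close(rest + [g[1]])
            if g[0] == "and":
                return (all_branches_close(rest + [("not", g[1])]) and
                        all_branches_close(rest + [("not", g[2])]))
            if g[0] == "or":
                return all_branches_close(rest + [("not", g[1]),
                                                  ("not", g[2])])
            if g[0] == "imp":
                return all_branches_close(rest + [g[1], ("not", g[2])])
    return closes(branch)  # only literals left: check for closure

def valid(premises, conclusion):
    return all_branches_close(list(premises) + [("not", conclusion)])

# Modus ponens closes every branch; affirming the consequent does not:
p, q = ("atom", "p"), ("atom", "q")
print(valid([("imp", p, q), p], q))  # True
print(valid([("imp", p, q), q], p))  # False
```

The same tree-building idea, extended with instantiation rules for ∀ and ∃ (as in Teller's chapter 7), is what you would use to check the quantified versions above by hand.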
I suspect that there may be some further issues with using the concept of exploitation - you might see a lot of bashing of that in the version of the wiki that you moved off of the main page.
I think that Brim would have a lot to say about that.
I'd recommend that for the first part of the argument you just stick to showing that non-human animals have some moral value, or, if you want to be precise, that they have as much moral value as intellectually and socially comparable humans (e.g. orphaned profoundly intellectually disabled humans about whom no one cares), or that they have enough moral value that it isn't OK for us to inflict enormous harm on them for relatively trivial benefits, or something like that.
In any event there's a need for a second part of the argument that establishes that this sort of moral concern for non-human animals requires veganism. You may recall all of that or see it in the original wiki that you moved off of the main page. What you're doing here in the correction so far is re-doing something like part 1 of NTT; part 2 is going to require some empirical premises and / or some more fine-grained comparisons to e.g. intellectually and socially comparable humans.
I actually think that in your correction you are doing what Isaac was doing - namely going by the vegan society definition of veganism, which talks about exploitation. In that sense it's an analytic truth that if we should minimize harm / "exploitation" to non-human animals as far as practical & possible, then there's reason to be vegan, since minimizing harm / "exploitation" in this way is just what they mean by vegan. The only thing is that most of us mean eating a particular diet (no animal products / vegetarian + no dairy + no eggs), and it isn't always clear what minimizing as far as practical and possible means, or whether that would cover the diet in question. E.g. what if a carnist says "I just really like meat, so it just isn't possible, just isn't practical, for me to give that up; so I guess I'm a meat eating vegan!" That, presumably, is something you want to avoid, and to do it you need to connect the moral value issue to the diet / product purchasing issue in a clear way that doesn't leave "practical and possible" in the eye of the beholder. That is likely going to involve empirical stuff, or at least leveraging certain comparisons to comparable humans: e.g. if the only way to get a given soft-drink that you really like (or even: to avoid great inconvenience; or even: to keep your current job; or even: to avoid costs to yourself that are even bigger than that, etc.) was to pay someone to torture & kill orphaned profoundly intellectually disabled humans, this presumably would not be OK. Of course, if you want to leverage those comparisons, then you're ruling out speciesism and going a bit more "full Monty" than if you just use something like a requirement to not inflict enormous harm on non-human animals for relatively trivial benefits and use empirical considerations to establish that this requires vegan diets in almost all circumstances - which is certainly doable and is compatible with some degree of speciesism and such.