Why I'm an omnivore.

ShadowStarshine
Newbie
Posts: 17
Joined: Sat Sep 15, 2018 1:25 pm
Diet: Meat-Eater

Why I'm an omnivore.

Post by ShadowStarshine »

I just want to say a quick hello to everyone here. I found this forum while looking for the counter-argument to Name The Trait. I read through the entire wiki, other than the First Order Logic parts (I'm not well versed in that syntax), and it addressed many of the concerns I had and was very well written.

I've been investigating vegan ethics for the last 2-3 months. I think it's an important question to ask ourselves, not just because we eat animals, but because "What should have moral consideration, and why?" is an interesting and important philosophical question in its own right.

I'm personally either a Moral Nihilist (non-cog) or Subjectivist, depending on definition, but definitely an anti-realist. I think morals equate to preferences, but I take a game theory approach to normative ethics, similar to rule utilitarianism. To me, the purpose of a moral discussion is to talk about what happens and what we should do when values conflict. I want A, you want not-A.

When it comes to the vegan question, what it comes down to, for me, is the philosophy and science of the mind. What does it mean to have the capacity to value something? We could say that, behaviorally, animals clearly value things. In this sense, however, I would say a Chess AI values winning a game of chess, as it takes actions toward that goal and avoids behaviors that would not reach it. This tends not to be what we talk about when it comes to values. It's not merely the act of moving toward a goal, nor having a system that informs those decisions. As a free-will eliminativist, I wouldn't say that, in any meaningful sense, we "make" those types of decisions.

What I do think matters is the capacity to be aware of the things we desire. For us to be aware of the things we desire, we must be aware of 1) our desires, 2) our selves, and 3) how they relate. It is a conflict over these understandings that starts a true moral discussion. For these things to be true, a creature will typically require a form of higher-order consciousness and metacognition leading to self-awareness. Hypothetically, if the world were populated with P-Zombies, I couldn't have a value conflict of any type, and the question would only be how to maximize my own pleasure in this "game".
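
To make the mechanistic sense of "valuing" concrete, here's a minimal sketch (a hypothetical illustration of my own, not any real chess engine): an agent that behaves exactly as if it values reaching a goal, while nothing in it is aware of anything.

```python
# A toy "valuer" in the purely behavioral sense: it picks whichever
# action scores best against a fixed evaluation function.
# (Hypothetical illustration; the goal and the numbers are invented.)

def evaluate(state):
    # Fixed heuristic: the "goal" is just a number to maximize.
    return -abs(100 - state)

def choose_action(state, actions):
    # Greedy goal-seeking: behaves as if it "wants" to reach 100.
    return max(actions, key=lambda a: evaluate(state + a))

state = 37
for _ in range(9):
    state += choose_action(state, [-10, -1, 1, 10])
print(state)  # reaches 100, with no awareness of desiring anything
```

Behaviorally this thing "values" reaching 100 in exactly the sense a Chess AI "values" winning, which is why I don't think the behavioral usage captures what we usually mean.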

I don't think it is the case that humans are the only animals capable of self-awareness, but it appears to be a difficult trait to obtain. It would appear that we are the most cognitively complex animal on the planet, and even in us this feature doesn't develop for up to 24 months. Maybe you go Wittgenstein and say that you need language to be the content of your thoughts, or perhaps go scientific and say it's due to an undeveloped neocortex; I don't know. It does seem to be the case, though, that we don't have a sense of "what it is to be" during these periods.

The mirror test seems to show self-recognition. One can argue that self-recognition is not the same as self-awareness, and I would certainly agree that, in principle, this is correct, and that failing a mirror test has the reverse problem (false negatives). Still, I think it's among the better tests for drawing inferences, as it correlates with what we know about ourselves (though I've seen developments toward metacognition tests in general, which are a piece of the puzzle).

Anyway, at this point I don't eat anything that passes the mirror test, nor anything that shows comparable cognitive complexity to animals that do. (It seems that animals passing the mirror test, sans ants (I can go more into ants if anyone wants), are also the most cognitively complex animals in areas that have nothing to do with the mirror test.) I'm undecided on pigs, but I'm fairly confident on chickens.

In case anyone is curious, and to head a few questions off at the pass: I do believe in extrinsic value. A house doesn't have value in and of itself, for itself, but burning it down would affect those who do value it. Were people to put enough extrinsic value in something, I wouldn't want to damage or destroy that thing, regardless of its lack of intrinsic value.

Feel free to pick apart my beliefs, I always remain civil.
Jebus
Master of the Forum
Posts: 2379
Joined: Fri Oct 03, 2014 2:08 pm
Diet: Vegan

Re: Why I'm an omnivore.

Post by Jebus »

Welcome!
ShadowStarshine wrote: Sat Sep 15, 2018 10:59 pm When it comes to the vegan question, what it comes down to, for me, is the philosophy and science of the mind. What does it mean to have the capacity to value something? We could say that, behaviorally, animals clearly value things. In this sense, however, I would say a Chess AI values winning a game of chess
Sentient animals value happiness and the avoidance of pain. Would you agree with that?

What else besides the above do you think is relevant, and why?
How to become vegan in 4.5 hours:
1. Watch Forks over Knives (Health)
2. Watch Cowspiracy (Environment)
3. Watch Earthlings (Ethics)
Congratulations, unless you are a complete idiot you are now a vegan.
ShadowStarshine
Newbie
Posts: 17
Joined: Sat Sep 15, 2018 1:25 pm
Diet: Meat-Eater

Re: Why I'm an omnivore.

Post by ShadowStarshine »

Hey there!

I think the whole paragraph was pretty relevant, as I explore what we mean when we say "value" and why. I'm making a distinction between different possible usages of the word. One focuses on the behaviorally mechanistic sense, like a Chess AI. The other involves awareness. Maybe you would say both are important, or perhaps you'd like to state that all animals have the latter.

I don't think "sentience", and consequently "consciousness", are useful words when trying to bridge understanding of a concept. There are too many interpretations of them. You'd have to clarify (and perhaps clarify again after I investigate your answer) the entirety of what you think sentience is.
Jebus
Master of the Forum
Posts: 2379
Joined: Fri Oct 03, 2014 2:08 pm
Diet: Vegan

Re: Why I'm an omnivore.

Post by Jebus »

Before moving forward, would you agree with the following statement?

If given two choices, the most moral choice is the one that causes the least amount of suffering (or the highest amount of pleasure).
ShadowStarshine
Newbie
Posts: 17
Joined: Sat Sep 15, 2018 1:25 pm
Diet: Meat-Eater

Re: Why I'm an omnivore.

Post by ShadowStarshine »

If moral choices are discussions about people's values, I would say the most moral choice is the one that allows for the highest value expression. Now, if you want to say your definition of suffering is a loss of value, and pleasure a gain of value, as subjectively defined by the things capable of having them, then we are talking about the same thing. If your definition of suffering is linked specifically to nerve endings we describe as pain, and pleasure to a specific chemical brain state, then I'd say we aren't talking about the same thing and my answer would be no.
brimstoneSalad
neither stone nor salad
Posts: 10273
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Why I'm an omnivore.

Post by brimstoneSalad »

Welcome!
ShadowStarshine wrote: Sun Sep 16, 2018 3:46 am If your definition of suffering is linked specifically to nerve endings we describe as pain, and pleasure to a specific chemical brain state, then I'd say we aren't talking about the same thing and my answer would be no.
You're right to reject the hedonistic interpretation, as it is quite arbitrary. What we're interested in is interests (values, if you want to call them that).

However, it is a good approximation to assume sentient beings don't want to experience pain and suffering. Where it fails is where we accept some pain and suffering for other goals (like suffering and dying to save a loved one, or country, etc.). But we can still say it's better to not cause suffering than to cause it, all other things being equal. You could even admit that in a probabilistic way without knowing if the victim really is suffering (as long as there's no reasonable belief that the victim enjoys it, but rather it's a question of presence of sentience/awareness).
It's better not to cause maybe-suffering than to cause maybe-suffering, all other things being equal.
ShadowStarshine wrote: Sun Sep 16, 2018 3:03 am I'm making a distinction between different possible usages of the word. One focuses on the behaviorally mechanistic sense, like a Chess AI. The other involves awareness. Maybe you would say both are important, or perhaps you'd like to state that all animals have the latter.
If we're talking about an adaptive neural network, there's a good argument to be made that a general purpose Synthetic Intelligence IS aware (not something specifically programmed for chess and only chess).
True learning (as expressed through operant conditioning) is the smoking gun of sentience and consciousness, since it requires some basic level of awareness of self and environment in the context of interests to motivate that learning and resultant behavior.

A mere chess program is more of an unmodified fixed action pattern, not real awareness/sentience.
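
To make the contrast concrete, here's a toy sketch (purely illustrative; the stimuli, actions, and numbers are all invented, and this isn't a model of any real organism or program):

```python
import random

# Toy contrast: a fixed action pattern vs. an operant learner whose
# behavior is reshaped by the consequences of its own actions.

def fixed_action_pattern(stimulus):
    # Hard-wired stimulus-response: the same output forever,
    # no matter what consequences follow.
    return {"light": "approach", "shadow": "freeze"}[stimulus]

class OperantLearner:
    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}  # initial tendencies

    def act(self):
        # Sample an action in proportion to its current weight.
        acts = list(self.weights)
        return random.choices(acts, [self.weights[a] for a in acts])[0]

    def reinforce(self, action, reward):
        # Consequences modify future behavior: the operational
        # signature of operant conditioning.
        self.weights[action] = max(0.1, self.weights[action] + reward)

learner = OperantLearner(["press_lever", "ignore_lever"])
for _ in range(200):
    a = learner.act()
    learner.reinforce(a, 1.0 if a == "press_lever" else -0.2)
print(learner.weights)  # "press_lever" comes to dominate
```

The fixed action pattern never changes no matter what the world does back; the learner's behavior shifting with outcomes is the kind of evidence I mean.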

Of course there's a gradation to it (current generation SI is mostly around the level of insects, not chordates), but it's important to consider that while people may be naturally less empathetic to computer programs, that may just be a failure on our part and NOT proof against sentience as established by behavioral tests.
ShadowStarshine wrote: Sun Sep 16, 2018 3:03 am I don't think "sentience", and consequently "consciousness", are useful words when trying to bridge understanding of a concept. There are too many interpretations of them. You'd have to clarify (and perhaps clarify again after I investigate your answer) the entirety of what you think sentience is.
It's probably more useful to talk about the tests of sentience (like operant conditioning) to explore an objective and useful definition.
ShadowStarshine
Newbie
Posts: 17
Joined: Sat Sep 15, 2018 1:25 pm
Diet: Meat-Eater

Re: Why I'm an omnivore.

Post by ShadowStarshine »

You're right to reject the hedonistic interpretation, as it is quite arbitrary. What we're interested in is interests (values, if you want to call them that).
We can call them wants, preferences, interests, or values. I'm okay with the general gist of all of them. I suppose "value" could have a somewhat universal connotation that I don't mean.
It's probably more useful to talk about the tests of sentience (like operant conditioning) to explore an objective and useful definition.
I agree, it's much better to break it into observable and understandable chunks rather than equivocating something into existence under an umbrella term. But that brings me to:
However, it is a good approximation to assume sentient beings don't want to experience pain and suffering.
Now, are we saying that things with the capacity for operant conditioning have the capacity to *experience* pain? Also, if suffering is defined by preference, then it is a tautology that that which suffers doesn't want to suffer; it's definitionally true. If you're okay with it, I'd like to avoid using "sentience" in any way we haven't agreed upon definitionally.
But we can still say it's better to not cause suffering than to cause it, all other things being equal. You could even admit that in a probabilistic way without knowing if the victim really is suffering (as long as there's no reasonable belief that the victim enjoys it, but rather it's a question of presence of sentience/awareness).
It's better not to cause maybe-suffering than to cause maybe-suffering, all other things being equal.
So I agree, in a sense. I take a purely descriptive approach to preference morality; I think any other approach to what is "better" is non-cog, so if that's an angle you want to contest, just let me know. The part I agree on is that maximizing value expression within a closed system cannot be done by merely making moves that maximize each individual value of a single person. Conflicts are not best resolved by ignoring these issues or trying to bypass them, but by calculating for them.
If we're talking about an adaptive neural network, there's a good argument to be made that a general purpose Synthetic Intelligence IS aware (not something specifically programmed for chess and only chess).
I'm open to such an argument if you'd like to make it.
True learning (as expressed through operant conditioning) is the smoking gun of sentience and consciousness, since it requires some basic level of awareness of self and environment in the context of interests to motivate that learning and resultant behavior.
I don't think I agree with this. So much of what we do as humans, especially in terms of operant conditioning, is done "behind the scenes", as it were. I think the definition even presupposes that our observation *means* like or dislike in a rich sense, but really it's relative to whatever stored value system is present. You simply need to check the input against a standard, and that standard doesn't have to mean you like or dislike the thing. I wouldn't suggest that a more complex operant conditioning system is different from a less complex one in terms of awareness.

Thanks for the reply, hit me back! This forum looks like it has some good thinkers on it.
brimstoneSalad
neither stone nor salad
Posts: 10273
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Why I'm an omnivore.

Post by brimstoneSalad »

ShadowStarshine wrote: Mon Sep 17, 2018 12:20 am if suffering is defined by preference, then it is a tautology that that which suffers doesn't want to suffer; it's definitionally true.
Not necessarily, since there can be non-experiential preferences with regard to idealized interests. That is, suffering is only part of preference.

For example, which would you hypothetically prefer:

Scenario A: You're killed painlessly, and your family is tortured for the rest of their lives (something you never know about and so you never technically suffer even the thought of)

Scenario B: You're killed otherwise painlessly after having a slightly painful paper cut inflicted upon you. Your family lives out their lives happily (again, you don't find out about this on account of being dead).

The only suffering YOU experience there is in scenario B, and yet most people would prefer Scenario B happening over the technically suffering-less (for themselves) scenario A.

Suffering relates only to experience, and it's almost always true that we'd rather not suffer than suffer, but interests extend beyond mere experience, so it's not always true that less suffering is better; there are exceptions in extreme circumstances that affect preferences. Thus it's not a perfect tautology.
ShadowStarshine wrote: Mon Sep 17, 2018 12:20 am If you're okay with it, I'd like to avoid using "sentience" in any way we haven't agreed upon definitionally.
Here we use it to refer to having sense experience which is comprehended in a meaningful way, not just responded to mindlessly by way of reflex.
A machine with a photo sensor that gives a readout of the color spectrum is not sentient; it senses but doesn't understand the significance of that information. In order to understand sense experience in a meaningful way, it must be interpreted relative to values/interests. The only proof of that is operant conditioning (otherwise we wouldn't know if we're dealing with a fixed action pattern). That is not to say that there can't be secretly sentient things (rocks?) but it would just be unfalsifiable for us, so there's no reason to make assumptions about what the interests of rocks might be.
ShadowStarshine wrote: Mon Sep 17, 2018 12:20 am I take a purely descriptive approach to preference morality; I think any other approach to what is "better" is non-cog, so if that's an angle you want to contest, just let me know.
I'm not sure what you mean by that.
ShadowStarshine wrote: Mon Sep 17, 2018 12:20 am The part I agree on is that maximizing value expression within a closed system cannot be done by merely making moves that maximize each individual value of a single person. Conflicts are not best resolved by ignoring these issues or trying to bypass them, but by calculating for them.
Sure, but the only way to do that is objectively.

When you take into account the values of everybody, you can't just arbitrarily take into account the values of white men, or just whites, or just humans. You have to take into account the values of every being that has values, to the extent the being has them.

We're looking for game theory applications that give us the best win-win scenarios possible.

With animal agriculture today, it's a pretty clearly lose-lose proposition: taking into account effects from climate change and antibiotic resistance along with agricultural inefficiency for humans, and the suffering of the animals used on the other end.
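
As a toy illustration of the win-win framing (the options and payoffs below are invented purely for illustration, not real estimates):

```python
# Invented payoff table: (payoff_to_humans, payoff_to_animals).
payoffs = {
    "status_quo_animal_ag": (-1, -5),  # the lose-lose case argued above
    "plant_based_shift":    (+2, +4),  # a win-win candidate
}

def best_win_win(options):
    # Maximin rule: prefer the option that leaves the worst-off
    # party as well off as possible.
    return max(options, key=lambda o: min(options[o]))

print(best_win_win(payoffs))  # -> "plant_based_shift"
```

The point is only that the calculation counts every party's payoff, not just one group's.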
ShadowStarshine wrote: Mon Sep 17, 2018 12:20 am I'm open to such an argument if you'd like to make it.
See above for my comments on sentience and falsifiability.
Likewise, there's no reason to believe there's a difference between a system running on silicon chips and living wetware in terms of how it functions and what it does (we might as well postulate an unfalsifiable soul at that point).
ShadowStarshine wrote: Mon Sep 17, 2018 12:20 am So much of what we do as humans, especially in terms of operant conditioning, is done "behind the scenes", as it were.
I'm not sure what you mean by that; there's not really a good way to differentiate conscious and unconscious processes, particularly since humans are so metacognitively mistaken about themselves.
ShadowStarshine wrote: Mon Sep 17, 2018 12:20 am I wouldn't suggest that a more complex operant conditioning system is different from a less complex one in terms of awareness.
There are more factors taken into consideration by more complex systems. E.g. an insect may only be aware of its position in the environment relative to visible food, and of its state of being hungry or not. A human will be aware of that, plus potentially dozens of other considerations that stack on top of it, arguably giving more value.
ShadowStarshine
Newbie
Posts: 17
Joined: Sat Sep 15, 2018 1:25 pm
Diet: Meat-Eater

Re: Why I'm an omnivore.

Post by ShadowStarshine »

Not necessarily, since there can be non-experiential preferences with regard to idealized interests. That is, suffering is only part of preference.
Hold up, I don't think it's fair to talk about non-experiential preferences if we look at your original statement:
However, it is a good approximation to assume sentient beings don't want to experience pain and suffering.
You were stating that part of what it is to be sentient, is to experience. If that is not necessarily on the table, then we are having a different discussion. This is why it's so important to define sentience. Once I know the totality of what you believe it is, then I can express what I think has it, and why I think that is the case.
For example, which would you hypothetically prefer:
My problem with this hypothetical is that I'm supposed to answer it from a place of knowledge (me, now, understanding the question) but also from the perspective of not knowing. If I truly don't know the details, then I have to pretend they are not there, in which case, it's a paper cut vs not a paper cut.

The only way I can say I prefer B over A is if I've managed to conceptualize it, which is what I'm doing in the process of answering this question. And thus, it is part of my experience. My experience is the understanding of the question itself. Were I not able to do such a thing, I could not say that it could affect me.

There could be a definitional divide between us when we say "experience", where perhaps you are limiting it to sensory experience, but I would look at experience in a much broader way. It is not my sensory data that gives me experience; it is my understanding of it, and my understanding of that question, and my understanding of what I want, that is an experience.
A machine with a photo sensor that gives a readout of the color spectrum is not sentient; it senses but doesn't understand the significance of that information. In order to understand sense experience in a meaningful way, it must be interpreted relative to values/interests. The only proof of that is operant conditioning (otherwise we wouldn't know if we're dealing with a fixed action pattern).
I disagree on two fronts, though possibly just one, because one of them I'm inferring.

I do not think that operant conditioning is an understanding of anything, unless we use "understanding" in an ambiguous way to mean "according to data". If something were to take in sensory information, cross it with a conception of an action (that which led to the sensory data), and output a new rule for future use ("if stimulus X, take alternative action Y, avoid sensory data Z"), I would not call that understanding in the way we mean when we experience something. These tend not to be the processes that encompass our awareness. Yes, they are more complex than a fixed action pattern, but I do not think they are, by necessity, the building blocks of our awareness of the world (though they could have led to the conditions that were).

The second disagreement is with the claim that it is the only way to prove experience in a meaningful way. In any absolute sense, we can't prove it at all. But if we are going to accept inferences at all, then someone telling you about their experiences would have to be the strongest inference imaginable. Beyond that, I think it's fair to use what we know about ourselves as a reference point.
I'm not sure what you mean by that. (In reference to the preference non-cog statement)
I'm saying that if we are being purely descriptive rather than prescriptive about what it means to be "better to not suffer than to suffer", then either you are tautologically describing suffering as something one doesn't prefer, or "better" means something moral outside of preferences, and I don't think there is a cognitive definition outside of that.
When you take into account the values of everybody, you can't just arbitrarily take into account the values of white men, or just whites, or just humans. You have to take into account the values of every being that has values, to the extent the being has them.

We're looking for game theory applications that give us the best win-win scenarios possible.
100% agree with you.
With animal agriculture today, it's a pretty clearly lose-lose proposition: taking into account effects from climate change and antibiotic resistance along with agricultural inefficiency for humans, and the suffering of the animals used on the other end.
I think this is correct in one sense, but incorrect in other senses. Let's take animal suffering off the table for the sake of this point, because we clearly contest what suffering is and what is capable of it, so I can address the other points directly.

To climate change: I agree that a sustainable environment is important. I also agree that animal agriculture is less efficient environmentally. I don't agree with a lot of the rhetoric about the amount of carbon, but nitrous oxide output is high (though arguably that's due to a particular form of animal agriculture, not animal agriculture in principle). The bigger issue is the loss of treed land, which is a great carbon sink; losing it increases the amount of greenhouse gases.

But, and this is the main point, there is an amount of greenhouse gases we can sustainably output. We are exceeding it, but the ways of reducing are plentiful: if we removed a lot from transportation, if we changed how we generate energy, or if we simply enacted one-child policies to reduce population (every single one of these has the issue of getting other nations to play along). Regardless of what we do on this earth, even if we became more and more efficient, we would eventually have to have a conversation about population control. To say that we can't fix it would be naive, in my opinion.

We can have animal agriculture, so long as it doesn't exceed the limits of what this planet can sustain.

To antibiotics: I'm ignorant here, you'd have to educate me on this one.

To agricultural inefficiency: If this is in reference to "feeding other people in the world", the issue isn't that we don't produce enough. The developed world already produces an excess; we simply don't give the excess to underdeveloped nations. Creating more excess doesn't solve the problem. If we get to the point where we actually start caring about other nations (and I do), and we find that we can't produce enough food, and we are willing to become more efficient to do so, sign me up.
There are more factors taken into consideration by more complex systems. E.g. an insect may only be aware of its position in the environment relative to visible food, and of its state of being hungry or not. A human will be aware of that, plus potentially dozens of other considerations that stack on top of it, arguably giving more value.
This is a topic that actually needs to be addressed with vegans: the idea that less content of awareness is the same thing as less of a process of awareness, which leads to statements like "insects are less conscious."

It would be like saying Helen Keller is "less conscious". I would argue no: her process of consciousness is equal, but she does have a reduction in content. That doesn't mean she cares less about the content she does have, and thus that her process of suffering is any different. To say that how much we conceptualize and value the content equals the amount of content just doesn't logically follow.

Now, of course, I would argue the insect simply doesn't conceptualize any of it in a meaningful way, but if you think it does, I would think it unfair to state that its lack of content means a lack of process, or of value extraction from it.
brimstoneSalad
neither stone nor salad
Posts: 10273
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Why I'm an omnivore.

Post by brimstoneSalad »

ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm You were stating that part of what it is to be sentient, is to experience.
Sure, but non-sentient beings can have interests too even though they don't experience.

Imagine you lost your sense of touch, of sight, hearing, taste, smell, etc.
You were locked in, totally senseless, yet still present in mind. Do you not want anything anymore just because you can't experience anything in the world?

In an abstract sense, you probably want (hope) that your family is well, you may worry that they're spending too much money keeping you alive, etc. even though you can't experience ANY of that. You still have interests despite no sense experience, unless you want to call those thoughts sense experience (they aren't, they're experience in a way but not from senses).

Sentience is a very important part of things. I don't think it's possible for a being to have wants without having been sentient at some point (before being cut off), but wants aren't always limited by the experiences you will have.

My point is that sentience (which involves having wants to process sense experience in the context of) is proven by demonstrable wants that relate to responding to sense experience. My point was not that sense experience was ALL that sentient beings can want, or that beings without sense experience can't have wants.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm My problem with this hypothetical is that I'm supposed to answer it from a place of knowledge (me, now, understanding the question) but also from the perspective of not knowing.
It's called an idealized interest.

In the same way I can choose not to let you eat a cookie you want to eat because I know it to be poisoned (and you wouldn't want to eat it IF you knew that), I can choose for you to receive the paper cut to save your family, because that's the choice you would make IF you knew that.

Your idealized interest would be receiving the paper cut and saving your family, even if you'd never know it.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm If I truly don't know the details, then I have to pretend they are not there, in which case, it's a paper cut vs not a paper cut.
You, in full knowledge, are deciding for a hypothetical ignorant version of yourself.
Otherwise it would be a bit like saying a person wants to eat a poisoned cookie because the person doesn't know it's poisoned and calling that a legitimate desire.

...Unless you're saying the only reason we care about our families is because we'll experience displeasure if we KNOW something harmful has befallen them, and if we were honest and accurately reflective we'd say to others that it's fine by us for them to harm our families as long as we don't find out about it.

Some people DO feel that way, not caring if others suffer but just not wanting to see it or be told about it. Some other people SEEK OUT this information despite the emotional pain it causes because they want to know and help prevent it regardless of the personal inconvenience.

There will always be ever more elaborate unfalsifiable hypothetical explanations that save strict psychological egoism (it seems like this is what you're hinting at: https://en.wikipedia.org/wiki/Psychological_egoism), but it's not a very good assumption to make when people say they do have non-experiential interests and legitimately care about others beyond the goods it offers them.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm There could be a definitional divide between us when we say experience, where perhaps you are limiting it to sensory experience, but I would look at experience in a much broader way.
There's no definitional divide.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm If something were to take in sensory information, cross it with a conception of an action (that which led to the sensory data), and output a new rule for future use ("if stimulus X, take alternative action Y, avoid sensory data Z"), I would not call that understanding in the way we mean when we experience something.
I don't know what you're saying here.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm The second disagreement is with the claim that it is the only way to prove experience in a meaningful way. In any absolute sense, we can't prove it at all.
We know to a moral certainty. Absolute knowledge is not necessary. In absolute terms we don't know that we're not alone in a simulation (brain in a vat) surrounded by mindless zombies that only act as we expect them to with a little noise thrown in.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm But if we are going to accept inferences at all, then someone telling you about their experiences would have to be the strongest inference imaginable.
How is that? There are plenty of chat programs that can do that pretty convincingly without any real intelligence or understanding behind them.
Plenty of toys say "I love you" or "it makes me happy when you tickle me" etc. Is that telling us about their experiences?

Actions speak much more reliably than words.

Without any contradictory evidence we should probably believe people when they say they have non-experience based interests and care about others, but that is at least to err on the side of caution. The fact that beings demonstrate behaviorally their interests is much stronger evidence, so I'm not sure how you can deny that.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm I'm saying that if we are being purely descriptive rather than prescriptive about what it means to be "better to not suffer than to suffer", then either you are tautologically describing suffering as something one doesn't prefer, or "better" means something moral outside of preferences, and I don't think there is a cognitive definition outside of that.
I think you may be a bit confused here on the meaning of non-cognitivism.

There are MANY cognitive definitions of morality outside of personal preference. The more important question is whether any of them are valid definitions or refer to anything that exists or can exist in reality.

I mean, you could define moral value as relating to mass in kilograms, and that has nothing at all to do with preferences and is perfectly scientifically objective... but it may not be semantically valid (I have a feeling a usage panel would consider that incorrect word usage).

Now if you don't think there are any semantically valid and logically coherent/objective definitions of morality that are cognitive, then that's another matter. I think you're mistaken on that point, but it's something that could be discussed.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm But, and this is the main point, there is an amount of greenhouse gases we can sustainably output.
There is, but it's FAR below what we currently output, and something has to give. Animal agriculture is the place it makes the most sense to cut, because it's completely unnecessary for human health (arguably even deleterious relative to the alternatives). It does not make sense to sacrifice something like housing or keeping people from freezing in the winter.

We can't even budget giving everybody what they NEED, so wants like meat burgers (when something like the impossible burger should fulfill that just as well for most people) are far outside the scope of what we should be trying to cling to.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm If we removed a lot from transportation
We don't have a choice but to ALSO remove a lot from transportation, but meaningful reductions mean MASSIVE infrastructure changes and most of that isn't realistic in the near-term in the way cutting out animal agriculture is.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm if we changed how we generate energy
We have to do that too, but again, not enough on its own and also not fast enough.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm or if we simply enacted one-child policies to reduce population
Unless you're planning to murder most of the population today, draconian (and socially evil) policies like that take a very long time to work. We don't have the luxury of waiting multiple generations to reduce the population to make our current waste and pollution sustainable.

Also, human life is a fundamentally good thing. We want more happy and fulfilled lives, not fewer.
It's better that more people live perfectly happy lives enjoying veggie burgers than fewer people live lives not any better just so they can stay stuck in their ways and eat dead cow burgers instead for no good reason.

Forget that it's not even possible to reduce the population on the time scale we would have to do it: doing so in order that fewer people can just continue being more unnecessarily destructive and wasteful is evil. It serves no benefit at all.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm even if we became more and more efficient, we would eventually have to have a conversation about population control.
No, population in developed countries is already stabilizing; it's done so across multiple cultures BY CHOICE. You don't have to force it on people, they'll mostly have 1-3 kids on their own and stabilize once they're out of poverty and have access to the tools to do it.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm We can have animal agriculture, so long as it doesn't exceed the limits of what this planet can sustain.
If we make inordinate sacrifices in other areas (sacrifices that harm quality of life) in order to support animal agriculture (which does nothing to improve quality of life) we could hypothetically do it. It would make us self destructive idiots, though. It's bad policy, and it's a bad recommendation just to hold onto something that's completely unnecessary.

Meat replacements are getting better all the time (consider the "Impossible Burger"). People aren't going to miss out on anything by us as a culture giving up animal agriculture.

The only people who will be upset about it are the psychotic foodies who think they can taste the difference in water shipped around the Earth from some exotic mountain and don't care how much it costs others to do that.

ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm To antibiotics: I'm ignorant here, you'd have to educate me on this one.
There are many articles on the topic. (Not sure if I've read this one or not):
https://www.scientificamerican.com/arti ... our-table/

Antibiotics make factory farms possible, where disease is otherwise rampant and losses aren't sustainable.
Yields are much lower (and land use much larger) without antibiotics. But if we keep using them, we'll have none left that are effective on humans when we get sick.

There's no practical way to contain or isolate bacteria from these farms. The only option is to stop using antibiotics and start wasting even more land and resources on farmed animals.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm To agricultural inefficiency: If this is in reference to "feeding other people in the world", the issue isn't that we don't produce enough.
No, it's an issue of "stop using so damn much land and let the forests grow back". ;)

Part of the waste and environmental destruction of animal agriculture is opportunity cost with respect to the forest that could otherwise be there soaking up carbon.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm The idea that less content of awareness is the same thing as less of a process of awareness, which leads to statements like "insects are less conscious."
I think I said arguably more valuable, not more conscious of any one thing; although the human is conscious of more things.
That's all you really need to show that there are some degrees to consciousness (whether there is one axis or many) unless you subscribe to a very implausible multiple consciousnesses kind of theory.

A human doesn't necessarily need to be more conscious of where the sandwich is than an insect is of where the dung is relative to itself (a plausible account puts them equal at best for the insect). But while they may share that rudimentary understanding of self in relation to food, a human is also conscious of more things beyond that, so much so that a human may ultimately have a lot more to care about (this is the norm but not universally true, of course some people are apathetic or catatonic).

If you want to talk relative value then we could also talk about the processing power devoted to each routine if you want, or to the whole, and if you just want to count quantity rather than quality then parallel interests act in the mind very much as distinct subroutines of consciousness (although I don't think that's very useful); we could even make a reasonable analogy to different competing agents bidding behind the scenes like so many squabbling homunculi like in old cartoons.

You can compare insects to humans along MANY dimensions, but the fact is that, in behavioral terms, not much practical effect is manifest in an insect's learning and adaptation. Part of that is due to being conscious of fewer things, but limits in intelligence are likely more pressing (and intelligence is probably limited precisely because the insect only needs to be conscious of a few things and only needs to learn with acceptable speed to be reasonably successful; any excess would be a metabolic waste).

Bottom line, it doesn't matter that much what's supposedly going on in the black box when it comes to actual evaluation. Barely caring about something but being very intelligent can have the same manifestation as caring with all of your being but being only barely intelligent, and that's the only thing we can really act on. That's what I ultimately consider. I don't give an insect bonus points for being dumb, and I don't think it makes sense for anybody to do so in an objective head-to-head comparison.
ShadowStarshine wrote: Mon Sep 17, 2018 6:45 pm To say that how much we conceptualize and value the content equals the amount of content just doesn't logically follow.
That's not what I'm saying; it's just one way to look at aspects of consciousness. Intelligence is a better proxy for value than interest counting (particularly since interests aren't necessarily discrete), but interest counting in a crude sense can also be a good approximation and can be easier to explain, so it's not entirely wrong.

An insect (some insects) comprehends sense data in a meaningful way and that gives it some value, but it's less meaningful in many ways than a human because it's less robust; fewer meaning associations, and less processing power ultimately devoted to it.