Why I'm an omnivore.

ShadowStarshine
Newbie
Posts: 16
Joined: Sat Sep 15, 2018 1:25 pm
Religion: None (Atheist)
Diet: Meat-Eater

Re: Why I'm an omnivore.

Post by ShadowStarshine » Sun Sep 23, 2018 1:32 am

Part 1:

Whew, okay. Your reply was so long I had to wait until the weekend to have time to address it.

So the first paragraph was very telling about the ambiguity of words in the philosophy of mind. This isn't a slight, or to say someone is right or wrong, but to point out the difficulty of using terminology to express certain concepts; it's an attempt to bridge the gap between us and foster understanding.
Sure, but non-sentient beings can have interests too even though they don't experience.

Imagine you lost your sense of touch, of sight, hearing, taste, smell, etc.
You were locked in, totally senseless, yet still present in mind. Do you not want anything anymore just because you can't experience anything in the world?
So what I learned from this reply is that your concept of sentience, and even the word "experience," is linked to sensory perceptions. Someone who is "present in mind," as you described, is not having an experience. Whereas, if I were to use the word "experience," that presence of mind would fall under the umbrella of that word. Your thoughts would be something you are experiencing.

Also, interesting that this is a scenario for non-sentience for you. I've debated the Ask Yourself vegans quite a bit, and when they describe something as "non-sentient", it is essentially brain dead. The idea of still having the presence of mind for them, while being non-sentient, wouldn't make sense. Again, not stating right or wrong here, I'm just noting this difference in terminology usage. I don't tend to use words like "sentience", as stated before, but our difference in just the word experience will cause some confusion.

When I say it, I'm referring to anything you are present of mind about. Whether that's abstract thoughts, or sensory experiences, or whatever. If my eyes were to take in some data, but it was not part of my HUD, I would say I am not experiencing it, even though my brain might process and use that data. I recently learned about some people who have had brain damage, and could not "see" an object, even though their eyes were taking in the data. When asked to reach out to touch it, they could. They obtain the visual information and they still can process it. I wouldn't say they *experience* it, however.
It's called an idealized interest.

In the same way I can choose not to let you eat a cookie you want to eat because I know it to be poisoned (and you wouldn't want to eat it IF you knew that), I can choose for you to receive the paper cut to save your family, because that's the choice you would make IF you knew that.

Your idealized interest would be receiving the paper cut and saving your family, even if you'd never know it.
It's not that I disagree with what you're saying, it's just that I don't think the analogy of asking *me* works, because I would have had the capacity to consider the question. If you asked me "If I was the rock, would I want to be smashed or left alone" I could answer that question, were I to have my capacity to consider the question, as though I was a magic rock. But to say: "Therefore, that is what the rock would want" doesn't follow, because the conclusion has taken the magic away from the rock. It was the "me-ness" giving it value, and that was who the question was directed at.

If you just asked me directly: Do you think this rock wants to not be smashed? I would say the rock doesn't have the capacity to care.

Now if we are talking about just me, yes I care about my family, which means you could extract the understanding in which I would want them to be safe even if I wasn't aware of something that was happening. But for me to consider how I would *feel* about it, I must have known about it. It cannot be said that I feel displeasure about what happens to my family if I don't know about it. It's a very minor point.
There's no definitional divide.
I hope I cleared up the divide I think we have. I think something "present of mind" with no sensory information is also experiencing.
I don't know what you're saying here.
I'm offering up a mechanistic explanation of a behavioral occurrence sans experience. I'm showing how operant conditioning can be used without experiential consideration, another way of saying: It's not brought to our attention that it is occurring. It's not just that I think animals don't have this ability, I don't think humans do either. Operant conditioning *could* be done through experiential means, but for the vast majority, it is not.

ShadowStarshine

Post by ShadowStarshine » Sun Sep 23, 2018 1:32 am

Part 2:
How is that? There are plenty of chat programs that can do that pretty convincingly without any real intelligence or understanding behind them.
Plenty of toys say "I love you" or "it makes me happy when you tickle me" etc. Is that telling us about their experiences?

Actions speak much more reliably than words.
Hard disagree: "I love you" is not a Turing test passer. There is no AI in existence that could actually convince me it was having an experience through words. It would be much easier to program a bot to run away from stimuli and cry than to program something that could actually hold a real conversation about itself.

We have no sense of ourselves as babies, and yet we are able to exhibit the same mechanistic behavior as when we do have a sense of self. You could take a Wittgensteinian approach and say that this is because language *is* the content of thought, though I'm not sold on that. It at least shows me that a sense of what it is to be correlates more strongly with language and communication than with behavior, which seems to go on fine without it.
Without any contradictory evidence we should probably believe people when they say they have non-experience based interests and care about others, but that is at least to err on the side of caution. The fact that beings demonstrate behaviorally their interests is much stronger evidence, so I'm not sure how you can deny that.
The reason why I can have a stronger inference about *people* is that, if materialism is true and evolution is true, I would understand that the thing which gave me my sense of being isn't any different in others, and that the way I conceptualize that understanding can be expressed through language and expressed back to me. That can't be done via behavior.
I think you may be a bit confused here on the meaning of non-cognitivism.

There are MANY cognitive definitions of morality outside of personal preference. The more important question is whether any of them are valid definitions or refer to anything that exists or can exist in reality.

I mean, you could define moral value as relating to mass in kilograms, and that has nothing at all to do with preferences and is perfectly scientifically objective... but it may not be semantically valid (I have a feeling a usage panel would consider that incorrect word usage).

Now if you don't think there are any semantically valid and logically coherent/objective definitions of morality that are cognitive, then that's another matter. I think you're mistaken on that point, but it's something that could be discussed.
Right, sure, you can give a cognitive version like mass in kilograms; I'm speaking to what people *actually* attempt to do. I think none of *those* are cognitive, because unlike mass in kilograms, they don't refer to anything/aren't talking about anything. Feel free to offer a definition, just try to make sure it's not circular (such as "what is moral is good").
There are, but it's FAR below what we currently output, and something has to give. Animal agriculture is the place it makes the most sense because it's completely unnecessary for human health (arguably even deleterious relative to the alternatives). It does not make sense to sacrifice something like housing or keeping people from freezing in the winter.

We can't even budget giving everybody what they NEED, so wants like meat burgers (when something like the impossible burger should fulfill that just as well for most people) are far outside the scope of what we should be trying to cling to.
So I don't necessarily disagree with this point, so long as you agree that there theoretically exists a world where you could ethically have meat production (in a purely environmental aspect), so long as it is below the threshold. I agree that in general, meat production is more of a luxury and easier to give up when calculating what we should do at the moment, mostly because of the forest land, but for emissions we need to look heavily at energy production and travel. And I hope you agree that a population conversation is inevitable either way.
Unless you're planning to murder most of the population today, draconian (and socially evil) policies like that take a very long time to work. We don't have the luxury of waiting multiple generations to reduce the population to make our current waste and pollution sustainable.
I agree, but to be fair, if it was literally between making the planet completely uninhabitable for humans, or murdering people, I would choose murdering people. Just a different trolley problem.
Also, human life is a fundamentally good thing. We want more happy and fulfilled lives, not fewer.
Well, that's not a value I share. I want the people who *are* around to have happy and fulfilled lives, but I wouldn't ask them to lower the quality of their lives for *more* people just because "more is better."
No, population in developed countries is already stabilizing; it's done so across multiple cultures BY CHOICE. You don't have to force it on people, they'll mostly have 1-3 kids on their own and stabilize once they're out of poverty and have access to the tools to do it.
The planet is already past its environmental limit, stabilizing isn't good enough.
If we make inordinate sacrifices in other areas (sacrifices that harm quality of life) in order to support animal agriculture (which does nothing to improve quality of life) we could hypothetically do it. It would make us self destructive idiots, though. It's bad policy, and it's a bad recommendation just to hold onto something that's completely unnecessary.
I don't understand this sentence. How is it self-destructive in a hypothetical where things are sustained?
Antibiotics make factory farms possible, where disease is otherwise rampant and losses aren't sustainable.
Yields are much lower (and land use much larger) without antibiotics. But if we keep using them, we'll have none left that are effective on humans when we get sick.
If I take this at face value, and I'm charitable so I would do so, I would agree this is an issue. I'd be happy to take whatever measures are necessary to eliminate this problem.
There's no practical way to contain or isolate bacteria from these farms. The only option is to stop using antibiotics and start wasting even more land and resources on farmed animals.
Or make less meat, or take losses and jack the prices. I can't speak on all the possibilities.
I think I said arguably more valuable, not more conscious of any one thing; although the human is conscious of more things.
That's all you really need to show that there are some degrees to consciousness (whether there is one axis or many) unless you subscribe to a very implausible multiple consciousnesses kind of theory.
Alright, I'll take it to mean that when you say it. Also, I have a philosopher friend trying to convince me of multiple consciousnesses as a possibility. It's an interesting back and forth.
Bottom line, it doesn't matter that much what's supposedly going on in the black box when it comes to actual evaluation. Barely caring about something but being very intelligent can have the same manifestation as caring with all of your being but being only barely intelligent, and that's the only thing we can really act on. That's what I ultimately consider. I don't give an insect bonus points for being dumb, and I don't think it makes sense for anybody to do so in an objective head-to-head comparison.
If I were to speak on values, I wouldn't care about how intelligent something was, merely about how much it cares about a thing, which I would state requires a conscious awareness of that thing. So sure, I agree that IF an insect were consciously aware of things, it wouldn't care about voting, or other such stuff it can't consider, and thus, when thinking of how to handle value conflicts, I wouldn't have to consider a value it didn't have. In that sense we can say your "less conscious" matters. But if you were to state that the insect can care about its life but doesn't care about a lot of other things, and therefore it's okay to squish it, I would say that is a nonsensical way to calculate value conflicts.
An insect (some insects) comprehends sense data in a meaningful way and that gives it some value, but it's less meaningful in many ways than a human because it's less robust; fewer meaning associations, and less processing power ultimately devoted to it.
I think "fewer meaning associations" is an interesting way to phrase that, I think you could start a solid case based on that.

Jebus
Master of the Forum
Posts: 1824
Joined: Fri Oct 03, 2014 2:08 pm
Religion: None (Atheist)
Diet: Vegan

Post by Jebus » Sun Sep 23, 2018 3:10 am

ShadowStarshine wrote:
Mon Sep 17, 2018 6:45 pm
if we became more and more efficient, we have to eventually have a conversation about population control.
brimstoneSalad wrote:
Mon Sep 17, 2018 10:14 pm
No, population in developed countries is already stabilizing; it's done so across multiple cultures BY CHOICE. You don't have to force it on people, they'll mostly have 1-3 kids on their own and stabilize once they're out of poverty and have access to the tools to do it.
I agree with Shadow here.

I don't think we should assume that the trend of stable population growth in the developed world will automatically continue. Poor people have more children mainly due to lack of contraceptives plus concerns about growing old without support from offspring. The richest are also having more children, probably because they know they have the choice of spending their time with their kids rather than working. I therefore assume that the reason the middle class is having fewer children is that they worry about potential time and financial strains during their career years, while they don't worry about financing their retirement. Instead of assuming that the time/money balance that causes small families will continue, why not play it safe by removing policies that encourage people to have children? In addition, such policies are unfairly paid for by those who choose not to have children.

One thing I think we can safely assume is that mortality rates will continue to decrease. Unless there is a concurrent decrease in birth rates, population growth will soon get out of control. I also believe it's wishful thinking that a planet Earth with 10 billion people can have the same average happiness level as a planet Earth with 5 billion people.

You (Brimstone) have previously mentioned that you think drastic population growth will prompt draconian measures that will benefit our planet long term. I would encourage you to rethink this while considering all the things that could go wrong under such a scenario.
How to become vegan in 4.5 hours:
1. Watch Forks over Knives (Health)
2. Watch Cowspiracy (Environment)
3. Watch Earthlings (Ethics)
Congratulations, unless you are a complete idiot you are now a vegan.

brimstoneSalad
neither stone nor salad
Posts: 8948
Joined: Wed May 28, 2014 9:20 am
Religion: None (Atheist)
Diet: Vegan

Post by brimstoneSalad » Tue Sep 25, 2018 4:50 pm

ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
So what I learned from this reply, is that your concept of sentience and even the word experience is linked to sensory perceptions.
Sentience most probably is; it relates to sense experience of the outside world, but you could also talk about "feelings" in terms of emotional qualia or something.

It's a bit less ambiguous just to talk about external sensation, which is falsifiable.
Sentience is the capacity to feel, perceive or experience subjectively.[1] Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience). In modern Western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as "qualia"). In Eastern philosophy, sentience is a metaphysical quality of all things that require respect and care. The concept is central to the philosophy of animal rights because sentience is necessary for the ability to suffer, and thus is held to confer certain rights.
https://en.wikipedia.org/wiki/Sentience

The distinction between that and consciousness is very very subtle.
The notion of "capacity" may be a mess, because we can ask in what context.
Somebody asleep arguably has that capacity (since he or she can be awoken by being jostled), thus sentient but unconscious (or in another dream state of consciousness, but not an outward one). Somebody brain dead doesn't, but what if he or she can be returned to that state with a brain transplant? There are obviously some lingering existential questions there, although they aren't that dramatic.

Does sentience take into account purely emotional feelings, divorced from reality? Not as clear, but if the presence of such things without external sense experience is unfalsifiable I don't think that ambiguity is important.

As to experience (note that I also clarified it as having to do with the world) that's a more complex definitional issue, but it's generally set apart from reason in philosophy. E.g. the distinction between a priori and a posteriori.

https://en.wikipedia.org/wiki/Experience

Wikipedia's page on experience is pretty good. The problem with trying to make "experience" extend to mental ruminations as well is that it weakens the utility of the word based on its ability to create distinction between two categories.

Compare, on semantic grounds, the wrongness of "literally" being used to mean figuratively, because as a consequence we lose the word's utility and no longer have a word that actually means what "literally" meant.

This is a matter of prescriptive semantics on the basis of the function of language. We should fight for words to mean things on the basis of retaining the ability to communicate those ideas, and when they become too vague or all encompassing (or come to mean their own negations) we've lost something and our ability to communicate is lesser for it.

So, yes, in the same way that somebody saying "I literally shit bricks" when in fact there was no actual brick shitting is wrong to use that word in that way (or at least would be if the word hadn't been so abused for so long that it's already become almost meaningless), I would say you are using "experience" wrong by applying it in an overly broad way... a way that makes communicating these distinct ideas more difficult.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
Someone who is "present in mind" as you described, is not having an experience. Whereas, if I was to use the word experience, that presence of mind would be under the umbrella of that word. Your thoughts would be something you are experiencing.
Qualifying something like "internal experience" (as it's called) as distinct from run of the mill experience (which is implicitly external) might help avoid confusion.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
Also, interesting that this is a scenario for non-sentience for you. I've debated the Ask Yourself vegans quite a bit, and when they describe something as "non-sentient", it is essentially brain dead. The idea of still having the presence of mind for them, while being non-sentient, wouldn't make sense.
I don't know how they'd answer that. Brain-dead is very similar to just being completely disconnected from reality. I think that's just a more accessible example.

Conceivably you could talk about sentience as including emotions only without any sense, but as I said if it's unfalsifiable without an internal-external exchange then it's not a very useful category.

I think a definition that just deals with something falsifiable is easier to talk about and less controversial.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
If my eyes were to take in some data, but it was not part of my HUD, I would say I am not experiencing it, even though my brain might process and use that data. I recently learned about some people who have had brain damage, and could not "see" an object, even though their eyes were taking in the data. When asked to reach out to touch it, they could. They obtain the visual information and they still can process it. I wouldn't say they *experience* it, however.
Sure, as I said, it can't just be that a signal is sent and received, it has to be processed and understood.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
It's not that I disagree with what you're saying, it's just that I don't think the analogy of asking *me* works, because I would have had the capacity to consider the question.
You're the one who would know best. Not sure what you're missing here.

The idealized interest is what you'd prefer in an idealized situation of full knowledge, even if in fact you lack that knowledge.

But I covered other ways to consider it with respect to rationality...
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
"If I was the rock, would I want to be smashed or left alone" I could answer that question, were I to have my capacity to consider the question, as though I was a magic rock. But to say: "Therefore, that is what the rock would want" doesn't follow, because the conclusion has taken the magic away from the rock. It was the "me-ness" giving it value, and that was who the question was directed at.
No, the rock has NO interests at all, thus nothing to idealize.

Now if you believe in psychological egoism and you think it's impossible for people to have any interests that don't maximize their own pleasure, then and only then would the situations be comparable.

Are you claiming psychological egoism here?
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
Now if we are talking about just me, yes I care about my family, which means you could extract the understanding in which I would want them to be safe even if I wasn't aware of something that was happening.
YES, that's all that's happening here. That's what idealized interests are: the raw consideration.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
But for me to consider how I would *feel* about it, I must have known about it. It cannot be said that I feel displeasure about what happens to my family if I don't know about it. It's a very minor point.
It only has to do with interests, not the feelings you wouldn't have. I'm just trying to probe your true interests here.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
Operant conditioning *could* be done through experiential means, but for the vast majority, it is not.
You're confusing classical and operant conditioning.

Operant conditioning requires experience to function.

brimstoneSalad

Post by brimstoneSalad » Tue Sep 25, 2018 6:40 pm

ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
Hard disagree, "I love you" is not a Turing test passer.
It is for a child. The Turing test is not an objective test of consciousness/sentience, it's highly subjective.

Operant conditioning is an actual experimental test which gives objective results independent of the experimenter's interpretation.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
There is no AI that exists that could actually convince me it was having an experience through words.
A bot programmed to recite a well-made script could do it pretty easily, which is yet another problem with the Turing test. Unlike operant conditioning, it makes no attempt to distinguish between programmed and novel behavior. That's the whole point of operant conditioning: remove the influence of instinctive "fixed action pattern" programming by introducing something the subject would not be familiar with.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
It would be much easier programming a bot to run away from stimuli and cry
That's not what operant conditioning is. Likewise, neither is ringing a bell to cause drooling, etc.

Something that gives a reflexive response, like only moving toward or away from a particular stimulus, is not demonstrated to be sentient. Operant conditioning demonstrates learned responses by putting the subject in an unfamiliar circumstance and relying on it coming to understand how the new mechanism works, using it to get what it wants.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
We have no sense of ourselves as babies, and yet we are able to exhibit the same mechanistic behavior as when we do have a sense of self.
That's a bold claim. How do you come to know this? Is a sense of self a magical thing that we receive at some arbitrary point?
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
The reason why I can have a stronger inference about *people* is that, if materialism is true and evolution is true, I would understand that the thing which gave me my sense of being isn't any different in others, and that the way I conceptualize that understanding can be expressed through language and expressed back to me.
That's an inductive assumption, but it should just as well apply to anything with analogous brain structures. There's no magical line there, only distance of relation. You should assume the same at least with respect to a gradient of kind of thought and sense of being, not that only humans have it and all other beings magically lack it.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
Right, sure you can give a cognitive version like mass in kilograms, I'm speaking to what people *actually* attempt to do.
You don't think Utilitarians have a cognitive definition?
You don't think theists have a cognitive definition?
Or even non-theistic deontologists (as rare as they are)?

All definitions meaningfully discussed and argued on in philosophy are cognitive. Without that, there's nothing to discuss.
We are NOT *actually* attempting just to spout our feelings at each other when we engage in rational discourse on the nature of morality, and I think claiming that we are is disingenuous (if that's what you're claiming).

The notion of morality, AT LEAST within the context of rational discourse on the subject (as found here), has to be cognitive, just as the rules of chess have to be cognitive within the context of a chess match.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
Feel free to offer a definition, just try to make sure it's not circular. (Such as what is moral is good).
I referenced several above.

If you can't accept that there are cognitive definitions of morality in use (even if you disagree that one bests all contenders, which is a fair question to ask), then I don't think it's possible to have any kind of discussion on morality with you, because all you're doing is ascribing false motives to people, which is about the most uncharitable thing possible, and it's not compatible with a civil conversation.

I take personal offense to you telling me you know better what I *intend* to say than I do, and that all I'm trying to say is "boo murder" when I explicitly reject that and I insist that I'm talking about a factual quality. I hope that's not what you're trying to say, so correct me if I have misread you.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
So I don't necessarily disagree with this point, so long as you agree to the point that there theoretically exists a world where you could ethically have meat production (in a purely environmental aspect), so long as it is below the threshold.
First, there is no such thing as "a purely environmental aspect" in ethics. That makes no sense at all. The environment only matters because of how it affects those who LIVE in that environment. Without consideration of consequences to those lives, any environmental goal is arbitrary and done for its own sake and has nothing at all to do with ethics and only to do with aesthetics.

I can agree there's a world where you could raise and torture humans (or any being) to death for enjoyment, or do any horrible thing you want in moderation, without any additional harm to a certain arbitrary environmental aesthetic so long as the damage is repaired or compensated for in some other way.
That doesn't say anything about ethics, though.

If you only care about aesthetics, not ethics, then perhaps that kind of argument would be appealing, but it's not something that registers with me.
I care about the environment only because of those who live in it (and who will live in it in the future). Sure, it's pretty, but that's meaningless if there's nobody to enjoy that beauty.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
but for emissions we need to heavily look at energy production and travel.
A lot of that, too, is animal agriculture.

Of course home and non-agricultural energy use and personal travel is important too, but there is no solution that doesn't involve a change in how we eat, and there's no reason to retain ANY animal agriculture in the developed world when we have environmentally superior options.

Why not choose an impossible burger over a cow burger?
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
And I hope you agree that a population conversation is inevitable either way.
I don't, that remains to be seen.
It's possible that the very rich will end up procreating more just out of boredom or because they can as @Jebus suggested, but there's no reason to speculate on that or intervene unless and until we have that data and we know that the issue isn't going to solve itself.

Until then it makes sense to promote social welfare programs to alleviate people's urgency or fear of the future, and spread contraceptives and sex education. We should do that, not for population control, but just because these are very effective means of improving human welfare.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
I agree, but to be fair, if it was literally between making the planet completely uninhabitable for humans, or murdering people, I would choose murdering people. Just a different trolley problem.
Sure, but those two extremes are less likely.

More likely it's a question of continuing to eat meat/indulge in other wasteful activities vs. not murdering people. Where do you stand on that trolley problem? Are you willing to reduce?
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
Well, that's not a value I share. I want the people who *are* around to have happy and fulfilled lives, but I wouldn't ask them to lower the quality of their lives for *more* people just because "more is better."
What about keeping the same quality of life?
E.g. still take your hot showers, because that actually does affect life quality, but switch from cow burgers to impossible burgers and suffer not at all in the change. Any taste difference today is minor and easy to adjust to.

Would you agree that more happy people is good as long as you don't have to make any meaningful sacrifice to quality of life?
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
The planet is already past its environmental limit, stabilizing isn't good enough.
It isn't past its limit if we stop relying on animal agriculture and we shift to more sustainable energy sources. The limit changes based on how people are using resources.
It's only past its limit given current bad practices.

We can host billions more people pretty easily without much change if we just abandon the most wasteful practices (like animal ag.) and switch to perfectly delicious but more sustainable alternatives.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
I don't understand this sentence. How is it self-destructive in a hypothetical where things are sustained?
Sacrificing things that actually affect human well-being significantly so that we can keep farming animals is self-destructive.
If you're talking about giving up running water and electricity but keeping the burgers, that's insane.
Switch to impossible burgers, and keep your running water and your electricity. That's what makes sense.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
If I take this at face value, and I'm charitable so I would do so, I would agree this is an issue. I'd be happy to take whatever measures are necessary to eliminate this problem.
We can:

A. Effectively end animal agriculture (wild-caught fishing or hunting isn't a problem with respect to antibiotics, but all productive modern operations are, although hunting/fishing are each their own environmental problems). or:
B. Switch to low density farming (the only farming you can do without antibiotics), which means basically clear-cutting the rest of the world's forests to make room to keep the animals spaced out and continue eating a non-negligible amount of meat.

It might be possible for the world's population to eat a negligible amount of meat, like once a month or something (comparable to chimpanzee diets) with the same environmental footprint high density farming operations have today. Unfortunately these low density farming operations are much less efficient, meaning more land and energy waste per human food calorie. There's no good reason to continue eating any meat at all when we can choose alternatives.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
If I was to speak on values, I wouldn't care about how intelligent something was, merely about how much it cares about a thing, which I would state requires a conscious awareness of that thing. So sure, I agree that IF an insect was consciously aware of things, it wouldn't care about voting, or other such stuff it can't consider, and thus when thinking of how to handle value confliction, I wouldn't have to consider a value it didn't have. In that sense we can say your "less conscious" matters. But if you were to state that the insect can care about its life, but doesn't care about a lot of other things, therefore it's OK to squish it, I would say that's a nonsensical way to calculate value confliction.
I didn't say an insect has no value (although many very small insects are likely non-sentient), the issue is the degree of value.
A lot of vegans think that way, though: that one interest in living is the same as any other, thus giving an insect and a human an equal right to life. I don't think that can be substantiated.
The simplest contention is to argue that a human has a lot more interests that also rely on living.

Killing a human sabotages not just primitive interests to eat and procreate, but more advanced interests like voting (as you mentioned) which can't be satisfied if you're dead. That's a very simple way to answer it, though, not necessarily the most accurate.
ShadowStarshine wrote:
Sun Sep 23, 2018 1:32 am
I think "fewer meaning associations" is an interesting way to phrase that, I think you could start a solid case based on that.
There are many ways to make a solid case about the differences between an insect and a human, even multiple consciousnesses... a human brain being like a large collection of insect brains, with the proportionate value. We do derive from a bunch of single celled things working together for the common good, after all.

The most accurate way to explain it, though, is probably gradation. Every aspect of consciousness and sentience exists in gradation, from the most minute to the most complex. That is the way of evolution: just as an eye can vary from one light-sensitive cell to millions, from exposed with no ability to focus to contained in a round cup covered by a lens, so too do qualia, awareness, etc. vary on a spectrum. There's no magical point where we just "get it" and become complete, as if before that we were nothing.

If you don't believe things like consciousness and sentience exist on the same kind of spectrum that everything else in evolution does, then I'm not sure how to answer that. I can't put you inside an insect to experience what it's like to be one millionth as conscious of something as you were in the human mind, but we can infer variation in capacity from the way organisms respond to stimuli. The length of a learning curve in an insect is profound compared to a human, and really does suggest the insect has more trouble understanding (and just barely understands) what's going on around it and what it is, which is why it has so much trouble figuring out how to behave in new ways.

We can believe insects have less value for any or all of these reasons, but I think it's quite far-fetched to suggest an insect's life might have anywhere near the intrinsic value a human's does by any metric. And that's not even to mention the many extrinsic value differences.

Post by ShadowStarshine » Mon Oct 01, 2018 2:56 pm

I would take a look at Block's definition of phenomenological consciousness:
Phenomenal consciousness. According to Block, phenomenal consciousness results from sensory experiences such as hearing, smelling, tasting, and having pains. Block groups together as phenomenal consciousness the experiences of sensations, feelings, perceptions, thoughts, wants and emotions. Block excludes from phenomenal consciousness anything having to do with cognition, intentionality, or with "properties definable in a computer program".
This would be very distinct from his concept of access consciousness, which your understanding of sentience falls very far from:
Access consciousness. Access consciousness is available for use in reasoning and for direct conscious control of action and speech.
These are definitions that are used a bit in science. Sentience isn't a word that is used anymore; everything seems to revolve around the word consciousness. However, how it came about, what it is, and what it requires have been continuously disputed. This is why things like the Cambridge Declaration on Consciousness fall so short for me.
Wikipedia's page on experience is pretty good. The problem with trying to make "experience" extend to mental ruminations as well is that it weakens the utility of the word based on its ability to create distinction between two categories.
Yeah but, right in your own link experience does extend to that. I don't think I was using it more broadly than it already has the capability of doing.
This is a matter of prescriptive semantics on the basis of the function of language. We should fight for words to mean things on the basis of retaining the ability to communicate those ideas, and when they become too vague or all encompassing (or come to mean their own negations) we've lost something and our ability to communicate is lesser for it.
I disagree, I think it's good to have broad words in combination with narrow words, so that you can compare different sets of concepts. If experience meant "anything brought to one's attention" we can contrast that with perhaps your usage of sentience, or access consciousness. Or contrasting that notion of experience with the subconscious, the things the brain does that are NOT brought to our attention.
I don't know how they'd answer that. Brain-dead is very similar to just being completely disconnected from reality. I think that's just a more accessible example.

Conceivably you could talk about sentience as including emotions only without any sense, but as I said if it's unfalsifiable without an internal-external exchange then it's not a very useful category.

I think a definition that just deals with something falsifiable is easier to talk about and less controversial.
It doesn't matter to me, I just want us to be able to communicate. I think we have to be fine with unfalsifiable concepts in the end, since I don't think our back and forth has any chance to deal with the Hard Problem of Consciousness.

http://www.scholarpedia.org/article/Har ... sciousness

What we need to do is have a functional vocabulary, and talk about what data we think we know, and what data we think we can infer from the data we know.
No, the rock has NO interests at all, thus nothing to idealize.

Now if you believe in psychological egoism and you think that it's impossible for people to have any interests that don't maximize their own pleasure, then and only then would the situations be comparable.

Are you claiming psychological egoism here?
I do believe in psychological egoism, the thing is, I don't understand why you think that is relevant to what I'm saying. I realize that a rock doesn't have interests, but if you said *I* was the rock, then I assume my interests come with that *I*. If, by becoming that rock, I now have the mentality of a rock, then I don't see what piece of *I* there remains. By those rules, I simply could not be that rock. But by the same reasoning, I could not become another animal either. If you ask me to use reasoning to determine what it is I would want in a situation, and you put me into something that cannot reason, then I can no longer answer the question. So either you want my mental state to travel into that thing, or you don't.
You're confusing classical and operant conditioning.

Operant conditioning requires experience to function.
I don't *think* I am. I can't find anything in what I've read that suggests that. Perhaps you can send me a link?

brimstoneSalad
neither stone nor salad
Posts: 8948
Joined: Wed May 28, 2014 9:20 am
Religion: None (Atheist)
Diet: Vegan

Post by brimstoneSalad » Tue Oct 02, 2018 3:05 am

ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
I would take a look at Block's definition of phenomenological consciousness:
Phenomenal consciousness. According to Block, phenomenal consciousness results from sensory experiences such as hearing, smelling, tasting, and having pains. Block groups together as phenomenal consciousness the experiences of sensations, feelings, perceptions, thoughts, wants and emotions.
And sentience, most minimally, is the first part: the parts related to sensory experience. It's only experience if there's somebody home to perceive it.
Broadly it can mean things that go beyond that, but I don't think it's necessary to make those assumptions.
Block excludes from phenomenal consciousness anything having to do with cognition, intentionality, or with "properties definable in a computer program".
Including wants, but excluding intentionality? How does that make any sense?
The latter part just seems like magical thinking; begging the question as to the special properties of a mind as if it couldn't all be part of a computer program.
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
This would be very distinct from his concept of access consciousness, which your understanding of sentience falls very far from:
Access consciousness. Access consciousness is available for use in reasoning and for direct conscious control of action and speech.
What is "direct conscious control"? And are you begging the question here about the direct control that non-human animals do or do not exercise?
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
These are definitions that are used a bit in science.
They're not great ones, but keep in mind that any fields dealing with these issues are probably soft sciences to begin with (which is unfortunate).
This is why I prefer to deal with the falsifiable behavioral issues I mentioned.
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
This is why things like the Cambridge Declaration on Consciousness fall so short for me.
The term itself may be vague there, but the declaration does go into detail from which you can deduce something more specific.
What they're talking about is a lack of fundamental differences that would allow us to assume animals lack the qualities we're concerned with by analogy to humans.
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
Yeah but, right in your own link experience does extend to that. I don't think I was using it more broadly than it already has the capability of doing.
It can be used too broadly and be meaningless; like the article suggests, differentiating is useful. See how usage even breaks down your proposed limits:
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
If experience meant "anything brought to one's attention" we can contrast that with perhaps your usage of sentience, or access consciousness. Or contrasting that notion of experience with the subconscious, the things the brain does that are NOT brought to our attention.
Wikipedia wrote: Mental experience involves the aspect of intellect and consciousness experienced as combinations of thought, perception, memory, emotion, will and imagination, including all unconscious cognitive processes.
What?

If you can draw a clear line between what experience is and isn't that makes it specific and defined enough to be a useful notion, then the definition may be valid.

Historically, however, you'll often see the notion of experience contrasted with reason and that's how I meant it. See also e.g. the brief discussion of Kant's usage.
Also consider common usage:
I would not say that my experience has taught me not to put fireworks in my nose and set them off, which implies I have done that or seen it done. That's something I can reason would be a bad idea based on other experiences with the relevant facts, but have not directly experienced. Yet if we call reason experience then it blurs the line there.

Maybe we can just agree that it's useful to specify what we're talking about, be it sensory experience or reasoning, or a meta-cognitive reasoning (thinking about thinking).
I just don't think it's very useful to talk about reason as being experienced, as in you are having the experience of being in a state of reasoning about something. Even if in the broadest sense it is "experienced", such use becomes a bit confusing if we're not clear about it, particularly given contrasting usages which have explicitly excluded reason.
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
It doesn't matter to me, I just want us to be able to communicate.
Communicating about unfalsifiable things is like a stoner conversation. Not sure if it can get anywhere, which defeats most of the point.
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
I think we have to be fine with unfalsifiable concepts in the end,
Why? Why can't we just talk about behavior and what we know and can determine from that?
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
since I don't think our back and forth has any chance to deal with the Hard Problem of Consciousness.
I don't believe in the "hard problem", I think it's a non-problem, because consciousness is a very simple thing that people make too much of because they can't accept the notion that they're not magical. I also don't think that's relevant.

If it walks like a duck and quacks like a duck, and there's no falsifiable means to determine otherwise, then there's no reason not to assume to a moral certainty that it's a duck.
Can we agree on that point of moral certainty?
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
What we need to do is have a functional vocabulary, and talk about what data we think we know, and what data we think we can infer from the data we know.
Sure, but if some people are insisting on unfalsifiable beliefs, it's kind of hard to change those...
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
I do believe in psychological egoism, the thing is, I don't understand why you think that is relevant to what I'm saying.
Because it's an unfalsifiable belief which denies the possibility of idealized interests about things that will not be experienced... why? Because the only possible interests, by definition of psychological egoism, are experience-based (feeling pleasure, or not feeling pain, whatever that comes from). It makes the whole thought experiment meaningless.
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
If you ask me to use reasoning to determine what it is I would want in a situation, and you put me into something that cannot reason, then I can no longer answer the question. So either you want my mental state to travel into that thing, or you don't.
That's not what I'm saying; no, your interests/mental state don't transfer. Putting yourself in the other's shoes is a shorthand for discovering roughly what idealized interests might be. I'm trying to probe idealized interests, but if you're a psychological egoist then you cannot believe those exist.

A rock doesn't have any interests, other beings do, AND if they have interests beyond pure psychological egoism then they have idealized interests we can examine.
ShadowStarshine wrote:
Mon Oct 01, 2018 2:56 pm
I don't *think* I am. I can't find anything in what I've read that suggests that. Perhaps you can send me a link?
I'm not sure what you're looking for... operant conditioning is literally learning how to operate a thing or perform a novel task for a reward.

It's based on experience of positive or negative reinforcement.
https://en.wikipedia.org/wiki/Operant_conditioning

Classical conditioning can just be unconscious reflex, operant conditioning can't.

Post by ShadowStarshine » Thu Oct 04, 2018 2:56 am

Lots to respond to, I wanted to do the second half before you replied. I'm going to try and combine it as much as I can, though there are so many topics we have going at once.

On inferences and falsifiability

We may need to agree to disagree about whether behavior is: 1) a basis for inferences about consciousness, or 2) an actual direct observation of consciousness.

If you believe 1), I would ask why you dismiss every other possible inferential methodology or explanatory model of consciousness.

If you believe 2), why? What makes you think you are directly looking at the experiences of another mind?

If 1), Operant conditioning would be your attempt at *inferring* consciousness. If 2), Operant conditioning is us observing consciousness directly. I'm not going to accept that operant conditioning requires consciousness just because it's within the definition, especially since you haven't actually shown me a source that states that this is the case; but even if you had, we would then be having a conversation as to whether it exists or not.

Unless you don't think 2) is the case, but you don't want to discuss frameworks of understanding consciousness and only want to talk behavior sans inference, then I would say all we can do is describe behavior and not talk about consciousness at all. We are gonna sit in P-zombie territory and that's it. Saying something or someone is or is not a P-zombie is unfalsifiable at its core.

On the environment:

I think we agree on almost everything, except the fact that you call a "negligible amount of meat" wasteful. I don't agree. Let's say that, to stop using antibiotics, we need to cut the amount of meat we make to 1/3. Let's say we also decide that one of the ways we're going to reduce the environmental impact is to cut the amount of land we use for animal agriculture to 1/4, while additionally changing to cleaner forms of energy and more efficient modes of travel, as well as introducing one-child policies. That's 1/12 the amount of meat on a possibly sustainable planet. Given that people eat meat 2-3 times a day, that would mean they eat meat 1-2 times a week. That would be fine, environmentally, to me.
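That arithmetic can be sanity-checked with a quick sketch. All the fractions here are this post's own hypotheticals (cut production to a third to drop antibiotics, cut land to a quarter), not real-world data:

```python
# Hypothetical reduction factors taken from the argument above.
antibiotic_factor = 1 / 3   # production cut to a third to go antibiotic-free
land_factor = 1 / 4         # land for animal agriculture cut to a quarter

# Combined: fraction of current meat output remaining.
remaining_fraction = antibiotic_factor * land_factor  # 1/12

# Meat 2-3 times a day today is 14-21 meat meals per week.
meals_per_week_now = [2 * 7, 3 * 7]
meals_per_week_then = [m * remaining_fraction for m in meals_per_week_now]

print(remaining_fraction)    # about 0.083, i.e. one twelfth
print(meals_per_week_then)   # roughly 1.2 to 1.75 meat meals per week
```

So the stated conclusion (meat 1-2 times a week) does follow from those assumed fractions.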

If you say "It's still wasteful, why even do that much", I note your opinion, but honestly I would take an aggregate opinion of everyone who shares this planet as to what is okay and what our goals are, so long as we take care of anything that leads to unacceptable health results for us (such as a non-sustainable planet).

On Psychological Egoism:

You can have idealized interests. I don't know what you think psychological egoism is. Not every motive that is self-serving only serves the self, and one can value things beyond what they will experience while they are alive. If part of my values is to value other people, then ultimately, by helping others, I achieve my own satisfaction of doing things according to my values. One of your values can be an idealized interest, and it often is. That's why I don't understand the objection.
You don't think Utilitarians have a cognitive definition?
You don't think theists have a cognitive definition?
Or even non-theistic deontologists (as rare as they are)?

All definitions meaningfully discussed and argued on in philosophy are cognitive. Without that, there's nothing to discuss.
We are NOT *actually* attempting just to spout our feelings at each other when we engage in rational discourse on the nature of morality, and I think claiming that we are is disingenuous (if that's what you're claiming).

The notion of morality, AT LEAST within the context of rational discourse on the subject (as found here), has to be cognitive, just as the rules of chess have to be cognitive within the context of a chess match.
I think you are confusing meta-ethics with normative ethics. I'm saying there is no cognitive meta-ethical definition of words like "morals", "good", "bad" that aren't just expressions of things we personally value. That would not be a disingenuous claim; it is a well established meta-ethical claim. You are giving me examples of normative ethics. Unless you were talking about a theist meta-ethical claim, rather than a claim about their common deontological structure, then no, I think they don't have a cognitive definition of morals outside of preferences/values.

What there would remain to be discussed within that structure, is how value confliction is handled. What happens when I want A, and you want not-A. That's when normative ethics can still be applied. (I would be a consequentialist).

If a meta-ethical position has you "personally offended", then I don't think we should be having this conversation at all. It's not meant to be a personal attack, but I can't stop you from taking it as one.

On Block's theories of consciousness:

No one is suggesting "magical qualities". I don't know why you're strawmanning that position. These are all theories of mind rooted in materialism. Perhaps you're using a different understanding of intentionality.

Perhaps check something like this out: https://plato.stanford.edu/entries/cons ... tionality/

Post by brimstoneSalad » Sat Oct 06, 2018 4:27 pm

ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
We may need to agree to disagree about whether behavior is: 1) a basis for inferences about consciousness, or 2) an actual direct observation of consciousness.

If you believe 1), I would ask why you dismiss every other possible inferential methodology or explanatory model of consciousness.
I'm not sure what you're asking.

If you don't, then why don't you give equal weight to "intelligent falling" vs. the standard model of gravity?
Why not equal weight to theistic creationism vs. the big bang etc.?

There are reasons we prefer simple inferences from observation vs. elaborate and unfalsifiable speculative models that go beyond that.
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
If you believe 2), why? What makes you think you are directly looking at the experiences of another mind?
I didn't say that, I said that's all we have to go from, lacking a more detailed evidence based model (wild speculation doesn't count). If we're being reasonable then we have to accept that's all we know to a moral certainty. That doesn't mean we have magical direct experience.
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
If 1), Operant conditioning would be your attempt at *inferring* consciousness. If 2), Operant conditioning is us observing consciousness directly.
Based on the methodology of operant conditioning, it's the only way to experimentally show that the features we can currently only associate with consciousness are present.

Like most science, it's a smoke/fire situation. We're observing the smoke, and fire is our simplest and most reasonable explanation. You can speculate on some kind of fire-free smoke generating machine which creates CO2 and distributes particulates that look to us exactly like the smoke created by a fire, but that's not a reasonable assumption, and to that point you could speculate on the exact same for anybody (solipsism).
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
Saying something or someone is or is not a P-zombie is unfalsifiable at its core.
I have said this, and I believe multiple times by now. It's a little frustrating.

If you're going to arbitrarily exclude animals from moral consideration by claiming they aren't conscious (are P-zombies), using the SAME unfalsifiable reasons that could be used to do the same to humans, then you have no rational basis to condemn mass murder or genocide as anything more than your non-rational feelings against them, because they rest on just another interpretation under which humans aren't conscious and it's OK to do whatever to them.

It's not fundamentally more of a stretch to speculate on a larger non-fire smoke generating machine than a smaller one. If you claim the signs of consciousness can be reliably generated by non-conscious thing then there's no good reason to believe humans are conscious either just because there's *more* smoke.

As I explained, the Turing test is not a test of consciousness (or of objective signs of consciousness), explicitly so according to Turing; it's a subjective test of what it's like to seem to have human-level consciousness, under the (mistaken) assumption that there are no objective tests of consciousness (or of the signs thereof, which again is what all tests in science are, so I shouldn't need to specify that). The Turing test is inherently inferior to operant conditioning as a test and should not be asserted in place of objective behavioral tests.
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
I think we agree on almost everything, except the fact that you call a "negligible amount of meat" wasteful.
It's still wasteful because it has no advantages over high-quality mock meats or even cellular agriculture (clean meat), which still beat it in terms of efficiency by roughly 10:1, and likely even more if you take away antibiotics.

What do you think waste is if not doing something that uses more resources for no reason?

There are two products, A and B. They are identical in every way except that A uses half the resources of B to produce. How is it NOT wasteful to choose to use B instead of A?
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
If you say "It's still wasteful, why even do that much", I note your opinion, but honestly I would take an aggregate opinion of everyone who shares this planet as to what is okay and what our goals are, so long as we take care of anything that leads to unacceptable health results for us (such as a non-sustainable planet).
Does the opinion of somebody who will only be slightly inconvenienced by radical global warming count the same as somebody who will die because of it?
Does the rapist's opinion that it's good to rape weigh equally against that of the victim, who disagrees? Do the opinions of TWO rapists trump one victim of gang rape? Does magnitude of consequence just have no meaning to you?

Also, you're ignoring the opinions of all of the animals who suffer under animal agriculture. Mother cows who would rather their babies not be taken away, etc. If you actually add together ALL of the opinions, you might be getting somewhere.

You effectively ignore the majority of the opinions you just personally don't like by pretending those beings don't have opinions at all, because you regard them as non-conscious, ignoring all falsifiable science in favor of bizarre and unfalsifiable subjective models.
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
You can have idealized interests.
In terms of experience, sort of, but that kind of "idealization" is very simple: maximize experienced pleasure.
In that sense, you must claim that EVERYBODY'S idealized interest is to have a lobotomy removing their abilities to experience suffering, and have electrodes hooked up to their brains stimulating their pleasure centers as much as physically possible for as long as possible.

Is that what you want? Or would you say that the only reason you don't want that now is because you're being irrational? And it would be a good thing for me to do that to you despite your protests?
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
Not every motive that is self-serving only serves the self,
You don't seem to know what psychological egoism is.
According to psychological egoism, the only reason you say you want your family to be happy is that experiencing the appearance of their happiness makes you happy: it gives YOU pleasure to see or think about them being happy.

You don't actually want your family to be happy (that is not your idealized interest), you only want to maximize your own pleasure and one way to do that is to experience them being happy because that's something you enjoy. But they don't actually have to BE happy for that, you would be equally satisfied with just thinking they're happy. And you'd be MORE satisfied by having your brain hooked up to electrodes that just maximally stimulate your pleasure centers in a way that doesn't even require thinking at all regardless of what happened to your family.

That's psychological egoism.

If you don't agree with that, then you don't agree with psychological egoism.
Those interests don't idealize in the way you think they do; an idealized psychological-egoist interest is merely maximizing personal pleasure by whatever means that may be done.
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
One of your values can be an idealized interest, and it often is. That's why I don't understand the objection.
Under psychological egoism, the only value that can be is experiences that generate pleasure for you (or negative value, in avoiding experience of pain/suffering for you). You only care about others in so far as the experience with them generates pleasure or pain for you.

That's the objection. You don't seem to understand psychological egoism.
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
I think you are confusing meta-ethics with normative ethics.
That's like a creationist saying you're confusing micro and macro evolution. Coherent and meaningful metaethics lead directly into normative ethics, just as coherent and meaningful normative ethics require certain specific metaethical foundations to be substantiated.
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
I'm saying there is no cognitive meta-ethical definition of words like "morals", "good", "bad" that aren't just expressions of things we personally value.
That we may happen to value those things is of no consequence to the definition being both cognitive and objective.

What do you think a definition is?
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
You are giving me examples of normative ethics. Unless you were talking about a theist meta-ethical claim, rather than a claim about their common deontological structure, then no, I think they don't have a cognitive definition of morals outside of preferences/values.
I'm saying all of those are cognitive claims, from theological ones to utilitarian ones.
The definitions of "good" are not "the things that we like", they are defined irrespective of those things.
That people HAPPEN to like those things, and that the reason they may have been defined that way is (speculatively) BECAUSE people like those things, is of no consequence to the definition itself.

From theistic, to deontological, to utilitarian, these definitions all have metaethical roots.
For the deontologist, it's about reason and some notion of self-contradiction (the categorical imperative), from which the normative claims derive; for the utilitarian, it's certain assumptions about constructivism or moral realism in the context of scientific naturalism, from which normative claims can then be derived.

That a classical utilitarian defines suffering as bad and pleasure as good is not non-cognitive or subjective. The existence of these things is an objective fact, and claims about them can be evaluated as truth-apt; it's not just "Boo suffering", it's a mathematical model.
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
If a meta-ethical position has you "personally offended", then I don't think we should be having this conversation at all. It's not meant to be a personal attack, but I can't stop you from taking it as one.
If you're telling me that when I say "X is morally wrong" that I ONLY mean "Boo X" or "I don't personally like X", or even a command "Don't do X" then yes I take personal offense to that because I'm TELLING you that's not what I mean. You're telling me that YOU know better what I mean to say than I do.

There are things I can assess to be morally wrong that I don't really care that much about at all, and wouldn't tell people not to do. It's a completely different category of fact statement, and I do mean it as a fact.

Noncognitivism is all about disputing people's claims to know what they're trying to say.
If you do believe these definitions are noncognitive and you dispute my claim about what I mean to say, then you're not engaging with anything resembling charity or good faith in this conversation. That's what I take offense to.

You need to accept when I say that "X is morally wrong" that I mean it in the same cognitive objective factual sense as when I say "that box weighs 5 kilograms". If you will not accept that, then you're not participating with charity or good faith in this discussion, and you need to drop it and choose another topic to discuss.

If you CAN accept that, then you must accept for the sake of argument that I am following (or trying to follow) a cognitive and non-subjective definition of morality.

If you want to understand what that definition is and what makes it cognitive and objective, that's a fair question, but asserting that I am not attempting to make claims that I plainly say I am attempting to make is done in bad faith and uncharitably.

Do not tell me you know better what I'm trying to communicate than I do.
ShadowStarshine wrote:
Thu Oct 04, 2018 2:56 am
No one is suggesting "magical qualities". I don't know why you're strawmanning that position. These are all theories of mind rooted in materialism. Perhaps you're using a different understanding of intentionality.
To the contrary, it seems they are; two things that are inextricably linked cannot be separated. The separatist view seems to be a product of magical thinking, much like mind-body dualism.
How do you *want* food without any notion of what food is? How do you make sense of any of these without the other?

Maybe you can take a stab at answering those questions and defending the separatist view in your own words, because it seems to be core to your rejection of animal consciousness (and the only objective evidence we have for ANY consciousness).


Post by ShadowStarshine » Sat Oct 06, 2018 11:33 pm

Psych Ego

Well, I don't want to get into an argument about what words or concepts mean unless you really want to. I'm instead going to outline differences in my outlook, and how I interpret these words, and you can either accept them as the case or reject them, and if you really want, you can outline a specific definition from a source of your choice as to why you think what I'm calling X is not really called X.

When I talk about psychological egoism, I'm saying that whatever choice we make is inevitably a choice we make because it is something we value. Our brains evaluate actions, sometimes as a mix of both positive and negative, and ultimately make the decision that is best in our interest. Let's say an annoying kid was bothering us and we thought "I'd really like to smack this kid"; we weigh that satisfaction against our perceived guilt over the action and against its consequences, and whatever has the stronger pull on our personal values wins out. So I may not be saying it makes me happy to see my family happy; I may be saying I value my family being happy. Necessary to my understanding of that value would be thinking it is actually the case that they are happy. Now, does this preclude a demon tricking me? Some solipsistic view of the world? No. One can be fooled into believing something is the case, but I can't both know my family is not actually happy and be satisfied by some image of it.

Additionally, what it is for me to value something is to value it in the way that I wish to value it. If you were to ask my opinion of being hooked up to that machine you describe, the way you propose is not the way I want to value things. If you hooked me up to the machine, you aren't just "satisfying my pleasure centers"; for me to truly enjoy that situation, you'd have to fundamentally change what it is I value. I honestly think you are describing psychological hedonism, a specific subset of psychological egoism. If you show me that what I'm describing is incompatible with psychological egoism in its entirety, I'll label it something else.

Inferences

I'm glad that you stated that you are not directly observing consciousness and that you are in fact making inferences. Do I think all inferential statements are the same? No. I do not.

But I don't agree with you that operant conditioning is the strongest of inferences; in fact, I'm very far in the opposite direction. I think operant conditioning as a model neither needs consciousness nor moves us closer to it in any way, and I don't understand, other than your tautological definition of it, how you think it does.

Let's take two hypotheticals:

1) One takes in stimuli, such as a pain nerve, the brain associates other sensory information to that pain nerve, the brain develops a new pattern of behavior. (Operant conditioning) This is consciously experienced.

2) One takes in stimuli, such as a pain nerve, the brain associates other sensory information to that pain nerve, the brain develops a new pattern of behavior. (Operant conditioning) This is not consciously experienced.

Now, you want to say the first one is *more likely*. Why? Is it because you believe it is stated within the definition of operant conditioning that this is the case? (Which, by the way, I couldn't even find.) Is it because you think a conscious experience of something is *required* for a behavioral change? If so, why do you believe that? Do you not think we develop behavioral changes without conscious experiences?

Environment

You can't bake your moral argument into your environmental argument. If I already agreed with the moral argument, I would indeed calculate it differently. But you know at this point I don't. So there are two possibilities:

1) You think that regardless of the ethical argument for veganism, there is still an ethical argument for the environment that doesn't depend on it.

2) You think that the ethical environmental argument is predicated on the ethical argument for veganism being true.

If 1 is the case, fine, but then present an argument that doesn't attempt to weave in the ethical argument for veganism. If 2, then wait to convince me of the ethical argument for veganism before bothering to present me with an environmental argument.

If it were truly the case that there was a mock meat for every meat product that was better in terms of efficiency, and that I truly couldn't notice the difference, then I can see the case for not eating the less efficient option. Heck, I'd even replace individual meats if there was truly no difference to be had and what you said was true. It could be the case that I'm ignorant of these options, and my awareness of them would satisfy the environmental aspect for me.

Ethics

Oh boy, this is a biggie; there's so much to unpack in what you said. Here's why the difference between meta-ethics and normative ethics matters:

They can be entirely mixed and matched. One can be a moral nihilist non-cognitivist and say, "I think people have values and preferences, I think these values and preferences have conflicts, and I think some solutions are better than others." These may not be "better" in a moral sense, but more preferred (descriptive ethics). One may think the best path towards this would be deontological:

"I don't like it when people murder, so no murder ever under any circumstance".

One may think the best path would be consequentialist:

"I don't like X, Y and Z, but I'll calculate to see which is worse and tell you which I don't like the most."

One could be a moral subjectivist or a moral realist, and combine either with deontology or consequentialism as well. This is why, when I state I'm a moral nihilist and you say, "So you don't believe in these normative ethics?", it doesn't make sense. I'm not at odds with normative functions; though what I think they are is fundamentally different, they still work. I'm at odds, specifically, with the meta-ethical theories.

Also, don't take noncognitivism as "You can't change my mind, so there". It's like my position on atheism: I'm not telling you that God doesn't exist, I'm telling you I have no reason to believe in one. Same with my moral nihilism: I've never heard a description of morals that doesn't equate identically with values and preferences. Now, that's not to say I think every value or preference classifies as a "moral problem". I think it's a particular subset of values, specifically ones that cause conflict. Whereas no one has a problem with what ice cream flavor I want, people will in fact take issue if I wanted to steal their stuff. But I think these are all values and preferences in the end.

Now sure, offer me a definition of what is "moral" outside of that, one that isn't circular, and perhaps I'll accept it.

Intentionality

I'm not a mind-body dualist. I really need to know what you think intentionality even is.
