Page 6 of 12

Re: An open invitation to stop your misinformed fad and start making an actual difference in the world.

Posted: Tue Feb 16, 2016 4:47 am
by brimstoneSalad
Jaywalker wrote:The importance I place on my dog experiencing further good years is probably higher than the importance Leonardo placed on Mona Lisa's survival after this point.
Since Leo was a vegetarian and animal lover, the importance he would place on your dog would probably be higher than on his painting (as a skeptic and very practical person who basically just used most art commissions to make income). :D

Re: An open invitation to stop your misinformed fad and start making an actual difference in the world.

Posted: Tue Feb 16, 2016 8:35 am
by inator
brimstoneSalad wrote:I would also argue that every sentient being is inherently capable of that kind of abstraction, to the degree it is sentient. This is the root of adaptive behavior and true learning -- it's the why behind behavior and learning.
Too much metacognition actually sabotages that abstraction by thinking too much about the state of the mind and considering that a goal in and of itself, when that has never been the purpose behind behavior (otherwise, we'd all just subsist in a persistent blissful delusion).
I think the flaws of metacognition are well demonstrated by humans in this thread, but I doubt that's anything most other animals are guilty of.
Right, hedonistic experiences are merely incentives that guide behavior, not the goal of behavior itself.

brimstoneSalad wrote:Persistence of self usually means we choose to change our applied interests due to new information. That is, as we move into the future, we approach closer to our idealized self interests (or at least what we think they are).
So, usually our past interests, in conflict with present ones, are only so due to poor information.
Maybe it's a bit optimistic to say that in time we are constantly progressing towards our idealized self interests (we can also regress, lose sight of old information etc.), but the idealized interests framework does work well in explaining the importance of relevant information in decision making.

brimstoneSalad wrote:I'm not saying that the future self even matters at all in the past (since it doesn't exist yet, and may never -- we don't know), but IF it does, there are at least some substantial limits to the concern we need to have.
Without thinking of our future selves, there's little incentive to choose delayed gratification and therefore little chance of progress. Though, in terms of interests, delayed gratification does assume that the future self will still adhere to the same preferences as the present self.

brimstoneSalad wrote:
inator wrote:If we're against killing me, or against destroying the painter's art, then we are valuing past preferences. So do we take into account the preferences across all existence? Why not also value future preferences and argue against abortion?
If you abort the fetus, it never exists as a person to have preferences. There is no violating the future person's preferences by aborting the past fetus, because that future person will never exist to have preferences.
That makes sense, but I'm having a hard time looking at the flow of time outside of the 'block universe' framework suggested by special relativity. There are several ethical implications based around the realization that past and future events are as real as present ones.
Preferences don't disappear from moral radar just because an organism is dead or hasn't been born yet. Those two states are symmetrical.

brimstoneSalad wrote:IF you value future interests, you could make the argument that it is good to make people if, when those people acquire interests, they will on average have positive retrospective interests in having been born that outweigh any desires not to have been born.
That is, some version of the "repugnant conclusion" in Utilitarianism: https://en.wikipedia.org/wiki/Mere_addition_paradox
That would be an unavoidable conclusion if we consider the future to be 'real' (and I see no way around accepting that). If whether future organisms exist or not is affected by our actions, then we'd have to evaluate each choice against the accumulated morality of the existence of said organisms.

Should we just castrate everyone so that no potentially unfulfilled interests will be produced in the future? Or do we have a responsibility to produce interests that have a good chance of getting satisfied?
In other words, do we just have a responsibility to make preferers satisfied, or also to make satisfied preferers? (someone related to negative utilitarianism said this, I don't remember who right now). That I don't really know.

brimstoneSalad wrote:This could perhaps also be used as an argument against things like genital mutilation (since the person will exist and be able to reflect on what was done to him or her). Or, perhaps, as an additional argument against feeding young children animal products without their informed consent, since they will very likely (or I would hope) grow up to understand that is wrong and regret having been fed on corpses.
Definitely.

brimstoneSalad wrote:
inator wrote:How do we deal with torture victims who temporarily wished they were dead but may not feel the same in retrospect?
The same would seem to apply there as with making people, I think.
Or you could consider his decision to die as a misinformed calculation (relative to his idealized interest), and his preference to not suffer temporarily as no greater than the totality of his future satisfied preferences.

brimstoneSalad wrote:Corporations tend to be headed by sentient beings, and I don't think they in themselves have real interests (due to lack of intelligence) without that executive structure. There may be something to be said for hive minds, though, if it can be shown that there is a legitimate intelligence there that exceeds the sum of its parts.
It's difficult to talk of explicit preferences and unitary decisions without the "executive structure" for humans too.
In the end we are an aggregate of many subsystems, which are themselves aggregates of many neurons. Some push for one action (go eat due to hunger), and others push for a different action (wait until you've finished replying to the comment). The alliance with more supporters wins the election and decides your action. If the votes are close, then you could say that the preference is not very strong compared to a win by an overwhelming majority.
This is not necessarily a hedonistic explanation (non-hedonistic subsystems can win the election), merely a materialistic one.
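The election metaphor can be made concrete with a small sketch. Everything here is invented for the illustration (the subsystems, the actions, the vote weights); the point is only that the winner is whichever coalition has the larger total weight, and that the victory margin stands in for preference strength.

```python
# Illustrative sketch of the "election" among subsystems; all names and
# weights below are hypothetical.

def decide(votes):
    """votes maps each candidate action to a list of (subsystem, weight)
    backers. Returns the winning action and its margin over the runner-up."""
    totals = {action: sum(weight for _, weight in backers)
              for action, backers in votes.items()}
    winner = max(totals, key=totals.get)
    ranked = sorted(totals.values(), reverse=True)
    margin = ranked[0] - (ranked[1] if len(ranked) > 1 else 0)
    return winner, margin

# Hunger and habit push to go eat; sociability and curiosity push to
# finish the reply first.
votes = {
    "eat":   [("hunger", 6), ("habit", 2)],
    "reply": [("social", 5), ("curiosity", 4)],
}
winner, margin = decide(votes)
# 9 votes to 8: "reply" wins, but the narrow margin reads as a weak
# overall preference.
```

A landslide (say 9 to 2) would correspond to a strong preference; the close election above corresponds to a weak one.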

If you want to consider adaptive behavior as a defining element of intelligence, then nations and corporations meet this condition too.

Re: An open invitation to stop your misinformed fad and start making an actual difference in the world.

Posted: Tue Feb 16, 2016 10:07 am
by brimstoneSalad
Mr. Purple wrote:
Because it's rude to tell people they're rude. It comes off as very SJW-ish.
[...] But people who talk like that are going to take a hit to their credibility during serious arguments. If you are fine with this reality, and want to fight the evil SJW that badly, then have fun :P It's too bad I have to take that credibility hit by association.
The thing SJWs don't understand is that whining about others' behavior also reflects very poorly. So, it's kind of hypocritical to complain about somebody else's actions reflecting poorly on you, when your complaints themselves reflect poorly on yourself and others too.

Nobody wants to side with people who are so anal they can't let others say two sentences without a lecture on politically correct word usage, or who act like everybody's mother and tell them to be nice.

What you did isn't that bad, but it still seemed a bit obnoxious. You can complain about people being rude if you want, but please just be aware that will also be seen as rude, and it reflects poorly on us as whiny babies. :P

I prefer to just let people step in it if they want to.

Mr. Purple wrote: Yeah, this is the core misunderstanding. I do not believe this currently. I am genuinely curious as to how you think this is possible. You say you have evidence of this because of adaptive neural networks, which would be a pretty solid basis, so I would love to see what you are referring to. If you could send some stuff tying what you are saying to our neurons, it would be appreciated.
Crudely: If you want to build a robot that learns to eat food pellets, you set up a primary "interest" (a system that evaluates whether the robot is eating food pellets or not), and then you have that deliver positive or negative feedback based on its fulfillment. As the neural network randomly changes, these positive and negative responses provide selective pressure to guide learning, but they all come from the core evaluation -- that interest the robot has been programmed with to want to eat food pellets.

Does that make sense?
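The food-pellet robot can be sketched in toy form. This is a hypothetical illustration (the environment, the one-parameter controller, and all the numbers are invented): the only thing that defines success is a single hard-coded evaluator -- the programmed "interest" -- and random changes to the controller are kept only when that evaluator reports improvement.

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def pellets_eaten(gain):
    """The programmed "interest": simulate chasing pellets at several
    positions and count how many the robot actually reaches."""
    eaten = 0
    for pellet in range(-5, 6):
        position = 0.0
        for _ in range(10):
            # The whole controller is one gain on the position error.
            position += gain * (pellet - position)
        if abs(position - pellet) < 0.1:
            eaten += 1
    return eaten

gain = 0.0                   # the robot starts out inert
score = pellets_eaten(gain)  # initial feedback from the interest
for _ in range(200):
    candidate = gain + random.uniform(-0.2, 0.2)  # random "neural" change
    new_score = pellets_eaten(candidate)
    if new_score >= score:   # keep only changes the interest rewards
        gain, score = candidate, new_score
```

Nothing outside `pellets_eaten` defines success; the learning loop only ever consults that one evaluation, which is the sense in which all the positive and negative feedback "comes from the core evaluation".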
Mr. Purple wrote: Proposing a moral framework that subjugates people is forgoing the reward we would get from a system incorporating empathy and fairness from the start
These are unsubstantiated assumptions. Psychopaths can maintain a perfectly functional capitalistic society. The idea that fairness is somehow materially beneficial is very often dogma, and not reason.
Mr. Purple wrote:and unless you keep yourself in total ignorance of the pain you are causing, it will have unavoidable psychological negatives from violating these foundations.
Or a better argument would be that we just have to evolve into psychopaths who aren't bothered about these things. AND, since psychopathy can be influenced by nurture, that would seem to be self correcting.
Those of us who are empathetic have just been damaged by our upbringing and the status quo of current society.
Mr. Purple wrote: It is also accepting a lot of unnecessary risk and conflict into a system almost by definition when you are creating a subset of people with an interest in getting rid of the system.
Not if they are completely subjugated and powerless, like animals, or like most blacks were during slavery in the States. Look into the history of slave rebellions, and you'll see that they were usually led by new slaves (previously warriors from African tribes) with ethnic ties, by educated slaves who were literate and may have held higher-ranking positions, and sometimes even by white abolitionists and free men.

It's totally possible to keep a population completely subjugated. For it to make sense, though, the conflict arising from the subjugation just has to cost less than the benefit it provides to the empowered.
Mr. Purple wrote: It's probably possible that the benefit we get from favoring whites in this scenario is so large that it actually outweighs [the rest] but that's a big hurdle to jump from the start.
We already have this situation to a degree; debt slavery is rampant.
Mr. Purple wrote: Once again you sound like you are saying your conception of interest isn't even tied to our biology. I can't fathom that you would actually argue that. Please define your concept of interest in more detail. You are making it really hard to grasp when you say things like that.
This comes from us being self-authoring beings. Although it's perfectly possible for all interests to be grounded in biology too.

An interest to avoid death. An interest to eat/drink/sleep. An interest to have sex and sometimes to be social. An interest to learn (curiosity).

These are pretty fundamental interests for most intelligent animals.
Each of these, when satisfied or not, provides feedback to the adaptive system, which learns how to optimize them (as much as it can) based on that feedback.
Mr. Purple wrote: Empathy and a need to belong are pretty fundamental. You make it sound like those things are just a cultural construction.
It's a mixture, actually. People can be raised in ways that promote or suppress empathy and social tendencies. It's rare for people to be naturally empathetic; just like language, empathy is largely acquired.
Mr. Purple wrote: Even pretty extreme religion like jihadists that are cutting peoples heads off need to think those victims aren't innocent to actually do this.
Usually, but only because they've had pretty typical upbringings -- those are usually cases of misinformation. Tribal societies don't have the same empathy outside their immediate groups that modern civilizations do. Stories from people who got out of North Korean prison camps (and who were born in them) also reflect a radically different psychology of empathy.
Mr. Purple wrote: What I am proposing actually fits the definition of ethics just fine. " that branch of philosophy dealing with values relating to human conduct,
Note: Philosophy. You were reflecting an ethical framework based only on sentiment. That is, it would be wrong for you to kill somebody, but for a psychopath it could be right because he or she doesn't feel bad about it (and perhaps can't). This is an appeal to nature. If you are naturally inclined to care, then it's wrong, and if you aren't then it's fine.
Mr. Purple wrote: 1. No, I'm just making arguments. We don't have nearly enough information about the human brain to think we could prove something like what is definitely best.
That was kind of a trick question, because as you admit this (there is no philosophical foundation, and it's just an empirical matter), you have to realize that in acquiring that information and evaluating any system in practice, we must appeal to a higher metric of morality by which to evaluate the system.

You can't really go from an IS to an OUGHT like this, without a philosophical foundation for what OUGHT should look like to begin with -- and that is the actual basis of morality.

Compare consequentialism, and rule consequentialism. If rule consequentialism is right, it's only right because consequentialism says it is based on its consequences being better. It all comes back to the underlying premises of morality, and any deductions rely on those fundamentals.
Mr. Purple wrote: 2. What makes you think ethics can't involve or center around selfishness when generating a framework for calling something right or wrong in respect to dealing with others? It seems to fit plainly into the definition of ethics honestly.
I've talked about this at some length before; it doesn't fit whatsoever into moral/ethical framework on a number of levels.

1. Ethics is not simply any arbitrary framework for defining "right" and "wrong" as meaninglessly as any new arbitrary concept of "vlorb" and "fridop". That's completely useless. It has a distinct connotation of a philosophical moral nature; an objective and universal ought (which is the only way it's useful; moral relativism makes the notion meaningless).

2. By linguistic consensus. Ask people if they think ethics, or morality, is more appropriately applied to notions like altruism, or to selfishness, or even to sadism.
Language, and the meanings of words, operate on a certain amount of descriptive consensus.

Calling selfishness the root of morality so you can say morality is selfishness is very much the same as defining "God" as something asinine (like your toaster) that nobody else would agree is "God" just so you can say it exists.

3. By prescriptive function in philosophy. Selfishness operates as a default on the moral spectrum between altruism and sadism. Incidentally, we have three widely recognized words reflecting this spectrum: Morality, Amorality, and Immorality.
You could play opposite day and flip the meaning of morality to mean sadism and immorality to mean altruism (maybe they call it this in some hypothetical hell dimension), but the balance -- the default of behavior -- will always lie in the middle. Selfishness can be nothing but amoral.
Point #2 helps us clarify that altruism is more compatible with the word "morality" than "immorality".

Mr. Purple wrote: Making women give blowjobs to men would be a terrible moral framework to propose because it would be incredibly unstable.
It wouldn't be unstable if they were appropriately subjugated. Such as, if their hands were all cut off at birth so they couldn't use tools well, and they were kept uneducated, or taught only the dogma of the blow job, and led to believe they would die without men's vital semen.

So, this, then, you would approve of as a good moral framework? Assuming we can breed that pesky empathy out of men, or keep them delusional enough to believe this is appropriate so it doesn't bother them.
Mr. Purple wrote: This kind of behavior probably would only last as long as ignorance of its reality lasts.
Ignorance can last a long time, even indefinitely. But surely in a couple hundred years congenital psychopathy could be bred into the population to prevent anybody from ever having hurt feelings if they ever found out.

If you have no evidence this wouldn't work, and it would seem to benefit men more, you must advocate it as the moral thing for men to strive for, right?
Mr. Purple wrote: Once again you are using a very specific definition of the word nature that I can't really find.
"Natural" in itself is ill-defined. But the fallacy (or comparable ones, like the appeal to tradition or to the status quo) applies to any claim that a thing is right simply because it is or exists.

A rock falls, therefore it should fall -- rocks falling is good. Brains seek pleasure, therefore they should seek pleasure -- brains being pleasured is good.
Mr. Purple wrote: So far you have been wrong most of the time about it even being a fallacy, and the other times the fallacy wasn't even core to the argument.
From whence, then, were you trying to substantiate value if not directly and arbitrarily from the fact of the matter as it was?
Mr. Purple wrote: I would prefer it if you would talk to me like a human without all the attempted "Gotchas".
Please try to understand the nature of the fallacy, and see if the logic still applies to your modified explanation.

'We should seek pleasure because we're biologically wired to' -- that is, on the face of it, an appeal to nature (or biology, if you like).
Amending it to say it's OK if the wiring is synthetic doesn't stop the argument from being fallacious in this way. Naming fallacies is not a very formal affair. The point was that your reasoning didn't follow.

You react too quickly to defend the proposition rather than understanding why I criticized it.
Mr. Purple wrote: People say they know god wants them to do things all the time, what is your point here?
It's an empty claim to hidden motives.
Mr. Purple wrote: 1. Do you agree that we can know things without being consciously aware we know them?
2. Do you agree that those things that we subconsciously know will inform our actions?
3. Do you agree that the mother has built in mechanisms(empathy) that make her feel bad if her child were to die?
4. What definition of interest are you using for your argument?
5. Why do we need a biological punishment and reward system if it isn't what is motivating us?
6. Do you think pain is just an unfortunate biological accident\byproduct of your concept of interest?
1. No. "Consciousness" is kind of an absurd term as it is, and the question isn't very meaningful.
2. No. See above.
3. Yes. But it's incidental (a side effect).
4. That I hope I clarified earlier in this post.
5. It's what makes us intelligent (the feedback system, as explained). Interests -- as the drivers -- provide that feedback to inform adaptation.
6. Pain lets us know our tissue is being damaged; it's an alert system. Most people have an interest in not feeling pain, so they adapt to avoid it.

Mr. Purple wrote:It's like you didn't read what i said at all. I framed sense of self and death in terms of suffering and joy.


I read it, but your arguments were not coherent. We're talking about HARD WIRED pleasure. Maximal pleasure, directly into your brain.
If you think brains experiencing pleasure is the ultimate end, nothing else is relevant.

Your loss of sense of self, assuming that provided any suffering, is overridden by the maximal stimulation of pleasure that process provides and the negation of suffering.

Your sense of self is an abstract concept which is just a red herring here. You might as well have said "well, I don't like swallowing pills".
Trying to claim you get infinite suffering from losing your sense of self is absurd. You might as well say you get infinite suffering from swallowing a pill. It's obviously untrue.
We can quantify suffering in terms of neurotransmitters and behavioral choices, and the suffering caused by the thought of death isn't really that great (it's frequently overcome by discomfort; people will beg to be killed when experiencing temporary agony).

In no uncertain terms, your net pleasure will outweigh any minute discomfort you will experience from the idea of losing yourself to the point of making the latter insignificant.
The only two possibilities that would explain you not making that decision are 1: you were being irrational, and let some immediate discomfort negate overwhelming long term potential good, or 2: you actually value other things beyond the hedonistic experience of pleasure for your brain.

In the case of 1, like a child not wanting a vaccine despite it offering much long term benefit, you would probably advocate for people to be hooked up against their wills. Unless you're advocating irrationality as moral too, in which case your whole argument of rational egoism falls apart, and all actions become moral (rational or not) [this is very important to understand, and I brought it up later in the post too].
In the case of 2, Q.E.D.

If you expose a brain to a flood of pleasure when the body pushes a button, it will adapt without fail to push the button until it dies. "Sense of self" is irrelevant here.
The brain, outside the context of existential interests, is just a pleasure optimizing machine; but that is NOT our actual interest. It's just the mechanism by which our interests are realized in the world through intelligence -- it's the tool our interests use to engage with the world effectively.

Mr. Purple wrote:That was the whole point of bringing death and sense of self up in the first place. You didn't put forward much effort in reading if you missed all of it. Go back and re-read it for clarification.


Your appeal to sense of self was a cop out. That's something I can do, because maintaining a sense of self is a legitimate non-hedonistic interest, but it's not something you can appeal to within your framework of only valuing experience of pleasure and pain.

Do you really still think I got that all wrong?

Mr. Purple wrote:You asked me the question about myself, so i am giving you my answer. If my expressed interests don't line up with your theory, that doesn't make them absurd.


The problem is that it doesn't line up with your own theory.

Mr. Purple wrote:I would not associate that being in the example as being me,


It's the same brain. There is no "you", there is only a brain and experiential pleasure and pain -- the firing of certain neurons. You're completely changing your entire framework here to border on spiritual.

Do you not see that you are introducing something beyond experiential pleasure and pain here?

Either you are being irrational in your projection as not preferring that state of pleasure for your brain, or you don't actually believe what you say you do about hedonism of pure pleasure experienced by a brain (which will elect it over all else) being the ultimate good.

Mr. Purple wrote:If i don't exist, how am i, as the person who accepted the offer, supposed to feel that joy?


The brain in question has its pleasure centers activated. That's all this pleasure is. You don't really exist if you reject the value of anything beyond hedonistic pleasure/pain as activated in a brain.

Mr. Purple wrote:As far as it being irrational, i don't see a problem with that. What prevents interests\values from being irrational?


It's a problem in your framework, which rejects the value of interests as the core of a moral framework and substitutes the firing of the pleasure centers of a brain.

In an interest based framework, that's perfectly fine. Interests can be non-rational.

The pure physiology of a brain, though, isn't that complicated. There is no self; it's just a lump of nerves firing in certain ways, connected in certain ways, some of them called pleasurable or unpleasurable -- if that's all you value.


Mr. Purple wrote:There isn't something inherently bad about death, the reason I would never choose the option of death is because i am a being who is biologically set up to feel extremely negative feelings when approaching death.


That's fine, but you would have no idea it's coming, so it's irrelevant.

Mr. Purple wrote:Since for this example, you have made me a being who doesn't feel pain while approaching death, then I would assume that this version of me would happily choose the death option.


You don't lack those feelings if you know it's coming; you don't want to die. You just don't know it's coming.

So, were I to believe that in your life you would experience slightly more pain than pleasure, you agree that I should (morally) kill you now painlessly and without you knowing it was coming?

That is, that we should all go around painlessly euthanizing humans and non-humans alike as a moral duty if we reasonably believe they will experience slightly more pain than pleasure in life? (assuming the consequences of getting caught aren't there).

That is, if it pleased us to do so. If it pleased us more to torture them instead, we should do that -- whatever will maximize our personal pleasure.

Of course, the most moral thing to do would be to put electrodes into our brains, if we were rational. But now it's not even necessary to be rational about it (as you seemed to defend earlier with this "sense of self" stuff)... so your system basically comes down to doing whatever we are inclined to do as moral, rational or not, useful or not. So, all voluntary actions are moral.

The problem here with even analyzing the implications of your claims is that it's so problematic on so many levels that it quickly deteriorates into "all actions are moral actions".

Your system is wrong on every level, because it's inconsistent or useless for making moral claims and evaluations.

Mr. Purple wrote:You honestly don't believe humans get personal pleasure from helping others?


Some do, some don't. I'm saying the interest is non-hedonistic in nature -- it is not an interest in activating the pleasure centers of the brain. Interests use feedback to control the brain.

Mr. Purple wrote:
The core of our disagreement seems to be around the semantics of the word interest. It may be more than that, but we probably can't go much further until this is addressed. It would be really helpful if you spent extra effort in your definition and thoughts around the word. Maybe give me some synonyms for your use of it as well.


I don't know how better to do that than my explanation near the top of this post. Maybe inator can. I think he's probably better at talking to people than I am. :-D

Re: An open invitation to stop your misinformed fad and start making an actual difference in the world.

Posted: Tue Feb 16, 2016 11:23 am
by brimstoneSalad
inator wrote: Right, hedonistic experiences are merely incentives that guide behavior, not the goal of behavior itself.
You might have better luck explaining that notion to Mr. Purple than I, if you're up for it. I've tried explaining it in every way I can think of.
inator wrote: Maybe it's a bit optimistic to say that in time we are constantly progressing towards our idealized self interests (we can also regress, lose sight of old information etc.), but the idealized interests framework does work well in explaining the importance of relevant information in decision making.
In those cases of regression, it may be fair to say we are wronging our past selves' interests.
inator wrote: Though, in terms of interests, delayed gratification does assume that the future self will still adhere to the same preferences as the present self.
This was what I was thinking. It's just an attempt to realize present interests in the future.
inator wrote: There are several ethical implications based around the realization that past and future events are as real as present ones.
Certainly, and perhaps even MWI, although I would be a little more hesitant to consider interests from another "branch" of our reality as relevant; they would have to be specifically interests in the state of other universes, which are unlikely conceptions.

Perhaps the most numerous interests we must consider are those of quantum physicists, who in consideration of our reality (from other realities), are arguably infinite in number. However, since those interests are spread infinitely thin across an infinite number of realities, it probably all cancels out.
inator wrote:Preferences don't disappear from moral radar just because an organism is dead or hasn't been born yet. Those two states are symmetrical.
Sure, but they do if the organism will never be born in this thread of reality. Kind of reminiscent of Epicurus on death.
Epicurus wrote:Death, therefore, the most awful of evils, is nothing to us, seeing that, when we are, death is not come, and, when death is come, we are not.
Whether you abort or not: if you abort, the being never will exist, so it can't have a future interest in existing to be violated; if you don't, it will exist, so you perhaps did the right thing by the interest you just created -- if that is a good.

So, perhaps you do right by creating an interest for another and fulfilling it in the same action.
But we do no wrong in preventing it from existing at all (only the lack of a good action, if the former was a good action).

This is an interesting notion to explore, though, because that's a big "if".
inator wrote:In other words, do we just have a responsibility to make preferers satisfied, or also to make satisfied preferers? (someone related to negative utilitarianism said this, I don't remember who right now). That I don't really know.
Interesting phrasing.

It's a little tricky, because it may rely on how you're looking at good.

In terms of strict altruism, perhaps only the former. In terms of maximizing good in a system (maximizing altruistic actions), the latter too, I would think -- but is maximizing good itself a good? All it's really doing is making the system superficially bigger.

It's worth creating living beings so they can do nice things for each other and maximize the good being done, IF we're trying to maximize good actions in the system. Assuming there is still more good being done than bad.

But is this necessary? Is creating an interest to satisfy good?

To reduce the abstraction of space and time and existence a bit:

If I perform a sales pitch and convince you that you really need a sprocket, then I give you a sprocket, have I done good?
They seem to negate each other, leaving behind only ripples of hedonistic pleasure in their place. If we negated any interest in that pleasure, I would be inclined to call it useless.

There's a bit of a paradox here, and I think it comes down to the subtle distinction of whether you're just doing good, or maximizing the good done -- and this seems to come down to the mathematical difference between a line and its derivative. A derivative of a line is related to the line, but it is not the line. So I would say: no, it is not good to make satisfied preferers. But it IS something else entirely -- some kind of "meta good", amoral in nature, and not directly relatable to good as we know it. It's not bad either, though. I think the distinction would be almost impossible for most people to grasp.
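One way to write the line-versus-derivative analogy out, with symbols invented for the illustration: let g(t) be the rate at which good is being done at time t, and G(t) the total good done so far.

```latex
G(t) = \int_0^t g(\tau)\,d\tau, \qquad G'(t) = g(t)
```

Making preferers satisfied adds directly to G. Making satisfied preferers raises g, the derivative: more good actions per unit time. The derivative is related to G but is not G itself, which is one way of reading the "meta good" distinction.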
inator wrote:Or you could consider his decision to die as a misinformed calculation (relative to his idealized interest), and his preference to not suffer temporarily as no greater than the totality of his future satisfied preferences.
Right.
inator wrote: It's difficult to talk of explicit preferences and unitary decisions without the "executive structure" for humans too.
Sure, but I would say that to the degree they are now part of the corporate amalgam entity, they no longer count as individuals. So whether you sum up the individual wills of its constituents, or the corporation as a whole hive mind, you get the same result. I don't think they get to be counted twice.

Re: An open invitation to stop your misinformed fad and start making an actual difference in the world.

Posted: Tue Feb 16, 2016 7:42 pm
by Mr. Purple
BrimstoneSalad, the way we are arguing is extremely inefficient. You just asked a bunch more questions very similar to the ones you have asked before. I would respond to these using similar arguments that you think you have disproved using a definition of interest I don't agree with. The only place to go from here is for you to convince me your definition is better, but I sense you are getting tired of trying, so this may not be resolved.

If you know of any notable philosophers, public figures, or Wikipedia pages that talk about your view in detail, I would like to be pointed in that direction if you don't feel like explaining anymore.

I didn't know the name for the premise behind my worldview, but I just found Psychological egoism on Wikipedia. I'm probably of the hedonist variety: https://en.wikipedia.org/wiki/Psychological_egoism
It seems like a really strong position to me as a believer in the subconscious. :P

Of course this is just a descriptive claim about interests, but that has been the main point of disagreement for me. It was a mistake to go into my personal philosophies of what the best moral system would be given psychological egoism, unless you agreed to its validity in the first place. It just complicated things.
Crudely: If you want to build a robot that learns to eat food pellets, you set up a primary "interest" (a system that evaluates whether the robot is eating food pellets or not), and then you have that deliver positive or negative feedback based on its fulfillment. As the neural network randomly changes, these positive and negative responses provide selective pressure to guide learning, but they all come from the core evaluation -- that interest the robot has been programmed with to want to eat food pellets.

Does that make sense?
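A minimal sketch of the food-pellet robot described above (illustrative only -- the action names, reward values, and learning rate are my own assumptions, not part of the original example):

```python
import random

random.seed(0)  # make the toy run repeatable

# The programmed-in "interest": a fixed evaluation of behavior that
# emits positive feedback (pleasure) or negative feedback (pain).
def interest(action):
    return 1.0 if action == "eat_pellet" else -1.0

ACTIONS = ["eat_pellet", "wander", "sleep"]

def train(steps=500):
    # Preference weights start flat; the interest's feedback is the
    # selective pressure that shapes them over time.
    weights = {a: 0.0 for a in ACTIONS}
    for _ in range(steps):
        action = random.choice(ACTIONS)            # random exploration
        weights[action] += 0.1 * interest(action)  # reinforce per feedback
    return weights

w = train()
print(max(w, key=w.get))  # → eat_pellet
```

The point of the sketch: the pleasure/pain signal drives the learning, but it all flows from the single core evaluation the robot was built with -- the interest.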
So you define interest to be a system that evaluates whether you are doing something or not? I guess I understand that, but I don't see why you would define that as interest. It just seems like an unintuitive definition and I've never heard it used like that before.

It also seems like you are using positive and negative feedback (joy/suffering) as the actual motivation for the action. If you just had the evaluation system without the positive and negative feedback, the robot wouldn't "want" to do one thing over another, right? I can see how the evaluation system is the robot maker's way of implanting what he wants the robot's interest to be, but it seems like you still need the positive and negative feedback to get the robot to want something (experience interest). That seems similar to defining natural selection itself as our interest rather than a process which shaped what our interests would be.

Re: An open invitation to stop your misinformed fad and start making an actual difference in the world.

Posted: Wed Feb 17, 2016 2:02 am
by brimstoneSalad
Mr. Purple wrote:BrimstoneSalad, the way we are arguing is extremely inefficient. You just asked a bunch more questions very similar to the ones you have asked before. I would respond to these using similar arguments that you think you have disproved using a definition of interest I don't agree with.
I demonstrated how, based on your premises and definitions, your framework degenerates into "All actions are moral actions"; that is, that morality is meaningless based on your framework.

1. The ultimate good is a brain experiencing maximal pleasure stimulation as long as possible.
2. Therefore a rational agent will choose a pleasure pill or electrode (with care for the body to sustain the brain) providing precisely this
3. Therefore the morally right thing for you to do is to choose said pill or electrode, if you value rational choice.
4. You reject the consequence of #3 based on your concept of "sense of self" having value, which is a non-rational concept that has nothing to do with #1
5. Either you don't actually believe #1 is the sole dictate of value (proving my point), or you reject the value of rational behavior
6. You implied the latter, by appealing to music and art etc. but then tried to defend it based on the idea that losing sense of self gives you infinite pain, which is empirically false (as I explained). You demonstrated that your priorities are not based on rational choice based on reality, but irrational perception.
7. If we reject the value of rationality, then ALL choices made must be moral choices, even if they do not have good consequences in fact due to being irrational, because they were made based on the perception of maximizing pleasure rather than rational fact.

The only thing that saves "rational egoism" from this fate of trivialism is the "rational" part, but it also requires you to take the happy pill.
Mr. Purple wrote:The only place to go from here is for you to convince me your definition is better, but I sense you are getting tired of trying, so this may not be resolved.
Inator might be able to explain it in some way that I haven't, but please see above how your arguments and premises break down into a meaningless system. The majority of my post was going based off of your beliefs, and showing how they are irrational, not advocating the definition I presented.

I think you may need to re-read some of the earlier posts.
Mr. Purple wrote:If you know of any notable philosophers, public figures, or Wikipedia pages that talk about your view in detail, I would like to be pointed in that direction if you don't feel like explaining anymore.
It's not my view, it's kind of just common knowledge.
Read pretty much any book on neurology and human thought. We aren't as single-minded and metacognitive as you think we are.

http://www.amazon.com/Thinking-Fast-Slo ... 0374533555

You might like this one too:

http://www.amazon.com/Whos-Charge-Free- ... 0061906115

We are a collection of competing interests, bidding over behavior with the currency of pleasure and pain. The idea of "self" and "consciousness" in the common sense is very misleading.
Mr. Purple wrote:I didn't know the name for the premise behind my worldview, but I just found Psychological egoism on Wikipedia. I'm probably of the hedonist variety: https://en.wikipedia.org/wiki/Psychological_egoism
It seems like a really strong position to me as a believer in the subconscious. :P
In order to talk about "subconscious" you have to start by understanding consciousness, which is really a crunchy and indefinite subject. It's not that we don't understand what the brain is doing at all, it's just that "consciousness" doesn't really mean anything. You're better off asking what "god" is, you'll get a more consistent answer.

I encourage you to read Dennett on this subject.

Here's an article we talked about on the forum that makes a good start:
http://instruct.westvalley.edu/lafave/d ... sness.html

https://theveganatheist.com/forum/viewt ... f=7&t=1203


The arguments for psychological egoism are mostly ad hoc rationalizations of behavior. Just like with justifying any religion, you can create convoluted theories on how everything supposedly reduces to hedonism.

Read the criticism section in the article you linked: https://en.wikipedia.org/wiki/Psycholog ... Criticisms
It's quite short and to the point, so I'll quote it here:
Wikipedia wrote: Explanatory power
Even accepting the theory of universal positivity, it is difficult to explain, for example, the actions of a soldier who sacrifices his life by jumping on a grenade in order to save his comrades. In this case, there is simply no time to experience positivity toward one's actions, although a psychological egoist may argue that the soldier experiences moral positivity in knowing that he is sacrificing his life to ensure the survival of his comrades, or that he is avoiding negativity associated with the thought of all his comrades dying.[26] Psychological egoists argue that although some actions may not clearly cause physical nor social positivity, nor avoid negativity, one's current contemplation or reactionary mental expectation of these is the main factor of the decision. When a dog is first taught to sit, it is given a biscuit. This is repeated until, finally, the dog sits without requiring a biscuit. Psychological egoists could claim that such actions which do not 'directly' result in positivity, or reward, are not dissimilar from the actions of the dog. In this case, the action (sitting on command) will have become a force of habit, and breaking such a habit would result in mental discomfort. This basic theory of conditioning behavior, applied to other seemingly ineffective positive actions, can be used to explain moral responses that are instantaneous and instinctive such as the soldier jumping on the grenade.

Circularity
Psychological egoism has been accused of being circular: "If a person willingly performs an act, that means he derives personal enjoyment from it; therefore, people only perform acts that give them personal enjoyment." In particular, seemingly altruistic acts must be performed because people derive enjoyment from them and are therefore, in reality, egoistic. This statement is circular because its conclusion is identical to its hypothesis: it assumes that people only perform acts that give them personal enjoyment, and concludes that people only perform acts that give them personal enjoyment. This objection was tendered by William Hazlitt[27] and Thomas Macaulay[28] in the 19th century, and has been restated many times since. An earlier version of the same objection was made by Joseph Butler in 1726.

Joel Feinberg, in his 1958 paper "Psychological Egoism", embraces a similar critique by drawing attention to the infinite regress of psychological egoism. He expounds it in the following cross-examination:

"All men desire only satisfaction."
"Satisfaction of what?"
"Satisfaction of their desires."
"Their desires for what?"
"Their desires for satisfaction."
"Satisfaction of what?"
"Their desires."
"For what?"
"For satisfaction"—etc., ad infinitum.[29]

Evolutionary Argument
In their 1998 book, Unto Others, Sober and Wilson detailed an evolutionary argument based on the likelihood for egoism to evolve under the pressures of natural selection.[18] Specifically, they focus on the human behavior of parental care. To set up their argument, they propose two potential psychological mechanisms for this. The hedonistic mechanism is based on a parent's ultimate desire for pleasure or the avoidance of pain and a belief that caring for its offspring will be instrumental to that. The altruistic mechanism is based on an altruistic ultimate desire to care for its offspring.

Sober and Wilson argue that when evaluating the likelihood of a given trait to evolve, three factors must be considered: availability, reliability and energetic efficiency. The genes for a given trait must first be available in the gene pool for selection. The trait must then reliably produce an increase in fitness for the organism. The trait must also operate with energetic efficiency to not limit the fitness of the organism. Sober and Wilson argue that there is neither reason to suppose that an altruistic mechanism should be any less available than a hedonistic one nor reason to suppose that the content of thoughts and desires (hedonistic vs. altruistic) should impact energetic efficiency. As availability and energetic efficiency are taken to be equivalent for both mechanisms, it follows that the more reliable mechanism will then be the more likely mechanism.

For the hedonistic mechanism to produce the behavior of caring for offspring, the parent must believe that the caring behavior will produce pleasure or avoidance of pain for the parent. Sober and Wilson argue that the belief also must be true and constantly reinforced, or it would not be likely enough to persist. If the belief fails then the behavior is not produced. The altruistic mechanism does not rely on belief; therefore, they argue that it would be less likely to fail than the alternative, i.e. more reliable.
I have no interest in playing whack-a-mole with dishonest arguments intended to maintain the original premise at any cost. I know that's not your intent, but this is not a subject I can really take much more time arguing about, since there has been plenty written on it already, and there's nothing to substantiate your view over other models.

FYI: You can also form a mathematical model that preserves geocentrism and rejects the solar model -- it's just very convoluted. Do you insist, then, that the Earth is the center of the universe because it's possible to make a model that is consistent with that premise?

At the very most, if you ignore the evidence of the complexity of cognition (sticking with the black box) you could argue that your egoistic premise of cognition is "possible" given our incomplete knowledge (in the sense that it's superficially "possible" for the Earth to be at the center of the universe and all things revolve around it, if we're wrong about pretty much everything in science), but not that it is true, or that other concepts are false or less likely.
Given that I have demonstrated how your premises plus your moral assertion either require you to take the happy pill, or deteriorate into the assertion that "all actions are moral" by eschewing rational prerequisites, I can't understand why you choose to maintain this assertion in light of models with much better outcomes.
Mr. Purple wrote:It was a mistake to go into my personal philosophies of what the best moral system would be given psychological egoism, unless you agreed to its validity in the first place. It just complicated things.
No, not at all. Actually, that was good.
The problem is you haven't responded to my debunking of it. I wasn't premising my debunking of your claims about the best system upon my definitions, but on yours. I showed you how egoism doesn't result necessarily in any society sane and empathetic people today would see as good. The good of egoism in systems is a Randian Objectivist fantasy. It can form a very strong and efficient economy, sure, and benefit those who are enfranchised greatly, but overall it makes plenty of others miserable and doesn't resemble Utilitarian outcomes of overall well being for the greatest number.

I needed to show you why you were wrong there to encourage you to second guess your premises based on their abhorrent conclusions. If I substituted your premises for mine, that would be useless.
Mr. Purple wrote:So you define interest to be a system that evaluates whether you are doing something or not? I guess i understand that, but i don't see why you would define that as interest. It just seems like an unintuitive definition and i've never heard it used like that before.
Not exactly, but close. Interests are more abstract; they aren't physical things. They are the reasons you do things, which can be deduced from behavior -- not merely the proximal or immediate cause, but the reason, which is something immaterial. In practice, the structures in our brains that create interests act by creating pleasure or pain as motivators to prod the executive functions of the brain to obey them.

Inator said something like:
"Hedonistic experiences are merely incentives that guide behavior, not the goal of behavior itself."

Which I thought was a pretty good way of explaining it.

Like the analogy I gave about a car. The driver in the car is the interest. The hedonistic pleasure is the gas pedal (and the pain, perhaps, the brakes).
A car isn't intelligent, so the analogy breaks down there, but the neural network example is harder for people to follow if they aren't familiar with them.
Mr. Purple wrote:It also seems like you are using positive and negative feedback(joy\suffering) as the actual motivation for the action.
What is motivation? The feedback is the proximal cause of behavior, but not its own source, or the cause of that cause.

To make another analogy, the brain is an auction house, the interests are the bidders, and pleasure is the currency with which they bid to win the auction lot that determines behavior.
The rules of the auction are to sell to the highest bidders on any lot. Money is motivating the auctioneer, and yet interest in the lot is motivating the bidders who are providing that money.
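The auction analogy can be rendered as a toy model (the interest names and bid values here are purely illustrative assumptions):

```python
# Each "interest" is a bidder; its bid is the intensity of pleasure or
# pain it can bring to bear; the auctioneer (the executive function)
# simply awards the behavior lot to the highest bidder.
def choose_behavior(bids):
    return max(bids, key=bids.get)

bids = {
    "eat": 3.0,    # hunger interest, bidding anticipated pleasure
    "sleep": 1.5,  # fatigue interest
    "flee": 7.0,   # fear interest, bidding anticipated pain avoidance
}

print(choose_behavior(bids))  # → flee
```

Note that the auctioneer never cares about the lots themselves, only the currency -- which is the distinction between the feedback and the interest.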
Mr. Purple wrote:If you just had the evaluation system without the positive and negative feedback, the robot wouldn't "want" to do one thing over another right?
The organism would be without action. Like an auction where all of the bidders are broke (or the credit card machines are down), or a car without gas, or with a broken fuel injector. However, the driver, or the bidders, may still be there.

The system works together, and requires all parts to be functional to yield intelligent behavior we can observe. Without that behavior, there may still be wants there, we are just without the ability to measure or observe them -- kind of like a book written in a forgotten language; it's no longer information if nobody can comprehend it or know what it means.
Mr. Purple wrote:I can see how the evaluation system is the robot maker's way of implanting what he wants the robots interest to be,
Right.
Mr. Purple wrote:but it seems like you still need the positive and negative feedback to get the robot to want something(experience interest).
The pleasure and pain generated by realizing or failing (or potentially/conceptually doing so) our interests is a decent model of how we interact with them, sure (not necessarily accurate, but a fair model). If you get no feedback on whether you're answering math questions right or wrong, you'll have no notion of what math is supposed to be.
Mr. Purple wrote:That seems similar to defining natural selection itself as our interest rather then a process which shaped what our interests would be.
Not quite, because interests don't have to be rational. The effect will often be similar, but we are also very much memetic beings, and self-authoring. We are robots who can reprogram ourselves to want different things, sometimes based on mistakes, but once wanted, an irrational interest is its own thing and no less valid than one generated by well-executed natural selection.

Re: An open invitation to stop your misinformed fad and start making an actual difference in the world.

Posted: Wed Feb 17, 2016 5:36 am
by Mr. Purple
I want to simplify the argument because it was getting out of hand and we were going in circles. I don't want you to think I'm trying to bail, but something isn't working.

I went back and looked through what I've said, and I literally could copy and paste most of it. You just continue on without taking what I said previously into consideration.
1. The ultimate good is a brain experiencing maximal pleasure stimulation as long as possible.
2. Therefore a rational agent will choose a pleasure pill or electrode (with care for the body to sustain the brain) providing precisely this
3. Therefore the morally right thing for you to do is to choose said pill or electrode, if you value rational choice.
4. You reject the consequence of #3 based on your concept of "sense of self" having value, which is a non-rational concept that has nothing to do with #1
5. Either you don't actually believe #1 is the sole dictate of value (proving my point), or you reject the value of rational behavior
6. You implied the latter, by appealing to music and art etc. but then tried to defend it based on the idea that losing sense of self gives you infinite pain, which is empirically false (as I explained). You demonstrated that your priorities are not based on rational choice based on reality, but irrational perception.
7. If we reject the value of rationality, then ALL choices made must be moral choices, even if they do not have good consequences in fact due to being irrational, because they were made based on the perception of maximizing pleasure rather than rational fact.
1. Yeah, something like this probably. Good would be maximal pleasure, and minimal suffering. That is probably like one of the premises I would use for constructing a moral system, so it's only objective for those who accept the premise, but what I do think is strictly objective is psychological egoism.
2. If by rational you mean non-human (for the most part), then sure. An alien species that doesn't get repulsed by the idea of losing its sense of self would, I imagine, take this offer, as I have said before. You say the probe gets rid of that fear, but it wouldn't change them before they take the offer, obviously, so I don't know how that helps. For a human it would be rational to refuse the offer, given that the human would view it as a source of suffering to head in that direction.
3. No, see 2.
4. Sense of self doesn't have value in and of itself, it's the fact that losing your sense of self is a terrible feeling to the particular kind of creatures we are. I already explained this. Same with death. Death isn't magically bad in and of itself, it's bad because natural selection programmed us to feel terrible approaching death. If you change this part of our biology for the examples, then the probe or death would be fine. I have said this before.
6. I think my better explanation for this is how I answered in #2. It doesn't need to be infinite, so that's probably a suboptimal way to explain it. The infinite pain part is just my attempt to describe how the negative experience feels to me personally. It just needs to feel bad enough for someone to not want to take the deal due to biological punishments. If pursuing pleasure is what we are calling rational, then I guess art and music probably are rational, now that I think about it.
The only thing that saves "rational egoism" from this fate of trivialism is the "rational" part, but it also requires you to take the happy pill.
You seem to only talk about a straw-man version of egoism I personally wouldn't advocate for. I explained why I wouldn't take the happy pill multiple times, and I explained the contexts where it would make sense to take it.
It's not my view, it's kind of just common knowledge.
Not common enough for me to find anything online so far. Do those books explicitly lay out the connection you are making between interest and neurons? That's what I want.
In order to talk about "subconscious" you have to start by understanding consciousness, which is really a crunchy and indefinite subject. It's not that we don't understand what the brain is doing at all, it's just that "consciousness" doesn't really mean anything.
I don't agree with this. At the very least we know it exists. I think we know quite a bit on top of that, though, and I could probably dig up a few studies showing people's subconscious at work if you want me to. I'm using this definition: "of or concerning the part of the mind of which one is not fully aware but which influences one's actions and feelings." Unless you are using another strange definition, like with interest, I don't see how you find this useless to factor into equations involving interest. We don't need mathematical proofs here.
Read the criticism section in the article you linked: https://en.wikipedia.org/wiki/Psycholog ... Criticisms
It's quite short and to the point, so I'll quote it here:
Yeah, I read them when I came across the page. The egoist's responses are much more convincing than the criticisms, in my opinion. The criticisms are just very flat, simplified versions of egoism, and they do similar things to what I have been criticizing you for doing. The soldier just has to have a biology (probably trained) that gives him more pleasure in moving toward saving his comrades than it gives him pain from knowing he is moving toward death. It doesn't seem that complex to me. Reflex or habit could probably explain it too, and they talk about that. I'm no expert in this moral stuff though, so maybe I'm missing something.

Re: An open invitation to stop your misinformed fad and start making an actual difference in the world.

Posted: Wed Feb 17, 2016 9:46 am
by brimstoneSalad
Mr. Purple wrote: I went back and looked through what i've said and i literally could copy and paste most of it. You just continue on without taking what i said previously into consideration.
This is very frustrating, and you're insulting me here. I know you think I have ignored or misunderstood something you have said, but it's not true.

I know what you're arguing, because I used to think this was true.

I don't like to talk about myself and it should be unnecessary to say this, but I want you to take a moment and consider the possibility that I have misunderstood nothing that you have said, but that I am trying to explain how you are mistaken and that you are not reading carefully enough.
You don't need to try to explain yourself more clearly here, I understand what you're saying.

I am fully aware of what you said, which is why I broke it down for you, and I feel like you have manipulated what I said. This may be due to your ignorance of philosophy. "Rational agent" has a very specific meaning.

https://en.wikipedia.org/wiki/Rational_agent
Wikipedia wrote: In economics, game theory, decision theory, and artificial intelligence, a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.

Rational agents are also studied in the fields of cognitive science, ethics, and philosophy, including the philosophy of practical reason.
You may not have the prerequisite knowledge to engage in discussions like these, which may be the source of frustration here. Rational has a specific meaning here too.

Examine that definition, and what I said.
Look at #1 and #2

"always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions."

As you said:
Mr. Purple wrote: 1. Yeah, something like this probably. Good would be maximal pleasure, and minimal suffering.
Based on the first premise, this is optimal by definition if the being is after this definition of "good". This, and ONLY this, is optimization. Any side-track for irrational reasons (based on bad reasoning, or bias) is NOT being rational.

Keep this in mind, because this is very important to understand.

If you amended the first premise to be:
"Good is maximizing pleasure for a brain and minimizing suffering, while maintaining a subjective sense of self"
THEN we would be talking about a very different kind of optimization.

A different premise yields different results.
Mr. Purple wrote: You say the probe gets rid of that fear, but it wouldn't change them before they take the offer obviously, so i don't know how that helps. For a human it would be rational to refuse the offer given that the human would view it as a source of suffering to head in that direction.
You completely misunderstand the definition of rational.
No, it would NOT be rational to refuse it, because the fact of the matter is that -- and the human would know this -- the suffering is overridden by the pleasure. Given the truth of the first premise, the rational agent will choose the pleasure.

The only reason you view it as suffering is because you are being irrational, and rejecting the empirical fact of the matter in favor of a distorted world view influenced by irrational personal bias.

Your distorted world view provides you with this picture:
Take it: A moment of "infinite" feeling terror and suffering, followed by nothing because the sense of self is lost and you are dead.
Reject it: Life as usual.

This is not an accurate view of the world. YOU KNOW THIS, and yet you continue to hold onto this irrational perception of the situation.

If you actually believed the first premise, and had a rational approach to this scenario, your corrected world view would be:
Take it: A moment of terror which creates finite and sub-maximal suffering in the brain, followed by a lifetime of maximal pleasure which is in excess of the moment of suffering. This results in net pleasure gain.
Reject it: Life as usual.
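The corrected comparison is just an expected-value calculation. A rough sketch with made-up magnitudes (the intensities and durations are illustrative assumptions, not measurements of anything):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def net_pleasure(pain_intensity, pain_seconds, pleasure_intensity, pleasure_seconds):
    # Net hedonic total under premise #1: pleasure accrued minus pain endured.
    return pleasure_intensity * pleasure_seconds - pain_intensity * pain_seconds

# Take it: one second of intense (but finite, sub-maximal) terror,
# then decades of maximal pleasure.
take_it = net_pleasure(0.9, 1, 1.0, 50 * SECONDS_PER_YEAR)

# Reject it: life as usual, normalized to zero as a baseline.
reject_it = 0.0

print(take_it > reject_it)  # → True
```

Given the first premise, a rational agent compares these totals; the vividness of the imagined terror never enters the calculation.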

There are two possible reasons for you to reject the offer:

1. You do not actually subscribe to the original premise, but rather some variant like
"Good is maximizing pleasure for a brain and minimizing suffering, while maintaining a subjective sense of self"
In which case, fine, but you must admit that there is something considered valuable in this premise that is NOT pure hedonistic experience of pleasure vs. pain in the brain.

2. Despite knowledge to the contrary, you maintain an irrational and inaccurate view of the consequences of taking the proposition. That is, you have made an irrational choice, to your detriment, and against the greater "good" for you based on the original premise.

This should not be difficult to understand. You're still trying to twist things around to make it look like I don't understand what you're trying to say. Stop doing that, and make the assumption that I know what you're trying to say, but that you're fundamentally mistaken in your reasoning and that what you believe isn't fully coherent.
Mr. Purple wrote: Sense of self doesn't have value in and of itself, it's the fact that losing your sense of self is a terrible feeling to the particular kind of creatures we are.
If you assert this, then you are being irrational in your decision to reject the deal. See above where I have clearly acknowledged this supposed suffering (as I also did in the other posts, which you seem to have ignored for some reason).
If you measure the net suffering, and net pleasure, provided by the deal (with the pleasure immediately following the suffering and lasting a lifetime), you will find the pleasure far outweighs the suffering.

Or do we need to modify the premise again?

"Good is the maximization of pleasure, as long as it's not necessary to endure any suffering whatsoever to obtain said pleasure even if outweighed by the pleasure itself"

That doesn't mesh with any observations of human behavior. You endure suffering every day to obtain pleasure afterward which is in excess of the suffering you endured to get it.

You might not walk across a moat of shattered glass to reach a cupcake, but you'd definitely walk across a floor with scattered corn chips to reach a billion dollars.

The difference in relative magnitude of suffering and pleasure in this maximal pleasure scenario exceeds the latter of those two cases above. Feeling momentary discomfort over your sense of self is trivial (like stepping on corn chips) compared to the maximal pleasure your brain will receive for the rest of your life (the billion dollars).

The key here is comparison, and relative magnitude.
Mr. Purple wrote: It doesn't need to be infinite, so that's probably a suboptimal way to explain it. Infinite pain part is just my attempt to describe how the negative experience feels to me personally.
It does NOT feel that way to you; for one thing, you can't conceive of infinity. You may imagine it will feel that way, but that's why you are wrong.

At the very most, it may be maximally unpleasant (for one second, until the pleasure kicks in), but even that is not true as I clearly explained and you ignored.
Can you imagine a torture profound enough that you would beg to be killed? Human behavior demonstrates this level of pain isn't really even that great: torture can be no worse than maximally unpleasant, and since the idea of being killed (or losing one's sense of self) is empirically less painful than torture, you aren't even talking about maximal pain.

We're comparing a second of very finite, measurable, pain, which is below maximal pain, with a lifetime of MAXIMAL pleasure and no pain at all following it.

In order to reject that deal, either you don't actually accept the first premise we discussed in reality (and it needs modification), or you're being overtly irrational -- in that case, like any Christian, you are choosing a world view that is not real over one that is real due to some kind of personal bias or profound failure at reasoning (the imaginary "infinite" suffering of losing your sense of self being no different from the imagined infinite suffering in eternal hell fire: both are delusions).

Which is it?

A. Do you reject the original premise, and want to amend it to extend to interests beyond pure pleasure and suffering in the brain?
If so, this is good, and we're making some progress.

B. Do you admit that in response to this deal, you were being irrational by rejecting it?
___B-1. If so, do you want to change your answer to say that of course you will accept the deal and sacrifice your sense of self for maximal pleasure?
___B-2. Or will you admit that since maximizing pleasure is good, and you are failing at that, your reaction to that deal would be evil?
___B-3. Or do you maintain that it's OK for a person to make irrational choices if they want, and there's no moral imperative to rationally maximize pleasure based on reality if somebody doesn't want to or favors an irrational world view that leads to a suboptimal choice?

Mr. Purple wrote: but what I do think is strictly objective is psychological egoism.
As to psychological egoism, you seem to have completely ignored everything I explained about it being a model. It's getting really irritating.
This is very important to understand, and I explained it in the last post.
Mr. Purple wrote: You seem to only talk about a straw man version of egoism I personally wouldn't advocate for. I explained why I wouldn't take the happy pill multiple times, and I explained the contexts where it would make sense to take it.
You accusing me of strawmanning your position is extremely irritating and insulting. I understand your position better than you do, as is obvious from this conversation (and will be in retrospect if you come to understand your position). I have not misrepresented your position, but explained why your arguments are flawed in detail, and asked questions for you to respond to.

You seem dead set on the narrative of me misunderstanding your position and misrepresenting you for some reason, meanwhile this entire time you've been ignoring my arguments rather than thinking carefully about them because you prefer to just assume I don't understand you.

I have explicitly parroted your premises, stated clearly every possible interpretation based on what you've said. I've explained the weighing of pleasure and pain in that scenario multiple times, in multiple ways, and yet you ignore everything I write and keep on like a broken record, now leveling accusations against me.
Mr. Purple wrote: Not common enough for me to find anything online so far. Do those books explicitly lay out the connection you are making between interest and neurons? That's what I want.
Read the books, and read the Dennett article. And please read my post more carefully and pay attention to where I explained that you're talking about a MODEL.

Go to The Flat Earth Society, and see how useful models are.

You might as well be a creationist asking for yet more "transitional" fossils after I've explained that's not how the fossil record works, but here's a bunch anyway. You're asking the wrong questions, because you don't understand the topic at hand. I can't point you to an "interest neuron", because that's not how the mind works.

The mind is more of a conceptual framework, like computer software; there's not necessarily a single chip that does this or that. And it's certainly not human readable yet.
Mr. Purple wrote: I don't agree with this. At the very least we know it exists.
No.... :roll:
Mr. Purple wrote: I think we know quite a bit on top of that though, and I can probably dig up a few studies showing people's subconscious at work if you want me to.
The narrative you call "consciousness", and subjective experience, is an illusion. Except through MODELING, we really aren't aware of anything going on in our brains or environments. And nobody's model is very accurate at all.
Mr. Purple wrote: I'm using this definition "of or concerning the part of the mind of which one is not fully aware but which influences one's actions and feelings."
In other words, all of it. :roll:
Did you miss the part where I said consciousness itself is ill defined?

It doesn't matter. We're talking about models. And it went completely over your head why I would reference Christians saying everything is based on god -- it's a bald and unfalsifiable assertion for most purposes. Both are. It's meaningless and irrelevant to this discussion.
Mr. Purple wrote: Unless you are using another strange definition like with interest, I don't see how you find this useless to factor into equations involving interest. We don't need mathematical proofs here.
I'm not using a strange definition for interest. It's a pretty typical one. Something we're interested in and want/don't want. Pretty easy.

You would say we only want to realize our interests because doing so gives us pleasure, I say our interests are the things we want to realize, and in order to do so our brains are provided with negative and positive feedback to motivate action to these ends.

MODELS.

Consciousness, on the other hand and as I have explained repeatedly now, is NOT USEFUL because it is very poorly and crunchily defined. Read Dennett. Read ANY of those references I gave you.

Mr. Purple wrote: Yeah, I read them when I came across the page. The egoist's responses are much more convincing than the criticisms, in my opinion.
Of course ad hoc rationalizations for the position you already hold are more convincing to you.
And you still missed my point about this being a MODEL. I understand your ad hoc rationalizations, and I'm not interested in them:
Mr. Purple wrote: The criticisms are just very flat, simplified versions of egoism, and they do similar things to what I have been criticizing you for doing. The soldier just has to have a biology (probably trained) that gives him more pleasure in moving toward saving his comrades than it gives him pain from knowing he is moving toward death.
And the Earth is flat, the sun just has to be a glowing orb that circles the face of the earth near the surface so it just illuminates part of it!
And the flat plane of the Earth just has to be accelerating through space to create gravity!
And and and...

You can bullshit enough with ad hoc explanations to force any model to fit with observations of reality. That's not being questioned. As I said before, you can form a mathematical framework to put the Earth at the center of the cosmos and show how and why the sun and all other planets orbit the Earth. That doesn't make it right or useful for anything.
Mr. Purple wrote: It doesn't seem that complex to me.
No, it isn't; a child can do it. Moronic theists have done it for centuries, until Science came along and proposed something a little different.

Your commitment to this single model of the mind is a particular dogma. Until you can understand that it's just a model of something that is a "black box" -- and a model with other alternatives that are equally useful (if not more so) at explaining behavior, and MORE useful to moral theory -- you will remain trapped in that dogma because there's little I can show you to falsify it that you can't just rationalize away.

Re: An open invitation to stop your misinformed fad and start making an actual difference in the world.

Posted: Wed Feb 17, 2016 8:12 pm
by Mr. Purple
I like where this is going. If you are right, I will have good information to draw from when I change my mind. I enjoyed the Dennett article.
I don't like to talk about myself and it should be unnecessary to say this, but I want you to take a moment and consider the possibility that I have misunderstood nothing that you have said, but that I am trying to explain how you are mistaken and that you are not reading carefully enough.
You don't need to try to explain yourself more clearly here, I understand what you're saying.
OK, I'll believe you. I'll stop saying you misunderstand my previous points. Though I doubt you would be willing to do the same by request.
You may not have the prerequisite knowledge to engage in discussions like these, which may be the source of frustration here.
It may be useful to gain the ability to communicate your ideas to people who don't know as much. It's rare that I can't explain a concept in a simplified way that makes it intuitive for people, if I know the subject well enough. The times when I can't are usually the times I don't know it as well as I thought I did (I'm not saying this is true for you). You have been pretty patient with me, so I appreciate that.
Interests are more abstract; they aren't physical things. They are the reasons you do things, which can be deduced from behavior -- not merely the proximal or immediate cause, but the reason, which is something immaterial.
Based on this definition, would gravity be the interest of a rock? Or are the laws of physics too physical? Natural selection seems like it would fit the definition too. You said natural selection doesn't fit because we are memetic beings, but I don't see why that stops natural selection from being the reason for things we do.
Right, hedonistic experiences are merely incentives that guide behavior, not the goal of behavior itself.
Those "mere incentives that guide behavior" are the things that I value. I don't care about what the "goal of behavior" is. Once you talk about interests on this abstract level outside of a being's actual experience, it loses all moral relevance to me. I don't know what it's like to have interest in the way you are describing it.

You're saying there is a mechanism in the brain that is the true reason you value something, though your experiences are creating the illusion that suffering/joy is the reason you value it. That's as useless to me as saying there is a mechanism in your brain that factually shows your favorite color is blue, even though your brain is creating an illusion that makes you think red is your favorite color.

When talking about conscious experience, it seems like the definitions should be based largely on the experience itself. Me feeling like red is my favorite color should make it my favorite color, by the definition inherent in the word favorite. The context in which I've always seen interest used is more like favorite, describing a subjective experience. If it doesn't do that, then interest has no moral value for me.
No, it would NOT be rational to refuse it, because the fact of the matter is that -- and the human would know this -- the suffering is overridden by the pleasure. Given the truth of the first premise, the rational agent will choose the pleasure
Alright, I see what you mean by rational actor. If we are assuming a "rational actor" with adequate knowledge and assurance of the outcome, he would literally have no choice but to get the probe. I don't think people can choose what they don't believe will give them the best outcome of pleasure and suffering.
And you still missed my point about this being a MODEL. I understand your ad hoc rationalizations, and I'm not interested in them
So are you saying nobody is justified in forming beliefs about consciousness? It's going to be pretty rough for you to make any moral arguments without believing you can say anything about consciousness. Maybe this is why your interest concept is so foreign.

Psychological egoism fits what I experience perfectly and seems to make accurate predictions of human behavior from what I can tell. This seems like a decent place to start for talking about morality. How much weight is it fair to give to your own conscious experiences when forming an opinion about conscious experiences? It's not like I'm using my conscious experiences to make proclamations about the laws of physics. As far as I can tell, when dealing with human consciousness there is only so far we can go before we need to make some assumptions based on our own experiences. I think most people I would be proposing these ideas to would understand this.
Just like with justifying any religion, you can create convoluted theories on how everything supposedly reduces to hedonism.
Saying suffering/joy is the reason humans choose one thing over another is pretty simple, it's what I experience, and it seems to explain human behavior accurately. It doesn't seem convoluted at all to me.

I can't point you to an "interest neuron", because that's not how the mind works.
That's not what I'm looking for... I just want the same kinds of things you are saying from another source, and if there is some sort of official name for this kind of theory of interest, it would help when searching the web.

Re: An open invitation to stop your misinformed fad and start making an actual difference in the world.

Posted: Thu Feb 18, 2016 2:56 am
by brimstoneSalad
Mr. Purple wrote: OK, I'll believe you. I'll stop saying you misunderstand my previous points. Though I doubt you would be willing to do the same by request.
Thanks.
Mr. Purple wrote: Based on this definition, would gravity be the interest of a rock? Or is the laws of physics too physical?
They're not too physical, it's just that they're in no way observationally goal-oriented. A rock is just as 'happy' obeying gravity by being pressed into the ground as by falling. We have no way to distinguish non-interests from interests. Does a rock want to be broken into pieces, or remain whole? Do rocks like to be polished?

Only with intelligent organisms is there a means by which to determine that: their true learning and fully adaptive behavior.

If you have an organism that merely moves toward food, you don't know if it wants the food, or if it just reflexively moves toward that food. You try to tell the difference by mixing things up. Maybe you put a wall or other tricky obstacle between it and the food that requires it to move away from the food before it can advance toward it -- if it acts like a mindless video game monster and just runs into the wall rather than reasoning its way around the wall, you can then reasonably conclude that it just moves toward the food by reflex rather than any kind of legitimate want. If, however, it can learn to overcome your tricky obstacle, that's an indication of intelligence (this is not proof, but makes it more likely).

While things like plants, jellyfish, oysters, and many worms (not all worms) may move toward light, food, etc., they are of the former kind of being that doesn't really learn; it just acts on reflex or "pre-programmed" rules. They don't actually have interests or minds.

Sometimes programming (in biology, or in games) is complex enough to fool you. Some video games get around the lack of intelligence of monsters by having them get sensitized to lack of progress. For example, as long as they are approaching the target, they continue; but as soon as the approach slows or stops, they become "agitated" and start moving more randomly in tangential directions, even away from the food. This random movement can be chaotic enough to mindlessly bumble through an obstacle and un-stick them. This isn't intelligence, but a little trick from a complex reflex.

We see behavior kind of like this in plants too, and more so in simple animals like small insects, which become increasingly "frustrated" the longer they're held up and start behaving erratically. The insects may or may not be intelligent -- we'd have to use more tests -- but this isn't an indication of it in itself, since we know how this behavior can be easily created without an iota of true learning or intelligence. Most cockroach behavior, for example, has been modeled quite convincingly without any intelligence.
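The "agitation" trick above can be reproduced in a few lines with no learning anywhere. A minimal sketch (hypothetical names, a toy 2-D grid): the agent steps greedily toward the food while it makes progress, and falls back to random movement once it has been blocked a few times:

```python
import random

def reflex_step(pos, target, blocked, stuck):
    """One tick of a mindless 'approach the food' reflex on a 2-D grid.

    While progress is being made, step greedily toward the target; after
    a few blocked ticks, become 'agitated' and step randomly.  The
    unsticking is a canned rule, not learning.
    """
    x, y = pos
    tx, ty = target
    if stuck < 3:
        # plain reflex: close the larger coordinate gap first
        if abs(tx - x) >= abs(ty - y):
            step = ((1 if tx > x else -1), 0)
        else:
            step = (0, (1 if ty > y else -1))
    else:
        # 'agitation': random movement, possibly away from the food
        step = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    nxt = (x + step[0], y + step[1])
    if nxt in blocked:
        return pos, stuck + 1   # bumped the wall: agitation builds
    return nxt, 0               # moved: settle back to the plain reflex

# A wall at x == 3 (open above y == 3): the agent can eventually bumble
# around it by sheer chance, looking deceptively purposeful.
wall = {(3, y) for y in range(-1, 4)}
pos, stuck = (0, 0), 0
for _ in range(5000):
    if pos == (6, 0):
        break
    pos, stuck = reflex_step(pos, (6, 0), wall, stuck)
```

Whether the agent escapes on any given run is down to chance; the point is that the apparent "frustration" and problem-solving require no mind at all, which is why passing an obstacle test once isn't proof of intelligence.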
Mr. Purple wrote:Natural selection seems like it would fit the definition too. You said natural selection doesn't fit because we are memetic beings, but I don't see why that stops natural selection from being the reason for things we do.
Some interests are hard wired into us by evolution, and those are just as legitimate. In non-human animals, those are usually more prevalent. As long as there's intelligence there to support a mind which comprehends these interests and uses true learning to realize them, it's legit no matter where the interest came from.

You might find "Dennett's creatures" interesting. It covers the evolution of cognition from none to primitive to more advanced.
Mr. Purple wrote:Those "mere incentives that guide behavior" are the things that I value. I don't care about what the "Goal of behavior" is.
Why do you choose to value these things rather than interests?
In doing so, you're taking only part of the mind, and you're saying "this is what's valuable", and rejecting the rest without cause. You're basically trying to short circuit the normal processes that make a mind (that incentive system is an internal process necessary for intelligence, but it's normally controlled by other and essential parts of the mind you have cut out).
You're also denigrating other people's experiences, and saying only your experience of thought is meaningful.

Perhaps you only have an interest in experiencing pleasure, and given all relevant information you accept the "happy pill", but such is not the case for others as demonstrated by behavior.

I'm a rational agent, and because my goal is not maximizing hedonistic pleasure in my brain, I can reject the "happy pill". Pretty simple.
My joy at rejecting the happy pill, or my nominal discomfort at accepting it, would mean nothing next to the maximal ecstasy it would provide my brain, but this is irrelevant because maximizing said pleasure stimulation isn't the goal of my life.

On what basis do you completely reject my account of what I want, and substitute your own?
Mr. Purple wrote:That's as useless to me as saying there is a mechanism in your brain that factually shows your favorite color is blue, even though your brain is creating an illusion that makes you think red is your favorite color.
I admit you may just value pleasure -- it's entirely possible to have pleasure as a sole interest. The interest framework allows for and encapsulates the hedonistic framework within it, but only for those beings that are purely hedonistic.

However, I (and many others) have other values we place above hedonism. Whatever you may experience, that's what you're denying. You're telling me there's some secret "subconscious" mechanism in my brain that factually shows I only value hedonism, and my brain is creating the illusion that I have other values aside from that?

The thought experiment of the happy pill should be proof enough -- by my behavior in response to that situation -- that I do not value only pleasure.
I am rational, and I KNOW that the happy pill will give me more pleasure and less suffering, and I reject it because hedonism does not define the values I hold. Sure I prefer pleasure to suffering, but that is not the purpose of my life as I give it.

If you short circuit my brain by implanting an electrode in it or pumping me full of drugs, obviously my body will do whatever you want, but that is not the will of my full mind which you have broken in doing so. You have only isolated a small part of my mind (blocking out the influence of the rest of my mind upon my behavior), and called it the whole (or the only important part) when in fact it is meaningless without the rest.
Mr. Purple wrote:When talking about conscious experience, it seems like the definitions should be based largely on the experience itself.
Are you a solipsist? Why do you reject the experiences of others?
My experience, and as clearly evidenced by my behavior (which is more important than anecdotes), is that other interests take priority over hedonism. I can clearly feel these things as more important to me when I consider the options, full well knowing it means less pleasure and more suffering.
Mr. Purple wrote:Once you talk about interests on this abstract level outside of a being's actual experience, it loses all moral relevance to me.
We feel our interests pretty clearly, and they are also evident from our actions and choices in thought experiments like the happy pill.
But to the contrary of your claims on moral relevance, without those interests morality has no meaning -- as I have shown in the previous posts and will show again in a moment.
Mr. Purple wrote: Alright, I see what you mean by rational actor. If we are assuming a "rational actor" with adequate knowledge and assurance of the outcome, he would literally have no choice but to get the probe. I don't think people can choose what they don't believe will give them the best outcome of pleasure and suffering.
So, you have been fully informed, and you rationally understand this happy pill will result in maximal pleasure. You now accept this proposition?

You get plugged in, lose all sense of self, and become a vegetable without thought or action -- just a fleshy mass with meaningless chemical reactions powering away in your pleasure centers.

If this is what you truly want -- if this is what you tell me your ideal interest in life is -- then I will not doubt you (as you doubt me).

This is not what I want. I have other values. I have an interest in helping others. I have an interest in thinking and living. I even have an interest in feeling pain sometimes, because it's essential to my existence as a sentient being -- just please not too much of it if you don't mind.

Why do you completely reject the concept that somebody could value something beyond meaningless chemical reactions in one part of the brain rather than another?
Outside the context of intelligent behavior creating the feedback, all you've done is short-circuit the system that was once an intelligent being and turn it into a mindless tub of goo -- actually mindless, because a mind depends on variable feedback and exists to think and interact with that feedback.
Mr. Purple wrote: So are you saying nobody is justified in forming beliefs about consciousness? It's going to be pretty rough for you to make any moral arguments without believing you can say anything about consciousness.
Not so, consciousness -- like a magical "soul" in religion -- is not necessary to make moral arguments. It's a subjective experience, or more of an illusion, which isn't even necessarily consistent between two humans, or one human from one moment to the next.

All we need to know is sentience, and get an idea of the interests of other beings to be able to respect those interests -- having an interest in others' interests, which is the core of altruism and the basis of morality.
Mr. Purple wrote:Maybe this is why your interest concept is so foreign.
I don't know why you're having trouble with it, or why you think "consciousness" has any meaning or relevance to philosophy or ethics. Maybe you watched some bad TV programs. ;)
Mr. Purple wrote:Psychological egoism fits what I experience perfectly and seems to make accurate predictions of human behavior from what I can tell.
It would predict a rational agent will take the happy pill. This prediction is false for most rational agents, even fully informed of this, because that premise just isn't true.
Mr. Purple wrote:This seems like a decent place to start for talking about morality.
It's a place to stop talking about morality, because it destroys any sensible or useful concept of morality.

1. Given that you insist that I am lying (or that I don't know my own mind), and that in fact I am psychologically egoistic and seek only pleasure (which I find insulting -- I'll grant that you may only care for your own pleasure, which is perfectly possible in the interest framework by just having one single interest, and having it be in hedonistic pleasure)

2. And given that, in full knowledge of the situation and consequences, I reject the happy pill.

3. You are forced to conclude that I am not a rational agent (which I also find pretty insulting)

4. And that I am evil for making the choice not to maximize my own pleasure, but to accept pain and help others instead.


Do you think I'm an irrational agent? Do you consider me to be an evil person, because I would reject maximal pleasure in order to retain my sense of self and action in the real world to help others?
Mr. Purple wrote:As far as I can tell, when dealing with human consciousness there is only so far we can go before we need to make some assumptions based on our own experiences.
When dealing with god, there's only so far we can go before we need to make some assumptions based on our own spiritual experiences.

Or, you know, just don't bother with it because it's not meaningful or relevant to anything.

Sentience is, and interests are -- these are concrete, observable metrics of behavior. Consciousness is unimportant to these discussions.
Mr. Purple wrote:I think most people I would be proposing these ideas to would understand this.
Present the happy pill scenario to some people, and see how they reply.
Or, how about you present the "family being tortured to death" scenario I presented before -- see how they reply then.

Do you think I'm evil because I would prefer to save my family in that scenario, despite it resulting in slightly more net pain for me? Based on the egoist proposition, you have to believe I am.

See if people agree with that notion.
Mr. Purple wrote:It doesn't seem convoluted at all to me.
When you have to appeal to secret/hidden subconscious motivations to explain things? Yes, it is. Read those books I linked to.

Did you not understand the evolutionary argument against that in the Wiki article? Simply in terms of information processing, this is an inefficient mechanism for cognition; the interest framework is much faster and easier.
Mr. Purple wrote:
I can't point you to an "interest neuron", because that's not how the mind works.
That's not what I'm looking for... I just want the same kinds of things you are saying from another source, and if there is some sort of official name for this kind of theory of interest, it would help when searching the web.
Altruism and Preference Utilitarianism both tend to use an interest framework.
You'll probably find the most material related to preference utilitarianism: https://en.wikipedia.org/wiki/Preference_utilitarianism
Although I'm not a utilitarian.