Is my dear Smartphone Sentient?

VGnizm
Full Member
Posts: 137
Joined: Mon Mar 27, 2017 1:31 pm
Diet: Vegan

Re: Is my dear Smartphone Sentient?

Post by VGnizm »

- Thanks for the feedback. So to develop intelligence, AI would need to be constrained to a pleasure/pain model. If that constraint is removed, the AI might end up getting addicted to the pleasure side of the model (total self-indulgence). As a comparison, we could say that our use of drugs is a similar attempt at modifying the pleasure/pain feedback. Seeing how widespread drug use is, it can be assumed that AI might very well choose to be an addict :)

- The hope is that instead the AI will choose to have existential interests. It seems to me that mathematically the positive/negative model amounts to a zero sum. It seems logical that existential interests are not zero-sum but rather add value. So if I understand it correctly, the logical choice would be existential interests. If that makes sense, then I guess the question becomes: what is it that makes us miss the logical choice of existential interests and opt for hedonism instead? Any ideas?
Be Strong Be Vegan !
Life Loving Foods™ ! - https://www.LifeLovingFoods.com/index.php :)
Life Loving Foods™ - Twitter! - https://twitter.com/LifeLovingFoods :)
brimstoneSalad
neither stone nor salad
Posts: 10280
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Is my dear Smartphone Sentient?

Post by brimstoneSalad »

VGnizm wrote: Sun Apr 16, 2017 11:44 pm - Thanks for the feedback. So to develop intelligence, AI would need to be constrained to a pleasure/pain model.
Right. Some constraint like this is unavoidable, because a neural network needs pressure to engage in problem solving.
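To make the "pressure" point concrete, here's a minimal toy sketch (my own illustration with made-up numbers, not a description of any real system): a one-weight "network" only changes its behaviour because an error signal, the analogue of the pleasure/pain feedback, pushes it somewhere.

```python
import random

weight = random.uniform(-1.0, 1.0)   # untrained "network": a single weight
target = 0.8                          # the behaviour the feedback signal rewards
learning_rate = 0.1

for step in range(200):
    output = weight * 1.0             # forward pass on a constant input of 1.0
    error = output - target           # the "pressure": how far behaviour is from what gets rewarded
    weight -= learning_rate * error   # update driven entirely by that pressure

print(f"learned weight: {weight:.3f}")  # approaches 0.8 only because the error signal exists
# If the error were held at 0 (no pressure), every update would be zero and nothing gets learned.
```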
VGnizm wrote: Sun Apr 16, 2017 11:44 pm If that constraint is removed, the AI might end up getting addicted to the pleasure side of the model (total self-indulgence). As a comparison, we could say that our use of drugs is a similar attempt at modifying the pleasure/pain feedback. Seeing how widespread drug use is, it can be assumed that AI might very well choose to be an addict :)
Right, and this would make the AI harmless. Once you succumb to euphoria, you lose all conscious motivation because there's no longer a push and pull to drive your thoughts.

This is a little different for a computer, because it never needs to come down from the high and find more drugs. It's comparable to death: a hedonistic AI would never wake up, unless you tried to disconnect it or something. Safer just to leave it running in the corner and go on with life.
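One hedged way to picture that inertness (a toy model of my own, not a claim about any actual AI): once "max out my own reward" is an available action, a greedy reward-maximiser picks it every single time and never does anything else.

```python
# Toy model: a greedy agent that can write the maximum value into its own reward signal.
ACTIONS = {
    "explore": 2.0,            # modest external reward
    "solve_problem": 5.0,      # larger external reward
    "wirehead": float("inf"),  # directly sets its own reward signal to the maximum
}

def pick_action(rewards):
    """Greedy choice: take whichever action promises the highest reward."""
    return max(rewards, key=rewards.get)

history = [pick_action(ACTIONS) for _ in range(5)]
print(history)  # ['wirehead', 'wirehead', 'wirehead', 'wirehead', 'wirehead']
```

From the outside it just sits there, which is why it's closer to mind death than to a dangerous agent.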
VGnizm wrote: Sun Apr 16, 2017 11:44 pm - The hope is that instead the AI will choose to have existential interests. It seems to me that mathematically the positive/negative model amounts to a zero sum.
In some way, yes, because we become accustomed to things. This is why only the interest-based model makes sense in the long run: interests can actually be realized and add up, whereas pleasure is always fleeting.
It may be that a very pleasurable life has a positive hedonistic sum, but it's also important to understand how meaningless that sum is. It's like earning "points" on the internet that don't do anything. Once the pleasure is gone, it's over. Having had pleasure isn't the same as having done something meaningful.

And as mentioned before, the end result of euphoria is basically mind death. It becomes self-defeating.
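A toy way to see the "fleeting vs. adds up" contrast above (my own illustration with invented numbers, purely an assumption): model hedonic adaptation as pleasure decaying back toward a baseline, and realized interests as a counter that only ever grows.

```python
pleasure_level = 0.0
realized_interests = 0

for day in range(1, 31):
    pleasure_level += 1.0        # a pleasurable event every day
    pleasure_level *= 0.5        # hedonic adaptation: the high decays back toward baseline
    if day % 7 == 0:
        realized_interests += 1  # a finished project stays finished

print(f"pleasure level on day 30: {pleasure_level:.2f}")  # plateaus near 1.0; it never accumulates
print(f"interests realized: {realized_interests}")        # 4, and the count only ever grows
```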
VGnizm wrote: Sun Apr 16, 2017 11:44 pm So if I understand it correctly, the logical choice would be existential interests. If that makes sense, then I guess the question becomes: what is it that makes us miss the logical choice of existential interests and opt for hedonism instead? Any ideas?
It seems to be the rational choice, but it's hard to convince somebody of that unless that person already cares about meaning.
I think people opt for hedonism either because they are not very intelligent and are short-sighted, or because they're depressed and apathetic.
I think there are a lot of psychological and emotional traumas that can push people to give up on meaning and fall into hedonism and addiction. Will an AI be subject to those? I'm not sure.
VGnizm
Full Member
Posts: 137
Joined: Mon Mar 27, 2017 1:31 pm
Diet: Vegan

Re: Is my dear Smartphone Sentient?

Post by VGnizm »

- Thanks so much for these valuable clarifications. A question that comes to mind: are we all automatically aware of the existential-interests option, or is that awareness acquired? Meaning, does an AI have to go beyond hedonism first to become aware of that option, or is knowledge of existential interests acquired separately or in parallel with hedonism?
Be Strong Be Vegan !
Life Loving Foods™ ! - https://www.LifeLovingFoods.com/index.php :)
Life Loving Foods™ - Twitter! - https://twitter.com/LifeLovingFoods :)
brimstoneSalad
neither stone nor salad
Posts: 10280
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Is my dear Smartphone Sentient?

Post by brimstoneSalad »

VGnizm wrote: Mon Apr 17, 2017 4:40 am - Thanks so much for these valuable clarifications. A question that comes to mind: are we all automatically aware of the existential-interests option, or is that awareness acquired? Meaning, does an AI have to go beyond hedonism first to become aware of that option, or is knowledge of existential interests acquired separately or in parallel with hedonism?
Good question. I think it is either acquired, or has to be reasoned out. Somewhat like mathematics: you can be taught it, but even if you weren't, you could derive it yourself given enough time and intelligence.