Animal Advocacy - Most Important Focus? [POLL]

Vegan message board for support on vegan-related issues and questions.
Topics include philosophy, activism, effective altruism, plant-based nutrition, and diet advice/discussion, whether high-carb, low-carb (eco-Atkins/vegan keto), or anything in between.
Meat eater vs. vegan debate is welcome, but please keep it within debate topics.

Where do you think more attention should be placed in animal advocacy?

Welfare Reform: 0 votes
Abolitionism: 1 vote (10%)
Vegan Education: 7 votes (70%)
Reduction in Consumption (of animal products): 1 vote (10%)
Environmental Responsibility: 0 votes
Positive Health Benefits: 1 vote (10%)

Total votes: 10

thebestofenergy
Master in Training
Posts: 514
Joined: Fri May 16, 2014 5:49 pm
Diet: Vegan
Location: Italy

Re: Animal Advocacy - Most Important Focus? [POLL]

Post by thebestofenergy »

Thanks for clarifying, brimstoneSalad (yes, the assumption that you made was correct).
I also see the point about interest now, I agree.
For evil to prevail, good people must stand aside and do nothing.
brimstoneSalad
neither stone nor salad
Posts: 10280
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Animal Advocacy - Most Important Focus? [POLL]

Post by brimstoneSalad »

Volenta wrote: Edit: oh boy, I appreciate everything you write brimstoneSalad, but that is one big response... Going to read it later on.
It's to you and Energy. Both of you are right on some points, and mistaken on some other points.
Although I hate to ruin the fun you two are having arguing with each other, this might help you define terms better and do a little less talking past each other. :)
thebestofenergy wrote:Thanks for clarifying, brimstoneSalad (yes, the assumption that you made was correct).
I also see the point about interest now, I agree.
You're welcome. Glad I was able to help. :)
Sometimes incomplete knowledge on a subject can get us into trouble- it's a good article, definitely worth reading the whole thing carefully. The issue of moral relativism is a huge one in philosophy, and a major point of attack by theists against atheists- being able to fully understand and defend non-theistic moral objectivism is essential (without which, a good apologist can often wipe the floor with an atheist).

Volenta wrote: When acacia trees are grazed by animals they react by producing chemicals to make it unappetizing and tough to digest. Some corn and cotton plants release chemicals in the air when they're attacked by caterpillars to attract parasitic wasps to let them kill the caterpillars.
When a rock is released from a high place, it falls.
When baking soda is exposed to an acid, it fizzes and releases CO2.
When we touch a hot stove, we reflexively retract from it.
When we go out in the sun, our skin produces more melanin.
When you push the power button on a computer, it turns on.
When you turn the ignition key in a car, the engine starts.

When something happens, often another thing happens in response.

None of these are examples of sentient responses.

Please see my prior post about what sentience is, how it relies on intelligence, and how without that "want" is not a coherent concept.

Volenta wrote:So if there is no practical solution, that would make the problem relative? It's an honest question, I haven't read much about it (yet).
See my last post.

No, it doesn't make it relative. Not any more than ignorance of the contents of a box makes the content of the box conform to opinion.

You were correct on this point, Energy was mistaken.

Volenta wrote:I think there is an objective answer out there, just like the 10,000 people problem has. If it has a high probability, it's probably the right thing to put into action, but you of course can't be sure. Balancing human life to a certain number of chickens, cows and pigs is hard to do.
Correct.

These questions are difficult, both empirically and emotionally, but they have answers that we can approach by learning more and investigating them.
If they did not, then morality would not be a very useful concept, since it would be unable to ever resolve issues where the interests of two sentient beings are in conflict.

The easy answers come where there's a "win-win" or "lose-lose" scenario- in those cases we don't have to weigh good for one against harm of another.

Meat eating is an example of a "lose-lose" scenario. It's obviously bad from any perspective.

When we run into "lose-win" scenarios, or those things that are much closer to zero-sum games, the decision becomes harder, and we need much more information and wisdom to move forwards.
Unfortunately, a lot of matters in life are exactly this kind of situation.

In a perfectly zero-sum game (in every meaningful respect), there may be no consequential answer at all, because both of the potential consequences are equally good or bad.
That doesn't make it relative, but instead makes it irrelevant, or an amoral situation.
There may never be such a situation in reality, though- perfect balance doesn't usually exist outside of theoretical abstracts.

thebestofenergy wrote:But that's not driven by survival instincts (which is the case for the insect). That's a non-conscious mechanism built by evolution so that they could survive. In the case of insects, it's a conscious decision.
Instinct is a little more complicated... I'm not going to touch that at the moment for fear of writing an essay.

It would be really good if we could stop using the term "conscious".
In the context of sentience, however, this is correct.

Reflexes are no more meaningful than any mechanical or chemical reaction. Reflexes do not denote intention.
thebestofenergy wrote: You could argue so. If there's no 'best' solution that can be proven to be so, then I think there's no way to determine what is the correct way to go.
Perhaps, but that's not relativism. That's just ignorance. And it's OK to admit we're ignorant on some things (until we find more information).
We don't know some things, but that doesn't stop them from being knowable if we can gain more wisdom and information on the subject.
We should never give up and say something like that is unknowable, though.

In a pure zero-sum game, there is not always a consequential answer to which is better. Just as there is no answer to which side of any balanced equation is larger- they are naturally equal.

If you look deep enough, though, you'll probably never encounter any situation that is really perfectly balanced. There's always some difference which has moral relevance- it just might take time to find it.

Volenta wrote:I'm still wondering how conscious the decision of the insect really is, but there is probably no point in continuing that discussion without some scientific evidence.
We have to get away from this nebulous term "conscious". Please let's stick to sentient.
Insect sentience has been demonstrated through operant conditioning.
So, at least if we avoid the absurd term "consciousness", there is science behind sentience.
bobo0100
Senior Member
Posts: 314
Joined: Thu Jun 12, 2014 10:41 pm
Diet: Vegan
Location: Australia, NT

Re: Animal Advocacy - Most Important Focus? [POLL]

Post by bobo0100 »

I'm going to have to go with education. People won't make a choice that's easy but still considered abnormal for no reason. Education in human nutrition and animal ethics is a critical step to achieving this.
vegan: to exclude—as far as is practicable—all forms of exploitation of, and cruelty to, animals for any purpose; and by extension, promotes the development and use of animal-free alternatives for the benefit of humans, animals and the environment.
Volenta
Master in Training
Posts: 696
Joined: Tue May 20, 2014 5:13 pm
Diet: Vegan

Re: Animal Advocacy - Most Important Focus? [POLL]

Post by Volenta »

@brimstoneSalad
The comments below are to make things clearer from my perspective; I agree with most of what you said.

I used want for lack of a better word, I meant it in the same sense as a computer 'wants' you to insert a CD in the CD-drive. It almost feels like you think that I said that plants are sentient; that was NOT my intention. What I tried to explain is that actions (in this case avoiding death) don't necessarily require sentience. A robot (if that makes you more comfortable than using a plant as an example :P) can also try to keep its own system working (I won't use want again :P). I tried to make clear—not successfully, it seems—that just because insects perform certain actions, it does not necessarily mean anything. That insects are sentient, however, could be used as an argument.

You're right that consciousness is still a vague thing and there's no definitive answer where it comes from, but is there with sentience? (I will try to use sentience from now on) There is progress in science about this subject, and it seems that it arises from complex brain activity, both within a certain area and communication between areas. I don't know how complex an insect's brain is, and since it's so small, I think it's not a stupid question to ask or doubt whether they are sentient. Operant conditioning only shows that their brains are plastic, not that they are sentient. Our brains also are very plastic and a lot of parts change without being aware of it. And don't misunderstand me: I'm NOT saying they are not sentient.

And I by the way don't think you can put plants and rocks on the same level. A plant is—again, although not sentient—alive, a rock isn't. Does that mean you should treat it differently from a rock? No (although you're probably not going to eat the rock :D). Does that mean they have wants (in the deeper sense)? No.
brimstoneSalad
neither stone nor salad
Posts: 10280
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Animal Advocacy - Most Important Focus? [POLL]

Post by brimstoneSalad »

Volenta wrote: I used want for lack of a better word, I meant it in the same sense as a computer 'wants' you to insert a CD in the CD-drive.
What?

I don't think we should be using want for that either. A computer may need or require a CD to be inserted in order to perform a certain action. It has no concept of want, and doesn't care if we insert it or not- it just won't continue to execute whatever action WE want it to do if we don't insert the CD.
What I tried to explain is that actions (in this case avoiding death) don't necessarily require sentience. A robot (if that makes you more comfortable than using a plant as an example :P) can also try to keep its own system working (I won't use want again :P).
Many robots are sentient, so that might be a bad example.

http://en.wikipedia.org/wiki/Cognitive_robotics

These kinds of robots do have wants, although their current levels of sentience only approach those of insects.

Merely performing a pre-programmed operation that just happens to avoid damage in certain limited situations doesn't require sentience, but robustly acting to avoid death or damage in a variety of situations through learning does require sentience- both sensation and meaningfully processing that information in a learning context relative to certain wants.

I tried to make clear—not successfully, it seems—that just because insects perform certain actions, it does not necessarily mean anything. That insects are sentient, however, could be used as an argument.
Of course, that's why we have to look towards organisms with much simpler nervous systems like jellyfish, and certain kinds of worms, which act more automatically based on pure reflex (which is non-sentient).
You're right that consciousness is still a vague thing and there's no definitive answer where it comes from, but is there with sentience?
Sentience is one of the things consciousness vaguely refers to, so even just by changing the word, without a clearer definition of sentience, you've narrowed things down (think Occam's razor; smaller subsets of possible definitions are preferable here).

That said, though, sentience is pretty well established. There is a secondary usage which mirrors consciousness and is non-technical, but the more technical usage refers to sense experience, and the comprehension thereof within a certain cognitive framework- mere cause-effect reflex doesn't properly satisfy it.
I don't know how complex an insect's brain is, and since it's so small, I think it's not a stupid question to ask or doubt whether they are sentient.
Insect brains are pretty complex; we still struggle to map and simulate them.

A fruit fly has around 100,000 neurons, and ten million synapses.
Round worms, at 302 neurons and 5,000 synapses, may have very rudimentary learning ability.
Operant conditioning only shows that their brains are plastic, not that they are sentient.
Why do you think that?

Responsiveness to operant conditioning demonstrates that an animal has wants, otherwise it would be unresponsive, or the response would be random.
Operant conditioning gets the animal to respond by "unnatural" means, by learning to do something they would never do in the wild, which would have no application in their environments (so as to rule out sensitization, or pre-programmed genetic or reflexive expression of some kind).
Operant conditioning simply does not work without sentience.

Plants can only become temporarily sensitized to conditions found normally in their environments (like increasing or decreasing processes related to photosynthesis); they have hormonal systems that act like logic gates, depressing or expressing certain responses relative to others- this is not intelligence, it's a very elegant machine that has been programmed with behavior by evolution.
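
To make the distinction concrete, here's a rough, hypothetical sketch in Python (the class names, action names, and numbers are all invented for illustration, not taken from any real experiment): the first responder only becomes temporarily sensitized by repeated stimulation, while the second shifts its behavior toward whatever arbitrary action happens to be followed by reward- the kind of change an operant conditioning test looks for.

import random

class SensitizedResponder:
    # Plant-like: response strength rises with repeated stimulation and then
    # decays back. Nothing is learned about which behavior pays off.
    def __init__(self):
        self.sensitivity = 1.0
    def stimulate(self, stimulated):
        if stimulated:
            self.sensitivity = min(3.0, self.sensitivity + 0.5)  # temporary boost
        else:
            self.sensitivity = max(1.0, self.sensitivity - 0.1)  # fades back to baseline
        return self.sensitivity

class OperantLearner:
    # Animal-like (very crudely): keeps a propensity for each available action
    # and strengthens whichever action is followed by reward.
    def __init__(self, actions):
        self.propensity = {a: 1.0 for a in actions}
    def choose(self):
        r = random.uniform(0, sum(self.propensity.values()))
        for action, weight in self.propensity.items():
            r -= weight
            if r <= 0:
                return action
        return action
    def reinforce(self, action, reward):
        self.propensity[action] += reward  # reward strengthens the behavior

learner = OperantLearner(["press_lever", "groom", "wander"])
for _ in range(200):
    act = learner.choose()
    learner.reinforce(act, 1.0 if act == "press_lever" else 0.0)
print(learner.propensity)  # "press_lever" ends up dominating the other actions
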
Our brains also are very plastic and a lot of parts change without being aware of it.
Sentience and meta-cognitive ability are different things.
Most of what we do is unconscious.
So much so that many people think their bodies are controlled by a magical, supernatural soul- that's how thorough our ignorance of our own cognition is.

If you want to go by meta-cognition, most humans probably don't qualify.

And I by the way don't think you can put plants and rocks on the same level. A plant is—again, although not sentient—alive, a rock isn't.
I'm not sure how you think them being alive means anything...

Life and sentience are irrelevant to each other. Most living and non-living things are not sentient. Some living and non-living things are sentient.

If you want to draw random lines in the sand over qualities that are irrelevant to the question at hand, you might as well say: "A plant is green, a rock isn't." or "Plant begins with the letter 'P', rock doesn't."

So? It doesn't matter if plants are alive, green, or the word starts with the letter 'p'; all of those things are equally irrelevant.

With regards to the matter at hand, plants and rocks are on precisely the same level.

With regards to the matter of words that start with 'p', plants are higher, on the same level as pans, puddles, poop, and every other word that begins with 'p'; rocks are lower along with all other things that don't begin with 'p'.

Sorry to make such a silly example, but it's kind of silly to rank things higher based on qualities that have nothing to do with the subject.
Volenta
Master in Training
Posts: 696
Joined: Tue May 20, 2014 5:13 pm
Diet: Vegan

Re: Animal Advocacy - Most Important Focus? [POLL]

Post by Volenta »

brimstoneSalad wrote:What?

I don't think we should be using want for that either. A computer may need or require a CD to be inserted in order to perform a certain action. It has no concept of want, and doesn't care if we insert it or not- it just won't continue to execute whatever action WE want it to do if we don't insert the CD.
I agree with everything you say. Sorry if I don't articulate myself so well, but I think that's just how language is used. I also could have said the computer 'asked' for the CD. Did it really? No, it's just computations that produce a text on the screen. So what? There's always a kind of inaccuracy in using words. But I will try to avoid want from now on.
brimstoneSalad wrote:Many robots are sentient, so that might be a bad example.

http://en.wikipedia.org/wiki/Cognitive_robotics

These kinds of robots do have wants. Although, their current levels of sentience approach insects.

Merely performing a pre-programmed operation that just happens to avoid damage in certain limited situations doesn't require sentience, but robustly acting to avoid death or damage in a variety of situations through learning does require sentience- both sensation and meaningfully processing that information in a learning context relative to certain wants.
Well, maybe we are misunderstanding each other because of the definition of sentience. Is sentience interacting with the outside world (input/output with a black box in between) and learning from it, or having feelings that you are subjectively aware of? I thought it was the latter.

Not sure there are robots that have subjective feelings (yet). Also hard to prove.

If you're using the first definition of sentience I gave, would that mean you think killing a robot (if it isn't subjectively aware) without a good reason is immoral?
brimstoneSalad wrote:Insect brains are pretty complex; we still struggle to map and simulate them.

A fruit fly has around 100,000 neurons, and ten million synapses.
Round worms, at 302 neurons and 5,000 synapses may have very rudimentary learning ability.
Sure, but that doesn't really answer the question. There are also very complex parts in our brains that have even more neurons, but that we aren't aware of.
brimstoneSalad wrote:Why do you think that?

Responsiveness to operant conditioning demonstrates that an animal has wants, otherwise it would be unresponsive, or the response would be random.

Operant conditioning gets the animal to respond by "unnatural" means, by learning to do something they would never do in the wild, which would have no application in their environments (so to rule out sensitization, or pre-programmed gene or reflexive expression of some kind).
Operant conditioning simply does not work without sentience.
An insect is better at surviving when it can react to and learn from its environment. This is done by changing neurons and synapses which shows their plasticity. Wanting something is an activity that can be described within the brain processes themselves (neurons interacting with each other).

Nature is very unpredictable for an insect, which should be able to react in a lot of different situations in order to survive better. Again, this shows how plastic their brains are.

I wanted to give the robot example again, but if you think they too are sentient, it serves no purpose until you've specified what you mean by sentience. Just want to say that their complex algorithms give an alternative to plastic brains.
brimstoneSalad wrote:I'm not sure how you think them being alive means anything...

Life and sentience are irrelevant to each other. Most living and non-living things are not sentient. Some living and non-living things are sentient.

If you want to draw random lines in the sand over qualities that are irrelevant to the question at hand, you might as well say: "A plant is green, a rock isn't." or "Plant begins with the letter 'P', rock doesn't."

So? It doesn't matter if plants are alive, green, or the word starts with the letter 'p'; all of those things are equally irrelevant.
It depends on the context whether it matters. I thought it fit the subject, and it seems you didn't.
brimstoneSalad
neither stone nor salad
Posts: 10280
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Animal Advocacy - Most Important Focus? [POLL]

Post by brimstoneSalad »

Volenta wrote: I think that's just how language is used. I also could have said the computer 'asked' for the CD. Did it really? No, it's just computations that produce a text on the screen. So what? There's always a kind of inaccuracy in using words. But I will try to avoid want from now on.
"Asks" might be more correct- we treat that more linguistically than cognitively, but the important point is that the technical implications of these words, in a more philosophical discussion (compared to just casual conversation), should be considered carefully. :)
Well, maybe we are misunderstanding each other because of the definition of sentience. Is sentience interacting with the outside world (input/output with a blackbox between it) and learning from it, or having feelings that you are subjectively aware of? I thought it was the latter one.
True learning requires feelings. These are one and the same thing.
Not sure there are robots that have subjective feelings (yet). Also hard to prove.
It's quite easy: they've been programmed to feel one stimulus as positive, and another as noxious (or sometimes just a positive or negative, and not both together). We know they have feelings, because those are at the foundation of their neural architectures- the basis relative to which their learning capabilities function.
A neural network can not improve without a goal, and some kind of input which is interpreted positively or negatively (feelings) relative to that goal.
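
As a purely hypothetical illustration of that point (a made-up toy in Python, not any particular robot's architecture; the stimulus names, learning rate, and numbers are invented), the goal here is just to accumulate positive signal, and the +1/-1 feedback is the only thing that shapes which stimulus the agent ends up seeking or avoiding:

import random

# Hypothetical setup: two stimuli, one wired as positive (+1), one as noxious (-1).
FEEDBACK = {"light": +1.0, "shock": -1.0}
values = {"light": 0.0, "shock": 0.0}    # the agent starts out indifferent
alpha = 0.1                              # learning rate

def pick():
    # Mostly seek whatever currently feels best; occasionally explore.
    if random.random() < 0.1:
        return random.choice(list(values))
    return max(values, key=values.get)

for _ in range(500):
    stimulus = pick()
    # The feedback signal is the only thing shaping the agent's preferences.
    values[stimulus] += alpha * (FEEDBACK[stimulus] - values[stimulus])

print(values)  # "light" converges toward +1 and gets sought; "shock" toward -1 and gets avoided
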
If you're using the first definition of sentience I gave, would that mean you think killing a robot (if it isn't subjectively aware) without a good reason is immoral?
What do you mean, 'not subjectively aware'?
All sentient beings are aware when they are conscious (in the true sense; e.g. awake, and not asleep).
If the robot were turned on (conscious) it would be aware.
If the robot was powered off or otherwise in sleep mode (unconscious) then it would not be aware.

However, it would be wrong to destroy it in either case, even if it were currently turned off: when it was on, it wouldn't have wanted to be left off permanently (being turned off temporarily likely would make no difference to it), because it had other things it wanted to do (like moving in proximity to abstract "food" sources which provide it with positive sensation). The fact that the robot wants to do something, based on its neural architecture, makes it possible to respect those wants to some degree.
Sure, but that doesn't really answer the question. There are also very complex parts in our brains that have even more neurons, but that we aren't aware of.
That's why behavioral metrics are more meaningful. I was just answering as to whether there's an adequate level of infrastructure there to maintain cognition.
Also, and this might seem creepy- many of the subconscious (in the Freudian sense) subsystems in your brain are fundamentally sentient slaves to your executive processes. Your visual cortex, for example, is slaving away without your awareness all of the time, receiving molecular carrots when it resolves patterns, before passing that information along- unfortunately, it's overzealous and not terribly smart, so it gladly finds patterns when there are none too.
Your brain is a collection of loosely cooperative drug slaves supporting your executive processes.
As you go farther down the rabbit hole and look at the implications of some of this, it's like we're all Lovecraftian abominations.
An insect is better at surviving when they can react to and learn from their environment.
Yes, an insect is better at surviving when it is sentient.
This is done by changing neurons and synapses which shows their plasticity.
Not just randomly; these changes are guided by the organism's wants.

That's like saying evolution is random, and forgetting that it's guided by a very rational and reliable process of natural selection.

Wants are the natural selection that guides the evolution of a neural network.
Wanting something is an activity that can be described within the brain processes itself (neurons interacting with each other).
The foundational impulses of desire are much more basic than that, and are expressed through neural hardware. Wanting isn't just a resultant process of mind, but the cause of mind.
Every mind has to be primed with instinctual wants in order to begin developing at all.
Nature is very unpredictable for an insect, which should be able to react in a lot of different situations in order to survive better. Again, this shows how plastic their brains are.
This shows that they are learning. Which shows that they have wants. By which sensation can be interpreted, and in which context sensation is understood and thus felt, which is sentience.
I wanted to give the robot example again, but if you think they too are sentient, it serves no purpose until you've specified what you mean by sentience. Just want to say that their complex algorithms give an alternative to plastic brains.
The robots I'm talking about do have plastic brains.

Neurons are emulated in the computer.
http://en.wikipedia.org/wiki/Artificial_neural_network

There's not really an alternative to learning; we just create those systems in software.
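
Just to show how little is needed, here is a hypothetical toy in Python: a single emulated neuron (a perceptron) learning the AND function. It isn't the architecture of any real cognitive robot, but the "plasticity" is literal- the weights change in response to an error signal:

# Hypothetical toy: one emulated neuron (perceptron) learning AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

def fire(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(50):                      # a few passes over the data
    for x, target in data:
        error = target - fire(x)         # signed teaching signal
        w[0] += rate * error * x[0]      # weight changes = plasticity
        w[1] += rate * error * x[1]
        bias += rate * error

print(w, bias, [fire(x) for x, _ in data])   # outputs settle on [0, 0, 0, 1]
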

I don't believe that an information system in a digital world is fundamentally different from an information system in the physical world in any meaningful respect (e.g. that elements such as quantum mechanics are not relevant to cognition).
Volenta
Master in Training
Posts: 696
Joined: Tue May 20, 2014 5:13 pm
Diet: Vegan

Re: Animal Advocacy - Most Important Focus? [POLL]

Post by Volenta »

brimstoneSalad wrote:True learning requires feelings. These are one and the same thing.
I don't agree with that. Something like voice recognition (Siri, for example) is capable of learning from the voice of the user to make fewer mistakes. When is something truly learning, and who decides that?
brimstoneSalad wrote:It's quite easy: they've been programmed to feel one stimulus as positive, and another as noxious (or sometimes just a positive or negative, and not both together). We know they have feelings, because those are at the foundation of their neural architectures- the basis relative to which their learning capabilities function.
A neural network can not improve without a goal, and some kind of input which is interpreted positively or negatively (feelings) relative to that goal.
I think you are making a mistake by saying something has feelings when it is capable of differentiating a positive stimulus from a negative one. Then you're using 'feeling' here the same way I used 'want' previously.
brimstoneSalad wrote:Yes, an insect is better at surviving when it is sentient.
Absolutely, if you mean learning from the environment, recognizing which stimuli are positive and negative, and adjusting your behavior accordingly. But if something were not conscious, do you still have moral obligations towards it? I'm sorry to bring up consciousness again, but I think it matters a lot.
brimstoneSalad wrote:Not just randomly; these changes are guided by the organism's wants.

That's like saying evolution is random, and forgetting that it's guided by a very rational and reliable process of natural selection.

Wants are the natural selection that guides the evolution of a neural network.
Never said it was random. Is this 'metaphor' of evolution by natural selection of a neural network a valid one? Not sure it works like that.
brimstoneSalad wrote:This shows that they are learning. Which shows that they have wants.
Again, learning does not imply wants (or feelings).
brimstoneSalad wrote:The robots I'm talking about do have plastic brains.

Neurons are emulated in the computer.
http://en.wikipedia.org/wiki/Artificial_neural_network

There's not really an alternative to learning; we just create those systems in software.

I don't believe that an information system in a digital world is fundamentally different from an information system in the physical world in any meaningful respect (e.g. that elements such as quantum mechanics are not relevant to cognition).
Well alright, but apply everything I said about robots to 'classic' robots—the ones without a neural network—which is what I meant when I used the robot as an example. Neural networks may be the best way of learning, but that doesn't mean they are a requirement for learning.
brimstoneSalad
neither stone nor salad
Posts: 10280
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Animal Advocacy - Most Important Focus? [POLL]

Post by brimstoneSalad »

Volenta wrote:
brimstoneSalad wrote:True learning requires feelings. These are one and the same thing.
I don't agree with that. Something like voice recognition (Siri, for example) is capable of learning from the voice of the user to make fewer mistakes. When is something truly learning, and who decides that?
What constitutes true learning, and what merely appears to be learning, or has some characteristics of learning, is a pretty major subject in philosophy of mind.

True learning isn't just recording information or playing out a simple algorithm, but involves meaningful contextual understanding of that information and adaptation.

Crudely, "learning" has a broad array of meanings (not so bad as consciousness though): http://en.wikipedia.org/wiki/Learning

You'll notice two main divisions: non-associative learning, which is not related to desires/wants or necessarily any form of sentience, and associative learning.

The latter is more in line with what "true learning" is in the philosophical sense.

Daniel Dennett may be a good author to read on the subject of cognition and mind.

His notion of the "Skinnerian creature" is what I'm talking about. This is the best summary I could find quickly:
Darwinian creatures are created by random mutation and selected by the external environment. The best designs survive and reproduce.

Skinnerian creatures can learn by testing actions (responses) in the external environment. Favourable actions are reinforced and then tend to be repeated. Pigeons can be trained to press a bar to receive food.

Skinnerian creatures ask themselves, "What do I do next?"

Popperian creatures can preselect from possible behaviours / actions weeding out the truly stupid options before risking them in the harsh world. Dennett calls them Popperian because Popper said this design enhancement "permits our hypotheses to die in our stead". This is Dennett's enhancement of behaviourism. Popperian creatures have an inner environment that can preview and select amongst possible actions. For this to work the inner environment must contain lots of information about the outer environment and its regularities. Not only humans can do this. Mammals, birds, reptiles and fish can all presort behavioural options before acting.

Popperian creatures ask themselves, "What do I think about next?"

Gregorian creatures are named after Richard Gregory, an information theorist. Gregorian creatures import mind-tools (words) from the outer cultural environment to create an inner environment which improve both the generators and testers.

Gregorian creatures ask themselves, "How can I learn to think better about what to think about next?"
https://learningevolves.wikispaces.com/dennett (I'm not sure if that website is any good or not, I only read the one article there)

Volenta wrote: I think you are making a mistake by saying something has feelings when it is capable of differentiating a positive stimulus from a negative one. Then you're using 'feeling' here the same way I used 'want' previously.
To the contrary, I'm being literal, and trying to get at the existential definition of the notion.

It's not a positive or negative stimulus until it is meaningfully manifest as such through adaptive behavior.
See Dennett's notion of the Skinnerian creature- that's the most primitive form of sentience, and in any context less than that, we can't really call the stimulus positive or negative reinforcement because it's not acting within any framework that understands it as such (with the exception of the evolutionary framework).

Is Siri's voice recognition sentient? I don't know how Siri's system works, but probably not (neural network architecture is not very efficient at solving problems like this). It's debatable whether that qualifies as actual learning- most things we think are learning in machines actually aren't learning, but just normalization, or recording preferences.
Volenta wrote:Absolutely, if you mean learning from the environment and recognizing which stimuli are positive and negative and changing your behavior to that. But if something were not conscious, do you still have moral obligations towards it? I'm sorry to start about consciousness again, but I think it matters a lot.
It matters a lot whether they are awake or asleep?

Yes, we have moral obligations to unconscious beings if they would normally wake up, because they were previously conscious, and didn't want to be killed when sleeping/under anesthesia.

What does that have to do with anything?

Volenta wrote:Never said it was random. Is this 'metaphor' of evolution by natural selection of a neural network a valid one? Not sure it works like that.
It's a common approach. Trial and error, with small random or genetically guided changes, producing an improvement or not- if it's an improvement, it's retained and iterated upon.
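
In a bare-bones, hypothetical sketch (a simple hill climber in Python with a made-up fitness function- nothing specific to insects or to any real neural network library), that loop looks like this:

import random

def fitness(x):
    # Made-up fitness function: higher is better, peak at x = 3.
    return -(x - 3.0) ** 2

best = 0.0
for _ in range(1000):
    candidate = best + random.gauss(0, 0.1)   # small random change
    if fitness(candidate) > fitness(best):    # retained only if it's an improvement...
        best = candidate                      # ...and then iterated upon
print(round(best, 2))  # ends up near 3.0
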
Volenta wrote:
brimstoneSalad wrote:This shows that they are learning. Which shows that they have wants.
Again, learning does not imply wants (or feelings).
True learning does; metrics (desires) are required for adaptive changes- otherwise there's nothing to adapt to, and the information is without context (thus not understood or learned, just recorded).
Volenta wrote: Well alright, but apply everything I said about robots to 'classic' robots—the ones without a neural network—which is what I meant when I used the robot as an example. Neural networks may be the best way of learning, but that doesn't mean they are a requirement for learning.
Neural networks are incredibly inefficient, and pretty bad at solving most practical problems compared to simple algorithms- they are not the best way of 'learning' in the way you're using it.

I think the problem is that you're using 'learning' in the most casual and general sense, just as you were using 'want' in a casual sense- it's leading to misunderstanding in discussion of cognition and philosophy.
Software recording your preferences is not learning anything, any more than a book is learning when you write in it, or a camera is learning when it takes a picture.
That doesn't qualify as true learning in the philosophical sense, which requires some contextual understanding of that information to give it intrinsic value.

Anyway, it's unlikely you will find neural networks in most consumer applications, since the effect and efficiency of a neural network are kind of unpredictable. Its values are essentially unreadable except by itself (like the neural weights of a biological being, which are all in context to each other).
Messy solution to otherwise quite simple problems.

http://en.wikipedia.org/wiki/Speech_rec ... Algorithms
Note that neural networks only represent one of the solutions, and can't really be used exclusively for the task.

Neural networks are mostly confined to experimentation in cognition, by simulating minds and learning how they develop communication, and for research basically devoted to learning about neural networks and trying to find a real-world application for them.

This is an old experiment, but interesting nonetheless: http://scienceblogs.com/notrocketscienc ... e-another/


Getting into this stuff scrapes at the edges of sentience- but certainly helps us understand it, because a notion like this is defined by its boundaries.