carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
Based on what evidence?
Again, we don't need to know exactly what's going on in a black box to know things to a moral certainty. It's mechanistically obvious to that degree.
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
The capacity for desires is based on advanced cognition,
A desire is something pretty primitive; it's necessary to motivate a neural network to learn. However, some things that appear to be simple associative learning could be hardwired (computers can "remember" user settings without having a neural network to do it, but this function is limited to what they've been specifically programmed to remember). Like I said before, a cockroach associating a certain smell with food may not be a smoking gun in itself.
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
for an entity to have desires it has to have some sense of itself, it has to be conscious and it has to have a conceptual model of its behavior.
What a wonderful coincidence! All things needed for operant conditioning, which is what I said is the smoking gun.
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
The ability for associative learning doesn't hinge on any of these capacities.
Not in its most basic forms, but operant conditioning does.
You seem to be ignoring what I'm saying about this, or reading what you want into it.
Like I said, I'm on the fence on expression of learning that falls short of operant conditioning.
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
Yes that is your point and my point is that its easy to show that this is wrong with computer models of learning, for example, you can easily get a robot to learn via conditioning without that robot having any active capacity for "interests".
Again, begging the question! You have a bad habit of that.
You're assuming these robots are not sentient. What if sentience isn't "magical"? What if that's all sentience and subjective experience is?
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
The algorithm just needs a criteria for associating variables which could be anything. For example:
Those variables are its interests and its sense experience.
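And that's all operant conditioning needs to be: a reward signal (the "interest") and sensory input, with behavior shaped by consequences. Here's a minimal sketch of that kind of conditioning loop — the names and the two-lever setup are purely illustrative, not from any real robotics framework:

```python
import random

def reward(action):
    """The environment: lever 'A' pays off, lever 'B' doesn't."""
    return 1.0 if action == "A" else 0.0

def train(trials=500, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    value = {"A": 0.0, "B": 0.0}  # learned value estimate per action
    for _ in range(trials):
        # explore occasionally, otherwise exploit what was learned
        if rng.random() < epsilon:
            action = rng.choice(["A", "B"])
        else:
            action = max(value, key=value.get)
        # nudge the estimate toward the received reward (the consequence)
        value[action] += alpha * (reward(action) - value[action])
    return value

values = train()
print(values)  # lever "A" ends up with a much higher learned value
```

Swap in a different reward function and the same loop "learns" something else entirely. Nothing magical is required beyond consequences shaping behavior — which is exactly the point about what those variables amount to.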
If you think we're anything fundamentally different from that, then you'll need to put up some evidence.
Here, maybe this will help you out:
https://www.gotquestions.org/human-soul.html
But if you're appealing to religious experience of your soul, it's not an argument that's going to pass here.
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
So if operant conditioning "demonstrates interests" then you're committed to the claim that robots also have interests in similar ways as animals.
Seriously? I just said that.
brimstoneSalad wrote: we have synthetic intelligence that probably fits the bill
I'm well aware of this. Currently, SI is around insect level and used in a very limited scope, so it's not something I'm worried about today, but I am concerned about the potential for abuse of Dorothy (as are all sensible secular individuals who don't write off robots for lacking souls) and other highly sentient robots.
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
Why does sensory input and "interests" imply sentience?
Why does 1+1=2?
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
Your computer also has sensory input and also has "interests" in the very general sense of the word.
My computer is not running any artificial intelligence as far as I know. I don't know how you ascribe interests to it when it can't express any. You think it's interested in simply running?
It remembers very specific settings, but I don't think it's capable of basic operant conditioning.
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
When people discuss "interests" they seem to equivocate a lot.
They seem to employ magical thinking and wax theological.
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
In general this isn't true but it also depends on the animal. Cattle in the US, for example, are predominately pastured
Pastured on managed forage which is grown for their consumption, and fertilized much of the time (a quick Google search shows how common this is: http://www.beefmagazine.com/pasture-range/0331-fertilizing-pastures-spring). And if it's not fertilized, they're just degrading the land (as occurs on some grasslands rented out for grazing).
People seem to think cows are these magical creatures that just capture an infinite and perfectly sustainable resource (grass) for our use.
That isn't the case. Forage is grown for cows much like any other crop; the only major difference is that the cows harvest it themselves (burning a lot of calories while they do it). And with cows (unlike monogastric animals) there's substantial enteric fermentation, which is a leading contributor to global warming. I assume you're just a climate change denialist, because you consistently ignore that fact or seem indifferent to it.
There's no reason to support the beef industry or propagate fictions to defend it.
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
and even when they are finished in feedlots their diet is composed of large amounts of byproducts. For example soy meal (a byproduct of soy oil production).
Soy meal is a coproduct, not just a byproduct; its market value is similar to the oil's. You should know that. Other oilseeds (with better oil yield) would be grown instead if not for that fact.
It's also something that should be fed to humans (as defatted soy flour in products), not wasted on cows, contributing more environmental damage in the process.
Cows also eat a lot of silage grown and fermented specifically for them.
There are very few things cows eat that humans fundamentally cannot, and most of those are grown specifically for cows in fields that could otherwise grow food for human beings.
carnap wrote: ↑Sun Mar 25, 2018 1:38 pm
That wouldn't be very effective as neuron count hinges on the size of the animal and the overall efficiency of the brain which vary greatly from animal to animal. But it also doesn't explain how you'd do a comparison. So animal A has 50% the neurons of animal B.....what does that mean in terms of relative sentience and relative suffering?
I said it's probably an overestimation. The point is that if it's even close, there may be something worth looking into further.
I didn't say it's the final word on the issue (unless it's not close, in which case there's no point in examining it further).