Artificial Intelligence

General philosophy message board for discussion and debate on philosophical issues not directly related to veganism: metaphysics, religion, theist vs. atheist debates, politics, general science discussion, etc.
teo123
Senior Member
Posts: 433
Joined: Tue Oct 27, 2015 3:46 pm
Religion: None (Atheist)
Diet: Vegan

Artificial Intelligence

Post by teo123 » Thu May 30, 2019 10:33 pm

So, in the other thread, @brimstoneSalad has made quite a few statements about artificial intelligence that seem, well, extraordinary, if not simply incoherent:
1) Robots that can be considered sentient have already been made.
2) Truly intelligent computer programs would not be able to decide to stop working or become unresponsive for some time.
3) An intelligent program that could decide to stop working would probably function about as badly as human beings on psychedelic drugs.
4) It would be trivial to sandbox programs with advanced artificial intelligence (which would, of course, include self-modifying code).
5) Most of the people who study artificial intelligence agree that it somehow shows that fish and insects feel pain.
I hope I understood you correctly. But, like I've said, although I've studied quite a bit of computer science (I competed in a university-level computer science competition this year and placed 7th in Croatia), what you are saying seems a bit like gibberish to me, so I am not sure I understood what you are trying to say.
So, where are you getting your information about those things? You are obsessed with the term "scientific", yet your way of arguing here seems very unscientific. It's OK not to give citations for well-known things and things that are easy to look up, such as that "Issa" was the ancient name for Vis and that there were big thermae there in ancient times. If you are using something that isn't well-known to make your point, that's already not great, but then you should at least make it clear where you are getting your information from, all the more so if the premises of your arguments seem extraordinary. If you make extraordinary claims and refuse to provide evidence, I am left with little choice but to walk away.

teo123
Senior Member
Posts: 433
Joined: Tue Oct 27, 2015 3:46 pm
Religion: None (Atheist)
Diet: Vegan

Post by teo123 » Mon Jun 03, 2019 1:29 am

What, do you think I've crossed some line again, so you won't respond to me? I don't see it. Or do you simply not really understand those things and are trying to hide it? (It reminds me of the saying attributed to Einstein that if you can't explain something to your grandmother, you don't really understand it, and of similar quotes attributed to Richard Feynman and others.)
I also don't see what kind of line I crossed in the hard-science-vs-soft-science debate. If you make a post saying "I think you are bad at math, so your linguistic theories must be as wrong as claims about ghosts, and as wrong as the nonsense that sometimes gets published in gender studies journals", how else should I have replied? We'll talk about it later (hopefully).

brimstoneSalad
neither stone nor salad
Posts: 9387
Joined: Wed May 28, 2014 9:20 am
Religion: None (Atheist)
Diet: Vegan

Post by brimstoneSalad » Sat Jun 08, 2019 8:46 pm

Teo, I only just read this. It's hard for me to keep up due to the time involved in debunking erroneous claims. It's much easier for you to make claims than for me to debunk them.

1) Yes, sentient, but not sapient or anything like human-level intellect. Sentience is a low bar of experiencing sensation.
2) This is a drastic misrepresentation of what I said.
3) Misrepresentation again. I spoke of pain and opioids. Do you not even try, Teo?
4) Yes. I would be very interested in seeing you provide an expert source that says otherwise. That said, attack and release from outside would always be possible.
5) What? Not at all what I said.

I've been trying to explain to you fundamentally what intelligence is and what sentience means. Synthetic Intelligence is a good way to learn about it.

brimstoneSalad
neither stone nor salad
Posts: 9387
Joined: Wed May 28, 2014 9:20 am
Religion: None (Atheist)
Diet: Vegan

Post by brimstoneSalad » Sat Jun 08, 2019 8:52 pm

teo123 wrote:
Mon Jun 03, 2019 1:29 am
Or you simply don't really understand those things, and are trying to hide that (reminds me of what Einstein said that if you can't explain something to your grandmother, you don't really understand it, and the similar quotes by Richard Feynman and others).
Ah, the motto of the Dunning-Kruger afflicted.

No, Einstein/Feynman didn't say that; it's only something intellectually lazy people attribute to them to let themselves feel an unjustified sense of intellectual superiority.

Here's a real quote:
"If I could explain it to the average person, it wouldn't have been worth the Nobel Prize."

Richard P. Feynman.

Red
Supporter
Posts: 2937
Joined: Wed Jul 09, 2014 8:59 pm
Location: Toluca Lake

Post by Red » Sat Jun 08, 2019 9:11 pm

brimstoneSalad wrote:
Sat Jun 08, 2019 8:52 pm
No, Einstein/Feynman didn't say that; it's only something intellectually lazy people attribute to them to let themselves feel an unjustified sense of intellectual superiority.
I want to say something about this. Just because you know everything there is to know about a subject doesn't mean you're able to communicate the material properly. I've seen this firsthand: scientists and engineers who are college-educated and definitely know what they're doing, but who aren't able to communicate it to the layman. There are often tons of concepts and definitions someone has to know before they can understand anything further in a subject, and not every scientist knows how to communicate these things effectively and efficiently. Sometimes it's also possible that a person is simply incapable of understanding something.

The only scientists I'd expect to be able to communicate the material properly are teachers and professors.
Learning never exhausts the mind.
-Leonardo da Vinci

teo123
Senior Member
Posts: 433
Joined: Tue Oct 27, 2015 3:46 pm
Religion: None (Atheist)
Diet: Vegan

Post by teo123 » Mon Jun 17, 2019 9:38 am

brimstoneSalad wrote:Yes, sentient, but not sapient or anything like human-level intellect. Sentience is a low bar of experiencing sensation.
Again, where are you getting that information from?
brimstoneSalad wrote:Misrepresentation again. I spoke of pain and opioids.
So, what is the difference? If a sentient program decides to become unresponsive, it can't feel pain while it's unresponsive.
brimstoneSalad wrote:Yes. I would be very interested in seeing you provide an expert source that says otherwise.
Have you watched the Computerphile videos about why artificial intelligence is dangerous? I believe they have a video called "Why Sandboxing isn't a Solution" (or something like that).
Sandboxing self-modifying code is not at all trivial. Antivirus programs can do it relatively safely only because they allow the self-modifying code to run for only a short amount of time. The human factor is also a big issue, but it's not the only one.
brimstoneSalad wrote:No, Einstein/Feynman didn't say that; it's only something intellectually lazy people attribute to them to let themselves feel an unjustified sense of intellectual superiority.
OK, fine, I haven't studied it that much; it's indeed dubious whether Einstein said that. But it's not at all dubious that Richard Feynman said it:
David L. Goodstein, "Richard P. Feynman, Teacher," Physics Today, volume 42, number 2, February 1989, p. 70-75, at p. 75 wrote:Once I asked him to explain to me, so that I can understand it, why spin-1/2 particles obey Fermi-Dirac statistics. Gauging his audience perfectly, he said, "I'll prepare a freshman lecture on it." But a few days later he came to me and said: "You know, I couldn't do it. I couldn't reduce it to the freshman level. That means we really don't understand it."

Jamie in Chile
Senior Member
Posts: 279
Joined: Tue Apr 11, 2017 7:40 pm
Religion: None (Atheist)
Diet: Vegetarian

Post by Jamie in Chile » Mon Jun 17, 2019 7:12 pm

Putting an artificial intelligence in a box so that it can't get access to the internet or the outside world is not as easy as you think. Sam Harris and Eliezer Yudkowsky discussed this in a podcast I listened to. Nick Bostrom also discusses this and related issues in his book Superintelligence.

The AI may be able to trick a person into letting it out, or get out using technologies and ideas that we can't even imagine. For example, cutting off the ability to pass electromagnetic waves between the box and the internet may not work if the AI knows how to reach the internet with other technologies we can't imagine, or knows how to pass EM waves through thick lead.

To a Stone Age man, a fast-flowing river might be an impenetrable barrier if he can't conceive of a boat, or of swimming. We are the Stone Age man who has never seen a boat nor learnt to swim, and our boxing may be like putting a moat of water around the AI and assuming that this is the end of the matter.

brimstoneSalad
neither stone nor salad
Posts: 9387
Joined: Wed May 28, 2014 9:20 am
Religion: None (Atheist)
Diet: Vegan

Post by brimstoneSalad » Tue Jun 18, 2019 12:12 am

Jamie in Chile wrote:
Mon Jun 17, 2019 7:12 pm
Putting an artificial intelligence in a box so that they can't get access to the internet or the outside world is not as easy as you think.
I disagree; however, I don't want to argue it here, because on reflection a victory would be Pyrrhic.
An irrational fear of AI escaping could actually help prevent a high general intelligence SI from being developed and put into broad use -- a use that, while it could have some utility to humans, would likely be terribly abusive.
I don't want to see SI suffering replace farmed animal suffering.

Maybe we can discuss it privately if you're curious about my responses to those challenges.

teo123
Senior Member
Posts: 433
Joined: Tue Oct 27, 2015 3:46 pm
Religion: None (Atheist)
Diet: Vegan

Post by teo123 » Tue Jun 18, 2019 12:53 pm

brimstoneSalad wrote:An irrational fear of AI escaping could actually help prevent a high general intelligence SI from being developed and put into broad use
Isn't that fear even more rational than the fear of global warming? I mean, it's based on an even harder science (computer science) than climate science is. And while there are a few climatologists arguing that global warming isn't dangerous, I have yet to hear of a computer scientist arguing that artificial intelligence isn't dangerous.

brimstoneSalad
neither stone nor salad
Posts: 9387
Joined: Wed May 28, 2014 9:20 am
Religion: None (Atheist)
Diet: Vegan

Post by brimstoneSalad » Wed Jun 19, 2019 6:24 pm

teo123 wrote:
Tue Jun 18, 2019 12:53 pm
brimstoneSalad wrote:An irrational fear of AI escaping could actually help prevent a high general intelligence SI from being developed and put into broad use
Isn't that fear even more rational than the fear of global warming? I mean, it's based on an even harder science (computer science) than climate science is.
No, because it's much more speculative.
teo123 wrote:
Tue Jun 18, 2019 12:53 pm
And while there are a few climatologists arguing that global warming isn't dangerous, I have yet to hear of a computer scientist arguing that artificial intelligence isn't dangerous.
Of course highly advanced AI is dangerous if it falls into the wrong hands, like nuclear weapons. The argument is more about policy limiting AI and keeping it under control.
There is no real controlling the weather -- we have cloud seeding, but that's about it. AI also has major potential benefits that the coming climate disaster lacks.
