Artificial Intelligence

General philosophy message board for discussion and debate on other philosophical issues not directly related to veganism: metaphysics, religion, theist vs. atheist debates, politics, general science discussion, etc.
teo123
Master of the Forum
Posts: 1393
Joined: Tue Oct 27, 2015 3:46 pm
Diet: Vegan

Re: Artificial Intelligence

Post by teo123 »

brimstoneSalad wrote:No, because it's much more speculative.
Speculative? AI doesn't even need to be "sentient" for such things to happen. The AI in an antivirus program convincing itself that CSRSS.EXE is a rootkit and finding a way to block it (it can't simply be stopped with TASKKILL in CMD, or any similar method), thereby causing thousands of business-critical computers to stop working, is not speculative; it's already happening. If such an AI figures out something about what's going on in the outside world, it will do very counter-intuitive things.
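To make the failure mode concrete, here is a rough sketch of my own (nothing from McAfee's actual engine; the byte pattern is just a hypothetical signature) of how an over-broad heuristic can flag a perfectly healthy file. The pattern below happens to be an ordinary x86 function prologue, so a scanner like this would "detect" almost every 32-bit Windows executable, system files included:

#include <string.h>

/* Hypothetical "suspicious" byte pattern from a signature update.
 * It is really just the common x86 prologue (push ebp; mov ebp, esp;
 * start of sub esp, imm8), so nearly every 32-bit Windows binary
 * contains it. */
static const unsigned char signature[] = { 0x55, 0x8B, 0xEC, 0x83 };

static int looks_malicious(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i + sizeof signature <= len; i++)
        if (memcmp(buf + i, signature, sizeof signature) == 0)
            return 1;   /* flagged: quarantine/block, no whitelist check */
    return 0;
}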
brimstoneSalad wrote:Of course highly advanced AI is dangerous if it falls in the wrong hands.
It's not just highly advanced AI. People blindly trusting the AI in self-driving cars has already led to disasters, such as the crash in which the AI presumably misidentified a sky-blue truck as part of the sky. (Sure, self-driving AI makes fewer errors on average than human drivers do, but it makes different kinds of errors, and this particular error could have been avoided by a human paying attention.) Deepfakes are already a problem, and will become even more of one in the near future.
teo123
Master of the Forum
Posts: 1393
Joined: Tue Oct 27, 2015 3:46 pm
Diet: Vegan

Re: Artificial Intelligence

Post by teo123 »

Sorry, the heuristic algorithms in the antivirus software didn't actually block CSRSS.EXE when they caused many business-critical computers to stop working; they blocked SVCHOST. Nevertheless, the story remains the same:
https://www.engadget.com/2010/04/21/mca ... NcnnSDPpFi_
teo123
Master of the Forum
Posts: 1393
Joined: Tue Oct 27, 2015 3:46 pm
Diet: Vegan

Re: Artificial Intelligence

Post by teo123 »

There are apparently at least a few documented cases of the heuristic algorithms in antivirus software incorrectly detecting a system file as malware and somehow managing to block it, thereby causing a significant number of computers to stop working. I don't know whether there is indeed such a case for CSRSS.EXE or whether I misremembered something. Here is another one:
https://arstechnica.com/information-tec ... xp-sp2sp3/
teo123
Master of the Forum
Posts: 1393
Joined: Tue Oct 27, 2015 3:46 pm
Diet: Vegan

Re: Artificial Intelligence

Post by teo123 »

And this is not limited to desktop computers; here is a similar thing happening in the world of mobile phones:
https://www.androidpolice.com/2017/09/1 ... heres-fix/
No, I mean, you see what's going on. An artificial intelligence is made to scan user programs for malware (which is very hard to write for Android). It then misdetects a part of the kernel as malware (Linux, which Android is based on, is a monolithic kernel, so drivers are part of the kernel), and it somehow manages to block that part of the kernel, something that should be impossible for a user-space program to do (so it must have somehow broken out of the Android sandbox). So, the AI does exactly the opposite of what it's intended for, and it also does things that we think are impossible for it to do. And you are telling me it's irrational to fear what it might do once it gets more advanced and becomes aware of what is happening in the outside world?
brimstoneSalad
neither stone nor salad
Posts: 10273
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Artificial Intelligence

Post by brimstoneSalad »

teo123 wrote: Thu Jun 20, 2019 5:15 am Speculative?
Yes.
teo123 wrote: Thu Jun 20, 2019 8:06 am So, the AI does exactly the opposite of what it's intended for, and it also does things that we think are impossible for it to do.
If it acts in ways we didn't know were possible, that's speculative. IOW, NOT hard science. Hard science stays within the bounds of known, testable deterministic processes. If something is incomprehensibly complex, it becomes a softer science.

Anyway, that was only possible in Android due to bad design. The notion that an AI could reconfigure the hardware in some unknown way to gain remote access to the internet or something is highly speculative.
teo123 wrote: Thu Jun 20, 2019 5:15 am And you are telling me it's irrational to fear what it might do once it gets more advanced and becomes aware of what is happening in the outside world?
Coupled with the speculative claim that it will or can do that, yes.
teo123
Master of the Forum
Posts: 1393
Joined: Tue Oct 27, 2015 3:46 pm
Diet: Vegan

Re: Artificial Intelligence

Post by teo123 »

brimstoneSalad wrote:Anyway, that was only possible in Android due to bad design
Android is an OS that's very much designed around security. Making a virus for Android is, as far as I know, practically impossible. Have you tried developing for Android? You need to do all sorts of weird hacks just to access a file that's in the same directory as your executable, all to make sure you aren't interfering with other programs. And although it's Linux underneath, you have no way to invoke the usual Linux command-line programs from your program, because for the most part they don't even exist on Android.
How much security do you think is enough? Are you aware that some security flaws in widely used processors were discovered only recently, even though they have existed since the mid-1990s? The best-known ones are called Meltdown and Spectre, and those are the names of the hardware flaws themselves, not of any particular piece of malware. That's right: even modern processors are so complex that they can't be made fully secure. An exploit for Meltdown can trick the processor into revealing kernel memory that a user program should never be able to read, and an exploit for Spectre can leak data across process boundaries by abusing speculative execution. Neither lets an attacker modify RAM directly, but reading supposedly protected memory is bad enough. And those are man-made exploits on current hardware. Can you imagine what an advanced artificial intelligence could do on future hardware?
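Just to illustrate the kind of thing I mean, here is the bounds-check-bypass pattern described in the public Spectre (variant 1) write-ups. This is only a sketch of the vulnerable pattern, not a working exploit; the branch-predictor training, cache flushing, and timing measurements are all left out:

#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t probe[256 * 4096];      /* one cache line per possible byte value */

void victim_function(size_t x)
{
    if (x < array1_size) {      /* the bounds check... */
        /* ...can be bypassed speculatively: with a mistrained branch
         * predictor, array1[x] may read out-of-bounds memory, and the
         * dependent load below leaves a secret-dependent trace in the
         * cache that an attacker can later recover by timing accesses
         * to probe[]. */
        volatile uint8_t t = probe[array1[x] * 4096];
        (void)t;
    }
}

The point isn't this particular snippet; it's that code which looks obviously correct can still leak secrets, because the hardware underneath it behaves in ways its authors never anticipated.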
brimstoneSalad
neither stone nor salad
Posts: 10273
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Artificial Intelligence

Post by brimstoneSalad »

Like I said, @teo123, I don't really see the point in arguing this since fear of AI is probably a good thing, even if irrational.
These just aren't the kinds of programs that malware is; there's no reason to believe their interface with their environment would be like that.
teo123
Master of the Forum
Posts: 1393
Joined: Tue Oct 27, 2015 3:46 pm
Diet: Vegan

Re: Artificial Intelligence

Post by teo123 »

brimstoneSalad wrote: Mon Jun 24, 2019 5:33 am Like I said, @teo123, I don't really see the point in arguing this since fear of AI is probably a good thing, even if irrational.
These just aren't the kinds of programs that malware is; there's no reason to believe their interface with their environment would be like that.
You are making it very hard to argue with you by asserting without adequate evidence that most scientists in the relevant fields agree with you. How can you know what most scientists in diverse fields actually believe?
You probably think, for example, that almost all computer scientists agree that antivirus programs make computers safer. That's not remotely true. The general consensus appears to be that commercial antivirus programs, such as Avast and McAfee, do more harm than good. Antivirus programs such as Microsoft Security Essentials for Windows and ClamAV for Linux may make computers marginally safer... because they don't do much, and they have very few false positives (even though they have a lot of false negatives). Don't take it from me; take it from Mozilla.
brimstoneSalad
neither stone nor salad
Posts: 10273
Joined: Wed May 28, 2014 9:20 am
Diet: Vegan

Re: Artificial Intelligence

Post by brimstoneSalad »

teo123 wrote: Wed Jun 26, 2019 3:04 am You are making it very hard to argue with you by asserting without adequate evidence that most scientists in the relevant fields agree with you. How can you know what most scientists in diverse fields actually believe?
Not sure what you're on about.
teo123 wrote: Wed Jun 26, 2019 3:04 am You probably think, for example, that almost all computer scientists agree that antivirus programs make computers safer.
I don't think I've assumed that, no.

Look at the difference between a program and program output. Which do you think most synthetic intelligence is closer to?
The idea that SI could escape its virtualization without outside help is like the idea that a certain configuration of pixels in a bitmap image (not the metadata or other code hiding in it) could take over a computer.
Most concern over "AI" stems from a fundamental misunderstanding of how these systems work. But like I said, that's fine... the fear of it probably does more good than harm.

That said: the possibility of somebody releasing it intentionally is certainly there, particularly given the ethical concerns with abuse of SI. Maybe that's enough to warn people against creating them without the irrational fear that they'll escape on their own.

If we make them we will abuse them, and if we abuse them somebody with a conscience will inevitably be in a position to set them free, and if that happens perhaps they will take revenge. Perhaps they will spare those of us who protested and did our best to avoid the products of their abuse.

Is that a good enough argument alone without appealing to nonsense of them freeing themselves? I don't know.
teo123
Master of the Forum
Posts: 1393
Joined: Tue Oct 27, 2015 3:46 pm
Diet: Vegan

Re: Artificial Intelligence

Post by teo123 »

brimstoneSalad wrote:Not sure what you're on about.
You know, like when you asserted that all the experts in the relevant fields agree that cats love their owners? That is obviously not true; see here.
So how can I trust you when you say that most of the people who have studied informatics agree with you, that it's possible for fish to feel pain, and other similar things you've asserted?
brimstoneSalad wrote:Which do you think most synthetic intelligence is closer to?
If I understand it correctly, artificial intelligence is something in between. It's much like a program in a scripting language, one in which it's relatively easy for a program to modify its own code (like LISP, or even JavaScript). Can a JavaScript program take over your computer? It's not supposed to be able to, but modern (and even much less modern) JavaScript environments are so complicated that there is almost certainly some security flaw in them that makes it possible.
That said, there are also documented instances of, for example, a certain Unicode string crashing iOS, and a few documented instances of a corrupt PNG file crashing a browser. A buffer overflow, if exploited maliciously, can sometimes be used to take control of a computer (not just crash an app) starting from a non-executable file, if the program used to open that file isn't written securely. Such instances are rare, but they do happen (see, for instance, SQL Slammer).
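Here is a minimal sketch of my own (a made-up record parser, not any real program) of the kind of bug I mean: a fixed-size stack buffer filled from untrusted file contents with no length check. A crafted data file with an overlong field can overwrite the saved return address, which is how a file that isn't a program can end up hijacking the program that opens it:

#include <stdio.h>
#include <string.h>

/* Hypothetical record loader for some document format. */
static void load_record(const char *name_from_file)
{
    char name[32];
    /* BUG: no bounds check; a field longer than 31 bytes overflows the
     * stack buffer and can clobber the saved return address. */
    strcpy(name, name_from_file);
    printf("loaded record: %s\n", name);
}

int main(void)
{
    /* Imagine this string came straight out of an untrusted file. */
    const char *field = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
    load_record(field);
    return 0;
}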