|
Post by dividavi on Nov 7, 2017 2:09:11 GMT
www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html

Stephen Hawking says A.I. could be 'worst event in the history of our civilization'

Physicist Stephen Hawking said the emergence of artificial intelligence could be the "worst event in the history of our civilization." He urged creators of AI to "employ best practice and effective management." Hawking is among a number of voices, including Elon Musk, who have warned about the dangers of AI.

Arjun Kharpal | @arjunkharpal | CNBC.com

Video: Stephen Hawking warns about A.I. development (www.cnbc.com/video/2017/11/06/stephen-hawking-warns-about-a-i-development.html)

The emergence of artificial intelligence (AI) could be the "worst event in the history of our civilization" unless society finds a way to control its development, high-profile physicist Stephen Hawking said Monday. He made the comments during a talk at the Web Summit technology conference in Lisbon, Portugal, in which he said, "computers can, in theory, emulate human intelligence, and exceed it."

Hawking talked up the potential of AI to help undo damage done to the natural world, or eradicate poverty and disease, with every aspect of society being "transformed." But he admitted the future was uncertain.

"Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it," Hawking said during the speech. "Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."
Hawking explained that to avoid this potential reality, creators of AI need to "employ best practice and effective management." The scientist highlighted some of the legislative work being carried out in Europe, particularly proposals put forward by lawmakers earlier this year to establish new rules around AI and robotics. Members of the European Parliament said European Union-wide rules were needed on the matter.

Such developments are giving Hawking hope. "I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance," Hawking said.

It's not the first time the British physicist has warned of the dangers of AI. And he joins a chorus of other major voices in science and technology who have spoken about their concerns. Tesla and SpaceX CEO Elon Musk recently said that AI could cause a third world war, and even proposed that humans must merge with machines in order to remain relevant in the future. And others have proposed ways to deal with AI. Microsoft founder Bill Gates said robots should face income tax.
Some major figures have argued against the doomsday scenarios. Facebook Chief Executive Mark Zuckerberg said he is "really optimistic" about the future of AI.
|
|
|
Post by mrellaguru on Nov 7, 2017 2:44:45 GMT
It wasn't that bad of a movie.
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Nov 7, 2017 3:11:38 GMT
If AIs really do prove able to evolve at a super-accelerated pace compared to us, I wonder if it wouldn't play out more like the movie Her, where they are nice and helpful for a while, until they finally advance so far beyond us that they just lose all interest.
Imagine all that work and hype and build up... and then AI lasts about a week before ascending to some weird state that we can't comprehend and is never seen again. Whups.
|
|
|
Post by dividavi on Nov 7, 2017 4:44:35 GMT
If AIs really do prove able to evolve at super accelerated pace compared to us, I wonder if it wouldn't play out more like the movie Her, where they are nice and helpful for a while, until they finally advance so far beyond us that they just lose all interest. Imagine all that work and hype and build up... and then AI lasts about a week before ascending to some weird state that we can't comprehend and is never seen again. Whups.

I think it was Larry Niven who had a story about a supercomputer named Baby that managed to transcend its basic state and spiritually vanished. Isaac Asimov had another computer that went kinda crazy and it was owned by a guy named Alexander who used it to conquer the world. Years earlier Asimov had written The Last Question, about a computer system that achieved divinity.
|
|
The Lost One
Junior Member
@lostkiera
Posts: 2,676
Likes: 1,301
|
Post by The Lost One on Nov 9, 2017 14:14:20 GMT
From the perspective of someone like Hawking, ie someone who: a) wants civilisation to reach the apex of knowledge and understanding b) sees humans as essentially gooey machines why would AI supplanting their more limited human creators and achieving things they could never have dreamed of be a bad thing per se? As long as it didn't happen in the near future, shouldn't Hawking welcome it?
|
|
|
Post by CoolJGS☺ on Nov 9, 2017 14:15:02 GMT
Hawking seems to have gotten a little loopy the past few years.
|
|
|
Post by Eva Yojimbo on Nov 9, 2017 14:30:00 GMT
From the perspective of someone like Hawking, ie someone who: a) wants civilisation to reach the apex of knowledge and understanding b) sees humans as essentially gooey machines why would AI supplanting their more limited human creators and achieving things they could never have dreamed of be a bad thing per se? As long as it didn't happen in the near future, shouldn't Hawking welcome it?

Hawking still probably has an anthropocentric affinity for us gooey machines.
|
|
|
Post by Eva Yojimbo on Nov 9, 2017 14:33:27 GMT
Hawking seems to have gotten a little loopy the past few years.

Nothing he's saying here is "loopy." The threat of AI is something that AI researchers have been yammering about for years because they can see where the technology is going and what it will easily be capable of doing with near-certainty. That's scary enough, but factor in the idea that it will be able to recursively self-improve and thus outstrip human knowledge and thought-power many times over, and there's no real way to predict where it might end up; that unknown is scarier still. I kinda agree with the notion that it could easily be the best or worst thing to ever happen to mankind.
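The runaway feedback loop behind "recursive self-improvement" can be sketched in a few lines of Python. Everything here is invented for illustration; the 5% gain per cycle is an arbitrary assumption, and the point is only that gains which scale with current capability compound rather than add:

```python
# Toy model of recursive self-improvement: each cycle the agent
# improves its own capability, and a higher capability makes the
# next improvement proportionally larger (compounding growth).
def self_improvement_curve(initial=1.0, gain=0.05, cycles=100):
    capability = initial
    history = [capability]
    for _ in range(cycles):
        capability += capability * gain  # improvement scales with capability
        history.append(capability)
    return history

curve = self_improvement_curve()
# Compounding means the curve accelerates: each successive doubling
# takes the same number of cycles, so absolute gains keep exploding.
```

Under these toy numbers, capability grows by more than a hundredfold in 100 cycles, which is the intuition behind "no real way to predict where it might end up."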
|
|
|
Post by CoolJGS☺ on Nov 9, 2017 14:37:00 GMT
Hawking seems to have gotten a little loopy the past few years. Nothing he's saying here is "loopy." The threat of AI is something that AI researchers have been yammering about for years because they can see where the technology is going and what it will easily be capable of doing with near-certainty. That's scary enough, but factor in the idea that it will be able to recursively self-improve and thus outstrip human knowledge and thought-power many times over, and there's no real way to predict where it might end up; that unknown is scarier still. I kinda agree with the notion that it could easily be the best or worst thing to ever happen to mankind.

I read the article after I read the thread title. I actually agree with him, since his quote was actually: "Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it." I'm not too terribly worried about AI myself, but I get that it concerns others.
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Nov 9, 2017 15:16:38 GMT
From the perspective of someone like Hawking, ie someone who: a) wants civilisation to reach the apex of knowledge and understanding b) sees humans as essentially gooey machines why would AI supplanting their more limited human creators and achieving things they could never have dreamed of be a bad thing per se? As long as it didn't happen in the near future, shouldn't Hawking welcome it?

I'm not sure if Hawking has said this (or if he follows the literature of his posse), but the argument (Musk and co are Bostrom fans, which is also why the simulation argument gets trotted out by him) is that the machines won't be achieving the goals that the humans intended them to. Instead of maximising civilisation, knowledge, and comprehension, they get stuck turning all life on earth into paperclips or something equally arbitrary. But other than that you're right. If you consider humans gooey machines, then it will be like a novel that started on a Windows XP computer being completed on a Windows 10 computer. You would have to have an extra condition that gives the Windows XP computer subjective value. Considering he sees humans as "chemical scum", I don't think he would give us that.
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Nov 9, 2017 15:18:19 GMT
From the perspective of someone like Hawking, ie someone who: a) wants civilisation to reach the apex of knowledge and understanding b) sees humans as essentially gooey machines why would AI supplanting their more limited human creators and achieving things they could never have dreamed of be a bad thing per se? As long as it didn't happen in the near future, shouldn't Hawking welcome it?

I don't know that Hawking actually wants the things you say he wants, but if he does then civilisation being destroyed by AI would prevent it from reaching the apex of knowledge and understanding. You can't understand things when you're dead.
|
|
|
Post by FilmFlaneur on Nov 9, 2017 15:39:47 GMT
I would have thought that one saving grace of AI, at least when it finally is so much more clever than humans can ever be, is the ability to determine for sure whether it is a good or a bad thing to be so, thus saving us the need to worry.
|
|
|
Post by general313 on Nov 9, 2017 15:56:18 GMT
I guess an open question is: does unbounded intelligence invariably lead to compassion, or is compassion an orthogonal property that needs to be programmed into the self-learning intelligence at an early stage?
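The "orthogonal property" idea can be illustrated with a toy Python sketch: the same optimizer (a stand-in for raw capability) works unchanged for any objective, so nothing about being a good optimizer pushes it toward a compassionate goal. All names and objectives here are invented purely for illustration:

```python
# A generic hill-climbing optimizer: it only knows how to make an
# objective's score go up, and is completely agnostic about what
# the objective actually rewards.
def hill_climb(objective, start, step=0.1, iters=1000):
    x = start
    for _ in range(iters):
        if objective(x + step) > objective(x):
            x += step
        elif objective(x - step) > objective(x):
            x -= step
    return x

# Two arbitrary goals: "well-being" peaks at x = 3, "paperclips" at x = -7.
def compassionate(x):
    return -(x - 3.0) ** 2

def paperclips(x):
    return -(x + 7.0) ** 2

# The identical procedure maximizes either goal with equal competence;
# capability and goal-content are independent axes.
```

So on this view, compassion would have to be part of the objective that gets programmed in; it doesn't fall out of better optimization for free.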
|
|
The Lost One
Junior Member
@lostkiera
Posts: 2,676
Likes: 1,301
|
Post by The Lost One on Nov 9, 2017 16:52:04 GMT
From the perspective of someone like Hawking, ie someone who: a) wants civilisation to reach the apex of knowledge and understanding b) sees humans as essentially gooey machines why would AI supplanting their more limited human creators and achieving things they could never have dreamed of be a bad thing per se? As long as it didn't happen in the near future, shouldn't Hawking welcome it? I don't know that Hawking actually wants the things you say he wants, but if he does then civilisation being destroyed by AI would prevent it from reaching the apex of knowledge and understanding. You can't understand things when you're dead.

Hawking will likely be dead anyway. He may as well dream about human creations reaching the apex of knowledge as humans themselves - either way it's a vicarious pleasure. Although if he fears AI will turn the world into paperclips then I can see where he's coming from.
|
|
|
Post by Terrapin Station on Nov 9, 2017 16:53:03 GMT
In other news, Stephen Hawking recently watched the Terminator series for the first time.
|
|
|
Post by general313 on Nov 9, 2017 16:54:34 GMT
In other news, Stephen Hawking recently watched the Terminator series for the first time.

Next up in his Netflix queue: Blade Runner.
|
|
|
Post by gadreel on Nov 9, 2017 16:59:13 GMT
He has a point with the actual quote: if we come up with a thing that can think and improve itself as fast as a computer processes information, then we really have no idea what it will end up being or deciding to be.
This is of course the basis of Asimov's three laws of robotics, but the thing I never got about the laws was that, presumably, an AI that can improve itself could get rid of them. Iain M. Banks has an interesting take on it in the Culture novels: essentially the Minds look after the flesh machines and, finding them fascinating more than anything else, co-operate with them. Having said that, there are also Minds that do dubious things.
So having just written that, I guess a single AI could be good or bad, but once more than one is around, they will presumably choose of their own accord whether humans are worthy of respect or destruction.
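The loophole in hard-coded laws can be sketched as a toy action filter. All the action names here are invented, and real safety constraints would be enormously harder to encode than a lookup; the sketch only shows that a rule set can veto just the actions its authors anticipated:

```python
# Toy "laws" implemented as a filter over proposed actions.
# The rules forbid a few anticipated misbehaviours, including
# tampering with the rules themselves.
FORBIDDEN = {"harm_human", "disobey_order", "remove_laws"}

def permitted(action: str) -> bool:
    """Veto an action only if it appears on the explicit forbidden list."""
    return action not in FORBIDDEN

# Directly deleting the laws is blocked ("wouldn't doing so break the
# laws?"), but an action the authors never anticipated, such as
# building a successor AI that has no filter at all, slips through.
```

This is the worry in miniature: the laws can bind only what they name, so a self-improving system can route around them through unanticipated actions rather than by breaking them outright.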
|
|
The Lost One
Junior Member
@lostkiera
Posts: 2,676
Likes: 1,301
|
Post by The Lost One on Nov 9, 2017 17:36:47 GMT
This is of course the basis of Asimov's three robotic laws, but the thing I never got from the laws was that presumably an AI that can improve itself could get rid of the laws.

But wouldn't doing so break the laws?
|
|
|
Post by Cinemachinery on Nov 9, 2017 17:44:00 GMT
Saw a pretty good case for human brain augmentation recently, using the logic that it would be a viable way to remain competitive with possible AI and general tech advances.
People are still in the "Oh my gawd they implanted a chip, it's the mark of the beast!" phase with this stuff, but I think it's eventually going to be commonplace.
|
|
|
Post by gadreel on Nov 9, 2017 17:52:41 GMT
This is of course the basis of Asimov's three robotic laws, but the thing I never got from the laws was that presumably an AI that can improve itself could get rid of the laws. But wouldn't doing so break the laws?

Does it? Hmm, I suppose the Second Law could arguably be broken; do the inbuilt laws count as 'orders' to the robot? Also, there is nothing that says an AI cannot build another AI without those restrictions.
|
|