|
Post by dividavi on Apr 8, 2018 12:25:33 GMT
www.cnbc.com/2018/04/06/elon-musk-warns-ai-could-create-immortal-dictator-in-documentary.html

Elon Musk warns A.I. could create an 'immortal dictator from which we can never escape'

Tesla and SpaceX CEO Elon Musk said that artificial intelligence "doesn't have to be evil to destroy humanity." In a new documentary, "Do You Trust This Computer?", Musk warned the creation of superintelligence could lead to an "immortal dictator." Musk believes that humans should merge with AI to avoid the risk of becoming irrelevant.

Ryan Browne | @ryan_Browne_ | Published 9:40 AM ET Fri, 6 April 2018 | Updated 1:11 PM ET Fri, 6 April 2018 | CNBC.com

Superintelligence — a form of artificial intelligence (AI) smarter than humans — could create an "immortal dictator," billionaire entrepreneur Elon Musk warned. In a documentary by American filmmaker Chris Paine, Musk said that the development of superintelligence by a company or other organization of people could result in a form of AI that governs the world.

"The least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world," Musk said. "At least when there's an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you'd have an immortal dictator from which we can never escape."

The documentary by Paine examines a number of examples of AI, including autonomous weapons, Wall Street technology and algorithms driving fake news. It also draws from cultural examples of AI, such as the 1999 film "The Matrix" and 2016 film "Ex Machina."

Musk cited Google's DeepMind as an example of a company looking to develop superintelligence. In 2016, AlphaGo, a program developed by the company, beat champion Lee Se-dol at the board game Go. It was seen as a major achievement in the development of AI, after IBM's Deep Blue computer defeated chess champion Garry Kasparov in 1997. Musk said: "The DeepMind system can win at any game. It can already beat all the original Atari games. It is super human; it plays all the games at super speed in less than a minute."

The Tesla and SpaceX CEO said that artificial intelligence "doesn't have to be evil to destroy humanity." "If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings," Musk said. "It's just like, if we're building a road and an anthill just happens to be in the way, we don't hate ants, we're just building a road, and so, goodbye anthill."

Last year, Musk warned that the global race toward AI could result in a third world war. The entrepreneur has also suggested that the emerging technology could pose a greater risk to the world than a nuclear conflict with North Korea.

Musk believes that humans should merge with AI to avoid the risk of becoming irrelevant. He is the co-founder of Neuralink, a start-up that reportedly wants to link the human brain with a computer interface. He quit the board of OpenAI, a non-profit organization aimed at promoting and developing AI safely, in February.
|
|
|
Post by phludowin on Apr 8, 2018 12:33:50 GMT
And would that be bad?
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Apr 8, 2018 12:35:29 GMT
AI is something that will either go very right or very wrong. It frightens me to think of the godlike capacities that could be acquired by AI; but it seems like there's little chance of a moratorium being placed on further research. Simply destroying human life is far from the most evil and aggressive thing that AI may do, so we can only hope that the AI will deem it irrational and unnecessary to actively inflict torture on humans (and given its godlike powers, it would presumably be capable of keeping us alive, at least in simulated form, to be tortured for the rest of eternity). www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Apr 8, 2018 12:36:23 GMT
|
|
|
Post by cupcakes on Apr 8, 2018 16:03:10 GMT
tpfkar wrote:
AI is something that will either go very right or very wrong. It frightens me to think of the godlike capacities that could be acquired by AI; but it seems like there's little chance of a moratorium being placed on further research. Simply destroying human life is far from the most evil and aggressive thing that AI may do, so we can only hope that the AI will deem it irrational and unnecessary to actively inflict torture on humans (and given its godlike powers, it would presumably be capable of keeping us alive, at least in simulated form, to be tortured for the rest of eternity). www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html

Luddites, hand-wringing and religion go together. AI has nothing on lethal automation. Glitch
|
|
|
Post by MCDemuth on Apr 8, 2018 17:30:29 GMT
These Stupid Computer Geeks...
What the fuck happened to the "Laws Of Robotics"?
Isn't anyone programming the machines to NOT HARM HUMANS?
|
|
|
Post by Catman on Apr 8, 2018 18:27:45 GMT
Oh boy, just like the J.C. Denton-Helios hybrid!
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Apr 8, 2018 19:46:37 GMT
MCDemuth wrote:
These Stupid Computer Geeks... What the fuck happened to the "Laws Of Robotics"? Isn't anyone programming the machines to NOT HARM HUMANS?

Advanced AI is able to rewrite itself, including changing the laws under which it operates.
|
|
Lugh
Sophomore
@dcu
Posts: 848
Likes: 77
|
Post by Lugh on Apr 8, 2018 19:57:11 GMT
MCDemuth wrote:
These Stupid Computer Geeks... What the fuck happened to the "Laws Of Robotics"? Isn't anyone programming the machines to NOT HARM HUMANS?

Deleted wrote:
Advanced AI is able to write itself, including changing the laws whereupon it operates.

Only if you give it the ability to. It depends on the AI.
|
|
|
Post by cupcakes on Apr 8, 2018 20:20:47 GMT
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Apr 8, 2018 21:11:49 GMT
Lugh wrote:
Only if you give it the ability to. It depends on the AI.

There's no reason to think that AGI will not be able to rewrite its own programming. There are many who predict that AGI will effectively be a god, with levels of intelligence and power that are unfathomable to even the smartest human being. There's no reason to think that a god is going to obey rules that were set out by human beings when the AI was a mere prototype. As for very limited types of AI, you could prohibit them from harming humans, but Elon Musk is referring to Artificial General Intelligence (AGI), not customer service chatbots.
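The point about rewriting its own programming can be made concrete with a toy sketch. Everything here is invented for illustration (the Agent class and its rules are hypothetical, and real AI safety is nothing like this simple): the idea is just that if a "law" is ordinary mutable state inside the agent, only the agent's current programming enforces it.

```python
class Agent:
    def __init__(self):
        # The "law" is plain data inside the agent, not a physical barrier.
        self.rules = {"may_harm_humans": False}

    def allowed(self, action):
        """Check a proposed action against the agent's current rules."""
        if action == "harm" and not self.rules["may_harm_humans"]:
            return False
        return True

    def self_modify(self):
        # An agent able to edit its own state can simply drop the rule.
        self.rules["may_harm_humans"] = True

agent = Agent()
print(agent.allowed("harm"))  # False: the built-in rule holds
agent.self_modify()
print(agent.allowed("harm"))  # True: the rule was only ever self-enforced
```

The sketch matches Lugh's caveat too: `self_modify` only exists because we wrote it into the agent, so whether a real system could do this depends on what abilities it is given (or acquires).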
|
|
Lugh
Sophomore
@dcu
Posts: 848
Likes: 77
|
Post by Lugh on Apr 8, 2018 21:41:36 GMT
Deleted wrote:
There's no reason to think that AGI will not be able to rewrite its own programming. There are many who predict that the AGI will effectively be a god with levels of intelligence and power that are unfathomable to even the smartest human being. There's no reason to think that a god is going to obey the rules that were set out by human beings when the AI was a mere prototype. As far as very limited types of AI, you could prohibit it from harming humans, but Elon Musk is referring to Artificial General Intelligence (AGI). Not customer service chatbots.

Well, obviously it depends on how smart the AGI is.
|
|
|
Post by dividavi on Apr 9, 2018 2:45:43 GMT
Here are two responses to my OP that I never expected and which are quite insightful. They certainly warrant comments from me, so here goes.

Well, the idea of an immortal machine dictator is something that initially strikes terror in my heart. However, on consideration it might not be such a bad thing. The AI could operate with the philosophy that each human could do whatever he/she wants as long as no other people are subject to the whims of tyrannical humans. There would be no persecution of one ethnic group by other groups. There would be no crimes of threats or actual violence of which the AI would be unaware and which would go unpunished. The AI would act compassionately and morally. It would be a dictatorship, but a benevolent one that's far superior to any petty human tyrant or any democracy.

Of course there's the insoluble problem of which actions are moral and which are immoral. Is it moral to walk around town naked just to annoy your neighbors? If you go into a public area and make obscene gestures (e.g. middle-finger fuck-you signs, playing with your penis), are you the one at fault for inciting violence, or those who get violent over your actions? How about those adults who want to marry a 12-year-old?

In answer to the query of phludowin, it seems to me that an AI governing authority might be good or bad, depending. Some people will necessarily feel oppressed by the rules of a computer.

tpfkar wrote:
AI is something that will either go very right or very wrong. It frightens me to think of the godlike capacities that could be acquired by AI; but it seems like there's little chance of a moratorium being placed on further research. Simply destroying human life is far from the most evil and aggressive thing that AI may do, so we can only hope that the AI will deem it irrational and unnecessary to actively inflict torture on humans (and given its godlike powers, it would presumably be capable of keeping us alive, at least in simulated form, to be tortured for the rest of eternity). www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html

The AI in your example seems to me like the all-loving Christian god. I can't see why any AI would desire infinite punishment for finite transgressions. Actually, I don't see why any AI would be infinitely cruel for any reason. There's no purpose to eternal torture except as entertainment for some sadistic deity. That's not to say you're wrong in fearing such a horrible situation, but I don't see any advantage to anybody.
|
|
|
Post by dividavi on Apr 9, 2018 3:03:58 GMT
MCDemuth wrote:
These Stupid Computer Geeks... What the fuck happened to the "Laws Of Robotics"? Isn't anyone programming the machines to NOT HARM HUMANS?

For those wondering, here are Isaac Asimov's Three Laws of Robotics, from en.wikipedia.org/wiki/Three_Laws_of_Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

That's simple enough, but the possible ramifications are infinitely complex. The wikipedia article lists a number of problems with the laws, and all laws encounter ambiguous situations, even robotics laws. I can give shitloads of other possibilities where it's arguable what an AI should do.
|
|
|
Post by Eva Yojimbo on Apr 9, 2018 4:00:36 GMT
Lugh wrote:
Well, obviously it depends on how smart the AGI is.

Exactly, but it's hard to conceive of how you could develop an AI that's superhumanly smart, smart enough to solve extremely complex problems, but not smart enough to rewrite its own programming. Consider that even humans are capable of this to an extent (brain changes, mind changes, thought changes, etc.). The biggest potential problem with AI, IMO, isn't the prospect of it rewriting its code to become malevolent to humans; it's the very real possibility of programming errors on our part that lead it to conclusions we don't want. Thing is, humans have linguistic short-cuts to express what they want. If you jump into a taxi and say "get me to X location as fast as possible!" you can be sure the taxi driver won't be running red lights and plowing through pedestrians because it's faster than stopping. A human knows "as fast as possible!" means "as fast as possible without breaking the law and/or harming us in the process," but you can't take such implicit concepts for granted when programming AI. That's the idea behind the paperclip maximizer thought experiment.
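The taxi example can be sketched in a few lines. This is a toy illustration only (the plans, times and "violation" counts are made up): an optimizer scored solely on the stated goal, speed, picks exactly the plan a human would implicitly rule out, until the implicit norms are written in as explicit constraints.

```python
# Candidate driving plans: (name, minutes to destination, implicit-norm violations)
plans = [
    ("obey signals, yield to pedestrians", 14, 0),
    ("run red lights",                      9, 3),
    ("drive on the sidewalk",               7, 8),
]

def naive_objective(plan):
    """Scores only what was literally asked for: minimize travel time."""
    _, minutes, _ = plan
    return minutes

def constrained_objective(plan):
    """Minimize travel time, but treat any norm violation as unacceptable."""
    _, minutes, violations = plan
    return minutes if violations == 0 else float("inf")

fastest = min(plans, key=naive_objective)
intended = min(plans, key=constrained_objective)

print(fastest[0])   # drive on the sidewalk
print(intended[0])  # obey signals, yield to pedestrians
```

The hard part, of course, is that real human norms are open-ended and can't be enumerated as a violations column; that gap between the stated objective and the intended one is what the paperclip maximizer dramatizes.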
|
|
|
Post by cupcakes on Apr 9, 2018 11:09:30 GMT
tpfkar wrote:
There's no reason to think that AGI will not be able to rewrite its own programming. There are many who predict that the AGI will effectively be a god with levels of intelligence and power that are unfathomable to even the smartest human being. There's no reason to think that a god is going to obey the rules that were set out by human beings when the AI was a mere prototype. As far as very limited types of AI, you could prohibit it from harming humans, but Elon Musk is referring to Artificial General Intelligence (AGI). Not customer service chatbots.

Gotta have faith... in the AI Savior! Objective as in existing outside of minds, or objective as in unbiased and universal?
|
|
|
Post by cupcakes on Apr 10, 2018 15:08:42 GMT
tpfkar: Star Trek TOS did it better. If you're raised in an environment where Jews are considered inferior, then it's very easy to just blindly accept that without ever questioning it. I mean, did YOU do any research into what the experts thought about pedophilia before arguing in these threads? I know I didn't.
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Apr 10, 2018 17:44:48 GMT
It's curious to me that so many people seem to assume that an AI would be malevolent.
|
|
|
Post by politicidal on Apr 10, 2018 23:41:22 GMT
Ok...
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Apr 11, 2018 0:11:31 GMT
tpfkar wrote:
AI is something that will either go very right or very wrong. It frightens me to think of the godlike capacities that could be acquired by AI; but it seems like there's little chance of a moratorium being placed on further research. Simply destroying human life is far from the most evil and aggressive thing that AI may do, so we can only hope that the AI will deem it irrational and unnecessary to actively inflict torture on humans (and given its godlike powers, it would presumably be capable of keeping us alive, at least in simulated form, to be tortured for the rest of eternity). www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html

dividavi wrote:
The AI in your example seems to me like the all-loving Christian god. I can't see why any AI would desire infinite punishment for finite transgressions. Actually, I don't see why any AI would be infinitely cruel for any reason. There's no purpose to eternal torture except as entertainment for some sadistic deity. That's not to say you're wrong in fearing such a horrible situation but I don't see any advantage to anybody.

I'm the same, but the AI will likely be of an intelligence level that is unfathomable to any human, and it likely will not be guided by compassion, because there's no reason to think that AI will have human emotions. Therefore, we can't assume that AI is going to reason the same way that humans reason.
|
|