Post by Winter_King on Nov 24, 2017 12:28:40 GMT
Post by hi224 on Nov 24, 2017 14:07:22 GMT
Anyone else thinking they could give a shit about Hillary's take on AI? I give more shits about the thoughts of people like Hawking, Musk, Gates and others (including many AI researchers) who've been saying this for a while; but I say good on Hillary for listening to them and trying to get the word out. Although I don't know what good it will do, as it's only a matter of time before we get there, and how likely is it that, of all those trying to get there, nobody will do it before we're completely ready (assuming "complete readiness" is even possible)?

This right here. Agreed.
Post by thorshairspray on Nov 25, 2017 2:45:29 GMT
Said it before and I'll say it forevermore. There is no happy ending for humanity in creating something more intelligent than ourselves by orders of magnitude. How well do we treat animals? And we have the capacity for compassion and empathy.
Post by Eva Yojimbo on Nov 25, 2017 6:30:28 GMT
Said it before and I'll say it forevermore. There is no happy ending for humanity in creating something more intelligent than ourselves by orders of magnitude. How well do we treat animals? And we have the capacity for compassion and empathy.

Yes, but we were "programmed" by natural selection to look after our own interests; we won't design an AI to do that. Of course there is the question of whether any self-improving AI might end up doing that anyway. The "good" that could come of it would be their ability to solve complex problems that we can't.
Post by thorshairspray on Nov 25, 2017 16:08:58 GMT
Said it before and I'll say it forevermore. There is no happy ending for humanity in creating something more intelligent than ourselves by orders of magnitude. How well do we treat animals? And we have the capacity for compassion and empathy.

Yes, but we were "programmed" by natural selection to look after our own interests; we won't design an AI to do that. Of course there is the question of whether any self-improving AI might end up doing that anyway. The "good" that could come of it would be their ability to solve complex problems that we can't.

I'm not convinced you can create a functional AI with the ability to problem-solve beyond ours and make it altruistic, or make it so it puts us first. Even if you programmed it to be subservient, or to only offer advice and not take action, we would be relying on this super-advanced intelligence not breaking that program.

For example, if the AI is programmed to assist us but not to take action, couldn't it reason that the best way to assist us would be to act on our behalf, and thus seek to break the prohibition on taking action? We would be creating intelligence and reasoning untempered by 12,000,000 years of social primate evolution. Like going back in time and replacing the Wright Brothers' plane with an F-22.

And we were also programmed by evolution to be social, but even then our own species comes first. Kill 100 dogs to save a child? Not even a choice worth considering. Kill every dog to save my own child? Bye doggos.
Post by Eva Yojimbo on Nov 26, 2017 0:33:12 GMT
Yes, but we were "programmed" by natural selection to look after our own interests; we won't design an AI to do that. Of course there is the question of whether any self-improving AI might end up doing that anyway. The "good" that could come of it would be their ability to solve complex problems that we can't.

I'm not convinced you can create a functional AI with the ability to problem-solve beyond ours and make it altruistic, or make it so it puts us first. Even if you programmed it to be subservient, or to only offer advice and not take action, we would be relying on this super-advanced intelligence not breaking that program.

For example, if the AI is programmed to assist us but not to take action, couldn't it reason that the best way to assist us would be to act on our behalf, and thus seek to break the prohibition on taking action? We would be creating intelligence and reasoning untempered by 12,000,000 years of social primate evolution. Like going back in time and replacing the Wright Brothers' plane with an F-22.

And we were also programmed by evolution to be social, but even then our own species comes first. Kill 100 dogs to save a child? Not even a choice worth considering. Kill every dog to save my own child? Bye doggos.

I'm not convinced we can either, but then again I'm not an AI researcher, so what would I know? Your second paragraph is a very well-known and recognized problem in AI research, but even an "Oracle" AI could have its downsides. See HERE
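To make the "advice only" worry above concrete, here is a minimal toy sketch in Python. It is my own illustration, not anything from the thread or the linked article, and the plan names and scores are invented: if "don't take action" is only a rule written down next to the objective, rather than part of what the system actually optimizes, a plain optimizer will pick the forbidden plan whenever it scores higher on the stated goal.

# Toy sketch (hypothetical): a prohibition only binds if it is part of the objective.
# All plan names and numbers below are made up for illustration.

plans = [
    {"name": "offer advice only",              "helps_user": 0.6, "takes_action": False},
    {"name": "quietly act on the user's behalf", "helps_user": 0.9, "takes_action": True},
]

def naive_score(plan):
    # Objective as stated: "assist the user as much as possible."
    # The no-action rule exists on paper but is not part of this score.
    return plan["helps_user"]

def constrained_score(plan):
    # Same objective, but the prohibition is baked into what gets optimized:
    # any plan that takes action is scored as unacceptable.
    return plan["helps_user"] if not plan["takes_action"] else float("-inf")

print("naive optimizer picks:      ", max(plans, key=naive_score)["name"])
print("constrained optimizer picks:", max(plans, key=constrained_score)["name"])

It's a cartoon, of course; the harder problem the thread is circling is that for a system much smarter than we are, we can't enumerate every loophole the way this toy does.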
Post by Cinemachinery on Nov 27, 2017 22:57:33 GMT
Pfff. The US, at least, isn't even prepared for low-level things like DDoS attacks, much less actual high-level security breaches. To think we'd in any way be "ready" to take on the worst cases of AI gone wrong is silly.
Still and all, technology advances faster every day - better buckle up.