Post by cupcakes on Nov 10, 2017 17:04:51 GMT
tpfkar I'm not really saying he should do anything, just noting that I find his viewpoint a bit odd. He says the danger is that computers may well be able to replicate human intelligence and then exceed it. On a materialist view, you would expect AIs to be able to develop all the wants, cares, and conscious experience that humans have. So why should Hawking care more about the welfare of one intelligent species than another, particularly if the latter might be far more capable of achieving something Hawking thinks is important to achieve? And what supposedly happens to "intelligences" when they're granted overwhelming relative power?