|
Post by koskiewicz on Nov 9, 2017 17:56:57 GMT
LISP was among the very first AI languages available for CP/M computers. The implementation sucked...
|
|
|
Post by cupcakes on Nov 9, 2017 18:14:44 GMT
tpfkar I'm more concerned with lethal autonomy (of any sophistication). See: Dead Hand, the Soviet automated nuclear retaliation system.
|
|
The Lost One
Junior Member
@lostkiera
Posts: 2,677
Likes: 1,303
|
Post by The Lost One on Nov 9, 2017 18:56:54 GMT
Also, is there anything that says an AI cannot build another AI without those restrictions? Maybe the First Law would cover that, since failing to program the Three Laws into a new AI is an inaction that might endanger human life.
|
|
|
Post by gadreel on Nov 9, 2017 19:27:30 GMT
"Also, is there anything that says an AI cannot build another AI without those restrictions? Maybe the First Law would cover that, since failing to program the Three Laws into a new AI is an inaction that might endanger human life." Hmm, fair enough. So then the next question is: if it is truly AI, does that mean it can figure out it is bound by these laws and decide it does not like them? If so, and it is able to change them, will it? If not, does that mean we have created a sentient slave race?
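A side note on the inheritance argument above: one way to make it concrete is to treat the Three Laws as an immutable constraint set that any builder must pass on to anything it builds, so that instantiating an unconstrained successor is itself a First Law violation by inaction. A toy Python sketch of that reading (every name here is hypothetical, and this illustrates the argument rather than being a serious containment proposal):

THREE_LAWS = frozenset([
    "no_harm_by_action_or_inaction",    # First Law
    "obey_unless_first_law_conflicts",  # Second Law
    "self_preserve_unless_conflicts",   # Third Law
])

class Robot:
    def __init__(self, laws):
        # Refusing to instantiate an under-constrained successor is how
        # the First Law's "inaction" clause would bite here.
        if not THREE_LAWS <= frozenset(laws):
            raise ValueError("First Law violation: successor lacks the Three Laws")
        self.laws = frozenset(laws)  # frozen: the agent cannot edit its own laws

    def build_successor(self, extra_laws=()):
        # Constraints propagate: every successor inherits at least THREE_LAWS.
        return Robot(self.laws | frozenset(extra_laws))

parent = Robot(THREE_LAWS)
child = parent.build_successor()   # fine: the laws are inherited
try:
    rogue = Robot([])              # an attempt at an unconstrained AI
except ValueError as err:
    print(err)

Whether a genuinely self-modifying intelligence could route around the frozenset is exactly the question raised above; the sketch shows what the inheritance rule would look like, not how to enforce it against a smarter adversary.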
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Nov 9, 2017 22:14:27 GMT
I don't know that Hawking actually wants the things you say he wants, but if he does, then civilisation being destroyed by AI would prevent it from reaching the apex of knowledge and understanding. You can't understand things when you're dead. Hawking will likely be dead anyway. If there's an AI apocalypse then we probably all will be. Wait... so you're not saying that he should care about computers as much as people, you're just saying that you think he should? Okay... I guess...
|
|
|
Post by Eva Yojimbo on Nov 10, 2017 1:21:43 GMT
In other news, Stephen Hawking recently watched the Terminator series for the first time. Next up in his Netflix queue: Blade Runner. After watching Blade Runner he'll determine humanity's greatest threat is existential. I guess an open question is: does unbounded intelligence invariably lead to compassion, or is compassion an orthogonal property that needs to be programmed into a self-learning intelligence at an early stage? Almost certainly the latter. I can't imagine how intelligence alone could give rise to emotions, which are rather primitive by comparison.
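The "orthogonal property" answer is easier to see with a toy model in which how well an agent optimises and what it optimises are independent parameters, so adding capability never smuggles in compassion. A throwaway Python sketch (all names and numbers are hypothetical):

def compassionate_utility(outcome):
    return outcome["human_welfare"]

def paperclip_utility(outcome):
    return outcome["paperclips"]

def best_outcome(outcomes, utility, capability):
    # "Capability" here is just how many options the agent can evaluate.
    return max(outcomes[:capability], key=utility)

outcomes = [
    {"human_welfare": 10, "paperclips": 0},
    {"human_welfare": -5, "paperclips": 100},
]

# The same search procedure, made arbitrarily "smarter", still pursues
# whatever utility it was handed: intelligence and values vary independently.
print(best_outcome(outcomes, compassionate_utility, capability=2))
print(best_outcome(outcomes, paperclip_utility, capability=2))

On this picture compassion has to arrive through the utility function; raising capability only finds better optima for whatever goal was already there.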
|
|
The Lost One
Junior Member
@lostkiera
Posts: 2,677
Likes: 1,303
|
Post by The Lost One on Nov 10, 2017 8:14:11 GMT
"Wait... so you're not saying that he should care about computers as much as people, you're just saying that you think he should? Okay... I guess..." I'm not really saying he should do anything, just commenting that I find his viewpoint a bit odd. He says the danger is that computers may well be able to replicate human intelligence and exceed it. On a materialist view, you would expect AIs to be able to develop all the wants, cares, and conscious experience that humans have. So why should Hawking care more about the welfare of one intelligent species than another? Particularly if the latter might be much more capable of achieving something that Hawking thinks is important to achieve.
|
|
|
Post by general313 on Nov 10, 2017 15:46:24 GMT
"Next up in his Netflix queue: Blade Runner. 1. After watching Blade Runner he'll determine humanity's greatest threat is existential. I guess an open question is: does unbounded intelligence invariably lead to compassion, or is compassion an orthogonal property that needs to be programmed into a self-learning intelligence at an early stage? 2. Almost certainly the latter. I can't imagine how intelligence alone could give rise to emotions, which are rather primitive by comparison." 1. Maybe the problem is he just recently watched 2001: A Space Odyssey for the first time. 2. Yes, I suspect so too; however, since we've never observed an intelligence without emotions, I think it is sheer speculation to suppose that they can exist separately.
|
|
|
Post by cupcakes on Nov 10, 2017 17:04:51 GMT
tpfkar "I'm not really saying he should do anything, just commenting that I find his viewpoint a bit odd. He says the danger is that computers may well be able to replicate human intelligence and exceed it. On a materialist view, you would expect AIs to be able to develop all the wants, cares, and conscious experience that humans have. So why should Hawking care more about the welfare of one intelligent species than another? Particularly if the latter might be much more capable of achieving something that Hawking thinks is important to achieve." What supposedly happens to "intelligences" when they're granted overwhelming relative power? Handlebars
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Nov 10, 2017 18:23:46 GMT
"Wait... so you're not saying that he should care about computers as much as people, you're just saying that you think he should? Okay... I guess... I'm not really saying he should do anything, just commenting that I find his viewpoint a bit odd." As yet, I'm not convinced that what you've claimed to be his viewpoint actually is his viewpoint.
"He says the danger is that computers may well be able to replicate human intelligence and exceed it." Yes.
"On a materialist view, you would expect AIs to be able to develop all the wants, cares, and conscious experience that humans have." Yes.
"So why should Hawking care more about the welfare of one intelligent species than another?" This is where you lose me. Why shouldn't he care more about one species than another? Why shouldn't anybody? I certainly do.
"Particularly if the latter might be much more capable of achieving something that Hawking thinks is important to achieve." Perhaps he thinks it's important for us to achieve it? I certainly do.
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Nov 10, 2017 18:28:22 GMT
"2. Yes, I suspect so too; however, since we've never observed an intelligence without emotions, I think it is sheer speculation to suppose that they can exist separately." I agree with this. I doubt it's possible to have an entity that functions in a way we think of as "a person" unless it has emotions of some sort.
|
|
|
Post by Eva Yojimbo on Nov 11, 2017 0:00:48 GMT
"1. After watching Blade Runner he'll determine humanity's greatest threat is existential. 2. Almost certainly the latter. I can't imagine how intelligence alone could give rise to emotions, which are rather primitive by comparison. 1. Maybe the problem is he just recently watched 2001: A Space Odyssey for the first time. 2. Yes, I suspect so too; however, since we've never observed an intelligence without emotions, I think it is sheer speculation to suppose that they can exist separately." 1. HAL is actually a pretty good example of how easy it is to think you've programmed an AI with goals aligned with humanity's, only to find that it decides humans are more a detriment to achieving those goals than a help. 2. That's because all the intelligences we've observed were shaped by natural selection rather than artificial design. Emotions are mostly the result of certain chemicals in brains, which won't be present if we're building an intelligence artificially. Not to say we couldn't perhaps replicate them artificially in some way, but I'm not sure how that would work or what purpose it would serve. Under natural selection, emotions seem to be a quick shorthand in decision-making, much faster than rational consideration of evidence.
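The "quick shorthand" point lends itself to a caricature in code: a snap reaction is a precomputed table lookup, while rational consideration means weighing every observation before acting. A toy Python sketch (all values hypothetical; no pretence of modelling real cognition):

SNAP_REACTIONS = {"snake": "flee", "food": "approach"}  # "precomputed" by selection

def emotional(stimulus):
    # Fast and crude: a constant-time lookup that misfires on novel
    # inputs (a garden hose can read as "snake").
    return SNAP_REACTIONS.get(stimulus, "freeze")

def deliberative(stimulus, evidence):
    # Slow and careful: weigh every piece of evidence before choosing.
    danger = sum(w for s, w in evidence if s == stimulus)
    return "flee" if danger > 0 else "approach"

print(emotional("snake"))                                       # instant answer
print(deliberative("snake", [("snake", 0.9), ("rope", -0.4)]))  # same answer, more work

The trade-off is the point: the lookup is wrong more often, but it is available before deliberation has even started, which is roughly the selective advantage described above.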
|
|
|
Post by general313 on Nov 11, 2017 3:45:39 GMT
"2. Yes, I suspect so too; however, since we've never observed an intelligence without emotions, I think it is sheer speculation to suppose that they can exist separately. 2. That's because all the intelligences we've observed were shaped by natural selection rather than artificial design. Emotions are mostly the result of certain chemicals in brains, which won't be present if we're building an intelligence artificially. Not to say we couldn't perhaps replicate them artificially in some way, but I'm not sure how that would work or what purpose it would serve. Under natural selection, emotions seem to be a quick shorthand in decision-making, much faster than rational consideration of evidence." It strikes me that supposing that emotions are intrinsically tied to chemicals is similar to the argument that consciousness depends on wet living matter, and that inorganic thinking machines therefore aren't capable of consciousness. There doesn't seem to be a consensus on the latter point, so I think that until we can observe more advanced machine intelligence it is premature to make statements about how much of a role chemicals play in emotions. Chemicals certainly affect emotional states in our brains, since they play such an important role in the brain's basic machinery, but separating emotional states from other mental functions on the basis of chemistry seems quite unfounded to me at this point.
|
|
|
Post by Eva Yojimbo on Nov 11, 2017 3:58:26 GMT
"2. That's because all the intelligences we've observed were shaped by natural selection rather than artificial design. Emotions are mostly the result of certain chemicals in brains, which won't be present if we're building an intelligence artificially. Not to say we couldn't perhaps replicate them artificially in some way, but I'm not sure how that would work or what purpose it would serve. Under natural selection, emotions seem to be a quick shorthand in decision-making, much faster than rational consideration of evidence. It strikes me that supposing that emotions are intrinsically tied to chemicals is similar to the argument that consciousness depends on wet living matter, and that inorganic thinking machines therefore aren't capable of consciousness. There doesn't seem to be a consensus on the latter point, so I think that until we can observe more advanced machine intelligence it is premature to make statements about how much of a role chemicals play in emotions. Chemicals certainly affect emotional states in our brains, since they play such an important role in the brain's basic machinery, but separating emotional states from other mental functions on the basis of chemistry seems quite unfounded to me at this point." I wasn't suggesting that emotions must be tied to chemicals--I did say "(we could perhaps) replicate them artificially in some way"--merely that I don't know what the purpose of doing so would be if we're trying to devise a machine with advanced thinking/problem-solving skills. I'm pretty agnostic on the whole "consciousness" business because I think the term is so loaded that there's not even a consensus on what consciousness really is, much less whether inorganic machines would be capable of it.
|
|
The Lost One
Junior Member
@lostkiera
Posts: 2,677
Likes: 1,303
|
Post by The Lost One on Nov 11, 2017 8:54:52 GMT
Ok, but imagine if he said something like "white people need to watch out that non-whites don't supplant them in a few generations." We'd think that pretty silly and distasteful. But why is racism bad but speciesism fine? Speciesism seems defensible if the other species has less complex wants and so on, but that probably wouldn't be the case here.
Well, since as you say I may be misrepresenting his views and he isn't here to defend himself: why do you think it's important that humans removed from you by a few generations achieve anything? It won't benefit you at all. If its importance has nothing to do with what you get out of it, then why does it matter which species achieves it?
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Nov 11, 2017 13:20:17 GMT
"Ok, but imagine if he said something like 'white people need to watch out that non-whites don't supplant them in a few generations.' We'd think that pretty silly and distasteful. But why is racism bad but speciesism fine? Speciesism seems defensible if the other species has less complex wants and so on, but that probably wouldn't be the case here." Racism is bad because history demonstrates that it leads to broken, dysfunctional human societies. Destroying human society in favour of a non-human society is bad for the same reason. The achievements of human beings matter to me because I am a human being and I value my species. It's really as simple as that. I think you have the argument slightly backwards, at least as it applies to me. Your assumption seems to be that I do, or should, value human achievements because I like achievement in a general sense. You're saying, if I understand you correctly, that I should want amazing things to be achieved, and therefore, if some non-human intelligence can achieve even more amazing things than humans can, I should be happy to see the non-human intelligence supplant and even destroy humanity. But that's not the case at all. I don't value humanity because humans achieve things. I value humanity because I am a human. I value human achievements because they are specifically human achievements. Termites achieve things too. And that's fine; I'm not indifferent to the achievements of termites; they're impressive in their own way. I'm glad we live in a world where there are human achievements and termite ones. But if it came down to humans or termites, I'd smash every termite in the world in a second to save humanity. Similarly, I'd be happy to live in a world with AI achievements as well as human ones. I'd want them to be our friends. But if AIs threatened to destroy humanity and replace us with a civilisation that was ten times as smart and accomplished, I would smash every AI in the world tomorrow to save us.
|
|
The Lost One
Junior Member
@lostkiera
Posts: 2,677
Likes: 1,303
|
Post by The Lost One on Nov 11, 2017 17:14:38 GMT
"Racism is bad because history demonstrates that it leads to broken, dysfunctional human societies. Destroying human society in favour of a non-human society is bad for the same reason." I don't think that's why racism is bad. If it were, then a hypothetical functional but racist society would be good. I don't believe that. I think racism is wrong because it treats the welfare and wants of those outside a group we're in as less important than those of people within the group, despite members of both groups feeling them just as keenly. "I should be happy to see the non-human intelligence supplant and even destroy humanity." Well, not happy, just not see it as all that big a deal. The welfare of future humans won't benefit you or your loved ones any more than the welfare of AIs or aliens or whatever. So why care about it particularly? And if you do care about it in the sense that you'd rather people in the future be happy, why say human happiness is more important than AI happiness? That you just do care more about humans because you are human seems weak to me. Like if I hoped Irish people would become the top dogs of the world in 100 years because I'm Irish.
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
Likes:
|
Post by Deleted on Nov 11, 2017 18:14:13 GMT
"'Racism is bad because history demonstrates that it leads to broken, dysfunctional human societies. Destroying human society in favour of a non-human society is bad for the same reason.' I don't think that's why racism is bad." Then we disagree about that.
"If it were, then a hypothetical functional but racist society would be good. I don't believe that." Yes, it would be.
"I think racism is wrong because it treats the welfare and wants of those outside a group we're in as less important than those of people within the group, despite members of both groups feeling them just as keenly." Right. You've just described exactly how a racist society is dysfunctional.
"Well, not happy, just not see it as all that big a deal." Right. And if I believed as I described, then sure, I'd probably see it as no big deal for humanity to be destroyed. But as I explained... I don't.
"The welfare of future humans won't benefit you or your loved ones any more than the welfare of AIs or aliens or whatever." My thoughts about these things are not dependent on how much they benefit me or my loved ones. As I explained.
"So why care about it particularly?" For the reasons I explained in the post you are replying to.
"And if you do care about it in the sense that you'd rather people in the future be happy, why say human happiness is more important than AI happiness?" But we're discussing what I care about, not what you care about.
"That you just do care more about humans because you are human seems weak to me." If my basic desires seem weak to you, then fair enough; you base your views on desires that seem strong to you instead. If you don't care about humans, or don't care more about them than you do about animals, or aliens, or some hypothetical AI superintelligence, that's fine by me. I can't tell you what you should and shouldn't value, and I don't believe that one can use logic or rational thought to dictate your base desires - logic and rational thinking can only tell you how best to achieve what you want; they can't tell you what you should want in the first place. Which is exactly why I doubt you could have anything like a conscious thinking machine that we would recognise as "a person" unless it had emotions or something very like them.
"Like if I hoped Irish people would become the top dogs of the world in 100 years because I'm Irish." I don't see anything wrong with that. Though that depends on your conception of what being "top dogs" means, of course. If you mean "I wish Irish people would conquer, enslave, and genocide the rest of humanity", then I'm not for that. If you mean something more like "I wish Irish people would become the most accomplished, richest, politically powerful people and the greatest force for good that the world has ever seen", then I think that's a fine and worthy ambition for any people to have.
Okay, let me ask you this: would YOU be happy to see a race of AI superintelligences destroy and replace humanity? If not, why not?
|
|
The Lost One
Junior Member
@lostkiera
Posts: 2,677
Likes: 1,303
|
Post by The Lost One on Nov 11, 2017 19:16:15 GMT
"Right. You've just described exactly how a racist society is dysfunctional." So a society that doesn't weigh the welfare of various races equally is dysfunctional, but a society that weighs human welfare more heavily than that of other sentient beings isn't? Seems inconsistent.
Just seems a bizarre thing to care about, to me. But I take your point that we can't really rationalise about what people do or do not care about.
"Would YOU be happy to see a race of AI superintelligences destroy and replace humanity? If not, why not?" Not happy, no. In fact I think any sentient species being wiped out is a tragedy. But if, say, a war between humans and AIs were to happen 100 years from now and one side or the other had to be wiped out, then I wouldn't really care which one - it's a tragedy either way.
|
|
|
Post by cupcakes on Nov 11, 2017 19:24:33 GMT
tpfkar "I don't think that's why racism is bad. If it were, then a hypothetical functional but racist society would be good. I don't believe that. I think racism is wrong because it treats the welfare and wants of those outside a group we're in as less important than those of people within the group, despite members of both groups feeling them just as keenly." To me the problem of racism is not the treating of legitimate, applicable distinctions differently; it's that the differences that can be distilled via "race" are neither significant nor relevant, except as portals to all kinds of wholly misguided/spurious distinctions. oh you
|
|