|
|
Post by The Lost One on Nov 11, 2017 19:59:30 GMT
tpfkar I don't think that's why racism is bad. If it were, then a hypothetical functional but racist society would be good. I don't believe that. I think racism is wrong because it treats the welfare and wants of those outside a group we're in as less important than those of people within the group, despite those both in and out of the group feeling them just as keenly.

To me the problem of racism is not in the treating of legitimate applicable distinctions differently, it's that the differences that can be distilled via "race" are neither significant nor relevant except as portals to all kinds of wholly misguided/spurious distinctions.

Yeah, I probably would agree with that. What do you reckon counts as a significant and/or relevant difference?
|
|
|
|
Post by cupcakes on Nov 11, 2017 20:15:17 GMT
tpfkar To me the problem of racism is not in the treating of legitimate applicable distinctions differently, it's that the differences that can be distilled via "race" are neither significant nor relevant except as portals to all kinds of wholly misguided/spurious distinctions.

Yeah, I probably would agree with that. What do you reckon counts as a significant and/or relevant difference?

I suppose any real difference in capabilities, empathy, etc. Certainly not just physical features, nor susceptibilities/resistances to certain diseases and the like, nor cultural histories. Dead Fox
|
|
Deleted
Deleted Member
@Deleted
Posts: 0
|
Post by Deleted on Nov 11, 2017 21:00:16 GMT
So a society that doesn't weigh the welfare of various races the same is dysfunctional, but a society that weighs human welfare more greatly than that of other sentient beings is functional? Seems inconsistent.

You're suggesting that a society that seeks its own demise should be considered to be a functional one. I find that a peculiar notion. And once again, it's only inconsistent if one begins with the premise that all societies are of equal value and that therefore the failure of any society is equal to the failure of any other. That may be your premise, but you don't seem to be able to grasp that it isn't mine.

Indeed. I agree that it's a tragedy either way. But I don't believe the tragedies would be equal. And I think very few humans would.
|
|
|
|
Post by The Lost One on Nov 11, 2017 21:20:39 GMT
You're suggesting that a society that seeks its own demise should be considered to be a functional one. I find that a peculiar notion.

Well, what do you mean by society? There's no global society of humankind at present. Perhaps in the future, society will be a mixture of AI and human. Anyway, a society seeking to preserve itself is generally not considered a good excuse. The kinda society the white people in The Help have was pretty much destroyed by giving equal rights to black citizens. Yet we don't consider Bryce Dallas Howard's character justified in trying to prevent that.

Quite possibly - never claimed my view wasn't weird, it just seems more consistent to me.
|
|
|
Post by Deleted on Nov 11, 2017 22:17:01 GMT
You're suggesting that a society that seeks its own demise should be considered to be a functional one. I find that a peculiar notion. Well what do you mean by society? There's no global society of humankind at present. Perhaps in the future, society will be a mixture of AI and human.

Society is a fairly nebulous term. The world is full of societies, many of them overlapping. Some strictly defined, some not. I have no problem saying that the entirety of human civilisation is one big society. It's just a very loosely bound one. And yes, I'd have no problem with defining a greater society that included humans and AI. I'd also have no problem with humans destroying the AIs if they decided to destroy the rest of the society. Or even if they wanted to destroy chunks of it. At the risk of Godwin, it's really no different from how certain human nations started to be so dangerous to other nations, and so obnoxious in their behaviour within their own borders, that everyone else decided to put a stop to it.

Different thing, though. That wasn't one group trying to protect itself from attack by another, but one group trying to prevent another from stopping an ongoing attack upon them.

Well, like I said, I'm not about to try and tell other people what they should value. I'm just glad you're in a small minority.
|
|
|
|
Post by general313 on Nov 11, 2017 22:26:32 GMT
It strikes me that supposing that emotions are intrinsically tied to chemicals is similar to the argument that consciousness depends on wet living matter, and that inorganic thinking machines aren't capable of consciousness. There doesn't seem to be a consensus on the latter point, so I think until we can observe more advanced machine intelligence it is premature to make statements about how great a role chemicals play in emotions. Chemicals certainly affect emotional states in our brains, since chemicals play such an important role in their basic machinery, but to separate emotional states from other mental functions on the basis of chemistry seems very unfounded to me at this point.

I wasn't suggesting that emotions must be tied to chemicals--I did say "(we could perhaps) artificially replicate them in some way"--merely that I don't know what the purpose of doing so would be if we're trying to devise a machine with advanced thinking/problem-solving skills. I'm pretty agnostic on the whole "consciousness" business because I think the term is so loaded that there's not even a consensus on what consciousness really is, much less whether inorganic machines would be capable of it.

I most heartily agree with that. If I were to place a bet, though, I'd put it on inorganic machines being capable of it. I do wonder if we'll ever understand consciousness well enough to really have any certainty that an advanced AI was conscious.
|
|
|
|
Post by The Lost One on Nov 11, 2017 23:11:23 GMT
I'd also have no problem with humans destroying the AIs if they decided to destroy the rest of the society. Or even if they wanted to destroy chunks of it. At the risk of Godwin, it's really no different from how certain human nations started to be so dangerous to other nations, and so obnoxious in their behaviour within their own borders, that everyone else decided to put a stop to it.

Yeah, I can understand that. But I would think limiting the freedoms and rights of AI to prevent them ever having the opportunity to launch a campaign against humans is a step too far. Also, perhaps the AI wouldn't try to attack humans, merely start doing their own things rather than be subservient. Humans might be supplanted by non-violent means. Or maybe humans will start the conflict, in fear of the AI turning on them.
|
|
|
|
Post by Eva Yojimbo on Nov 12, 2017 0:10:44 GMT
I wasn't suggesting that emotions must be tied to chemicals--I did say "(we could perhaps) artificially replicate them in some way"--merely that I don't know what the purpose of doing so would be if we're trying to devise a machine with advanced thinking/problem-solving skills. I'm pretty agnostic on the whole "consciousness" business because I think the term is so loaded that there's not even a consensus on what consciousness really is, much less whether inorganic machines would be capable of it.

I most heartily agree with that. If I were to place a bet, though, I'd put it on inorganic machines being capable of it. I do wonder if we'll ever understand consciousness well enough to really have any certainty that an advanced AI was conscious.

As I understand it, the biggest problem with consciousness is that it's mostly (not universally, perhaps, but mostly) defined as a purely subjective phenomenon: i.e., what we actually experience when we see the color red, as opposed to understanding the machinery that goes into allowing us to see the color (photons, eyes, the wiring of the visual cortex, etc.). If you replicate the machinery of sight, but that machinery exists in a different medium/substrate, will that seeing-thing still experience red in the same way? Perhaps the only way we could ever know would be if a human could actually have part of their brain replaced by the same stuff as AIs and were then able to report on whether there was any difference in their conscious experiences before and after.
|
|
|
|
Post by general313 on Nov 12, 2017 23:50:40 GMT
I most heartily agree with that. If I were to place a bet, though, I'd put it on inorganic machines being capable of it. I do wonder if we'll ever understand consciousness well enough to really have any certainty that an advanced AI was conscious.

As I understand it, the biggest problem with consciousness is that it's mostly (not universally, perhaps, but mostly) defined as a purely subjective phenomenon: i.e., what we actually experience when we see the color red, as opposed to understanding the machinery that goes into allowing us to see the color (photons, eyes, the wiring of the visual cortex, etc.). If you replicate the machinery of sight, but that machinery exists in a different medium/substrate, will that seeing-thing still experience red in the same way? Perhaps the only way we could ever know would be if a human could actually have part of their brain replaced by the same stuff as AIs and were then able to report on whether there was any difference in their conscious experiences before and after.

We can't even be sure two humans see the same "red" qualia when they look at something with red light.
|
|
|
|
Post by Eva Yojimbo on Nov 12, 2017 23:54:27 GMT
As I understand it, the biggest problem with consciousness is that it's mostly (not universally, perhaps, but mostly) defined as a purely subjective phenomenon: i.e., what we actually experience when we see the color red, as opposed to understanding the machinery that goes into allowing us to see the color (photons, eyes, the wiring of the visual cortex, etc.). If you replicate the machinery of sight, but that machinery exists in a different medium/substrate, will that seeing-thing still experience red in the same way? Perhaps the only way we could ever know would be if a human could actually have part of their brain replaced by the same stuff as AIs and were then able to report on whether there was any difference in their conscious experiences before and after.

We can't even be sure two humans see the same "red" qualia when they look at something with red light.

True enough, but I think there's at least a more likely assumption that two humans experience something similar when seeing red; much harder to assume that when the medium used to experience is completely different, as it would be with AI.
|
|
|
Post by Deleted on Nov 12, 2017 23:57:53 GMT
I think that A.I. is what is most likely to be the undoing of the human race. It's rather difficult to envisage a way that conscious A.I. could live alongside humanity peacefully without eradicating us, and it's also difficult to envisage how the super-rich (who own the robots) could be brought on board with a system whereby their wealth gets transferred to the rest of the (now obsolete) population in order to enable the unemployed (almost the entire human population) to survive.
|
|
|
|
Post by FilmFlaneur on Nov 13, 2017 13:53:20 GMT
It's rather difficult to envisage a way that conscious A.I. could live alongside humanity in a peaceful way without eradicating us,

There is another system. Ask Dr. Forbin.
|
|
|
|
Post by general313 on Nov 13, 2017 15:57:58 GMT
We can't even be sure two humans see the same "red" qualia when they look at something with red light.

True enough, but I think there's at least a more likely assumption that two humans experience something similar when seeing red; much harder to assume that when the medium used to experience is completely different, as it would be with AI.

I tend towards the idea that hardware is irrelevant. Maybe I'm being misled by over-reliance on the following analogies, but they seem to be the best we have to go with for the time being. When computers transitioned from vacuum tubes to transistors, it didn't make a bit of difference to the software engineers who developed the operating systems and applications for those computers. Similarly, neural network behavior seems to be completely unaffected by whether it is run in a software simulation, on a GPU, on more specialized digital hardware, on analog hardware, or in biological cellular tissue. All the scientific evidence so far suggests that the brain operates very much like a very sophisticated system of interconnected neural networks. I suspect that red qualia properties are determined by the structure of a neural network and not at all by the physical details of how that neural network is implemented.
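As a toy illustration of that substrate-independence claim (my own sketch with made-up weights, not anything from the thread): the same tiny two-layer network evaluated on two different "substrates" - hand-written Python loops versus NumPy's vectorised C routines - gives the same answer, because only the network's structure and weights enter into the result.

```python
import math
import numpy as np

# Arbitrary made-up weights for a 2-input, 2-hidden-unit, 1-output network.
W1 = [[0.5, -0.3], [0.8, 0.2]]   # W1[j][i] connects input i to hidden unit j
W2 = [0.7, -0.6]                 # hidden-to-output weights

def forward_pure(x):
    """Substrate 1: explicit Python loops and math.exp."""
    hidden = [1.0 / (1.0 + math.exp(-sum(W1[j][i] * x[i] for i in range(2))))
              for j in range(2)]
    out = sum(W2[j] * hidden[j] for j in range(2))
    return 1.0 / (1.0 + math.exp(-out))

def forward_numpy(x):
    """Substrate 2: the same structure run through NumPy's C kernels."""
    h = 1.0 / (1.0 + np.exp(-(np.array(W1) @ np.array(x))))
    o = np.array(W2) @ h
    return float(1.0 / (1.0 + np.exp(-o)))

x = [1.0, 2.0]
print(abs(forward_pure(x) - forward_numpy(x)) < 1e-9)  # True: same structure, same answer
```

The two implementations agree to floating-point precision; whether that extends to sameness of experience is, of course, exactly the open question being debated.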
|
|
|
Post by Deleted on Nov 13, 2017 17:01:19 GMT
I agree that it's a tragedy either way. But I don't believe the tragedies would be equal. And I think very few humans would.

If all humans were extinguished, then the only organisms left to think 'this is a tragedy' would be the Artificial Intelligence itself. There would be no humans left to lament the loss of human civilisation. The sense of 'tragedy' could only live on in whichever individuals of whatever species are left to survey the wreckage. So it would really only boil down to whichever life form - human or their AI overlords - were more capable of experiencing regret and remorse for the genocide of a competing species. My guess is that any A.I. created will probably be too sophisticated to be speciesist, or to be fearful of death. So in that case, the AI would not feel a sense of tragedy or remorse for the loss of humans, but humans are more likely to perceive the loss of an AI civilisation as a tragedy.
|
|
|
|
Post by faustus5 on Nov 13, 2017 18:02:01 GMT
I tend towards the idea that hardware is irrelevant. . . All the scientific evidence so far suggests that the brain operates very much like a very sophisticated system of interconnected neural networks. I suspect that red qualia properties are determined by the structure of a neural network and not at all by the physical details of how that neural network is implemented.

The nervous system has to guide your body through a world filled with both opportunities and threats, and it has to do this in real time. This makes hardware incredibly important. Since we aren't computers except in an extremely metaphorical sense, hardware is all we've got; or at least, the hardware/software distinction breaks down as a useful metaphor in biology. And given that biological systems are also quite chaotic, in the sense that minor perturbations can be amplified into large-scale effects, the physical details of how our neural networks are implemented become important even at the molecular level.

We're made of physical stuff, and nothing else, of course. But I think people tend to really underestimate just how important what we're made of and how we're put together really is. We evolved from replicating molecule-like entities bit by bit, with each change built upon what came before, and at no point did biology just jettison the processes happening at the very lowest levels. Functionalism and mind-as-computer are useful ideas and we shouldn't abandon them, but if you take the motivations of functionalism seriously, you have to apply them consistently all the way down, even to the kinds of exchanges happening between atoms and molecules within and in between neurons. It all counts.
|
|
|
|
Post by Eva Yojimbo on Nov 13, 2017 23:57:59 GMT
True enough, but I think there's at least a more likely assumption that two humans experience something similar when seeing red; much harder to assume that when the medium used to experience is completely different, as it would be with AI.

I tend towards the idea that hardware is irrelevant. Maybe I'm being misled by over-reliance on the following analogies, but they seem to be the best we have to go with for the time being. When computers transitioned from vacuum tubes to transistors, it didn't make a bit of difference to the software engineers who developed the operating systems and applications for those computers. Similarly, neural network behavior seems to be completely unaffected by whether it is run in a software simulation, on a GPU, on more specialized digital hardware, on analog hardware, or in biological cellular tissue. All the scientific evidence so far suggests that the brain operates very much like a very sophisticated system of interconnected neural networks. I suspect that red qualia properties are determined by the structure of a neural network and not at all by the physical details of how that neural network is implemented.

I simply tend towards being entirely agnostic on the issue. It's one issue I feel genuinely 50/50 on. If you're interested, there was an interesting debate between two of my favorite contemporary philosophers, Yudkowsky and Pigliucci, on the subject. I do tend to find that Yudkowsky's usually on the right track of things and often got there long before I did, and he sides with you on the consciousness/hardware issue: bloggingheads.tv/videos/2561
|
|
|
|
Post by general313 on Nov 15, 2017 15:42:23 GMT
I tend towards the idea that hardware is irrelevant. Maybe I'm being misled by over-reliance on the following analogies, but they seem to be the best we have to go with for the time being. When computers transitioned from vacuum tubes to transistors, it didn't make a bit of difference to the software engineers who developed the operating systems and applications for those computers. Similarly, neural network behavior seems to be completely unaffected by whether it is run in a software simulation, on a GPU, on more specialized digital hardware, on analog hardware, or in biological cellular tissue. All the scientific evidence so far suggests that the brain operates very much like a very sophisticated system of interconnected neural networks. I suspect that red qualia properties are determined by the structure of a neural network and not at all by the physical details of how that neural network is implemented.

I simply tend towards being entirely agnostic on the issue. It's one issue I feel genuinely 50/50 on. If you're interested, there was an interesting debate between two of my favorite contemporary philosophers, Yudkowsky and Pigliucci, on the subject. I do tend to find that Yudkowsky's usually on the right track of things and often got there long before I did, and he sides with you on the consciousness/hardware issue: bloggingheads.tv/videos/2561

Thanks for the link - I'll check it out when I have some time.
|
|
|
|
Post by general313 on Nov 15, 2017 16:04:47 GMT
I tend towards the idea that hardware is irrelevant. . . All the scientific evidence so far suggests that the brain operates very much like a very sophisticated system of interconnected neural networks. I suspect that red qualia properties are determined by the structure of a neural network and not at all by the physical details of how that neural network is implemented.

The nervous system has to guide your body through a world filled with both opportunities and threats, and it has to do this in real time. This makes hardware incredibly important. Since we aren't computers except in an extremely metaphorical sense, hardware is all we've got; or at least, the hardware/software distinction breaks down as a useful metaphor in biology. And given that biological systems are also quite chaotic, in the sense that minor perturbations can be amplified into large-scale effects, the physical details of how our neural networks are implemented become important even at the molecular level.

We're made of physical stuff, and nothing else, of course. But I think people tend to really underestimate just how important what we're made of and how we're put together really is. We evolved from replicating molecule-like entities bit by bit, with each change built upon what came before, and at no point did biology just jettison the processes happening at the very lowest levels. Functionalism and mind-as-computer are useful ideas and we shouldn't abandon them, but if you take the motivations of functionalism seriously, you have to apply them consistently all the way down, even to the kinds of exchanges happening between atoms and molecules within and in between neurons. It all counts.

If functionalism is a useful paradigm for how the mind works, then I would fall back on the fact that computer software runs on machines whose molecular details are unimportant, in the sense that an algorithm depends very much on a mathematical structure and not at all on whether the computer uses NMOS or CMOS transistors. The semiconductor device physics certainly matters greatly to the operation of a single gate, but computers are assembled in such a way that the analog properties (those that might vary from one technology to another) are de-emphasized. That's an important part of the appeal of digital electronics. A binary adder is composed of a number of logic gates that produce the same result whether they're implemented in TTL or CMOS, or even with vacuum tubes. It seems the same is true of neural networks. A machine-learning network trained to recognize pictures of hippos could be run on specialized hardware or in a software simulator. It might run faster on the specialized hardware, but they produce the same result: the details of how the network nodes are implemented simply don't matter.
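The adder point can be made concrete with a sketch (illustrative, not from the post): every gate below is built from a single NAND primitive, and the adder agrees with the machine's native addition no matter how that primitive is physically realised - TTL, CMOS, valves, or here a line of Python.

```python
# Every gate is built from one primitive, NAND. The "technology" behind NAND
# is irrelevant to the adder's results; only the logical structure counts.
def nand(a, b):
    return 1 - (a & b)

def xor(a, b):
    # the classic four-NAND XOR
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, cin):
    s1 = xor(a, b)
    # carry-out = (a AND b) OR (s1 AND cin), expressed purely in NANDs
    cout = nand(nand(a, b), nand(s1, cin))
    return xor(s1, cin), cout

def add4(x, y):
    """4-bit ripple-carry adder over bit lists, least-significant bit first."""
    carry, out = 0, []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

def to_bits(n):
    return [(n >> i) & 1 for i in range(4)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

# The gate-level adder matches ordinary integer addition (mod 16) everywhere.
ok = all(from_bits(add4(to_bits(a), to_bits(b))[0]) == (a + b) % 16
         for a in range(16) for b in range(16))
print(ok)  # True
```

Swap `nand` for any physically different implementation with the same truth table and every result above is unchanged - which is the digital-abstraction point being made.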
|
|
|
|
Post by faustus5 on Nov 15, 2017 17:46:52 GMT
It seems the same is true of neural networks. A machine-learning network trained to recognize pictures of hippos could be run on specialized hardware or in a software simulator. It might run faster on the specialized hardware, but they produce the same result: the details of how the network nodes are implemented simply don't matter.

It doesn't matter so long as your goal is just to make a machine that satisfies your norms for what a conscious, intelligent being should behave like. If the idea is to fully replicate a human's consciousness, then it absolutely matters, and for all the reasons why functionalism is so appealing: what's floating around a single synaptic ion channel contributes to whether the neuron fires, which in turn contributes to the judgement and behavior of the neural networks it is a part of. Individual neurons are their own creatures with their own agendas. They aren't simple mechanical circuits.
|
|
|
|
Post by general313 on Nov 15, 2017 20:35:02 GMT
It seems the same is true of neural networks. A machine-learning network trained to recognize pictures of hippos could be run on specialized hardware or in a software simulator. It might run faster on the specialized hardware, but they produce the same result: the details of how the network nodes are implemented simply don't matter.

It doesn't matter so long as your goal is just to make a machine that satisfies your norms for what a conscious, intelligent being should behave like. If the idea is to fully replicate a human's consciousness, then it absolutely matters, and for all the reasons why functionalism is so appealing: what's floating around a single synaptic ion channel contributes to whether the neuron fires, which in turn contributes to the judgement and behavior of the neural networks it is a part of. Individual neurons are their own creatures with their own agendas. They aren't simple mechanical circuits.

Trouble is, no one knows enough about consciousness for anyone to claim more knowledge about it than "satisfying one's norms". But there does seem to be a recurrence of that tendency, seen throughout the history of science, that the closer we look, the less difference we find between the living and the non-living. Vitalism gave way to an acceptance that living things follow the same chemistry and physics as non-living matter, including in the development of offspring from conception to birth. As neuroscientists probe deeper into how human vision works, they are discovering more about how the brain processes images for recognition, and it bears a striking resemblance to the neural networks used in machine learning. Learning and memory retention in human brains and in artificial neural networks work the same way: by adjusting interconnection weights (synapses in the case of humans, layer-node coefficients in a neural network). Neural networks use artificial neurons which have a very concise mathematical description.

You dismiss "simple mechanical circuits", but brains and smart computers derive their power from the organization of very large numbers of simple elements.
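The adjust-the-weights idea can be shown in a few lines (my sketch, not from the thread): a single artificial neuron learns the AND function by nudging its interconnection weights after each mistake, the same principle the post ascribes to both biological synapses and network coefficients.

```python
# A lone artificial neuron learns logical AND by weight adjustment.
def step(x):
    return 1 if x > 0 else 0

# Inputs carry a constant 1 as a bias term; targets are logical AND.
data = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]
w = [0, 0, 0]   # bias weight plus one weight per input
lr = 1          # learning rate (an integer keeps the arithmetic exact)

for _ in range(20):  # a few passes over the data is plenty here
    for x, target in data:
        y = step(sum(wi * xi for wi, xi in zip(w, x)))
        # the whole of "learning": shift each weight by error * input
        w = [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]

print([step(sum(wi * xi for wi, xi in zip(w, x))) for x, _ in data])  # [0, 0, 0, 1]
```

This is the classic perceptron rule; deep networks replace the step with smooth functions and the error with a gradient, but the "memory lives in the weights" principle is the same one described above.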
|
|