|
Post by 🌵 on Mar 15, 2017 21:38:54 GMT
Narrow AI can achieve narrow tasks and is an intelligent agent, the latter being defined as an autonomous machine which receives information about an environment and acts to achieve its goals. en.wikipedia.org/wiki/Intelligent_agent I don't think it needs to be explained how this qualifies Siri. Well, both "autonomous" and "acts" are highly questionable there. In philosophy, an act, for example, is characterized by intentionality. But Siri certainly has no intentionality. So we have to figure out what definition of autonomy and act we're using, too, because normal definitions would seem to be metaphorical at best. A camera, for example, functions differently depending on the light received via the aperture. Are cameras considered weakly artificially intelligent based on this definition? Why do you say that Siri certainly has no intentionality? I don't think there's any good reason to think that Siri has intentionality, just wondering why you're so sure that it doesn't. (Presumably the immediate answer is "because Siri certainly has no consciousness/mind", in which case I'd ask why you say that Siri certainly has no consciousness/mind.)
|
|
PanLeo
Sophomore
@saoradh
Posts: 919
Likes: 53
|
Post by PanLeo on Mar 15, 2017 21:40:25 GMT
Narrow AI can achieve narrow tasks and is an intelligent agent, the latter being defined as an autonomous machine which receives information about an environment and acts to achieve its goals. en.wikipedia.org/wiki/Intelligent_agent I don't think it needs to be explained how this qualifies Siri. Well, both "autonomous" and "acts" are highly questionable there. In philosophy, an act, for example, is characterized by intentionality. But Siri certainly has no intentionality. So we have to figure out what definition of autonomy and act we're using, too, because normal definitions would seem to be metaphorical at best. A camera, for example, functions differently depending on the light received via the aperture. Are cameras considered weakly artificially intelligent based on this definition? OK then, activity; as far as I know, in AI they are used synonymously. AI researchers don't really care or pay much attention to philosophy besides things like the Chinese Room argument. A camera isn't taking in information about the thing it is photographing and deciding to function differently in order to obtain a goal. A narrow AI camera would take in information about the lighting, where the person being photographed is standing, etc., and act appropriately to take the best picture.
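Roughly, the "takes in information and acts to achieve a goal" idea in code. This is just a toy sketch of the perceive-act loop in the intelligent-agent definition above; the camera sensors, settings, and thresholds are all invented for illustration.

```python
# Toy perceive-act loop for a hypothetical "narrow AI camera":
# take in information about the scene, then pick settings that serve the goal
# (a well-exposed, focused photo). All names and thresholds are made up.

def perceive(scene):
    # Percept: ambient light level and subject distance from (imaginary) sensors.
    return {"light": scene["light"], "distance": scene["distance"]}

def choose_action(percept):
    # Act toward the goal: adjust exposure and focus to suit the percept.
    exposure = "long" if percept["light"] < 0.3 else "short"
    focus = "near" if percept["distance"] < 2.0 else "far"
    return {"exposure": exposure, "focus": focus}

scene = {"light": 0.2, "distance": 1.5}        # dim light, close subject
print(choose_action(perceive(scene)))          # {'exposure': 'long', 'focus': 'near'}
```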
|
|
|
Post by Terrapin Station on Mar 15, 2017 21:42:37 GMT
Well, both "autonomous" and "acts" are highly questionable there. In philosophy, an act, for example, is characterized by intentionality. But Siri certainly has no intentionality. So we have to figure out what definition of autonomy and act we're using, too, because normal definitions would seem to be metaphorical at best. A camera, for example, functions differently depending on the light received via the aperture. Are cameras considered weakly artificially intelligent based on this definition? Ok then activity, as far as I know in AI they are used synomously. AI researchers dont really care or pay much attention to philosophy besides things like the Chinese Room argument. A camera isnt taking in information about the thing they are photographing and deciding to function differently in order to obtain a goal. A narrow AI camera would take in information about the lighting, where the person being photgraphed is standing etc and act appropriately to take the best picture. In what sense is Siri deciding something that a camera isn't when it functions differently based on the light coming into the aperture? It seems like weak AI is simply a subjective application of metaphors to some things based on some superficial similiarities to literally intelligent things.
|
|
|
Post by Terrapin Station on Mar 15, 2017 21:44:22 GMT
Well, both "autonomous" and "acts" are highly questionable there. In philosophy, an act, for example, is characterized by intentionality. But Siri certainly has no intentionality. So we have to figure out what definition of autonomy and act we're using, too, because normal definitions would seem to be metaphorical at best. A camera, for example, functions differently depending on the light received via the aperture. Are cameras considered weakly artificially intelligent based on this definition? Why do you say that Siri certainly has no intentionality? I don't think there's any good reason to think that Siri has intentionality, just wondering why you're so sure that it doesn't. (Presumably the immediate answer is "because Siri certainly has no consciousness/mind", in which case I'd ask why you say that Siri certainly has no consciousness/mind.) Well, intentionality is a mark of consciousness. There's no reason at all to believe that it would be conscious or that anything that isn't made out of brain stuff could be conscious.
|
|
|
Post by 🌵 on Mar 15, 2017 21:48:53 GMT
Why do you say that Siri certainly has no intentionality? I don't think there's any good reason to think that Siri has intentionality, just wondering why you're so sure that it doesn't. (Presumably the immediate answer is "because Siri certainly has no consciousness/mind", in which case I'd ask why you say that Siri certainly has no consciousness/mind.) Well, intentionality is a mark of consciousness. There's no reason at all to believe that it would be conscious or that anything that isn't made out of brain stuff could be conscious. I agree, but "there's no reason to believe that X" doesn't justify concluding "certainly not X" or even simply "not X". I have no view on whether or not Siri has some sort of consciousness, or more generally whether or not consciousness can obtain in things other than brains. There's no reason to believe that it can, but I also don't see any reason to believe that it can't.
|
|
PanLeo
Sophomore
@saoradh
Posts: 919
Likes: 53
|
Post by PanLeo on Mar 15, 2017 21:51:56 GMT
OK then, activity; as far as I know, in AI they are used synonymously. AI researchers don't really care or pay much attention to philosophy besides things like the Chinese Room argument. A camera isn't taking in information about the thing it is photographing and deciding to function differently in order to obtain a goal. A narrow AI camera would take in information about the lighting, where the person being photographed is standing, etc., and act appropriately to take the best picture. In what sense is Siri deciding something that a camera isn't when it functions differently based on the light coming into the aperture? It seems like weak AI is simply a subjective application of metaphors to some things based on some superficial similarities to literally intelligent things. The same way humans decide to cover their eyes with their hands in order to change the amount of light that goes in, as compared to how the pupil of an eye becomes smaller and bigger in order to control the amount of light let in. There are certain processes that go on in the human brain, just like there are with Siri. Either way, how is weak AI "a subjective application of metaphors to some things based on some superficial similarities to literally intelligent things" but not AGI?
|
|
|
Post by Terrapin Station on Mar 15, 2017 22:00:05 GMT
Well, intentionality is a mark of consciousness. There's no reason at all to believe that it would be conscious or that anything that isn't made out of brain stuff could be conscious. I agree, but "there's no reason to believe that X" doesn't justify concluding "certainly not X" or even simply "not X". I have no view on whether or not Siri has some sort of consciousness, or more generally whether or not consciousness can obtain in things other than brains. There's no reason to believe that it can, but I also don't see any reason to believe that it can't. I don't agree that there being absolutely no good reason to believe that P doesn't justify believing that not-P. It sounds almost like you're appealing to proving empiricals prior to committing to beliefs. But no empiricals are provable. I'm not going to withhold beliefs just because people want to fantasize about whatever.
|
|
|
Post by Terrapin Station on Mar 15, 2017 22:01:00 GMT
In what sense is Siri deciding something that a camera isn't when it functions differently based on the light coming into the aperture? It seems like weak AI is simply a subjective application of metaphors to some things based on some superficial similarities to literally intelligent things. The same way humans decide to cover their eyes with their hands in order to change the amount of light that goes in, as compared to how the pupil of an eye becomes smaller and bigger in order to control the amount of light let in. There are certain processes that go on in the human brain, just like there are with Siri. Either way, how is weak AI "a subjective application of metaphors to some things based on some superficial similarities to literally intelligent things" but not AGI? I don't buy strong determinism though.
|
|
PanLeo
Sophomore
@saoradh
Posts: 919
Likes: 53
|
Post by PanLeo on Mar 15, 2017 22:05:05 GMT
The same way humans decide to cover their eyes with their hands in order to change the amount of light that goes in, as compared to how the pupil of an eye becomes smaller and bigger in order to control the amount of light let in. There are certain processes that go on in the human brain, just like there are with Siri. Either way, how is weak AI "a subjective application of metaphors to some things based on some superficial similarities to literally intelligent things" but not AGI? I don't buy strong determinism though. How is it relevant? Do you deny there are processes in the brain that make humans intelligent? If not, why does it matter what happens on a quantum level?
|
|
|
Post by Terrapin Station on Mar 15, 2017 22:06:46 GMT
I don't buy strong determinism though. How is it relevant? Do you deny there are processes in the brain that make humans intelligent? If not, why does it matter what happens on a quantum level? How is it relevant? In that it's not the same way that a human decides something.
|
|
PanLeo
Sophomore
@saoradh
Posts: 919
Likes: 53
|
Post by PanLeo on Mar 15, 2017 22:13:16 GMT
How is it relevant? Do you deny there are processes in the brain that make humans intelligent? If not, why does it matter what happens on a quantum level? How is it relevant? In that it's not the same way that a human decides something. How? en.wikipedia.org/wiki/Artificial_neural_network
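For what the linked article is describing, here's a minimal sketch of a single artificial neuron: a weighted sum of inputs pushed through a squashing function. The weights and inputs are invented; the point is just that, with the weights fixed, the same inputs always give the same output.

```python
import math

# One artificial neuron: weighted sum of inputs passed through a logistic
# activation. With fixed weights, the output is a deterministic function of
# the inputs. Weights and inputs below are purely illustrative.

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

weights = [0.8, -0.5]
bias = 0.1
print(neuron([1.0, 0.0], weights, bias))   # same inputs -> same output, every run
```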
|
|
|
Post by 🌵 on Mar 15, 2017 22:17:29 GMT
I agree, but "there's no reason to believe that X" doesn't justify concluding "certainly not X" or even simply "not X". I have no view on whether or not Siri has some sort of consciousness, or more generally whether or not consciousness can obtain in things other than brains. There's no reason to believe that it can, but I also don't see any reason to believe that it can't. I don't agree that there being absolutely no good reason to believe that P doesn't justify believing that not-P. It's sounding almost like you're appealing to proving empiricals prior to commiting to beliefs. But no empiricals are provable. I'm not going to withhold beliefs just because people want to fantasize about whatever. Okay, we disagree on that then. I think it's not justified to believe a proposition on the basis that there's no good reason to believe its negation. Do you have any further reason for believing that Siri is not conscious, beyond the fact that there's no good reason for believing that Siri is conscious, or is that all? No, I'm not asking for proof that not-P. I can't prove that it's not currently raining heavily, but I do have good reason to think that it's not currently raining heavily: I can't hear any rain, and when it's raining heavily, I can usually hear it through my window. That's not proof. And of course I could easily be wrong (maybe my ears are not working so well, maybe the glass in the window has been changed while I was away, etc). A situation more analogous to the Siri case is if I were in a bunker, and had no ability to see or hear the outside world. Then I would simply suspend judgement on whether or not it's currently raining heavily.
|
|
|
Post by Terrapin Station on Mar 15, 2017 22:17:57 GMT
Human decisions are not strongly deterministic. Are you saying that artificial neural networks are not strongly deterministic?
|
|
PanLeo
Sophomore
@saoradh
Posts: 919
Likes: 53
|
Post by PanLeo on Mar 15, 2017 22:22:30 GMT
Human decisions are not strongly deterministic. Are you saying that artificial neural networks are not strongly deterministic? I am not sure what your point is.
|
|
|
Post by ArArArchStanton on Mar 16, 2017 1:47:41 GMT
Things like Deep Blue, Siri, etc. Those are artificial intelligence.
Just for a second consider that you combine a responsive unit like Siri with calculating ability like Deep Blue's in a structure that can perform tasks. How far away are we from something like Jarvis in the Marvel films, or HAL 9000 from 2001?
But it's a serious stretch calling anything like that artificial intelligence, and it certainly doesn't amount to artificial consciousness. There's no good reason to believe that artificial consciousness is even possible. There's no good reason to believe that substratum dependence isn't the case. In other words, there's no good reason to believe that consciousness isn't a property that only arises in certain substances/certain materials (when they're in specific sorts of structures, undergoing specific sorts of processes). And as for calling the others AI, they couldn't pass the Turing test, for example. And that would just be a "manner of speaking" AI anyway. It's not a stretch at all (en.wikipedia.org/wiki/Timeline_of_artificial_intelligence), but I'm not pretending these reach human capability at this point.
Before you get confused by consciousness, consider that it may not really exist, and what I mean by that is that there may very well be no hard line defining where consciousness begins. Is an infant conscious? Is a dog? Is a snake? Etc. There are different levels of intelligence, and the more proficient levels include being able to interpret what is going on around you, including what you yourself are doing.
There is very good reason to think that consciousness arises in the exact materials we see in life forms, and no reason to think it requires anything else.
|
|
|
Post by 🌵 on Mar 16, 2017 1:56:16 GMT
Before you get confused by consciousness, consider that it may not really exist, and what I mean by that is that there may very well be no hard line defining where consciousness begins. That strikes me as a needlessly misleading way of speaking. There's no hard line demarcating people with white skin, since skin colours display a continuous gradient, but we wouldn't say that therefore white skin doesn't exist.
|
|
|
Post by ArArArchStanton on Mar 16, 2017 2:23:47 GMT
That strikes me as a needlessly misleading way of speaking. There's no hard line demarcating people with white skin, since skin colours display a continuous gradient, but we wouldn't say that therefore white skin doesn't exist. I agree.
What I mean is, there isn't a moment where something becomes conscious. Take, for instance, a human baby. It's made of the same stuff we are, of course, so if it goes from not conscious to conscious, it hasn't required anything new to get there. It is engaged in the same processes, which simply become more refined but aren't fundamentally different.
So consider Siri. It's clearly not a human intellect, but imagine steps towards that. What if we could interlink two and it could respond to itself? What if it could make multiple restaurant recommendations by considering travel time, price, and how long it had been since you asked it for Italian?
Is it doing anything fundamentally different? Isn't the process still basically the same, but only processing more information?
What I'm saying is, consciousness might ultimately be defined simply as a certain level of information processing and not some separate phenomenon.
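To make the restaurant example above concrete, here is a toy sketch of what "just processing more information" might look like; the restaurants, weights, and scoring rule are all invented for illustration.

```python
# Toy recommender for the restaurant example: rank options by a score built
# from travel time, price, and how recently Italian was asked for.
# Restaurants, weights, and the scoring rule are invented for illustration.

restaurants = [
    {"name": "Trattoria",  "cuisine": "italian", "travel_min": 10, "price": 2},
    {"name": "Noodle Bar", "cuisine": "asian",   "travel_min": 25, "price": 1},
    {"name": "Bistro",     "cuisine": "french",  "travel_min": 15, "price": 2},
]

def score(r, days_since_italian):
    s = -0.1 * r["travel_min"] - 0.5 * r["price"]
    if r["cuisine"] == "italian" and days_since_italian < 3:
        s -= 2.0   # Italian was requested recently, so suggest something else
    return s

best = max(restaurants, key=lambda r: score(r, days_since_italian=1))
print(best["name"])   # "Bistro"
```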
|
|
|
Post by Eva Yojimbo on Mar 22, 2017 22:21:02 GMT
The simulation theory is difficult for me to gauge via Occam. On the one hand, if you assume that such simulations are possible, then it seems likely that at some point they would've been created and, given the possibility of nearly infinite trials, we would be more likely to be living in one; on the other hand, the conjunction fallacy would seem to vote against it, i.e., that "reality" is simpler than "reality+simulation." I'm not sure if the notion of "possibility" in the former, of our ability to imagine that simulations are possible, negates the conjunction fallacy enough to declare the former more likely given that it's a priori less likely. One problem is that you have to assume that we're in a simulation to start the argument, because if you don't assume that, then you can only conclude that such simulations are possible in the future, but we haven't created them yet. So it's pretty question-begging. I don't think you have to assume we are, merely that such simulations are possible. They don't even have to be possible in OUR future, they just have to be metaphysically possible. They may not be. I take much the same approach with philosophical zombies.
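Just to spell out the conjunction point in symbols (my own formalization, nothing more): a conjunction can never be more probable than either of its conjuncts, which is why "reality + simulation" starts out no more probable than "reality" alone.

```latex
% Conjunction rule: a conjunction is at most as probable as either conjunct.
P(\text{reality} \wedge \text{simulation}) \leq P(\text{reality})
```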
|
|
|
Post by Eva Yojimbo on Mar 22, 2017 22:30:46 GMT
Just to jump in here: it depends on what you think falls under the rubric of "empirical," but simplicity and elegance can be demonstrated mathematically as in Solomonoff Induction and the general idea of computational complexity (in which objects, hypotheses/theories, etc. can be modeled in binary). Generally, mathematics is considered a priori, though. So showing that one theory is simpler than another in the sense that it has lower Kolmogorov complexity, or whatever, is surely not going to count as "evidence" for that theory per lostkiera's usage of the term. Of course, one could argue that there is empirical evidence that theories that are simpler in that sense also tend to be correct, or at least more correct than less simple theories. It's not clear to me how elegance can be demonstrated mathematically. To me, to say that something is elegant is to say that it has not just simplicity but also beauty, style, charm, etc. That's all subjective. So elegance is certainly not an empirical virtue of a theory in my view, even if there is some sense in which simplicity could be said to be an empirical virtue. I disagree with the notion of math being a priori. Without access to the empirical we have nothing to model via math. The only reason we accept the mathematical axioms we do is because they agree with our empirical experience. If they didn't, we wouldn't have them. That said, I essentially agree that such a thing wouldn't really count as evidence in the a posteriori sense either. Still, I often think that Occam is the only real "evidence" that needs considering on some matters where a posteriori evidence can't distinguish between hypotheses (as in the case of quantum mechanics). I guess it depends on what you mean by "elegant," but in this sense I just took it as a kind of subjective counterpart to the objective notion of "simplicity": i.e., the "simplest" hypothesis would appear the most "elegant" to us. If the former can be formally quantified, then the latter, in a sense, can too (since it's just our recognition of the objectively quantifiable aspect of simplicity).
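To make the "simplicity can be quantified" point a bit more concrete, here is a rough sketch using compressed length as a crude, computable stand-in for Kolmogorov complexity (the real quantity is uncomputable); the byte strings are invented purely for illustration.

```python
import zlib

# Crude illustration of "simplicity as description length": compressed size is
# a rough, computable proxy for Kolmogorov complexity. The two byte strings
# below are invented; one is highly regular, the other much less so.

patterned = b"0101010101" * 100                            # very regular
noisy = bytes((i * 37 + 11) % 251 for i in range(1000))    # far less regular

print(len(zlib.compress(patterned)))   # small: the pattern has a short description
print(len(zlib.compress(noisy)))       # larger: more bits needed to describe it
```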
|
|
|
Post by 🌵 on Mar 22, 2017 23:13:18 GMT
Generally, mathematics is considered a priori, though. So showing that one theory is simpler than another in the sense that it has lower Kolmogorov complexity, or whatever, is surely not going to count as "evidence" for that theory per lostkiera's usage of the term. Of course, one could argue that there is empirical evidence that theories that are simpler in that sense also tend to be correct, or at least more correct than less simple theories. It's not clear to me how elegance can be demonstrated mathematically. To me, to say that something is elegant is to say that it has not just simplicity but also beauty, style, charm, etc. That's all subjective. So elegance is certainly not an empirical virtue of a theory in my view, even if there is some sense in which simplicity could be said to be an empirical virtue. I disagree with the notion of math being a priori. Without access to the empirical we have nothing to model via math. The only reason we accept the mathematical axioms we do is because they agree with our empirical experience. If they didn't, we wouldn't have them. That said, I essentially agree that such a thing wouldn't really count as evidence in the a posteriori sense either. Still, I often think that Occam is the only real "evidence" that needs considering on some matters where a posteriori evidence can't distinguish between hypotheses (as in the case of quantum mechanics). Surely you don't need to have something to model with mathematics in order to do mathematics. Modelling empirical phenomena is one application of mathematics. It's not the only application. Some mathematical systems might not have any application whatsoever. A mathematician might develop a mathematical system just for fun, without any particular application in mind. In what sense, exactly, does the justification for a claim like "17 is a prime number" depend on experience? You don't distinguish the primes from non-primes by e.g. splitting the light of the natural numbers with a spectrometer or peering at the natural numbers through a microscope. Maybe our acceptance of an arithmetic system derived from the Peano axioms is based on our experience. Perhaps the thought is that our best models of the world appeal to such an arithmetic. But there are other arithmetics, such as modular arithmetics and inconsistent arithmetics, and these can also be used to model aspects of our experiences (most obviously, modular arithmetics can be used to model clocks). In any case, we can weaken the claim to "in the standard Peano arithmetic, 17 is a prime number". That's all that's needed for the purposes of mathematics. In what sense does the justification for this claim depend on experience? I suppose we need experience in order to acquire the relevant concepts. I learn about arithmetic, 17, primes, etc., in school. But obviously this is much broader than is usually meant by "a posteriori justification".
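A trivial sketch of the two arithmetics mentioned, purely for illustration: 17 comes out prime in ordinary (Peano-style) arithmetic, while a mod-12 "clock" arithmetic models a different aspect of experience.

```python
# 17 is prime in ordinary arithmetic: no divisor between 2 and 16 divides it.
print(all(17 % d != 0 for d in range(2, 17)))   # True

# A mod-12 "clock" arithmetic models clocks: 5 hours after 9 o'clock is 2 o'clock.
print((9 + 5) % 12)                             # 2
```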
|
|