Post by cupcakes on Jun 12, 2017 16:34:17 GMT
tpfkar
Karl Aksel said: "Slaves to the process" is a metaphor, not an analogy. Not sure how you made it out to be an analogy, much less a poor one.
I am also not sure how you find that this metaphor doesn't say anything. You say so, but as far as I can see, I was quite clear. Your response leaves me with very little to say, ironically because you haven't said anything.
You seem to be having issues with how I have defined "free will", and I think it would be helpful if you offered up what definition you are discussing.
"The ability to choose and act according to our preferences and who we are."
Unless you are simply baiting me: you keep disagreeing with me without actually offering any points of disagreement.
I don't know what to tell you, other than that I of course think this statement is utterly false, and perhaps go grab some comfort food.

Yes, I thought you meant "understatement". But consciousness does not contradict automation. It has been postulated that if we were able to create artificial intelligence of sufficient capacity, it might become conscious and self-aware.

There are (at least) two conflated, though related, tracks that have sprung up in this thread: the nature of "free will" as it exists or makes sense, and the nature and culpability of any being that would intentionally set up a situation like the one we exist in.

For me, the meaningful "free will" that we have in no way exonerates a being that created all, as he in fact had to have ultimately controlled, or had the ability to control, every aspect going into it, including what traits and preferences everybody and anybody gets, or else just not be capable. But additionally, the fact that we, like everything else, are the result of "reasons" in no way diminishes that we are free to act according to who/what we are and what preferences we have, all steered by our consciousnesses, of which we don't really have a sound idea of how they work or even what they really are.

Conjecture is great, and ponderings on individual culpability and (over?)statements of the implications and extrapolations of cause and effect can certainly be engaged in. Maybe one day we'll be able to map it all out, and maybe it will then show that we really are just glorified dippy birds. As of today, it still doesn't seem so.
Can neuroscience understand Donkey Kong?
