Post by faustus5 on Nov 16, 2017 14:33:56 GMT
general313 wrote:
> But there does seem to be a repetition of that tendency, oft-repeated throughout the history of science, that the closer we look, the less difference we see between the living and the non-living. When you are looking at the lowest levels, watching protein formation or the biochemistry occurring at a single synapse, yes: completely mechanical processes.

Well, you have the resemblance kind of backwards there. Neural networks were inspired by what scientists saw going on in the brain, so the resemblance is hardly "striking" since it was by design! Only at the most superficial levels is it the "same". Neural networks fail to capture the complexity going on inside of individual neurons, which are living creatures competing with each other for resources, and also leave out the important role of the biochemical soup neurons live in, which influences their firing patterns in important ways. I should add that there are also models which posit that considerable information processing/computing activity goes on even within individual neurons, not just in between them and in their networks.

The mechanical circuits built by humans pale in complexity to their counterparts in the brain. Even the most complicated are extremely simple and crude when compared to biological systems. Remember, my motto in all this is that you have to take functionalism seriously, which means applying it all the way down and not stopping until you've instantiated every relevant cause and effect in your alternate medium. Only then can you say you've created a model that is truly substrate-neutral.
Post by koskiewicz on Nov 16, 2017 23:43:43 GMT
...this thread is filled with artificial intelligence...
Post by general313 on Nov 17, 2017 1:09:09 GMT
> But there does seem to be a repetition of that tendency, oft-repeated throughout the history of science, that the closer we look, the less difference we see between the living and the non-living. When you are looking at the lowest levels, watching protein formation or the biochemistry occurring at a single synapse, yes: completely mechanical processes.

So you think there is something special that happens at a higher level in the brain that is beyond the reach of machine learning designers? Some kind of emergent property? As I said before, given the history of observers imagining some fundamental and special difference between the biological and the non-biological, I won't bet on it.

faustus5 wrote:
> Neural networks were inspired by what scientists saw going on in the brain, so the resemblance is hardly "striking" since it was by design!

It doesn't matter where the inspiration came from. If we can create artificial neural networks that exhibit the same behavior as brain structures it takes away some of the mystery of how the brain works. That the behavior can be replicated with inorganic machinery suggests that there's nothing special about biological brain cells. It's simply a consequence of how we evolved, just as we locomote via muscles and bones, and not with wheels.

faustus5 wrote:
> Neural networks fail to capture the complexity going on inside of individual neurons, which are living creatures competing with each other for resources, and also leave out the important role of the biochemical soup neurons live in, which influences their firing patterns in important ways. I should add that there are also models which posit that considerable information processing/computing activity goes on even within individual neurons, not just in between them and in their networks.

Individual neurons functionally are not very different from neural network nodes. A brain neuron can have thousands of interconnects (dendritic connections), but so can neural network nodes. Would you have links to these models with "considerable information processing happening within a neuron"? It seems to me that the interior of a neuron is mostly the cellular machinery typical of other cells for metabolism (managing the energy to operate).

faustus5 wrote:
> The mechanical circuits built by humans pale in complexity to their counterparts in the brain. Even the most complicated are extremely simple and crude when compared to biological systems.

That is true so far, but that is mainly a question of the number of neurons in the human brain (100 billion or so, I believe). Admittedly we are still in the infancy of machine learning, but I think there is every reason to expect great advances in our ability to make hardware powerful enough to rival the capabilities of the human brain in the not too distant future. Hence the concern of the main topic of this thread.
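For anyone following along who hasn't seen one, the "neural network node" in this comparison really is simple: a weighted sum over its inputs pushed through a nonlinearity. A minimal Python sketch (the input count and random weights below are made up purely for illustration):

```python
import math
import random

def artificial_neuron(inputs, weights, bias):
    """A standard neural-network node: a weighted sum of its inputs
    passed through a sigmoid nonlinearity."""
    drive = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-drive))

# Like a biological neuron with thousands of dendritic connections,
# a single node can take thousands of inputs.
random.seed(0)
n_inputs = 5000
inputs = [random.random() for _ in range(n_inputs)]
weights = [random.gauss(0.0, 1.0 / math.sqrt(n_inputs)) for _ in range(n_inputs)]
print(artificial_neuron(inputs, weights, bias=0.0))  # a single number in (0, 1)
```

That's the whole functional story of a node: many inputs in, one number out.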
Post by faustus5 on Nov 17, 2017 18:08:26 GMT
general313 wrote:
> So you think there is something special that happens at a higher level in the brain that is beyond the reach of machine learning designers? Some kind of emergent property?

Nope. I don't think there is anything that is beyond the reach of machine learning designers, in principle. I just share a concern with folks like Patricia Churchland that it is foolish to think you can truly understand or model human consciousness without concentrating on the human brain. "Emergent property" is a loaded term in philosophy of mind, so I'm not willing to enthusiastically hook my cart up to that particular horse. What I was getting at is that while everything that happens in the brain is mechanical, it isn't as easy to see this at higher levels than it is when you are looking at the most simple of processes.

general313 wrote:
> If we can create artificial neural networks that exhibit the same behavior as brain structures it takes away some of the mystery of how the brain works.

But we can't. And that is in part because too many theorists dismiss out of hand some of the brain structures and processes that contribute to human consciousness. We can do this with only a tiny fraction of the things human brains can do, and not that well in many cases. It's too early to get excited about these kinds of results.

general313 wrote:
> Individual neurons functionally are not very different from neural network nodes.

Actually, they are very, very different. Neurotransmitters released in the brain are vital for changing and modulating the processes underlying consciousness. Where are they in a mechanical neural network?

general313 wrote:
> Would you have links to these models with "considerable information processing happening within a neuron"?

Well, the one model that gets the most press (Stuart Hameroff's) only gets the press it does because he's fooled a famous physicist into endorsing his idiotic New Age quantum consciousness bullshit. He might be right that microtubules in neurons perform computations, even computations that utilize quantum physics, but the rest of his theory is just rubbish. Here's something more defensible: www.ucl.ac.uk/news/news-articles/1310/28102013-Smart-neurons-Single-neuronal-dendrites-can-perform-computations-Hausser
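To make the neurotransmitter point concrete: one crude way computational models approximate a diffuse neuromodulator is as a global gain term that rescales how sharply every node responds, which a vanilla network node simply lacks. A toy sketch (the numbers, and the single-gain simplification, are my own illustration, not anyone's actual model):

```python
import math

def node_output(inputs, weights, gain=1.0):
    """A node whose responsiveness is rescaled by a global 'neuromodulatory'
    gain factor -- a crude stand-in for a diffuse chemical signal that
    modulates whole populations of neurons at once."""
    drive = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-gain * drive))

x = [0.2, 0.9, 0.4]
w = [1.5, -0.5, 2.0]
print(node_output(x, w, gain=0.1))  # low gain: response flattened toward 0.5
print(node_output(x, w, gain=5.0))  # high gain: response sharpened toward 0 or 1
```

The same inputs and weights give very different outputs depending on the gain, which is the kind of context-dependence a plain weighted-sum node has no mechanism for.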