Hello Aurino,
I moved this topic to the top:
***" Science concerns itself with more than the 'non-physical' phenomena. In fact, from a certain philosophical perspective everything is 'non-physical'. " Hey, who's the anti-realist anyway? Sorry but I can't take this "everything is non-physical" seriously.***
Anti-realism means something different. Anti-realism deals with what is actually real. Are fields real? Are atoms real? The scientific anti-realist rejects the actual realness of such objects as posited by any current scientific theory. I am a contextual realist in that I think our structures are reducible to real structures in the context in which we are talking (related to contextualism). For example, a family is real in the context of relationships, but is unreal, or meaningless, in terms of sub-atomic particles (and I'm not talking about families of particles). Hence, in the right context, you could view everything as non-physical, depending on how you define the term.
***" We don't actually observe galaxies, we observe the light emitted from galaxies many thousands and sometimes millions and billions of years ago. We don't actually observe sub-atomic particles, we observe their statistical effects on measurable parameters. Similarly, with consciousness we are observing a phenomenon, but we are looking at it in terms of measurable observables. " Your analogy is just wrong. The concept "galaxies" is required to explain the radiation captured by our telescopes. The concept "sub-atomic particle" is required to explain the readings of instruments in particle accelerators. As a concept, "consciousness" is neither required nor applicable to explain anything.***
It depends on what we are talking about when defining consciousness. It is just as 'real' as a particle in particular contexts. Science can study objects using models that apply to a certain context. Consciousness might apply as a valid model, but in a materialist framework (reduction to material things), the term is an epiphenomenon. This doesn't mean that science cannot study the phenomenon, or even find causes for it; it just means that the phenomenon itself has boundary conditions within which it is valid to discuss these issues. Science defines those boundary conditions for practical purposes. Galaxies are really no different. If you think about it, galaxies are just collections of stars and dust. A major galaxy might contain a smaller galaxy that has recently collided with it. When does the minor galaxy cease to exist, and the two galaxies simply become one? The reality of the matter is that humans refer to galaxies in a certain context (e.g., two galaxies have not merged recently, etc.), and those contexts set the boundaries for talk about such issues. If we ignore context, the blurring of the lines leaves no room for any talk of objects or phenomena.
***" True, we must define consciousness at least to a degree where observables are possible, but there are some observables of or related to consciousness that are deemed an important aspect of it. For example, self-awareness, self-examination, self-control, ... " Self, self, self... just exactly how do you observe a "self"?***
We study the observables that are considered to amount to these 'self' attributes. If a chimpanzee stares at itself in the mirror and then communicates to a human that it recognizes itself, then we can attribute a certain self-awareness to the chimp. This has been done, and it is important since few primates have exhibited this kind of self-awareness.
***" tool making " Bees and termites are wonderful architects. Are you ready to claim they are conscious?***
As Kyle pointed out, there's no reason to surmise that consciousness is a hit-or-miss phenomenon. There might be levels and branches of conscious phenomena, with different species exhibiting different ones. So, yes, a termite is more 'conscious' than an amoeba.
***" etc are often considered a measurable attribute either related to or a component of consciousness. We measure the development of many of these attributes in children, in chimpanzees, even in crows. " Are pre-linguistic children conscious? Alan says they are, do you agree with him?***
Here Alan is just being ridiculous. He attributes adult characteristics to children, not realizing that we develop those features on our way to adulthood. In the case of pre-linguistic children, there is a certain degree of awareness. The development of this awareness into fuller consciousness is well documented and continues to be studied.
***" In fact, in order for a word to have meaning, it must express some referencable phenomena so that we can say "aha, I know what you mean by that word". " Just exactly what phenomena do you reference when you use words like 'beautiful', 'elegant', 'joyful'? Exactly how do you think children figure out that the word 'thing' means 'any thing'? What is so common between elephants, birds, and earthworms that makes then all 'animals'?***
This is a problem for the philosophers of language. It is unsolved, and I won't pretend to have developed an answer. What I will say is that words do not have meaning in themselves, since they are symbols. I remember learning to read in school (at least somewhat), and I remember those words being tied to objects we could see, or to pictures presented to us. Learning is at least somewhat ostensive, otherwise teachers would not use this technique. Hence, asking how we learn a first language is not the same question as whether words have meaning only by referring to something. Every word used has a reference; otherwise dictionaries would be in trouble whenever they failed to define a word.
***You are so familiar with what words mean that you completely overlook that learning a first language is so impossibly difficult that only a young child can do it.***
That's not what I'm addressing. I was careful to avoid this issue.
***"If I saw behaviors that gave me the idea that you were not conscious (e.g., you had a severe case of autism), then I might conclude by the way we use the word that you had a low level of consciousness." Exactly how do you know that people with severe cases of autism are not conscious?***
I don't, but if I saw behaviors of extreme autism, then I might conclude that you were missing certain faculties common in those whom we consider fully conscious. If you were talking in your sleep, a person nearby might conclude that you are not fully conscious because you show all the signs of sleeping and all the signs of sleep-talking. Does this mean you are not fully conscious? No. You might be fooling that person by pretending to talk in your sleep. This gets into what we think we know versus what we actually know (epistemology), and it is not really a major concern for science. Science tries to explain phenomena within certain contexts and certain assumptions (e.g., no one is pretending to talk in their sleep), and from that point it tries to explain a phenomenon using a model. The issue of consciousness is no different. Neuroscience has already started to look at consciousness (and related issues) in terms of specific observables and not some mystical all-encompassing phenomenon.
***" The Chinese Room is not as directly related to the problem of consciousness as it is to the problem of understanding. " In order to understand you need to be conscious. Just like consciousness, 'understanding' is a subjective feeling with no counterpart in physical phenomena. Unless you want to argue that my telephone "understands" that when I dial number, that means I want to talk to a certain person.***
I see a lot of back-and-forth between philosophy and science in your reply. Science deals with observables and how to account for those observables using certain models. Philosophy often tries to explain what exactly science is explaining, and to discover what is left unanswered in those explanations. In neuroscience, the problem of understanding is addressed by interviews and tests conducted during those interviews that relate the replies of the interviewee to the results of the tests. The interviewer might provide a general description of what counts as an appropriate answer for a certain kind of question, and then proceed to question the subject and observe the test results during the answering period. This is science.
Philosophers don't engage in this kind of interview/test behavior. It is not their interest. Philosophers are concerned with how understanding is possible, and with what scientific research on understanding actually tells us. They won't ignore the tests conducted, but they ask whether these tests actually tell us anything about understanding. For example, we are having exactly that kind of philosophical discussion right now.
Now, it would seem that philosophy has the upper hand in this debate, since philosophers can skip all the fluff (e.g., nonsensical interviews that have nothing to do with understanding as a philosophical problem) and get to the heart of the matter. But historically science has surprised philosophy by coming up with novel solutions to problems that philosophers failed to consider. This progress in science, in effect, changes the philosophical debate to the point where the older debates are out of step with what we know today. For example, science has shown us that chimps can use language to communicate with humans, that chimps recognize themselves in the mirror, etc. The ancient philosophical arguments that women don't have conscious minds are ridiculous now. Similarly, the more recent philosophical view that animals don't realize they exist is also faulty.

This is why the science of consciousness and understanding is worth the effort. We have no idea where this research will take us, or how it will improve our software, etc. In fairness to philosophers, it has also been philosophy that has assisted scientists in major breakthroughs. So philosophy and science working hand in hand is effective, and it shouldn't be dismissed as you are doing. Sure, it's primitive and even laughable in some respects, but progress is being made, and Dennett et al. are contributing to that progress.
***If the reductions are the same as your understanding, then you don't really "understand" anything. This sounds like Dick and his argument that the meaning of words is given exclusively by their relationships with other words. In other words, language is an abstract representation of a whole lot of nothing.***
Well, I am not stating a theory of consciousness or understanding with regard to machines. What I'm saying is that I do not rule out the possibility that such theories will improve enough to make significant progress with machine intelligence, self-awareness, etc. The hypothetical I threw out there is just one idea of how the term 'understanding' could eventually be viewed the way the terms 'space' and 'time' are seen today. There might be much more to the story: the science of understanding might lead into neurotransmitters, and it might even extend to a computer being able to feel that it understands, just as we have this feeling. The meaning of words is not just relationships among words; it has to do with our pragmatic relation with the world, and that is something completely beyond Dick's pre-philosophical conceptions. He has very little interest in philosophy, so he approaches the world in a naive way. Of course, he has some interest in the philosophy of science, otherwise he'd never have addressed those issues after grad school. He just has no appreciation for all the other great minds who addressed these issues.
***" With the computer, this is not the case. If a computer is asked "what does it mean for Harv to say he is hungry", the computer can recite all the same phrases as I, but it does so by looking in a table to translate my question into machine language, and then doing whatever the program says to do when it encounters this particular code, and then execute that code (e.g., look in a table that has the instructions to say all the things I just said). " What makes you think that someone looking at your brain might have a completely different impression?***
Since I have no other experience of understanding than what I assume other humans have, I conclude that understanding in humans is much different than it is for a computer. If God looked inside my brain, I'm assuming he would know the reason why I have my sense of understanding and why a computer does not possess this kind of understanding awareness. It's this reason that we want to find, so that we can add it to existing computer software (if possible). Of course, after Terminator 3 comes out on Jul/3, I might feel differently about wanting to add this sense of understanding to a computer, if such were possible.
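Purely as a caricature of the lookup-table behavior I described above, here is a toy sketch; every phrase and table entry is my own hypothetical illustration, not a real program:

```python
# Toy "Chinese Room" responder: it translates a question into a
# canonical key, then looks up a canned reply. Pure symbol shuffling,
# with no comprehension anywhere in the process.

RESPONSES = {
    "what does it mean for harv to say he is hungry":
        "It means Harv wants to eat.",
}

def respond(question: str) -> str:
    """Normalize the question and look up a scripted answer."""
    key = question.lower().strip(" ?")
    return RESPONSES.get(key, "I don't know.")

print(respond("What does it mean for Harv to say he is hungry?"))
# The table can recite the right phrase, but nothing in this code
# "knows" what hunger is.
```

However well the table is stocked, the mechanism is identical: match a code, execute the instructions for that code.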
***" It might be the case that the Chinese Room shows that 'understanding' might be an unsolvable phenomena to program into a computer " It's not unsolvable as much as it is completely irrelevant. From our perspective as observers, there's no difference whatsoever between a computer that is "really intelligent" from one that "appears to be really intelligent". To debate the issue is sophistry.***
No, there is no difference. However, I think Searle's thought experiment is meant to show that understanding is not possible with computers and software. The challenge is to show that it is possible. To do that, it might be necessary to differentiate between 'acting as if a computer understands' and 'demonstrating that a computer does understand'. From a measurable scientific view it makes no difference, but from a philosophical view it makes a great deal of difference. I guarantee you that if in 100 years computer scientists made a computer that exhibited all the signs of consciousness and understanding of a human, but they could not show in the code how such was possible other than by pure sophisticated mimicry, then you can bet that philosophers would have a big problem with that kind of pseudo-solution to the theory of consciousness and understanding.
***"For example, maybe our brains do not really understand something, but it fools us into thinking that we understand something by issuing a neurotransmitting chemical into our blood system that gives that 'feeling' of understanding " Just exactly what is the difference between "a feeling of understanding" and "understanding"?***
Well, philosophically speaking, from my own view here, understanding means some form of holist theory that explains why our views are in some way isomorphic with our experiences without our having to encounter every possible experience. A mere feeling of understanding skirts the need for such a theory: we do not need to possess a holist theory, we only need a certain number of isomorphic matches before the neurotransmitter is released into our system. In the holist theory, we have a picture-perfect reason why we understand, whereas in the neurochemical solution we never really understand per se, but just fool ourselves. Scientific anti-realism would get a big boost from a neurochemical 'solution' to understanding, since one of the pro-arguments for realism is that good theories make for better understanding, therefore the theory is true. A neurochemical 'solution' would instead say that a theory is chosen because it is sufficient to fool our brains into releasing the neurotransmitter, thereby producing the sense of understanding. In other words, the understanding that comes from a good theory fits the anti-realist picture of how theories are selected for their pragmatic benefits.
***Both Alan and Dick get a feeling that they understand their own stuff. If we got the same feeling they get, would we be "really understanding" them, or would we just be "thinking we're understanding"? What's the difference?***
If there is a neurotransmitter responsible for understanding, then Alan and Dick could maliciously sneak it into our coffee the moment we are reading their essays, and all of a sudden we would 'get it' and say that now we understand what they are talking about and how right they are. It would be a pretty nasty drug to have the power to use...
***" If that's the case, we might find that neurotransmitter someday and show that if you prohibit this neurotransmitter from getting into the blood, that humans lose their sense of understanding. " Just exactly how are you going to assert, in a scientific manner, that someone lost their sense of understanding? Asking them?***
Yes. Plus, you might do MRIs and see whether the same areas light up with and without the neurotransmitter being applied. This is convincing enough if done in a statistical manner.
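As a minimal sketch of what 'done in a statistical manner' could look like: suppose each subject yields some numeric 'understanding' score (e.g., an activation measure) with and without the hypothetical neurotransmitter blocked. All numbers below are invented for illustration, and a simple permutation test stands in for whatever analysis a real study would use:

```python
# Hypothetical comparison of "understanding" scores in two groups,
# tested with a two-sample permutation test. Data is made up.
import random
random.seed(0)

with_nt    = [0.82, 0.91, 0.78, 0.88, 0.85, 0.90]  # neurotransmitter present
blocked_nt = [0.41, 0.39, 0.52, 0.44, 0.47, 0.38]  # neurotransmitter blocked

observed = sum(with_nt) / len(with_nt) - sum(blocked_nt) / len(blocked_nt)

# Shuffle group labels many times; count how often a random split
# produces a difference at least as large as the observed one.
pooled = with_nt + blocked_nt
trials, count = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:6]) / 6 - sum(pooled[6:]) / 6
    if diff >= observed:
        count += 1
p_value = count / trials

print(f"observed difference {observed:.2f}, p ~ {p_value:.4f}")
```

A small p-value would suggest the score difference is unlikely to be chance, which is the kind of statistical backing I have in mind.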
***" If that's the case, we might build future machines that have a sense of understanding as long as we block from their perceptual awareness the reason why they feel like they understand something. All we've done, in effect, is create a feeling of understanding in a machine that sees no reason to question why they cannot reduce their understanding to something more basic (e.g., a flow of electrons forming certain circuits). " I'm sorry but I can't make any sense of the above.***
If you knew you were taking a neurotransmitter that gave you that 'eureka' feeling the moment you read Dick or Alan, then you would know that the drug is what caused it. However, if you didn't know the drug had been given to you, then you would just assume that you finally understood them, and you would agree with them. The same goes for a computer: if a computer knew the details of its circuitry and chemistry, it would know that its understanding is all phony baloney. So, to make the computer feel like it understands for real, you would need to block its ability to know how it came to understand something, so that it wouldn't suspect another cause. It would be pretty hilarious if that's what our sense of understanding is based on. It would be the ultimate cruel joke on us, but then again, having the sense of understanding might be worth a joke or two. I'm sure there is a reason no matter what the cause.
***" maybe these advancements aren't too pathetic even though they are still eons away from any grand goal to understand consciousness...." "Understanding consciousness" is a pipe dream, not unlike perpetual engines or free energy. When did people cease to understand that some things are just impossible?***
[Chuckle] The line in the sand of what's impossible keeps moving back a notch.
***It's nice to disagree for a change, things were getting too warm and fuzzy between us lately :-) Cheers, Aurino***
I agree. Have a good weekend!