Ever since Descartes said, “I think, therefore I am,” philosophers have been plagued by the problem of what it means to think and who or what this ‘I’ is. The problem is how the mind works: is the mind something tangible or intangible? Or, as a great philosopher put it, “What is mind? No matter. What is matter? Never mind.” This is the problem of where thought takes place and what the mind is. One answer to it is the identity theory, which states that mental states and brain states are one and the same. The mind-body problem is simple to state: bodies are physical things that are tangible and can be measured.
But the mind, or thoughts, have no quantitative qualities at all. Now, if the mind were something tangible, then we could know what it is like for other beings to think, and what it is like to think like another being. But this is not so: Nagel argued that this theory is not enough to understand how we, or any other animal, think. Nagel presented this by asking what it is like to be a bat, and the simple answer is that we can never know, because we are not bats.
Searle then argued that this theory is not enough to explain things like Artificial Intelligence, where machines like computers are supposed to be able to think just as human beings do. Thomas Nagel’s thesis, when he presented his argument against the identity theory, was that this theory is not enough to explain how we think or what consciousness is. By asking “what is it like to be a bat,” what he is really asking is what it is to have the same life experiences as a bat. According to the identity theory, we should be able to know this through some form of objective reasoning about the bat’s brain states.
“Consciousness,” he says, “is what makes the mind-body problem really intractable.” The identity theory does not take consciousness into account, and because of this other philosophers have had a problem explaining it, but Nagel sees that consciousness is a very important part of thinking and of the mind. “Conscious experience is a widespread phenomenon,” and yet “the most important and characteristic feature of conscious mental phenomena is very poorly understood.” If consciousness is so widespread and at the same time so poorly understood, then this presents a huge problem for the identity theory. If we do not know what consciousness is, then how can we possibly understand thought or the mind? With this basic premise, Nagel proceeds to try to explain what it is like to be a bat.
He chooses a bat because it is simpler than a human, yet at the same time it is not so primitive that we would have a hard time understanding this somewhat alien creature. If, as the identity theory implies, “one person can know or say of another what the quality of the other’s experience is,” then we could all know what it is like to be, to think like, or to have the experiences of anyone else; but this is obviously not the case. We cannot know what it is like to have another’s experiences, and therefore we cannot understand another’s experiences, because they are not ours. Experiences, then, come from physical acts, and from them a mental state is created, not vice versa. This shows the problem with the identity theory.
Mental states do not equal brain states; if they did, then we could all know what it is like to be, or to have the same experiences as, anything or anyone else. John Searle also attacks the identity theory, but he does so by way of Artificial Intelligence [AI]. His central thesis is that a programmed computer can simulate thinking, but that the programmed computer of ‘strong AI’ (as he calls it) is not a mind, as most advocates of AI like to say it is. Searle begins by asking what it means to really think, because if the identity theory is correct, then a ‘strong AI’ should be able to act just like the human mind and not like a very sophisticated computer program.
“According to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer is a mind.” Searle rejects this claim: a very well programmed computer, created to make decisions and to work through complex problems, is not a mind; it is just a very complexly programmed machine with highly advanced search functions and algorithms. “Now the claims made by strong AI [supporters] are that the programmed computer understands and that the program in some sense explains human understanding.” According to the identity theory, the computer should be able to think, and the experiences that humans have, the computer should be able to understand; but the problem is that a computer cannot possibly understand what it is like to be a human at all. Searle argues by analogy against the whole ‘strong AI’ conundrum. He sets up a thought experiment in which he is locked in a room and given paper with Chinese writing on it. He is then given a paper with instructions, written in English, for manipulating the Chinese symbols. Searle goes on to explain that after a while he becomes very good at working with the Chinese, so that anytime he is given a batch of Chinese symbols (the ‘questions’), he can follow the English rules (the ‘program’) and reply in Chinese with an ‘answer.’ Searle then examines the thought experiment and compares it to a strong AI.
“It seems to me quite obvious in the example that I do not understand a word of the Chinese stories,” he says; “I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, but I still understand nothing.” What he is saying is that although a computer can be given English input and give English output, it still does not understand what it is doing. A computer only manipulates 1s and 0s; it cannot truly understand English input. To handle it, the computer must first break the English down into a form it can manipulate, then follow a set of directions, then give English output. As for the claim that the program explains human understanding, “we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding.” When a living thing thinks or uses its mind there is understanding, but when a computer ‘thinks’ it is just following a set of directions, not learning, thinking, or doing anything other than what it was programmed to do. Searle shows through his thought experiment that ‘strong AI’ cannot exist, because a computer can only follow directions and can never, like a human or any other living thing, take a chance by making a choice or a decision.
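To make the point concrete, the rule-following Searle describes can be pictured as a short program. The sketch below is not Searle’s own formulation; the rule table, the symbols, and the function name are invented purely for illustration. The point is only that the program pairs input symbols with output symbols, and nothing in it represents what any symbol means.

```python
# A minimal, hypothetical sketch of the Chinese Room as a program.
# The "rule book" is just a table pairing input symbols with output symbols;
# the entries below are invented for illustration.

RULE_BOOK = {
    # "If you see these squiggles, hand back those squiggles."
    "你好吗": "我很好",          # a reader sees "How are you?" -> "I am fine"
    "你叫什么名字": "我叫房间",   # "What is your name?" -> "My name is Room"
}


def chinese_room(symbols: str) -> str:
    """Return whatever output the rule book pairs with the input symbols.

    The function never parses, translates, or interprets anything; it only
    matches shapes, just as the person in Searle's room does.
    """
    return RULE_BOOK.get(symbols, "对不起")  # a default squiggle for unknown input


if __name__ == "__main__":
    # To an outside observer the replies look fluent, yet no understanding of
    # Chinese exists anywhere in this program -- only rule-following.
    print(chinese_room("你好吗"))
    print(chinese_room("你叫什么名字"))
```

The design mirrors the thought experiment deliberately: all of the apparent competence lives in the rule table, and none of it in anything resembling comprehension, which is exactly the gap Searle says strong AI cannot close.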
A computer does not explain understanding, because anyone can follow directions to do something and never truly know what they are doing, never grasping the full meaning of the task. Yet if they repeat the task many times and become very good at it, an onlooker who sees them doing it might think that the person understands what he or she is doing, when in truth the task is not understood; it is merely executed, the way a computer executes a program. The mind-body problem, and the identity theory’s answer to it, asks whether thought is a common thing that all can understand or whether, as both Nagel and Searle believe, thought and understanding occur in one’s own mind, so that only the person or thing thinking can know its own thoughts.
Both philosophers agree that the problem with the identity theory is consciousness. Both believe that in order to understand thought, or to have an understanding of anything, one must be conscious. If we try to take a subjective view of what it is like to be a bat, or if we try to act like a computer and run a ‘program,’ we never truly understand the thought of the bat, nor do we become a thinking computer, because we do not possess the bat’s consciousness and because a computer cannot have conscious thought. So it all comes back to the ‘I’ that Descartes first proposed. Nagel and Searle both show that this ‘I’ is consciousness, and that consciousness is not a collective thing that all have access to, but rather something that each of us has and that contains only the experiences we have undergone. The mind and body function together at different levels, and the brain, the large piece of gray mass in our heads, is not the same as our mind, which is where consciousness takes place.
Consciousness is understanding, and if we take it away, then we are just a computer; if we have it but try to lessen it in order to be like an ‘alien’ life form, then we fail, because it is impossible to do so.