Computationalism & The Chinese Room


1. RTM and Computationalism combine to explain cognition. Computationalism holds that mental representations are symbols encoded in a neurological language, analogous to a computer language. RTM plus Computationalism can explain shared content among different attitudes towards the same proposition (an advantage over Functionalism), and it also solves the interface problem. Fodor hopes that in the future there will be a compelling story of how symbols in the brain get their meaning through a natural process. Causal relations are extensional; content is intensional. Fodor has fallen into a trap: there is prima facie reason to think causal relations will never do this work. Trouble for computationalism (the view that cognition is just computation, and that what the brain does is just more complex than what a computer does): Searle argued that cognition cannot be merely computation, and raised the problem of intentionality.
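The shared-content advantage above can be sketched as a data structure: if an attitude is a relation to a mental representation, then believing and desiring the same proposition literally share one representation. This is only an illustrative toy; the `Proposition` and `Attitude` classes are invented for the sketch, not part of any source.

```python
# Toy sketch of RTM's shared-content point: different attitudes
# (believe, desire, ...) are different relations to the SAME symbol.
# All names here are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    content: str  # stands in for a symbol in the "language of thought"

@dataclass(frozen=True)
class Attitude:
    relation: str          # e.g. "believes", "desires"
    proposition: Proposition

p = Proposition("it will rain")
belief = Attitude("believes", p)
desire = Attitude("desires", p)

# Shared content falls out directly: both attitudes point at one symbol,
# even though the attitudes themselves differ.
print(belief.proposition == desire.proposition)  # True
print(belief == desire)                          # False
```

The design choice mirrors the philosophical claim: content identity is identity of the represented proposition, while attitude identity also requires the same relation.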

2. The Turing Test (Alan Turing). "Can machines think?" is too vague, so Turing imagined the "imitation game". Revised question: could a machine (a digital computer with as much storage and speed as you like) convince a human interrogator that it is a human? The interesting question is whether we should regard the ability to pass the Turing Test as a sufficient condition for being a thinking thing. The claim that genuine understanding can be ascribed to a machine is Strong AI. Turing's definition of a computer: a machine intended to carry out any operations that a human computer could, following fixed rules. A digital computer has the following parts: 1) a store (large enough), 2) an executive unit (fast enough), 3) a control (fast enough). Such a machine could simulate human behaviour. Three objections: 1) Gödel's incompleteness theorem, 2) consciousness, 3) Lady Lovelace's objection.
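Turing's three-part picture (store, executive unit, control, following fixed rules) can be sketched as a tiny register machine. The instruction set and program below are invented for illustration, not Turing's own notation; the point is only that fixed rules over a store suffice for computation.

```python
# Minimal sketch of Turing's digital computer: a store (the registers),
# an executive unit (the if/elif body that carries out each fixed rule),
# and a control (the program counter loop). Instruction set is invented.

def run(program, store):
    """Step through the instruction table, applying fixed rules to the store."""
    pc = 0  # the control: which rule to carry out next
    while pc < len(program):
        op, *args = program[pc]
        if op == "INC":                  # increment a register
            store[args[0]] += 1
        elif op == "DEC":                # decrement a register
            store[args[0]] -= 1
        elif op == "JZ":                 # jump to args[1] if register is zero
            if store[args[0]] == 0:
                pc = args[1]
                continue
        pc += 1
    return store

# Add r0 + r1, leaving the result in r0 (r2 is a dummy always-zero register
# used for an unconditional jump back to the start of the loop).
program = [
    ("JZ", "r1", 4),   # if r1 == 0, jump past the end: halt
    ("DEC", "r1"),
    ("INC", "r0"),
    ("JZ", "r2", 0),   # r2 stays 0, so this always jumps back to the top
]
print(run(program, {"r0": 2, "r1": 3, "r2": 0})["r0"])  # 5
```

Nothing in the sketch knows it is "adding"; it only follows rules, which is exactly the property Searle's thought experiment targets in the next node.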

3. The Chinese Room. Searle argued that, nevertheless, such a machine could not possess genuine understanding, and used this thought experiment to show it. His main point is that formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless. A computer that passes the Turing Test just manipulates symbols, but the symbols don't mean anything, so understanding cannot belong to a machine. What would it mean to say that symbols meant something to the computer? Searle thinks the intrinsic meaning brains give to symbols is part of their physico-chemical composition, but doesn't say much else. Three objections: 1) the systems reply, 2) the brain simulator reply, 3) the future-machines possibility. But Searle does not say what the missing element is: it is all very well saying thoughts in the brain have intrinsic meaning, but how? His argument risks circularity: Premise 1: if machines can think, a PICR (person in the Chinese Room) can think. Incorrect. Premise 2: the PICR can't think. Incorrect. Conclusion: machines can't think. Assumption. Is Searle's formulation of the manual coherent at all? He massively simplifies. What about creativity and imagination?
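Searle's "formal symbol manipulation" can itself be made concrete: the room is a lookup from input symbol strings to output symbol strings. The rule book and phrases below are invented for illustration; the sketch shows how a system could return appropriate Chinese answers while containing nothing that understands Chinese.

```python
# Minimal sketch of the Chinese Room: replies are produced purely by
# matching the SHAPE of the input against a rule book. The rule book
# entries are invented for illustration.

RULE_BOOK = {
    "你好吗": "我很好",            # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",     # "What is your name?" -> "My name is Xiaoming"
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever output the rule book pairs with the input.

    Nothing here 'understands' Chinese: the function only compares
    strings, which is Searle's point about formal symbol manipulation
    lacking intentionality.
    """
    return RULE_BOOK.get(input_symbols, "对不起")  # fallback: "sorry"

print(chinese_room("你好吗"))  # 我很好
```

The systems reply, from this angle, claims that understanding belongs to the whole program-plus-rule-book system rather than to any single lookup step; the sketch stays neutral on whether that reply succeeds.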