Week 9-2: The Extended Mind (PHIL 250, W1 2024)
Recap
- Embodiment: brain in a vat or body in a world
- The brainbound view
- Three reasons we can't really imagine a brain in a vat
- Passive externalism: the Twin Earth and arthritis thought experiments
Outline
- The Extended Mind
- Inga and Otto
- Twin Otto
- Objections
- Reconstruction of the argument
- Elephants don't play chess
- The Symbol Grounding View vs. the Physical Grounding View
Is meaning in the head?
Passive Externalism
The contents of thoughts are not (fully) in the head. Putnam's Twin Earth thought experiment: on Earth, "water is H2O"; on Twin Earth, "water is XYZ".
Passive Externalism
Jane suspects she has arthritis in her thigh. She does not know that arthritis is a condition of the joints only. Consider someone with the same internal state and history, except that in her linguistic community "arthritis" refers to a different disease, one which also induces thigh pains.
The Extended Mind
Andy Clark and David Chalmers
The central question
Where does the mind stop and the rest of the world begin? (Clark & Chalmers)
Active externalism: the environment plays an active role in driving cognitive processes.
Extended cognition
Three cases of problem-solving:
- C1: Playing Tetris while mentally rotating the pieces in your head.
- C2: Playing Tetris while physically rotating the pieces with a physical button. There is a speed advantage.
- C3: A neural implant allows for mental rotation at the speed of a computer.
Is there a meaningful difference?
The three cases are similar
Using physical rotation is faster: it takes about 100 ms, plus about 200 ms to select the button, compared to about 1000 ms for purely mental rotation.
Epistemic action: the action is used to learn something.
Pragmatic action: the action is used to get something done.
Epistemic action demands the spreading of epistemic credit.
"If a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process."
Passive externalism
Some philosophers (Burge, Putnam) have argued that the contents of thoughts are not (fully) in the head. (Earth: "water is H2O"; Twin Earth: "water is XYZ".)
Active externalism
Coupled system: the human organism is linked with an external entity in a two-way interaction. The external environment plays an active causal role in the cognitive system.
The alternative: the naked mind
The Naked Mind: cognitive processes require portability; the mind is the package of resources we can bring to bear regardless of environment. Coupled systems are too easily decoupled.
But, say Clark and Chalmers:
- That criterion still lets in counting on one's fingers.
- What if people always carried a pocket calculator?
Evolved coupling
"The extraordinary efficiency of the fish as a swimming device is partly due, it now seems, to an evolved capacity to couple its swimming behaviors to the pools of external kinetic energy found as swirls, eddies and vortices in its watery environment (see Triantafyllou and G. Triantafyllou 1995)."
"Now consider a reliable feature of the human environment, such as the sea of words. This linguistic surround envelopes us from birth. Under such conditions, the plastic human brain will surely come to treat such structures as a reliable resource to be factored into the shaping of on-board cognitive routines."
Cognition to Mind
Imagine Inga: "Inga hears from a friend that there is an exhibition at the Museum of Modern Art, and decides to go see it. She thinks for a moment and recalls that the museum is on 53rd Street, so she walks to 53rd Street and goes into the museum. It seems clear that Inga believes that the museum is on 53rd Street, and that she believed this even before she consulted her memory. It was not previously an occurrent belief, but then neither are most of our beliefs. The belief was sitting somewhere in memory, waiting to be accessed."
Cognition to Mind
Otto: "Otto suffers from Alzheimer's disease, and like many Alzheimer's patients, he relies on information in the environment to help structure his life. Otto carries a notebook around with him everywhere he goes. When he learns new information, he writes it down. When he needs some old information, he looks it up. For Otto, his notebook plays the role usually played by a biological memory. Today, Otto hears about the exhibition at the Museum of Modern Art, and decides to go see it. He consults the notebook, which says that the museum is on 53rd Street, so he walks to 53rd Street and goes into the museum."
Cognition to Mind
Inga's and Otto's beliefs seem to be on a par. We could claim that Otto's belief is merely that certain facts are written in the notebook, not that the contents of the notebook are literally the contents of his beliefs. But this is unnecessarily complicated. In explanation, simplicity is power.
Cognition to Mind
Twin Otto: "…Twin Otto, who is just like Otto except that a while ago he mistakenly wrote in his notebook that the Museum of Modern Art was on 51st Street. Today, Twin Otto is a physical duplicate of Otto from the skin in, but his notebook differs. Consequently, Twin Otto is best characterized as believing that the museum is on 51st Street, where Otto believes it is on 53rd. In these cases, a belief is simply not in the head."
The case of Thad Starner
But is that a belief?
Some objections:
- Reliability: Inga is more reliable!
- Better access: Inga has a higher-bandwidth link to her information!
- Mere perceptual access is not enough: Otto doesn't introspect!
- These aren't occurrent beliefs: Otto's notebook doesn't contain conscious beliefs in the now.
But is that a belief?
- Reliability: Inga is more reliable! Reply: Otto's beliefs are pretty reliable. They're not perfect, but neither are Inga's.
- Better access: Inga has a higher-bandwidth link to her information! Reply: consider Lucy, who has a bad biological memory.
- Mere perceptual access is not enough: Otto doesn't introspect! Reply: why does it matter that Otto has visual phenomenology and Inga doesn't?
- These aren't occurrent beliefs: Otto's notebook doesn't contain conscious beliefs in the now. Reply: if we only let in occurrent beliefs, then Inga doesn't have a lot of beliefs either. If we just mean dispositional beliefs, then Otto's beliefs fit the bill.
How far do we go?
Some hard cases:
- The amnesiac villager from One Hundred Years of Solitude who labels everything.
- If someone tampers with Otto's notebook, does he believe the new information?
- Do I believe the contents of a page before I read it?
- Is my cognitive state somehow spread across the internet?
How far do we go?
In other words, what are our criteria?
- Constancy: the notebook is a constant in Otto's life.
- Direct availability: the information in the notebook is directly available without difficulty.
- Automatic endorsement: upon retrieving the information from the notebook, Otto automatically endorses it. (Maybe?)
- Past endorsement: the information in the notebook has been consciously endorsed at some point in the past.
How far do we go?
Socially extended cognition: in an unusually interdependent couple, the partners might play the same role for each other as the notebook plays for Otto.
How far do we go?
The self: the extended mind implies an extended self. We already think the self outstrips the conscious self.
How far do we go?
- Do you know how to do a math problem if you need pencil and paper to do it?
- "I know how to sing along to the song."
- "That person hit me!" vs. "That person's car hit my car!"
Summary of C&C
Two arguments:
1. The mind's cognitive processes can at least partially consist in processes performed by external devices (the Tetris example).
2. Standing beliefs, unlike occurrent beliefs, can be partially constituted by factors external to the skin.
Reconstruction of C&C's argument
1. What makes some information count as a standing belief is the role it plays, i.e. its function.
2. The information in the notebook functions just like the information constituting an ordinary non-occurrent belief.
3. The information in Otto's notebook counts as standing beliefs. (From 1 and 2.)
4. Otto's standing beliefs are part of his mind.
5. The information in Otto's notebook is part of Otto's mind. (From 3 and 4.)
6. Otto's notebook belongs to the world external to Otto's skin, i.e. the external world.
7. The mind extends into the world. (From 5 and 6.)
The Frame Problem
The Frame Problem
Text from Daniel C. Dennett, "Cognitive Wheels: The Frame Problem of AI" (1987).
"Once upon a time there was a robot, named R1 by its creators. Its only task was to fend for itself. One day its designers arranged for it to learn that its spare battery, its precious energy supply, was locked in a room with a time bomb set to go off soon."
The Frame Problem
"R1 located the room, and the key to the door, and formulated a plan to rescue its battery. There was a wagon in the room, and the battery was on the wagon, and R1 hypothesized that a certain action which it called PULLOUT (Wagon, Room, t) would result in the battery being removed from the room."
The Frame Problem
"Straightaway it acted, and did succeed in getting the battery out of the room before the bomb went off. Unfortunately, however, the bomb was also on the wagon. R1 knew that the bomb was on the wagon in the room, but didn't realize that pulling the wagon would bring the bomb out along with the battery. Poor R1 had missed that obvious implication of its planned act."
The Frame Problem
"Back to the drawing board. 'The solution is obvious,' said the designers. 'Our next robot must be made to recognize not just the intended implications of its acts, but also the implications about their side-effects, by deducing these implications from the descriptions it uses in formulating its plans.'"
Their next model: R1D1.
The Frame Problem
"It had just finished deducing that pulling the wagon out of the room would not change the colour of the room's walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon—when the bomb exploded."
The Frame Problem
"'We must teach it the difference between relevant implications and irrelevant implications,' said the designers, 'and teach it to ignore the irrelevant ones.'"
But their next model, R2D1, does nothing.
The Frame Problem
"'Do something!' they yelled at it. 'I am,' it retorted. 'I'm busily ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and...' the bomb went off."
- R1 calculates the effects of its action. It didn't infer that pulling the wagon would also bring the bomb. It doesn't infer enough!
- R1D1 calculates the side-effects of its action. It started inferring about paint colour and wheel rotations. It infers needlessly!
- R2D1 ignores irrelevant implications. It endlessly adds irrelevant implications to its ignore list.
The frame problem (in cognitive science): how does a cognitive system update its beliefs given some action it performs? Or: how does a cognitive system determine which beliefs are relevant in the context of a particular action?
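To make the moral concrete, here is a toy sketch (not from Dennett or the slides) of a reasoner that, R1D1-style, re-examines every belief it holds after acting. The belief base, the `pull_out` function, and the facts are invented for illustration; the only point is that the work grows with the size of the belief base even though almost none of it is relevant to the action.

```python
# Toy illustration of the frame problem: all names and facts are invented.
beliefs = {
    ("battery", "on", "wagon"),
    ("bomb", "on", "wagon"),
    ("wagon", "in", "room"),
    ("walls", "colour", "blue"),   # irrelevant to the rescue
    ("wheels", "count", "four"),   # irrelevant to the rescue
}

def pull_out(belief_base):
    """Apply PULLOUT(Wagon, Room, t): the wagon, and whatever is on it, leaves the room."""
    updated = set()
    checked = 0
    for fact in belief_base:
        checked += 1  # the naive reasoner inspects *every* belief it holds
        if fact == ("wagon", "in", "room"):
            updated.add(("wagon", "in", "hallway"))
        elif fact[1] == "on" and fact[2] == "wagon":
            updated.add(fact)  # still on the wagon, hence now outside the room too
        else:
            updated.add(fact)  # untouched, but it still had to be considered
    return updated, checked

new_beliefs, work = pull_out(beliefs)
print(f"Beliefs re-examined: {work}")             # 5, though only a few mattered
print(("bomb", "on", "wagon") in new_beliefs)     # True: the bomb came along too
```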
Bottom-up vs. Top-down Cognition
Rodney Brooks
- Australian roboticist at MIT.
- Wanted to build insect robots, cat robots, chimp robots, and finally human robots.
- Went from insects straight to humans.
The two sides
- The symbol grounding view: "top down"
- The physical grounding view: "bottom up"
Symbol Systems
"Cognitive states and processes are constituted by the occurrence, transformation and storage (in the mind/brain) of information-bearing structures (representations) of one kind or another." (SEP)
In other words: cognitive states/processes = information-bearing states/processes.
The Symbol Grounding View
- Sensors provide information to the machine.
- The machine stores the information about the world.
- The machine has goal states and a list of possible behaviors.
- The machine attempts to fulfill its goal state by calculating which behavior would be optimal considering the stored information.
The Symbol Grounding View
- I want to satisfy my hunger.
- I know there's a sandwich in front of me.
- I can eat the sandwich.
- Eating the sandwich will satisfy my hunger.
A toy sketch of this reasoning follows below.
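A minimal sketch of this "top down" reasoning, using the sandwich example above. The belief set, the behavior table, and `choose_behaviour` are invented names for illustration, not anything from the readings: the machine stores symbolic information, has a goal state, and computes which stored behavior achieves it.

```python
# Invented illustration of a stored-symbols, goal-driven agent.
beliefs = {"sandwich_in_front_of_me"}   # stored information about the world
goal = "hunger_satisfied"               # the goal state

# Possible behaviors, with symbolic preconditions ("needs") and effects ("gives").
behaviours = {
    "eat_sandwich": {"needs": {"sandwich_in_front_of_me"}, "gives": {"hunger_satisfied"}},
    "walk_away": {"needs": set(), "gives": set()},
}

def choose_behaviour(beliefs, goal, behaviours):
    """Pick a behavior whose preconditions are believed and whose effects achieve the goal."""
    for name, spec in behaviours.items():
        if spec["needs"] <= beliefs and goal in spec["gives"]:
            return name
    return None

print(choose_behaviour(beliefs, goal, behaviours))  # -> eat_sandwich
```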
Representational Theory of Mind
[Diagram with labels: object, stimulus, sense organs, representation, behaviour]
The Physical Grounding View
"This hypothesis states that to build a system that is intelligent it is necessary to have its representations grounded in the physical world. Our experience with this approach is that once this commitment is made, the need for traditional symbolic representations soon fades entirely. The key observation is that the world is its own best model. It is always exactly up to date. It always contains every detail there is to be known. The trick is to sense it appropriately and often enough." (Brooks)
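For contrast with the goal-and-model sketch above, here is a toy sketch in the layered, reactive style Brooks advocates (the subsumption architecture discussed under Criticisms below). Nothing is modelled or stored; each layer maps current sensor readings straight to an action. The layer names, sensor keys, and the fixed priority ordering are simplifications invented for illustration; Brooks' actual architecture wires layers together with suppression and inhibition links.

```python
# Invented illustration of layered, reactive ("bottom up") control.
def avoid_obstacles(sensors):
    """Safety layer: back away from anything too close."""
    return "reverse" if sensors["obstacle_distance"] < 0.2 else None

def seek_light(sensors):
    """Task layer: head for bright areas when one is detected."""
    return "turn_toward_light" if sensors["light_level"] > 0.8 else None

def wander(sensors):
    """Default layer: otherwise just keep moving."""
    return "drive_forward"

LAYERS = [avoid_obstacles, seek_light, wander]  # checked in priority order

def control_step(sensors):
    """One control cycle: no stored world model, just sense and react."""
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(control_step({"obstacle_distance": 0.1, "light_level": 0.9}))  # reverse
print(control_step({"obstacle_distance": 1.0, "light_level": 0.9}))  # turn_toward_light
print(control_step({"obstacle_distance": 1.0, "light_level": 0.2}))  # drive_forward
```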
The Physical Grounding View
The mind is not disembodied. The body and the world are part of cognition.
Braitenberg vehicles
- Two sensors (light detectors)
- Two actuators (wheel motors)
Braitenberg vehicles
Example: more light, more movement; less light, less movement. What happens?
Braitenberg vehicles
Example: more light, more movement; less light, less movement. What happens? The robot scurries away from the light and tries to find a dark spot.
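A minimal simulation sketch of the vehicle just described: each wheel's speed is proportional to the light at the sensor on its own side, so the wheel nearer the light spins faster, the vehicle turns away, and it speeds off toward darkness. The light model, geometry, and constants are all invented for illustration.

```python
import math

def light_at(x, y):
    """Invented light model: one source at the origin, intensity falls off with distance."""
    return 1.0 / (1.0 + math.hypot(x, y))

x, y, heading = 1.0, 0.5, 0.0   # position and heading (radians)

for _ in range(200):
    # Two light sensors, offset to the left and right of the heading.
    left = light_at(x + 0.1 * math.cos(heading + 0.5), y + 0.1 * math.sin(heading + 0.5))
    right = light_at(x + 0.1 * math.cos(heading - 0.5), y + 0.1 * math.sin(heading - 0.5))

    # Same-side wiring: more light on a sensor means more speed on that side's wheel.
    speed = (left + right) / 2.0
    heading += right - left          # the faster wheel swings the vehicle away from the light
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)

print(f"Distance from the light after 200 steps: {math.hypot(x, y):.1f}")
```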
Evolution and intelligence
Humans are pretty new to the scene in evolutionary time, but animals had been exhibiting intelligent behavior for millions of years before we came around. So the problem of intelligence is probably pretty simple; we're overthinking things.
Criticisms
- Generality: can subsumption architecture really produce general intelligence? Brooks: yeah, you can look for weird exceptions, but this is counterproductive; the puzzling situations aren't that likely. Why start from weird cases?
- "But it can't do X!": there are lots of things this approach is not good for, so we should resort to the symbol system hypothesis. Brooks: it's unfair to claim that an elephant has no intelligence worth studying just because it does not play chess. Eventually we'll solve the whole problem, but both my opponents and I only have a promissory note.
Representational Theory of Mind
This view of the mind is the basis for computationalism. Computationalism: the mind is a computer. The mind stores and manipulates symbols about the world; cognition is symbol manipulation.
Successful symbol systems
Action and Perception
The Symbol Grounding Problem
How are symbols connected to the things they refer to? This is the problem of intentionality: how are representations about anything?
"How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads?" (S. Harnad, "The Symbol Grounding Problem," 1990)
The Chinese Room
Brie Gertler
A philosopher of mind working on introspection, self-knowledge, and consciousness. A mind-body dualist.
First consequence: limits on introspection
Introspection is a method that the subject can use, but others cannot, to determine his or her own beliefs. Otto can't introspect! Other people can look at his notebook. Introspectibility is crucial to the basic concept of the mind, so the lack of introspectibility is a problem for C&C's view.
Second consequence: proliferation of actions
Consider: Otto uses an external computing device to store information:
- his desire to make banana bread,
- his belief that banana bread requires bananas,
- his belief that the corner store has bananas.
Otto connects the device to a humanoid robot.
Second consequence: proliferation of actions
So far, it is just as in the original Otto case: Otto's robot has information that counts as Otto's standing beliefs. Now suppose that Otto sleeps all Monday and the robot goes out to fulfill his desire by buying the ingredients. He sleeps most of Tuesday too, and wakes up to find that the robot has also fulfilled his desire to make the bread.
Second consequence: proliferation of actions
Did Otto make the bread? C&C would have to say yes: the case fulfills the criteria they lay out. The information is:
- consistently available,
- readily accessible,
- automatically endorsed,
- consciously endorsed in the past.
Second consequence: proliferation of actions
Now suppose he makes an enormous fleet of robots. That would mean Otto is doing a huge number of things all over the world! We could require that the organic body be part of any genuine action, but this conflicts with C&C's claim that there is nothing special about the internal causal relations of the body. Actions so distant from the agent threaten the very distinction between the mental and the non-mental. So something else must distinguish these two. But what?
AI Design Project
Questions to ask:
- Can the AI be trained to do that?
- Is there an existing dataset?
- What counts as a success or a failure?
- Can the AI do it better than a human?
Be careful about letting AI decide what is "best".
Normative claim: a sentence about value; how the world ought to be (or ought not to be).
Descriptive claim: a sentence about what is; a description of how the world is.
Examples:
1. Rich people have more social status than poor people.
2. That song is beautiful.
3. Most people think that song is beautiful.
4. Murder should be punished.