Philosophy of Psychology

Topics in Philosophy of Psychology with Professor Frances Egan

Syllabus

Course Description

The topic of the seminar is psychological explanation. We will focus on two issues: (1) The role of representation in psychological explanation, considering recent challenges – by (among others) proponents of extended, embodied, and enactive cognition – to the traditional view that psychological processes are to be understood as operations on symbol structures. We will consider the requirements for a theoretical construct to count as a representation, and whether there are any distinctively mental representations. (2) The relation of psychology to neuroscience, considering the recent challenge – by proponents of the so-called ‘new mechanism’ view in philosophy of science – to the view that psychological explanation of human cognitive capacities (in particular, computational explanations of cognitive capacities) can be constructed and confirmed independently of an account of how these capacities are realized in the brain. The new mechanists argue that genuine explanations of cognition are mechanistic explanations – they must bear a transparent relationship to accounts of realizing neural mechanisms.

Readings

The role of representation in psychological explanation

  • The traditional view – strong representationalism
    • Jerry Fodor. Psychosemantics, ch.1, Appendix
    • Zenon Pylyshyn. “The Explanatory Role of Representation”
  • Implications and challenges
    • David Kirsh. “When is Information Explicitly Represented?”
    • William Ramsey. Representation Reconsidered (ch.3 and ch.4)
  • The challenge from extended, embodied, and enactive cognition
    • Rodney Brooks. “Intelligence without Representation”
    • John Haugeland. “Mind Embodied and Embedded”
    • Clark & Chalmers. “The Extended Mind”
    • Adams & Aizawa. “Defending the Bounds of Cognition”
    • Robert Rupert. “Challenges to the Hypothesis of Extended Cognition”
    • Lawrence Shapiro. Embodied Cognition (excerpt)
    • Andy Clark. “An Embodied Cognitive Science?”
    • Clark & Toribio. “Doing without Representing?”
    • Anthony Chemero. “Anti-representationalism and the Dynamical Stance”
    • Shaun Gallagher. “Are Minimal Representations Still Representations?”
    • Daniel Hutto. “Radically Enactive Cognition in our Grasp”
    • Mark Sprevak. “Fictionalism about Neural Representations”

The autonomy of psychology

  • The traditional view
    • Jerry Fodor. “Special Sciences”
    • Robert Cummins. “Functional Analysis”
    • John Haugeland. “The Nature and Plausibility of Cognitivism”
  • The ‘new mechanist’ challenge
    • David Michael Kaplan. “Explanation and Description in Computational Neuroscience”
    • Piccinini & Craver. “Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches”
    • Daniel Weiskopf. “Models and Mechanisms in Psychological Explanation”
    • Frances Egan. “Function-Theoretic Explanation and Neural Mechanisms”

Course Requirements

Students taking the course for credit will be expected to write a paper due at the end of the semester. Papers must be on topics related to the course materials; topics should be cleared with me beforehand. Graduate students enrolled in the course will be expected to lead the discussion of assigned materials for one session.

Attendance

Attendance at the seminar is mandatory. You should let me know if you have to miss a class.

Course Objectives

Like any advanced philosophy graduate seminar this seminar aims to give students an opportunity to explore a set of fundamental issues in depth give students opportunities to further develop and hone their analytical skills give students a chance to write and revise a major research paper that will hopefully be interesting, original, and important provide ample opportunities to further develop one’s oral skills during discussions

January 28th, 2013 Reading

Psychosemantics Jerry Fodor

Chapter 1 The Persistence of Attitudes

Introduction
  • Quotes A Midsummer Night’s Dream for an example of implicit, nondemonstrative, theoretical evidence.
  • Here is how the inference must have gone:
    • Hermia had reason to believe herself beloved of Lysander.
      • Because he told her.
    • If Lysander loves Hermia, then Lysander wishes Hermia well.
    • If Lysander wishes Hermia well, then Lysander does not voluntarily desert Hermia in the night in the woods.
    • But Hermia was deserted by Lysander.
    • Therefore, not voluntarily.
    • Therefore, it is plausible Lysander has come to harm, and plausibly by Demetrius’s hands, for Demetrius is Lysander’s rival for the love of Hermia.
  • Hermia believes (correctly) that: If x wants that P, and x believes that not-P unless Q, and x believes that x can bring about Q, then (ceteris paribus) x tries to being it about that Q.
  • But Hermia has it wrong about Demetrius.
    • The intricate theory that she relies on to make sense of her peers, that we rely on to make sense of Hermia, and what Shakespeare relies on to predict and manipulat our sympathies.
  • This theory, Fodor wants to emphasize:
    1. How often it goes right,
    2. How deep it is,
    3. How much we do depend on it.
How often it works
  • Applications of commonsense mediate our relations with one another, and when its predictions fail these relations break down.
    • Failures make for great theater.
    • Succeses are practically invisible and ubiquitous.
  • Commonsense psychology is like those mythical Rolls Royce cars whose engines are seals when they leave the factory.
    • Someone I don’t know calls me and asks me to lecture in Arizone on Tuesday.
    • Fodor responds, “Yes, thank, I’ll be at your airport on the 3 p.m. flight.”
    • That’s all that happens.
      • The theory is used to gap the bridge betwen utterances and actions.
      • If Fodor doesn’t show it, the theory makes predicts about the likelihood of why.
  • The point is that the theory from which we get this extrodinary predictive power is just good old commonsense belief/desire psychology.
    • It tells us how to infer people’s intentions from the sounds they make.
    • It tells us how to infer people’s behavior from their intentions.
    • And all of this works on your friends and your spouses and absolute strangers.
  • But what about all those ceteris paribuses?
    • Philosophers love the “false or vacuous” dilemma.
  • Consider the defeasibility of “if someone utters the form of words, ‘I’ll be at your airport on the 3 p.m. flight,’ then he intends to be at your aiport on the 3 p.m. flight.”
    • This generalization does not hold if:
      • The speaker is lying,
      • Monoligual speaker of Urdu who uttered by accident,
      • If the speaker is talking in his sleep,
      • Or … whatever.
    • Perhaps all that this means with a ceteris paribus clause is, if it doesn’t happen, then it still true because ceteris paribus.
  • A lot of philosophers are moved by this.
    • But the predictions often come out true!
      • So how could it be empty?
  • The reliance on uncashed ceteris paribus clauses is a general property of the explicit generalizations in all the special sciences.
    • For example, “A meandering river erodes its outside bank”
      • False or vacuous a philosopher may complain.
    • A ceteris paribus clause here cause “If p then q unless not p then q.”
  • Something must have gone wrong, surely this explanation is meaningful despite the big ceteris paribus clause.
    • Surely these statements are stronger than “P in any world where not not-P.”
  • There is a face similarity between implicit generalizations and commonsense psychology and explicity generalization in special sciences.
    • We can get rid of the ceteris paribus clauses by actually enumerating the conditions.
  • By this criterion, the only real science is basic physics.
    • We can’t enumarate all the conditions on geology by sticking to the vocabulary of geology.
    • The events in the ceteris paribus clause about the rivers aren’t geological events, it’s outside the domain. (Also hard.)
  • Exceptions to generalizations of a special science are typically inexplicable from the point of view of that science.
    • This is what makes it special.
    • It’s possible to enumerate them in the vocaubalry of another science.
      • You go “down” one or more levels and use the vocabulary of the more “basic” science.
  • If the world is describable at all in a closed causal system, it is with vocabulary of the most basic science.
    • But the psychologist and the geologist needn’t worry about this.
    • If you want to know where Fodor will be next Thursday, mechancins is no use to you at all.
The depth of the theory
  • It’s tempting to think that commonsense psychology is a toolkit of truisms one learns on Granny’s knee.
    • Like,
      • The burnt child fears the fire.
      • Money cannot buy happiness.
      • Reinforcment affect response rate.
      • A way to a man’s heart is through his stomach.
    • None of these are worth save, but commonsense psychology is not like this.
    • There are two parts to this:
      1. The theory’s underlying generalizations are defined over unobservables.
      2. They lead to its predictions by iterating and interacting rather than being directly instantiated.
  • Behavior is casued by mental states and this causation is intricate.
    • Roughly, If x is y’s rival, the x prefers y’s discomfiture, all else being equal.
      • Doesn’t mention behavior.
      • It leads to behvaioral predictions.
  • It is a deep fact about the world that the most powerful etiological generalizations hold of unobservable causes.
    • Meterology is not deep because it’s genralization are of the form, “Red at night, sailors delight.”
    • Psychology is a deep theory, because we do not have access to mental states.
      • We’re born mentalisms and realists, and we stay that way until common sense is driven out by bad philosophy.
Its indispensability
  • We have no alternative to the vocabulary of commonsense psychological explanation.
    • There is no other way of describing our behaviors and their causes if we want our behaviors and their causes to be subsumed by any counterfactual-supporting generalizations that we know about.
  • Without commonsense psychological generalizations, we cannot even describe the utterances as forms of words.
    • Word is a psychological category.
    • There are non acoustic properties that all and only fully intelligble tokens of the same word type share.
      • Which is why our best technology cannot build a typewriter you can dictate to.
  • We have no vocabulary for describing event types with these conditions:
    1. My behvior in uttering, “I’ll be there on Thursday” counts as an event of type $T_i$.
    2. My arriving there on Thursday counts as an event of type $T_j$.
    3. ‘Events of type $T_j$ are consequent upon events of type $T_i$’ is even roughly true and counterfactual supporting.
    4. Categories $T_i$ and $T_j$ are other than irreducibly psychological.
  • Physics describes organisms qua motions, but not qua organismic.
    • It dissolves the behaver and the behavior into atoms in the void.
  • Even if psychology was dispensible in principle, it’s not argument for dispensing with it in practice.
The essence of the attitudes

How do we tell whether a psychology is a belief/desire psychology? How, in general, do we know if propositional attitudes are among the entities that the ontology of a theory acknowledges? … How do you distinguish elimination from reduction and reconstruction?

  • Fodor will view psyhology as being commensensical about the attitudes just in case it postulates states satisfying:
    1. Thy are semnatically evaluable.
    2. The have causal powers.
    3. The implicit generalizations of commonsense belief/desire psychology are largely true of them.

Squabbling about intuitions strikes me as vulgar.

RTM
  • Thesis: We have no reason to doubt – indeed, we have substantial reason to believe – that it is possible to have a scientific psychology that vindicates commonsense belief/desire explanation.
  • Fodor will argue that the sorts of objections philosophers have recently raised against belief/desire explanation are not conclusive against the best vindicating theory currently available.
  • At the heart of RTM is the postulation of a LOT, an infinite set of ‘mental representations’ which function both as immediate objects of propositional attitudes and as the domains of mental processes.
  • The two claims he’s making:
    1. The nature of propositional attitudes:
      • For any organism O, and any attitude A towards the proposition P, there is a (computational/functional) relation R and a mental representation MP such that:
        • MP means that P.
        • O has A just in case O bears R to MP.
    2. The nature of mental processes:
      • Mental processes are causal sequences of tokenings of mental representations.
  • A train of thought is a causal sequence of tokenings of mental representations which express the propositions that are the objects of the thoughts.
  • RTM underlies practicaly all current psychological research on mentation, and the best science is ipso fact the best estimate of what there is and what it’s made of.
    • Philosophers do not think this convincing, and Fodor is blushing for them.

There is a stricking parallelism between the causal relations among mental states, on the one hand, and the semantic relations that hold among propositional objects, on the other.

  • Trains of thought are largely truth preserving1.
  • The “trick” is to combine the postulation of mental representations with the “computer metaphor.”
    • Computers show us how to connect semantics with causal properties for symbols.
      • If having propositional attitude involves tokening a symbol, then we can get some leverage on connection semantical properties with causal ones for thoughts.

January 28th, 2014 Seminar

On representation

  • Advanced “beings” like us have evolved with our environment, this is with representation.
    • These somehow causally affect behavior.
    • This is an “inference to the best explanation.”
  • To the degree which organisms can react to a changing circumstance causes the representation to change, and the changed representations change behavior.
  • Some people think that dynamic behavior requires representationalism.
Representation
A capacity of an organism.
Representations
Concrete objects in the brain.
Some properties: – Physically realized. – They have causal powers, can have straightforward causal roles.

  • They have content, they are meaningful.
    • This requires a distinction between a vehicle of representation, they thing physically realized.
  • Vision resonates with the environment like a tuning fork. Gibson’s theory of perception.
  • Our brains or us might have the structure for representations, but we can’t “poke at” any representation.
  • One way that representation could get cached out is dispositionally.
    • Some theories might posit capacities that are best characterized as representational.
    • There might not be an isolable state at the computational level that’s like, “There’s the representation!”
      • A real property of an organism, but diffuse.
      • Like mass.
  • Many theories posit something analogous to “sentences in the head.”
    • But there are other models of cognition.
      • Connectionist
      • Dynamical

Vehicle-side

Symbols
Little hooks on which you can hook meaning. Ripe for having content attributed to them, representation paradigmatically.
Connectionists models
Don’t posit anything like “sentences in the head.” The carriers of meaning are networks of nodes that are connected in various ways. Do get interpreted, but do individual nodes get interpreted? That node refers to red in a network.
Typically are not construed as a strong representational view.
Dynamical models
Model the behavior of the system over time.
  • What are the vehicles of meaning in non-symbol cognitive models?
    • Is there anything in here that could be plausible construed as a representation?
  • In classical models, what the vehicles of representation are, symbols are hooks on which to hang interpretation. Waving red flags, interpret me.

Content-side

  • Content has satisfaction conditions.
  • The problem of intentionality: How do mental states get their meaning? What is it about mental representations that given them their meaning or content.
    • Public representation
      • Public language content get their meaning by convention.
      • Icons and images get meaning by their resemblance and convention.
    • Private representation
      • Some relation that isn’t already presuming meaning or intentionality.
      • The “meaning in the head” or “state of affairs” which it representations, the naturalism semantics project.
      • “Complex causal relations” represented in my head.
      • Teleological function of the brain, dealing with objects and properties I’m used, and my ancestors are, used to.
  • No one has be successful in naturalistic conditions on vehicle in the head having determinate content.
  • One constraint on all of these account is that mental states can not only represent but also misrepresent.
    • The contrast is with Grice’s notion of natural meaning.
      • The presence of smoke cannot misrepresent the presence of fire unless an agent misinterprets the smoke or something.

Strong representationalism

Strong representationalism
Posits structure entities that have content.
Mental processes are defined over the structures.
  • An important issue: is there anything in non-representational views that could count as a vehicle.
Extended cognition
The mind extends into the environment in some sense.
The extended thesis is not centrally a critique of representationalism, but it does have theses that bring to bear on representationalism.
Embodied cognition
Human cognition is necessarily embedded, the mind-brain is the executive that controls the body.
Dynamical cognition
Cognition consists in a dynamical interaction between a subject and an environment, and it’s wrong to characterize the interaction as contentful.

Jerry Fodor’s “Psychosemantics”

  • He gives two arguments for the view known as “representational theory of mind.”
    • Folk psychology is indispendable.
      • The only way to vindicate it is if something like RTM is true.
    • Eliminativism would be an absolute disaster.
  • The second is the striking parrallelism between trains of thought and inferences.
  • The argument focuses on prediction, the only way to get around in the world is to ask someone something and predict that, in general, you do what you say and similar elsewhere.
  • Intentional realism is the only appropriate attitude to take towards folk psychology.
Intentional realism
1. They are semantically evaluable. 2. They have causal powers. 3. The implicit generalizations of commonsense belief/desire psychology are largely true of them.
RTM
The best hope for realizing intentional realism.
  1. For any organism O, and any attitude A toward the proposition P, there is a (‘computational’/’functional’) relation R and amental representation MP such that MP means that PI and O has A iff O bears R to MP.
  2. Mental processes are causal sequences of tokenings of mental representations.
There are representation in a full-blooded sense.
  • If the LOT is true for propoisitonal attitudes, then the RTM is true because it’s the weaker view.
    • There’s a big project on mental logic.
  • Oftentimes, people’s reason doesn’t follow logic.
    • People will more often affirm the consequent, more common in mental reasoning than modus tollens.
  • “I believe there is beer in the refridgerator.”
    • What was causally efficaious was that I wanted beer and believed there was beer, so I got up to get beer.
    • This is the “belief box.”2
  • There has to be a pretty good argument for the moving from property of one structure to the structure of the other. Specifically, they share semantics, but we need a good argument about syntax. Moving from the logical scheme to the empirical scheme.
    • An argument for this is the tempurature case.
  • We’ll pick up next time with the Dennett counter-example.

February 4th, 2014 Seminar

On Fodor’s Psychosemantics, Chapter 1 “The Persistence of Attitudes”

  • Fodor is looking for a theory for states that interprets attitudes as states with propositions, beliefs and desires at least.
    • Chomsky: “We cognize the grammar of English.”
    • “We believe the grammar of our language.”
  • This is the “last gasp” of narrow content for Fodor.
  • On Dennet’s counterexample.
    • There is a distinction between core and derivative.
      • Those that explicitly represented and those that are implicitly represented.
      • Our heads just aren’t big enough.

February 11th, 2014 Reading

Zenon Pylyshyn. “The Explanatory Role of Representation”

Introduction

  • The hardest puzzle is consciousness.
    • Second hardest is meaning, which this work explains.
    • Does not solve the puzzle of meaning.
    • The author aims to describe how the idea of the semantic content of representations is implicitly viewed within the field of cognitive science, and discuss why this view is justifiable.
Representations
Generalizations stated over the contents of representations are not mere functional generalization in the usual sense.
Function generalizations
A theory that does not refer to physical properties of the particular system in question, only how it operates.
  • There will be a representational level and a symbol-processing level.

The Appeal to Representations

  • Law-like generalization and explanations can differ in several ways, consider:
    1. A certain object accelerated at a meters per second per second because a steady force was applied that was equal to ma.
    2. A certain neuron fired because a potential of v millivolts was applied along two of its sentries and that it had been inactive during the previous t milliseconds
    3. A bit pattern of certain computer register came to have a particular configuration because of the particular contents present in the instruction register and the program counter, and because the system is wired according to a certain transfer protocol.
    4. The computer printed numbers 2, 4, 6, because it started with the number 2 and added 2 repeatedly or because it applied the successor function repeatedly and double the value before printing.
    5. The pedestrian dialed 911 because he believed it to be the emergency number and had recognized the urgent need for assistance.
  • Accounts (1), (2), and (3), all the terms refer to properties of objects within the closed system.3
  • Accounts (4) and (6) are different in this important respect: Both make substantive reference to entities or properties that are not an intrinsic part of their state description, that is numbers and need for assistance.

How is it possible for properties of the world to determine behavior when the properties are not causally related in the required sense to to the functional states of the system? — Brentano’s problem

  • The notion of representation is necessary only in the context of explanation.
  • Behavior is being caused by certain states of one’s brain, and so mental states themselves are related to agent’s actions.
  • Brain states are not causally connected in appropriate ways to walking or mountains.
    • The relationship is one of content, a semantic, not causal, relationship.
      • The notion of content is roughly that of what the states are about.
    • Brain states cause certain movements. If these movements are view as members of equivalence classes described as “writing a sentence about walking in the Santa Cruz mountains” the brains states must be treated as embodying representations of these codes by certain rules.
  • Contrast the brain to a watch — a watch’s “behavior” is considered coextensive with the set of movements corresponding to the physical description of behavior.
    • Two ways of explaining human behavior capture extremely different generalizations.

Representational and Functional Levels

  • This shows that a functional description of mental processes is not enough, there must also be content.

If the content makes a difference to behavior, is it not also a functional difference?

  • To be in a certain representational state is to have a certain symbolic expression in some part of memory.
    • The expression encodes the semantic interpretation and the combinatorial structure of the expression encodes the relation among the contents of the subexpressions, much as in the combinatorial system of predicate calculus.
  • The reason there must be symbolic codes is that they can enter in causal relations.4
  • If there is a unique symbolic expression corresponding to each content, one might expect functional states and representational states to once again be one-to-one relation. Not so, because:
    1. There may be codes with the same semantic content which are functionally but not semantically distinguishable.
    2. Merely possessing a certain symbolic expression that encodes semantic content is insufficient to produce behavior.
      • You need to interpret the symbols.
Semantic-level generalization
Generalizations expressible in terms of the semantic content of representations.
Newell calls this “knowledge-level.”
Symbol-level generalizations
Generalizations expressible in terms of functional properties of the functional architecture.

Representational Content as Defining a Level of Description

  • We abandoned a biological vocabulary because of arbitrarily large disjunctions corresponding to processes like “thinking.”
    • Functional generalizations cannot be captured in a finite neurophysiological description.
    • There is a vocabulary in between “n fired at t with v” and “He called 911 because he believed he was in an emergency.”
      • And it’s functional.
      • And it’s in semantic terms.
  • “The principal of rationality is a major reason for our belief that a purely functional account will fail to capture certain generalizations, hence, that a distinct new level is required.”
Levels and Constraints on Realizability
  • For a description, that description might be compatible with other levels.
    • Newtons laws “are compatible with” biological taxonomy.

Kirsh, “When is Information Explicitly Represented?”

Introduction

  • Computation is the process of making explicit what was implicit.
  • We know what is explicit, the problem is what information is implicit.
  • Suppose: To understand a computation it is necessary to track the trajectory of informational state the computing system follows as it winds its way to an explicit answer.
  • Different kinds of computational mechanisms:
    • PDP systems
    • Massive cellular automata
    • Analog relaxation systems
  • How far do you have to remove “explicitness” until it’s implicit? That is, intuitively.
  • Suppose a system has highly ambigious encodings and must deliberate to choose the right interpretation.

Computer and cognitive scientists talk as if they have a precise idea of these concepts (of implicit and explicit,) but that do not.

  • Intent: Articulate a particular conception of explicit information that at least may serve as a stable base for subsequent inquiries into the meaning of implicit information.
    1. Will show why notions of explicit and implicit need elucidation, that they are not consistent.
    2. Will explore efforts to identify explicit information with syntatically and semantically well-defined representations.
    3. Will mention some implications of the view.

Our intuitions about explictness are inconsistent

  • Perhaps the intuition is that “if it’s there for all to see”, then it’s explicit.
    • Four tempting properties of explicitness:
      1. Locality: They are visible structures with a definite defintion.
      2. Movability: No matter where in a book a word is found or where in the library a book is stored, that word retains its meaning and retains its explicitness.
      3. Meaning: Words have a definite information content.
      4. Availability: The information content of a word is directly available to the system reading it, no elaborotate transition or interpretation process is necessary to extract the information it represents.

The trouble with using immediate grapability, or better immediate readability as the mark of explicitness is that we run into problems as soon as we ask whether to count accessing time as part of the reading process.

  • Are elements in large sets immediate readable?
    • This takes computational energy but is somehow constant/
  • From a process perspective information is explicit only when it is ready to be used
    • No computation necessary.
  • Explicitness is tied to usability.

Our intuitions about implcitness are inconsistent

  • Our concept of implicit runs into problems when we try to pin down what “in principle, recoverable” means.
    • Is it how much effort is required?
    • Our all lemmas, “nearby” or “far”, equally as implicit?
  • To make it more natural, perhaps the conception of that which is not explicit but which could be made so.

Why it matters whether our intuitions are unsetteled

Towards a theory of explicitness

Four condition on explicitness
Locality
They states, structures, or processes – henceforth symbols – which explcitly encode information must be easily seperable from each other.
Movability
An ambigious language may explicitly encode information only if it is trivial to indetify the syntatic and semantic indentity of the symbol.
Immediately readable
Symbols explicitly encode information if they are either: – Readable in constant time. – Sufficiently small to fall in the attention span of an operator.
Meaning
The information which a symbol explictly encodes is given by the set of associated states, structures, or processes it activates in constant time.

Implications

  • One of the following is false:
    • The LOT is the best level of analysis to represent perspicuously the episodes of in our mental life.
    • The events in our mental life are identical with operations on explicit representations.
    • The LOT perspicously describes human information processing.

February 11th, 2013 Seminar

Eliminativism
Unless there’s a transparent relationship between folk psychology and scientific psychology, then the attitude to hold towards folk psychology is elimination.
  • Ramsey still agrees with Fodor in eliminativism.
    • Stich, in the 80s and 90s, was an eliminativist.
    • Especially, “The Case Against Belief” didn’t think there’d be a transparent relationship.
    • Egan thinks there both wrong, that is the eliminativists.
      • The paper on this is “Folk Psychology and Cognitive Architechure”

On Pylyshyn

  • He wants to justify and vindicate justification with regards to representation.
    • Why content?
  • The argument for a certain way of construing vehicles pervades the whole book.
    • We cannot eliminate appeal to content.
  • Stich argued that if we really are symbol systems, like Fodor thinks, then why do we need content?
    • Pylyshyn is trying to justify the appeal to content.

The First Puzzle

  • Brain states are not causally connected to …
    • “I believe there’s beer in the refrigerator.”
    • There’s not causal connection between the object and my brain states.
      • There needn’t be a causal connection.
    • I’m thinking about Paris and that might cause me to book a vacation to Paris.
  • The solution is how mental states fit into the physical world.

Why does he think we need content if it is the physical stuff that is realized?

  • Pylyshyn uses the capturing language as a way of “getting the generalization.”
  • The difference between us and watches.
    • We get convinience from “thermometer gets the tempurature.”
    • We really need do need generalizations to explain our behavior, it’s only convinient for everything else.

The Scheme

$$ \lbrace I_1 … I_n \rbrace $$ $$ f_I $$ $$ \lbrace S_1 … S_n \rbrace $$ $$ f_R $$ $$ \lbrace P_1 … P_n \rbrace $$

  • The three levels:
    1. Interpretations level
    2. Interpretation function
    3. Syntatitic level
    4. Realization function
      • Maps physical state to symbol (numeral) 2 or 3 …
      • Nothing stops it from a bizzare representation.
      • Indepedant from meaning
    5. Physical states
      • These have many causal functions.
      • These can bring about states, but those states can bring about cognitively important states.
  • Fodor think that we need an LOT, so he wants to identify a level where structures that function as words and they have constituency relationships.

Niko: If you want to be representational about this, you can say that the mapping is just useful to us …

 2   3     5
 ^   ^     ^
 |   |     |
 |   |     |
S1  S2 -> S3
 ^   ^     ^
 |   |     |
 |   |     |
P1  P2 -> P3

An "adding machine" on this model.

What’s the role of content?

  • If the Is and the Ps are in a one-one relationship, then you get not explanatory leverage.
    • Why can’t we just explain the systems behvior in terms of the causally efficacious states.

What is implicit in this is not that there is a one-to-one function, but that there are arbitrarily many disjunctions

  • The rationality principle says that these transition states need to be truth-preserving.
  • There is an implicit assumption about explaining cognition that we are doing something inherently rational already.
  • Here’s something that is paradigm of cognitive but not rational:
    • If we can succesfully characterize someone in this way, if they have the p and the p to q
    • These models are used to describe data that is. “Cognitive function.”
  • This kind of explantory projects is going to be applied to sub-cognitive functions and phenomena.
    • Not just the personal level of behavior.
    • Open up the principle of rationality
  • Fodor in the appendix appeals to theory or processing, these theories are commited to mental representations, so this provides support for the RTM and by commitment the LOT.
    • A lot of these examples, the successes do not fit the model the content of attitudes, beliefs, desires.

Niko: Is the argument that you want to get rid of the interpretation level but that’s silly because that’s where we began to explain.

  • No, that’s Prof. Egan’s argument.
    • The argument here is that there’s going to be a series of unprincipled generalizations without principles of rationality.
    • These are understood as generalizations.
      • Unless we can understand these truth-preserving.

Ben: Couldn’t you weaken it, the more plausible claim is if we lose the generalizations, we lose these great explanations.

  • Derivability isn’t a basic notion.
    • We can reconstruct it to be truth preserving.
Derivability
What goes on in the middle level, the higher-level causation.

Egan’s View

  • Unless what the system is doing is recognizabily cognitive, this isn’t worth doing.
    • It can add, it can compute, it can speak, it can see, it can recover the 3D structure of the scene.
      • To do this, we need content.
    • Unless you can attribute content, you can talk about causal stories, you cannot explain Mary’s behavior.
      • The folk psychological explaination makes it rational.
  • This point will come up in Ramsey’s IO representations.
    • Unless we can construe the inputs and outputs of the system, as cognitive function, then the project doesn’t make any sense.
    • We’re trying to explain a cognitive competence. We at least need to interpret the inputs and outputs of the system as contentful.

On the third argument, pg. 27

  • My brain states are not causally connected to Paris, or walking, or mountains.
  • The relationship must be in terms in content.

On Kirsch

  • The main point is that intuitions about explicitness and implicitness are very broken, but that does not stop philosophers and others from using these notions fairly heavily.
    • Our concept on explicit representation rests pretty heavily on the printed word.
    • Construing explicit representation on the printed word, litterally sentences in the head, assumes that explicit representation can be understood in purely functional terms.
      • The big point he wants to get across is that this has to come across in procedural terms.
  • It’s often assumed that we know what explicit is, but not what implicit is, and so implicit is defined in terms of not-explicit.
  • He distinguishes between exploiting regularities in the environment and representing regularities in the environment.

On Marr

So why are we reading Marr? A lot of the work he’s done has been falsified.

  • Not interested in the details, interested in the methodology

Marr’s Methodology

  • He thought that an account of a cognitive system has three different levels.
    • Those are:
      1. Theory of computation: Specification of the function computed. That is, the what that the system is doing. This is the level at which the competence of the system is, knowledge, processing details.
      2. Representation & Algorithm: This level specific the alogirthm which computes the function speficied in the theory of computation (level 1). And it specifies structures over which the alogirthm which it is defined. Ex. A function can be added, the algorithm can be Roman or Arabic.
      3. Neural interpretation: This is, in some sense, the how.
  • The big picture:
    1. Gray level array
    2. Primal sketch
    3. Has multiple:
      • SFM
      • Stereoscopic
      • Texture
      • Shading
    4. $2 \frac{1}{2}$D
    5. 3D
  • Next time: SFM

February 18th, 2014 Reading

Ramsey, Representation Reconsidered

Section 1.2 The job description challenge

  • Operating on the assumption: By reflecting a bit on ordinary notions of representation, we could gain a better understanding of what it is that scientists are referring to when they claim the brain uses such states.
    • Worry: Why need to look at commonsense notions of representation at all.
      • A theoretical notion is introduced in scientific theories in such a manner that give the posit its specific explantory role.
      • For example, position genens involes a specification of the diferent relations and causal roles that we think genes perform.
      • With this is mind, we go a look for the physical thing to fit the bill.
  • For representation, things are not so simple.
    • We already have a notion of representation that is “home” in non-scientifc contexts, and this contrains what representation can qualify as representational.
    • Want: These notions of representation to provide a specification of the essential features of representation.
    • Want: A job description for representation like genes or protons have.
  • Problem: Our commonsense notions do not do this.
    • These notions have core features and offer job descriptions representational states.
    • Problem clearer: Stems from sort of features and roles that are associated with these notions.
      • The relevant roles include things like:
        1. Informing
        2. Denoting
        3. Standing for something else
      • How are these suppose to be cashed out naturalistically.

      Many scientific theories of the mind attempt to explain cognition in neurological or computaional terms. But our ordinary undersntadning of representation involves features and roles that can’t be translated into such terms in any obvious way.

  • Consider our ordinary notions of mental representation.
    • Claim: Common sense understnading of beliefs, desires, and other folk-representational states assigns them some sort of “underived or intrinsic intentionalality.”
      • Intentionality clearly isn’t basic, functionally or causally.
      • So when we look at a system for these states, we don’t really know what we’re looking for.
  • A similar problem arises with regard to the commonsense understanding of non-mental representation.
    • Every day examples of non-mental represenatations, like:
      • Road signs
      • Pieces of written text,
      • Warning signs, so on
    • These all invole an agents who use the representations to stand for something else.
    • The folk notions of representations, therefore, will not suffice if transplanted directly into cognitive science.
Job description challenge
There needs to be some unique role or set of causal relations that warrants our saying some structure or states serves a representational function.

What might a successful job description for cognitive representation look like?

  • Depends on: What notion of representation we use.
    • Reductive theories cannot use representation as an explantory primitive.
    • If we understand processes as representational, then we need an account of representation in computational, mechanical, or causal physical terms.
    • Conclusion: Positing inner representations needs to include some sort of story about how the structure or state in question actually plays a representational role.
  • Analogy: Someone offers an account of some organic process, and the account needs a structure called a pump.
    • The advocate of this account would need to offer an explnanation of how this thing actually serves as a pump, as opposed to say a sponge.
    • And this explnanation will be a functional one.
  • Conclusion: Cognitive researchers who invoke a notion of inner representation in their reductive accounts must provide some explanation of how the thing they positi serves as a representation.
    • This is the problem of pan-representationalism.
    • Goal: Argue that this hypothetical situation is in fact the actual sitation in a wide range of newer cognitive theories.
  • The meeting of the job description is not the same as providing a naturalistic account of content.
    • This would present the set of physical or causal conditions that ground the content of the representation.
    • Different version of naturalistic content:
      • Nomic dependency relations
      • Causal links to the world
      • Evolutionary function
      • Conceptual roles within a given system
    • These explain a certain relation, but not that relations function as a representation in a physical system.
  • Analogy: A compass.
    • One type of question: How the compass actually functions as a representational device, how it informs a cognitive agent.
    • Another type of question: What conditions are responsible for the representational content of the compass.
  • Why this is pressing:

    “Look, I’m not completely sure how state X comes to the content it has, but in my explanation of cogintive, there needs to be a state X that serves as a representation in the following way.”

  • The crux of the job description challenge
    • If conditions on representation are too strong, something “left explained”
    • If condition on represestation are too week, it’s ambiguous and ubiquitous.
  • Millikan attempts to provide something that applies to mental and non-mental representation.
    • To function as a representation is to be “consumed” by an interpreter that treats the state in question as indicating some condition.
      • Fine for non-mental.
      • Unclear for mental, consumption inside a system.
    • This account is “under-reduced.”
  • Dretske, as it will be shown, over-redcuded.
    • Intutively, his conditions have nothing to do with representation at all.
  • It may seem impossible to meet the job description challenge, but no, certain theories in the CCTC paradigm have done it.
    • Claim:the difference between the two corresponds to the division classical theories of computation and non-classical accounts of cognition.
Job description challenge, improved
There are two parts:

  1. Is it possible to describe physical or computational processes in representational terms?
  2. Is it absolutely necessary to describe physical or computational processes in representational terms?

Even a rock can be described as acting on the belief that it needs to sit very still.

  • The reason this fails is because the notion of representation is too weak.
  • In the “other way”, we can described biological systems in terms of molecules and toms, it is nevery necessary to invoke representational langauge in theory characterization of a representational syste,
Job Description Challenge, *improved … again *
Three questions:

  1. Is there some explanotory benefit in described an internal elements of a physical or computational process in representational terms?
  2. Is there an element of a proposed process or architecture that is functioning as a representation in a sufficienitly robust or recognizable manner, and if so, how does it doe this?
  3. Given that theory X invokes internal representations in its account of process Y, are the internal states playing this sort of role, and if so, how?
  • Unfortunately, “explanatory benefit” and “sufficiently robust” are not as robust as we would like.
    • Requires a “judgement call.”

Section 3.1 IO-representation

  • Previously: Marr’s model of cognitive science involved three level of
    • The top level involved the specification of a function that defines the sort of cognitive capacity we want explained.

      Consider again a simple operation like multiplication. Although we say various mechanical devices do multiplication, the transformation of numbers into products is something that, strictly speaking, no physical system could ever do. Numbers and products are abstract entities, and physical systems can’t perform operations on abstract entities. So at the algorithmic level we positi symbolic representations of numbers as inputs to the system and symbolic representations of products as outputs. We re-define the task of multiplication as the task of transforming numerals of one sort into numerals of another sort.

    • The input to a cognitive system: say, “faces”.
      • The output of a cognitive system: say, “That’s so-and-so.”
  • How do IO-representation concerns meet the job description challenge? Two responses:
    1. Avoid the question altogether. “Outside the domain of cogntive theorizing.”5
      • We’re actual in the business of explaing the porcesses and operations that convert input representations into output representations6
    2. Say: Minds do certain things, and one of the main things they do is “perform cogitive tasks properly described as types of representations.”
      • It is a fact of nature.
Interior IO representations
Interior input–output representations are a sub-system’s own inputs and outputs that are internal to the larger super-system’s explanatory framework.

Section 3.2 S-representation

Section 3.3 Two objects and their replies

Challenge 2: IO-representation and S-representation aren’t sufficiently real
  • Made appeals to the explantory benefit and explanatory pay-off of IO and S notions of representation.
    • And that there is a payoff.
    • Is it a useful fiction? Do it real?
  • Dennet on the topics:
    • Physical stance
      Use an understanding of the physical inner workins of the system to explain and predict how it responds to different inputs.
    • Design stance
      Predict behavior by using what we know about the osrt of tasks that the system was designed to perform.
    • Intetional stance
      The intentional stance involves treating a system as a rational agent with beliefs, desires, and other folk-representationl states.
    True believer
    Little more than being a system whose behavior can be succesfully explained and predicted through the ascription of beliefs and other propositional attitudes.
  • Challenge: If the CCTC is a theory that invokes real representations, then it needs objectively realy representations.
  • Response: They are real, and here’s why.
    • There’s a sense in which most things are observer-dependant.
      • It’s always possible to view a system as a series of interacting atoms.
      • If IO and S representations are unreal only in the sens which trees, minerals, hearts, mountains, and species are unreal, then a reality about representation should be able to live with that sort of “anti-realism.”

Review of RR by Mark Sprevak

  • Structure of Ramsey’s negative argument:
    1. Argue that in order for something to be an X, it must satisfy a description D.
      • This is the job description or minimal conditions for representation.
    2. Argue that to the best of our knowledge, nothing satisfies D.
      • “How do our best psychological theories use the notion of representation?”
    3. Conclude that since nothing satifies D, there are no Xs.
      • Ramsey finds all theories unsatisfactory, there are no representations.
  • On step 1:
    • For a state to be a representation:
      1. Non-derived intentionality
        Be capable of having original intentional content.
      2. Causality
        Interact causally with other cognitive states.
      3. Connection
        (1) and (2) must be linked: The causal role that representation plus should be determined by it’s intentional content.
  • On step 2:
    • The exception: CCTV
      • CCTV is commited to representation in 2 ways:
        1. IO-representation
        2. S-representations
    • On IO-representation:
      • Representations are needed as the gross-inputs and outputs of cognitive agents.
      • But also the steps of a computation need representations, so internal representations.
    • On S-representation
      • CCTC is commited to positive interinal representations. (I’m not sure why this or if this is bad?)

February 18th, 2014 Seminar

Structure for motion

  • The visual system is able to recover the 3D dimension of a scene with on 2D input.
    • Some of the sources of information:
      • Stereoptics: The visual system uses disparity to get a vantage point on a scene.
      • Motion: Relative motion, what the visual system gets is a series of images that are different in certain respects. A series of retinal images.
    • One thing that Ullman and Hildreth point out is that there are an infinite number of 3D representations out of any 2D input.
      • The visual system is aided by physical constraints and assumptions.
  • The relevant assumption for this mechanism: rigitidy.
Rigidity
Objects in the environment when they’re moving is that they are solids.

“[T]hey assume that if it is possible to interpret the changing 2-D images as the projection of a rigid 3-D object in motion, then such an interpretation should be chosen.”

SFM Theorem
Three distinct views of 4 non-coplanar points is sufficient to determine a rigid configuration for the points.
  • If you see the object long enough and in good enough conditions, then when the SFM theorem is saying is that there is only one rigid configuration of points.
    • There’s a lot more non-rigid configuration, but assuming rigidity, it is much easier to compute a representation. That is, there are less configurations.

Niko: Why think that points are primitive, why not plains?7

  • An organism that SFM with minimum input will react to the world better than an organism that does not have SFM, rigidity, etc. More plausible for this reason?
  • If an object is getting bigger with regards to a visual field, what are the possible interpretations?
    • It’s getting bigger.
    • It’s getting closer.
    • A mix of the two.
  • This type of adaptation is only going to useful in an environment where most objects are rigid in translation8.
  • The environmental assumptions are motivated by adaptation.
    1. Powerful, rigidity forces a unique solution
    2. True, doesn’t make the organism “screw up”
    3. Unspecific, true in most cases, general claim about the environment, will support a lot of counterfactuals.

What is the status of the rigidity assumption? Is it innate?

  • Exploits regularities but doesn’t represent regularities.
    • We can count it as innate knowledge if we want, but it isn’t explicitly represented and almost certainly not explicitly represented.
  • The environment can be exploited without be represented.
    • First step for the theorist is to look at the environment.
      • Spiky?
      • Squishy?
      • Solid?
  • Steps:
    1. Commonsense characterization of the problem, explandadum.
    2. Put on lab coat and look at exactly what the competence is.
      • Is this a competence for determining 3-D structure simpliciter?
      • No, 3-D rigid structure.
      • “Looking for assumptions.”
    3. Look for algorithms that explain the phenomena

Ramsey

Chapter 1

What is it for a state in the head to be a representation?

  • Couple of things to distinguish:
    • As a question “what it is for a thing in the head to have meaning or conent? What is it to have content? What determines contents of internal representations?” Problem of intentionality side
      • The answer for external representations is convention, we agree on meanings for items in the language.
      • This cannot be the answer for internal representation.
      • Cannot specificy little “agents in the head.” Because regress? No agents.
      • There’s going to be a naturalistic account of how internal structures and states, naturalistic hope, relation between states and structures in the head and the objects in the world.
      • To say that it’s naturalistic is to discount, say, semantic relations.
    • Personal and sub-personal processes.
      • Like a map, people use maps, we use them to navigate, etc.
      • This kind of answer isn’t going to work for mental interpretations.
      • When theories posit these states or structures, it’s only appropriate to think of these, they’re not used by people, it’s a category mistake to think of Prof. Egan as using the primal sketch to navigate the world, it’s her system that uses the primal sketch.

Ben: I don’t see any job for it to do. What he said led me to believe that it not a necessary condition on representation that is has semantic content.

  • This is a possibility
    • A theory might posit a symbol structure

What Ramsey is thinking about is being a representation and not about what it is to have content.

  • Remember the distinction between personal and sub-personal representation.
    • Question: What is it for a physical thing to represent another?
      • It should be used in a characteristcally representational way. Useless.
      • Content is in som way relevant to how it functions, or it’s effects. So the representation is causally efficacious, causes effects in cognition, and the content is relevant to those effect in some way, I say in in some way because it isn’t at all clear how

Egan: Content is a way of summarizes what is causally relevant or the causal role of a belief9.

  • Dretske, on the other hand, thinks that content has to get its hands on the wheel.
    • This can’t be right, content is not driving the car.
  • It cannot be a consequence of doing that job that too many thing count as as representational.
  • The theorizing we’re doing do no presuppose propositional attitudes or representational capacities.
    • A theory that is doing this may under-explain.
  • If you describe cognition in terms of atoms, quarks, molecules, …
    • This is over-explaining.
    • It’s damning to say, “Hey, where is the intentionality?”
      • We have to be careful to not fall into this trap.
  • Avoid positing un-explained intelligence, but posit intelligence.
    • How can intelligence arise, or emerge, from non-intelligent.
  • It would illegimate to complain about they’re being nothing conscious in an explanation of consciousness.
    • You look at an organism in it’s environment and that is to interpret what it’s doing cognitively.
      • It’s computing some cognitive function.
    • Egan: Using this is a really robust way, which systems are appropriate to explain psychological.

Niko: Maybe it’s true that we have evidence that something is cognitive if we attribute to a system doing a computation on representations. Is it intentional?

  • At the moment, all (Egan) is doing is trying to put our cognitive attributions in context.
    • When do we attribute cognition?

Chapter 3

  • To call mething cognitive is to call something an IO system
  • We go around attributing cognitive capacities to people, and for the animals, we do it on the basis of behavior.
    • What we attribute is a competence and this goes beyond the behavior.
    • How would the system behave if it was given different inputs?
    • For Marr, the competence is at the top-level of the theory.
  • Right now, this definition of cognition is related to Haugeland's IBB.
    • Only when we have behavioral evidence do we say that something is doing something cognitive.
  • How do you say something plays chess?
    • You treat it as a thing that represents chess moves, or something like that.
    • You have to treat the sub-components, the sub-processes, as operating on chess moves as well.

Ben: A requirement on a sub-representation is the same … as on a super-representation (or something). So is it syntactic and representational all the way down?

  • In these decompositional reductions, whether it's the big process or a small one, whenever you're invoking representations you need to invoke contents.
    • He's not saying that you always have to do it in this way.
    • Eventually, each of these IBBs will "bottom out."
  • As long as you’re invoking representations, you have to go “whole hog” on it.
  • Can’t we treat them as uninterpreted symbols?

We could, of course, employ a syntactic type of task-decompositional explanation. We could track the causal roles of the syntactically individuated symbols, and thereby divide the internal processes into syntactic sub-processes. But we wouldn’t be able to make sense of these operations as computationally pertinent stages of the larger task being explained. Ramsey, pg. 76
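A rough, hypothetical illustration of the point in the passage above (my own toy example, not Ramsey's): a device described purely syntactically just shuffles uninterpreted tokens by a fixed table; only under a mapping of tokens to numbers do its stages make sense as steps of addition.

```python
# Toy illustration (assumed example, not from Ramsey): a purely syntactic
# rule table over uninterpreted tokens. Under one interpretation of the
# tokens as numbers, the very same transitions look like stages of addition.

RULES = {("@", "@"): "#", ("@", "#"): "%", ("#", "@"): "%", ("#", "#"): "&"}

def shuffle(a, b):
    """Syntactic description: look up a pair of tokens, emit a new token."""
    return RULES[(a, b)]

MEANING = {"@": 1, "#": 2, "%": 3, "&": 4}   # one possible interpretation

if __name__ == "__main__":
    for (a, b), out in RULES.items():
        print(f"{a},{b} -> {shuffle(a, b)}   interpreted: "
              f"{MEANING[a]} + {MEANING[b]} = {MEANING[out]}")
```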

  • He’s putting off the big question: Is it really an adder?
    • He falls back on the holism response.
    • We're attributing the intentions and representations together.
  • Are intuitions important for cognitive theorizing?10
    • In calling it seeing, we’re pretty much committed to representation.
  • There’s certain structural similarities between entities on a map of NJ and NJ itsel
  • There’s stccertain structural similarities between entities on a map of NJ and New JJ itself.
  • It’s not an isomorphism, it’s a morphism.
    • What breaks the symmetry?
  • Next time: Chapter 4, and the two objections

February 20th, 2014 Office hours

Questions

  1. At first, there’s the physical states. It then moves to the biological states. And then the cognitive states.

    Is this a good roadmap of the ontology? Of where we’re trying to describe and ascribe representation? If not, what’s better?

  2. The debate
    • Ramsey: eliminativist about the mind on the grounds that the best cognitive models don't have it
      1. CCTV has representations, and they're good ones (they answer the job description)
      2. Neither CCTV is true nor are representations possible?
      3. Representation and content – isomorphism – capable of content
    • Fodor:
      1. Realist about FP
      2. FP $\to$ LOT $\to$ RTM
      3. RTM and LOT have representations.
      4. ???
  3. Circularity and striking parallelism

February 25th, 2014 Reading

Dretske, “Misrepresentation” (background)

  • How do we manage to get things wrong?
    • How is it possible for a physical system to misrepresent the state of its surroundings?
  • Assumption: Belief is a non-derived representational capacity the exercise of which can yield a representation.
  • The capacity to misrepresent is only a part of the general problem of meaning and intentionality.
    • Once you have meaning, “lavish” it on to your descriptions.
    • Once you have intentionality, you can “adopt the intentional stance.”

Natural Signs

  • Naturally occurring signs mean something, without assistance from us.

    Water does not flow uphill; hence, a northerly flowing river means there is a downward gradient in that direction.

    • The power of these events or conditions to mean what they do is independent of the way we interpret them.
      • Or whether we interpret them at all.
    • There was meaning before there were intelligent organisms capable of exploiting meaning.
    • If we are looking for meaning, misrepresentation is a promising place to begin.
  • Natural signs are indicators, more or less reliable, and what they mean is what they indicate to be so.
    • The power of a natural sign to mean something emerges from objective constraints, lawful relations, between the sign and the condition that constitutes its meaning.
    • It’s usually causal or lawful.
    • It’s counterfactual supporting.

Ramsey, Representation Reconsidered, ch. 4

Introduction

  • Will argue: The notions of representation explored in the next two chapters do not meet the job description challenge.
    • In fact, they also lead to deep misconceptions.
    • The receptor notions of representation.
      • Alternatively, detector.
    • Not a theoretically useful notion.
    • Will deny: Structures that function as receptors are thereby serving as representations.
  • Structure
    1. Spell out the basic idea of receptor representation.
    2. Will ask how well this notion fares with regard to the job description challenge.
    3. Will show: Dretske’s own account of representation overlaps a great deal with the receptor notion, yet is more sophiticated and robust.
    4. Will argue: Neither representation nor misrepresneetation is without serioues flaw.
    5. The receptor notion should be abandoned.

The receptor notion

Receptor notion
Because a given neural or computational structure is regularly and reliably activated by some distal condition, it should be regarded as having the role of representing (indicating, signaling) that condition.

Such structures are viewed as representations because of the way they are triggered to go into particular states by other conditions.

Example: Certain neurons should be viewed as “detectors” precisely because they reliably respond to certain stimuli.

Just as a frog has "convexity detectors" in its brain.

[I]f a cell is claimed to represent a face, then it is necessary to show that it fires at a certain rate nearly every time a face is present and only very rarely reaches that rate at other times.

  • The receptor notion of representation is found in connectionist models in the similarity of “internal units” to “neural receptors”
    • It’s suggested they provide a more representational role than computational symbols.
  • Three examples of the receptor notion in simple organisms:
    1. Certain neurons in a bug's brain.
    2. "Face cells" in monkeys
    3. Magnetosomes, "compass-like" representations which tell the organism where to go.
  • Whatever the state is that the so-called receptor brings about, it is "viewed as having the role of representing the external condition because of this causal or nomic dependency relation."
  • Philosophers have developed a notion of "natural meaning" or "information content" that supposedly results from the way a state reliably co-varies with some other state of affairs.
    • This makes people say “this smoke means that fire.”

The receptor notion and the job description challenge

  • The receptor notion faces a prima facie difficulty:
    • Problem: The receptor notion does not provide an account that reveals why a given state or structure should be seen as serving as a representation.
      • Counterexample: There are several non-representational internal states that must, in their proper functioning, reliably covary with various states of affairs. For instance, our immune system consistently reacts to infectious insults to our body.
    • Fix: While nomic dependency may be an important element of representation, it is not sufficient for representation.
  • This problem leads to a dilemma:
    • If to serve as a representation just is to serve as a state that reliably responds, then you get overly reduced accounts of representation that lead to pan-representationalism because of the immune-system objection.
    • Or, if to serve as representation includes factors that go substantially beyond the mere fact that they reliably respond to specific stimuli, then we have no clear sense of how states reliably responding to certain stimuli are supposed to function as representations.
  • This is closely related to the problem of pansemanticism, that is that “meaning is just about everywhere” and “is a natural conclusion to draw from informational analyses of content.”

Dretske to the rescue?

  • Dretske offers an ambitious account of mental representation that is designed to be naturalistic about content, and to show how content can produce behavior.
    • What's wrong with Dretske's account will be illuminating with regard to a fundamental problem for the receptor notion of representation in general.
  • Central to the account is the notion of indication, a relation based on law-like dependency.
Indication
For condition C to indicate another condition F, C must stand in a relation to F characterized by subjunctives like the following: If F had not occurred, C would not have occurred.
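One natural way to make the subjunctive in this definition explicit is with a counterfactual conditional; this is my gloss, not Dretske's own notation ($\Box\!\!\to$ stands for the Lewis-style counterfactual "had it been that …, it would have been that …"):

$$C \text{ indicates } F \;\Rightarrow\; \big(\neg F \;\Box\!\!\to\; \neg C\big)$$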
  • Already saw: that mere nomic dependency is insufficient to bestow full-blown representational status on cognitive structures.
    • This is the disjunction problem, which concerns the difficulty in accounting for misrepresentation when a state's content is based upon the way it is triggered by distal conditions.
    • The response to this: "the core of any theory of representation must contain an explanation of how misrepresentation can occur."
  • Dretske's response is to add teleological components, placing tighter restrictions on the sorts of causal relations that matter.

Dretske: Internal indicators are mental representations when they are recruited as a cause of certain motor output because of their relevant nomic dependencies.

  • This does two things for Dretske:
    1. Enables him to handle the problem of misrepresentation
      • Error is now possible: misrepresentation occurs whenever an indicator is triggered by something other than what it is supposed to indicate.
    2. Gives him a way of showing how informational content can be explanatorily relevant.
      • Structures are recruited as causes to motor output because they indicate certain conditions.
      • Being an indicator is a causally relevant feature of a structure, so being a type of meaning is causally relevant.

Dretske: To serve as a representation is necessarily to:

  1. Stand in some sort of nomic dependency relation to some distal state of affairs
  2. Become incorporated into the processing because of this dependency, thereby acquiring the function of indicating those states of affairs.
Does Dretske's account of representational function help?
  • Dretske is committed to: If neural structures are actually recruited as causes of bug-catching movements because they are reliably caused to fire by the presence of bugs, then it certainly seems tempting to assume that they are serving as "bug representations."
    • Question: Does this arrangement suffice for representation?
  • To answer, distinguish between:
    1. The purely causal/physical or nomic dependencies that are thought to "underlie" the indication relation
    2. The quasi-semantic, informational relation often said to be "carried by" these dependencies.

If A carries information about B, (1) is this supposed to be distinct from A's being nomically dependent on B? (2) Or are those claims identical?

  • On (1), the information carried by A is somehow separate and distinct from the properties of A. If A carries information about B, it does so whether or not anyone exploits it.
    • This is ambiguous, referring to either:
      • The non-semantic nomic dependency
      • Something more semantically charged, like information
    • This is the realist interpretation of information and indication.
  • On (2), it's unambiguous. This is the deflationary interpretation of information and indication.
  • The central problem: Dretske's account of representation appears to assume that if a given structure is incorporated into a system's processing because it nomically depends on a certain state of affairs, it automatically follows that it is being used to stand for that state of affairs.
    • This is unsupported and almost certainly false.

Counterexample to (1): The firing pin in a gun similarly bridges a causal gap between the pulling of the trigger and the discharge of the round. It also serves to reliably mediate between two distinct states of affairs – to reliably go into a specific state when and only when a certain condition obtains. However, no one thinks the firing pin serves as some sort of representational device.

Counterexample to (2): For example, if A is always larger than B, then, in the deflationary sense we are now using the term, A carries information about the size of B; that is, the size of A could be used to tell someone something about the size of B. If A is heavier than B, or if A is always within a certain distance of B, then the weight or position of A can serve to inform a cognitive agent about the weight or position of B. In all of these cases, specific types of law-like relations between two objects (larger than, heavier than, close to, etc.) can and sometimes are exploited by cognitive systems like ourselves such that our knowledge of the status of one of the objects can generate knowledge of the status of the other one as well. When this happens, one of the objects is serving as a type of representational device.

  • Conclusion
    • If we equate being an indicator with being a nomic dependent, then Dretske cannot establish that a structure is a representation by showing that it functions as an indicator, because, trivially, functioning as an indicator just means functioning as a nomic dependent.
      • And: There are all sorts of ways to function as a nomic dependent without being a representation.

Further dimensions of the receptor notion

Question: How do we demarcate between:

  1. Cases where the nomic regularity is relevant to a non-representational function
  2. Cases where it makes sense to say that the nomic dependency helps something serve as a representation.
  • Contrast:
    1. The firing pin in a gun
      • There is no sense in which the information carried by the firing pin is exploited in its normal functioning.
    2. The mercury in a thermometer
      • There is a clear sense in which it serves to inform people who want to learn about the temperature.

Does it really matter?

  • People often complain that Ramsey is just being stingy with the word "representation."
    • Why not treat receptor states as some low-level type of representation?
  • Taken as a point about language and our ability to call things what we want, it's correct, but silly.

Question: If we are so inclined, and nothing is really at stake, then why not go ahead and do so?

  • Answer: There’s a fair bit at stake.
    • “Representation” and “information” is increasingly just being use to mean “reactive neuron” or “causal activity.”
  • Here he paints a picture about a real-world case where misconceptions about representation derailed progressed and blurred understanding in a study by Freeman and Skarda.

Summary

  • Argued: One of the most popular ways of thinking about representation in cognitive science is confused and should be discontinued.
    • Receptor notions of representation have jobs that have little to do with representation.
  • In an effort to remedy this failure, Ramsey brought in Dretske.
    • Dretske focused on an indication functioning as a representation.
    • He suggests that receptor-type states qualify as representations by virtue of the way in which they are incorporated into the cognitive architecture.
      • But the functional role is that of a reliable causal mediator, not a representation.
  • If the receptor notions of representation are in fact the right account of representation in connectionism, then RTM is simply false.

Commentary

I want to be a realist about the external world, and I have this premonition, without proof, that my inner representation of the external world reliably covaries with at least those states important for my continued success. If the way I characterize this inner representation is with "indication," and Ramsey's argument that indication isn't representation succeeds, then I reject indication as being able to tell the whole story about representations in the head.

A way that I think indication could be vindicated is by making it a part of inner representation instead of claiming that it’s all there is. For instance, perhaps my vision is an indication of what’s directly in front of my eyes, my ears indicate what sounds are in my immediate surroundings, etc, and these indications are amalgamated by my mind in higher-order mental processing, and it is from this and these indications that an internal representation emerges.

February 25th, 2014 Seminar

S-Representation

  • Two key ideas:
    1. Structural similarity between the vehicles of representation and the things that they’re about, the represented domain.
    2. The use of the representation.
      • Why is this important?
      • If we don’t bring in use, what problem are we left with?
      • Without use, the state would "represent" the map just as much as the map represents the state; it is the map's being used in certain ways that makes it represent the state.
      • What kind of use? Surrogative reasoning: it's used to say things about, to make claims about, the domain. (A doormat isn't put to any relevant cognitive use.)
      • But you could use the landscape of New Jersey to reason about the map. Still, use does break the symmetry, because you're using the one to reason about the other. Minor bullet-biting.
  • The map example, paradigm of this representation
    • A map of New Jersey represents some properties and relations, dots represent cities, some features of the dots represent relative size.
      • There are other features of the map that don’t have any function at all, like perhaps colors.
      • Colors might make it more salient to see the county divisions, etc.
      • It’s not representing any “shared feature” of these counties.
      • Not every feature maps onto something represented; hence a morphism rather than an isomorphism.
    • Isomorphic
      • 1 to 1
      • Bijection
    • Representation on this model is holistic (a toy sketch of the two key ideas follows this list).
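A minimal sketch of structural similarity plus use, with hypothetical city names and distances chosen purely for illustration: a "map" encoded as a weighted graph whose structure mirrors part of New Jersey, and which is then used surrogatively to answer a question about the territory by consulting the map instead.

```python
# Toy S-representation: a data structure whose relational structure mirrors
# part of the represented domain (which places connect, roughly how far apart)
# and which is *used* to reason about that domain. Cities and distances are
# illustrative placeholders, not real data.

NJ_MAP = {
    ("Newark", "New Brunswick"): 27,
    ("New Brunswick", "Trenton"): 18,
    ("Trenton", "Camden"): 33,
}

def route_length(stops):
    """Surrogative reasoning: answer a question about the territory by
    summing edges on the map, never consulting New Jersey itself."""
    total = 0
    for a, b in zip(stops, stops[1:]):
        total += NJ_MAP.get((a, b)) or NJ_MAP.get((b, a))
    return total

if __name__ == "__main__":
    # "How long is the drive from Newark to Camden via Trenton?"
    print(route_length(["Newark", "New Brunswick", "Trenton", "Camden"]))  # 78
```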

./img/family-tree.png

  • When Bob is using this, he has to use the connections correctly for surrogative reasoning.
    • The map is the clear case, but the other examples involve more and more levels of abstraction. The interpretation of the representation needs more levels of abstraction.

./img/broken-tree.png

How is this going to work for mental representation? What would justify using “AAAAAA” to mean “John”?

  • Use.

Schank and Abelson

  • Given very general information, and a story.
    • Like how to take a bus to the city, how to get food in a restaurant.
    • Lots of information about the situation.
    • Then, a story about John and Mary, and the program is asked about the specific story given the general facts.

    John and Mary went to some restaurant; they both ordered their burgers medium-rare, got burnt burgers, and so they stood up and walked out. To the machine: Did they leave a tip?

If the symbols in the Chinese Room constitute scripts of this sort, then they serve as representations, not because there is some conscious interpreter who understands them as such, or because the people who designed the system intended them that way, but because the overall system succeeds by exploiting the organizational symmetry that exists between its internal states and some chunk of the real world.
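A rough, hypothetical sketch of how such a script lets a program answer questions the story never states explicitly (purely illustrative; not Schank and Abelson's actual system, and the event names and defaults are invented):

```python
# Toy restaurant "script": a stereotyped event sequence plus default facts.
# The program answers a question by checking the story first, then falling
# back on script defaults, unless the story departs from the script early.
# Invented example for illustration; not Schank and Abelson's code.

RESTAURANT_SCRIPT = ["enter", "sit", "order", "eat", "pay", "tip", "leave"]
DEFAULTS = {"pay": True, "tip": True}            # what normally happens

def answer(story_events, question_event):
    """Did question_event happen, given the story plus the script?"""
    if question_event in story_events:           # explicitly stated
        return True
    # Storming out after burnt burgers breaks the script: later steps
    # (paying, tipping) lose their default "yes."
    if "leave_angry" in story_events and question_event in RESTAURANT_SCRIPT:
        if RESTAURANT_SCRIPT.index(question_event) > RESTAURANT_SCRIPT.index("eat"):
            return False
    return DEFAULTS.get(question_event, False)

if __name__ == "__main__":
    story = ["enter", "sit", "order", "leave_angry"]  # John and Mary's story
    print(answer(story, "tip"))                       # False
```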

  • “All Ramsey needs” is some morphism that is exploited11.
    • This isomorphism idea isn't as general as Ramsey would like to claim.

Two Objections and Their Replies

Challenge 1: indeterminacy in S-representation (and IO-representation) content

  • Pragmatic considerations will come in to privilege one interpretation over the others.
    • Ramsey's constraints are doing a lot of work,
      1. that is, the explanatory agenda, and
      2. how the system is embedded in the environment.
    • Having two or three candidates left is pretty good.
  • It’s often said that “task description” are indeterminate.
    • The first thing you have to do is consider how the system is working it it’s environment.
    • Then working out what it succesful at.
      • “All it has to do is get the unique rigid interpretation of the sense data.”
  • It’s not clear how this account will get misrepresentation.
  • There’s a distinction between theorists attributing incorrectly and agents mispresenting radically.
    • Egan is a theorist of vision, she knows we’re pretty good at getting a 3-D representation of a scene, and except for the dark, we’re very good at succeeding at this task.
      • This is compatible with making some mistakes, it’s not easily compatible with making systematic mistakes.
      • But this is going to be a problem regardless of whether you assume success or not (disputed).
      • People are representing things when they make systematic errors, but how would this explanatory account explain this possible phenomenon?
    • The project is to take a phenomenon, human consciousness, and explain it. If there is systematic failure, then there is nothing to explain (???). It's not performing a task (???). It's not succeeding at a task.
      • Success is a motivation for looking for representation.
      • End goal: explanation of psychological capacities.

Challenge 2: IO-representation and S-representation aren’t sufficiently real

  • “You’re just imposing your way of understanding on the machine or person, it isn’t that it’s there, you’re assigning it to physics.12
    • Grouping together physical states is interest-relative, but it is objective that the states are there.
  • Identifying real properties of the device, just as real as genes; but if we don't care about diseases then we don't care about biochemical engineering (or similar statements about other fields).
    • So atoms are just not relevant to what we’re trying to explain.
    • Unless you’re also instrumentalist about gene descriptions and genetics, then you should be a realist about representation in the way it’s been built.

Presentation by Ben

Background: Ramsey’s Question

  • The original question: "Is this thing really functioning in a way that is recognizably representational in nature?"
    • Ramsey asks whether various theoretical posits called "representations" accord with our pre-theoretical concept of a representation.
    • The worry: Science might start out with commonsense notions (mass, energy, representation), but science quickly leaves common sense behind.
      • Compare Ramsey’s question to the following: “Is the physicist’s energy really functioning in a recognizably energy-like in nature?”
      • “Matter, motion, energy, work, liquid, and other common sense notions … are abandoned as naturalistic inquiry proceeds; a physicist asking whether a pile of sand is a solid, liquid, or gas … spends no time asking how the terms are used in ordinary discourse, and would not expect the answer to the latter question to have anything to do with natural kinds, if these are kinds in nature” (Noam Chomsky, New Horizons in the Study of Language and Mind, pg. 21).
      • So if our goal is to understand the mind, why worry about Ramsey’s question?
    • A slightly modified question: How do mental representations compare to paradigm representations such as sentences and maps?
      • Answering this question might help us to better understand the nature of the mental representations posited by our theories.
      • Ramsey can be read as answering this question.

Ramsey on Dretske’s receptor representations

  • Ramsey argues that neither Dretske’s account of misrepresentation nor his account of representational function gives us reason to believe that receptor “representations” are representations in any interesting sense.
  • Misrepresentation
    • The problem: When a cell in the frog's eye responds to a flying BB, why is it misrepresenting that there is a fly rather than accurately representing that there is either a fly or a BB?
    • Dretske's answer: The function of the cell is to activate in the presence of flies. So the cell (mis)represents that there is a fly.
    • Ramsey's response: Dretske's answer only works if we already know that the cell in question represents something or other and we're just trying to figure out what it represents.
  • Representational function
    • Roughly, according to Dretske a representation is a structure

February 27th, 2014 Office hours

  1. Why is it that, with regard to content, it is implicit that if you can get some sort of notational "picking out" of the unique object in the world you get "objective/naturalistic/determinate" content? Specifically, with regard to representation, why is it important that we "pick out" the unique interpretation of a representation? I'm reminded of Frege trying to uniquely pick out zero for his foundation of arithmetic. More pointedly: Why is content determinacy important? We seem to have some algorithm which is more or less successful.
  2. Indication must be some part of representation. For instance, my vision undoubtedly indicates to me what is directly in front of me. It's what happens after or with the indication that matters, however. So: How does indication fit into a full story about cognition?
  3. On the scripts and Schank and Abelson. The discussion during seminar made it seem like people were searching for a morphism between "the code" and human norms about restaurants. But I think that this is confusing for a number of reasons. First, in naturalistic terms, these human norms are "located" in our shared, repeated experience over time, but more crucially they're not a physical or fixed object that we can point to. Second, the morphism we're really looking for is between "the code" and whatever the scientists decided were the present norms, or the norms sufficiently set in stone to merit inclusion in their script, perhaps in the form of sentences like "first appetizers, second entree, third dessert, etc." Third, there is a level of abstraction between those sorts of sentences and the running machine, namely that just as "4" is translated to "100", "first appetizers" will have some bit-code representation. The relevant morphisms, I think, are between the machine's code and the written-down norms, and between the written-down norms and the "actual norms."

March 4th, 2014 Reading

Brooks, “Intelligence without Representation”

Introduction

  • Artificial intelligence started as a field whose goal was to replicate human-level intelligence in a machine.
  • Early hopes fell as the magnitude of the problem came to bear.
  • No one tries to replicate the full gamut of human intelligence anymore.
    • “Specialized subproblems.”
      • Representing knowledge
      • Natural language processing
      • Vision
      • Truth maintenance
      • Plan verification
    • The hope is that these systems "all fall into place" so that a truly intelligent system emerges.
  • Brooks believes that human-level intelligence is too complex and little understood to be correctly decomposed into subpieces at the moment.
    • And even if we knew the subpieces we still wouldn’t know the interface between them.
    • And we will never understand how to decompose human intelligence until we practice on lesser intelligences.
  • In this paper, Brooks argues for a different approach to creating AI:
    • We must incrementally build up capabilities of intelligent systems, having complete systems at each step of the way.
      • Thus automatically ensure that the pieces and their interfaces are valid.
    • At each step we should build complete intelligent system that we let loose in the real world with real sensing and real action.
      • Anything less provides a candidate with which we can delude ourselves.
  • Using this approach, they have come to an unexpected conclusion (C) and a radical hypothesis (H):

    (C): When we examine simple level intelligence we find that explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model.

    (H): Representation is the wrong unit of abstraction in building the bulkiest parts of intelligent systems.

  • Representation has been the central issue in AI for the last 15 years only because it has provided an interface between otherwise isolated modules and conference papers.

The evolution of intelligence

  • We already have the proof of intelligent beings: human beings.
    • Some animals are intelligent to some degree.
    • It’s taken 4.6 billion year history.
  • It in instructive to reflect on the way in which earth-based biological evoltuion spent its time.

    Timeline | Evolutionary event
    --- | ---
    3.5 BYA | Single-cell entities arose out of the primordial soup.
    2.5 BYA | First photosynthetic cells appeared.
    550 MYA | The first fish and vertebrates.
    450 MYA | Insects
    370 MYA | Reptiles
    330 MYA | Dinosaurs
    250 MYA | Mammals
    120 MYA | Primates
    18 MYA | Great apes
    2.5 MYA | Humans in current form
    10,000 years | Agriculture
    5,000 years | Writing
    100 years | "Expert" knowledge

  • This suggests that problem solving behavior, language, expert knowledge, application, and reason are all pretty simple once the essence of being and reacting are available.
    • This part of intelligence took evolution much longer, so it is much harder.
  • Brooks believes that mobility, acute vision, and the ability to carry out survival-related tasks in a dynamic environment provide a necessary basis for the development of true intelligence.
A story
  • Suppose it is the 1890s, when artificial flight is a glamor topic in science.
    • A few artificial flight researchers are magically transported to the 1980s for a medium-duration flight on a Boeing 747.
  • Returning to the 1890s, they feel invigorated, knowing that flight is possible on a grand scale.
    • They immediately set to work on duplicating what they have seen.
      • Pitched seats
      • Double-pane windows
      • “Plastics”

Abstraction as a dangerous weapon

  • AI researchers are fond of pointing out that AI is denied its rightful successes.
    • If nobody has any good idea of how to solve a particular problem, it becomes known as an AI problem.
      • When an algorithm developed by AI researchers successfully tackles the problem, AI detractors claim that since it was solvable by an algorithm, it isn't an AI problem.
  • Brooks claims that AI researchers are guilty of the same self-deception.
    • They partition the problems into two parts:
      • The AI problems, which they solve.
      • The non-AI problems, which they don’t.
    • Typically, AI “succeeds” by labelling the parts of problem they solve as “AI problems.”
    • “Abstraction” is used to discount problems of perception and motor skills.
  • Early work on AI concentrated on games, geometrical problems, symbolic algebra, theorem proving, and other formal systems.
    • In each case, the semantics of the domains were fairly simple.
  • Thus, because we perform all the abstractions for our programs, AI work is still being done in the blocks world.
    • The blocks have slightly different shapes and colors.
  • It could be argued that performing perceptual abstraction is merely the normal reductionist use of abstraction common in all good science. Two objections:
    1. Each animal species will have a different Merkwelt.
      • The human-assumed Merkwelt may not be valid.
      Merkwelt
      The Merkwelt is a concept in robotics, ethology, and biology that describes a creature or android's capacity to view things, manipulate information, and synthesize it to make meaning out of the universe. In biology, for example, a shark's Merkwelt is dominated by smell, due to its enlarged olfactory lobes, whilst a bat's is dominated by its hearing, especially at ultrasonic frequencies.
    2. It is by no means clear that such a Merkwelt is anything like what we actually use internally.

Incremental intelligence

  • Requirements on Creatures:
    • Must cope appropriately with a dynamic environment.
    • Must be robust (a minor change in the world doesn't lead to total collapse).
    • Must maintain multiple goals.
    • Must do something; it should have a purpose in being.
Decomposition by function
  • Hardly anyone has ever connected a vision system to an intelligent central system.
  • One needs a long chain of modules to connect perception to action.
Decomposition by activity
  • Makes no distinction between peripheral and central systems.
    • The fundamental slicing is in the “orthogonal direction” dividing it into activity producing subsystems.

The methodology, in practice

  • In order to build systems based on an activity decomposition, we must follow a careful methodology.
Methodological maxim
  • First, test the Creatures in the real world.
    • It is very easy to accidentally build a submodule of the system which happens to rely on some of those simplified properties.
  • Second, the system must interact with the real world over extended periods.
An instantiation of the methodology
  • Layers
    1. Avoid hitting objects
      • Sonar
      • Collide
      • Feelforce
      • Runaway
      • Turn
      • Forward
    2. Wander
    3. Try to explore, distant places
      • Whenlook
      • Pathplan
      • Integrate

What this is not

  1. Connectionism
  2. Neural networks
  3. Production rules
  4. Blackboard
  5. German philosophy

Limits to growth

  • These machines operate completely autonomously in complex dynamic environments at the flick of their on switches, and continue until the batteries are drained.
    • We believe they operate at a level closer to simple insect-level intelligence than to bacteria-level intelligence.
  • Serious questions
    1. How many layers can be built in the subsumption architecture before the interactions between layers become too complex to continue?
    2. How complex can the behaviors be that are developed without the aid of central representation?
    3. Can higher-level functions such as learning occur in the fixed-topology networks of simple FSMs?

 

March 4th, 2014 Seminar

Wrapping up R-Representation

  • These are some examples of receptors that he mentions
    • Structures in the frog’s visual systems
      • Bugs
      • Flies
      • Food
      • BBs?
    • These are nomically dependent on flies.
      • When they're "triggered," it causes the frog's tongue to pop out.
    • Assuming they are representations, a good candidate for the content of the representation is fly.
      • But any small dark and moving spot will cause this same effect.
    • Also, edge-detectors in the visual system.
    • Magnetosomes in the bacteria represent maybe:
      • Magnetic north
      • Direction of oxygen-free water
      • Direction of the nearest magnet
    • It's not clear what these represent because of content-indeterminacy.
      • Usually theorists of this kind want to privilege a special kind of content.
      • What’s really important is that food gets into the frog’s stomach.
      • Maybe it’s a fly, but it doesn’t really matter so long as it’s nutritious.
    • What’s the biological function?
      • This doesn’t seem to cut any more finely.
  • Suppose that the idea is you get food, there are many paths to getting food.
    • People talk about detecting the trout's shadow, and, lo and behold, you get the trout.
    • You get the food by representing BBs.
  • There are all these issues about content.
    • Are these even representations at all?
  • Dretske is trying to narrow down the content.

On Ben’s presentation

  • The idea is that receptor representation and structural representation aren't very different; if one is a representation, then why can't the other be?
    • S-representation is representation because:
      1. Structural similarity, “isomorphism”
      2. Use in doing something cognitive
    • What are R-representations?
      • Ramsey points out that these are mere relays and they're serving in the system only as relays, as nomic dependencies.
  • There’s no reason for these structures in the frog’s brain or the magentosome’s magnets, in these theories, these are playing an important role in doing something cognitive, something interpretation, it’s “use that picks up the slack.”
    • Is this difference, on the basis of content, sufficient to make one a representation and the other not? A couple of things to notice:
      1. Are these really so different?
        • Structural similarity vs. nomic dependency.
        • Take an internal map. Doesn't it have to involve some sort of nomic dependency? Various uses by the intelligent agent. It can't be an accident that some structure is structurally similar to the thing it represents. That "causal embedding" is going to favor one interpretation over another.
        • If you think about edge detectors, these are not attributed singularly, they're in systems. Typically, there will be systems of representations underlying systems of (something).
  • If you remember Ramsey’s argument, a lot of his examples are mechanical causal relays.
    • If we do focus on mental representation cases, if we say that the receptor plays a role in a representational cognitive system, then it's playing a representational causal role.
      • On Ramsey's account of mental representation, for a system to do something cognitive just is for its inputs and outputs to be interpreted as mental representations.
    • Notice that this move takes IO-representation as basic.
      • Then internal states that play a nomic dependency role play a representational role.
      • Nothing in the stomach or liver is doing something recognizably cognitive. These just aren't treated as cognitive systems.
    • This would be one way to go, piggyback R-representation on IO-representation.

What would differentiate a cognitive system from a non-cognitive system, perhaps some criteria? A liver can be treated as having IO, etc.

  • If the thing is an adder, then it has to be that its inputs are addends and its outputs are sums.

Liver couldn’t be interpreted as cognitive, because it doesn’t have structure or use.

  • There’s very little conceptual space as saying it is doing something cognitive and that’s its inputs and outputs are interprebable.

If you watch a bumblebee, you'll see it going about its business among the flowers, but the outputs are movements – how are these outputs cognitive? It seems like one explanation, maybe the best, is that it's computing 3-D space, but when you look at what's going on, it's learning, avoiding predators; just characterizing its behavior as cognitive might not be to interpret it.

  • When I say I’m characterzing it as doing something cognitive, I’m talking precisely, it’s able to interpret speech, interpret 3-D scene, division, add.

If we say that the bee can perceive, would you say that this is getting inputs that are representations and outputs that are representations?

  • If you characterize exactly what the system is doing, you'll get inputs and outputs that are interpretable as representations.
    • You see some system doing something fairly successfully; the first job of the theorist is to characterize the competence.
      • You have to see under what conditions it fails. What exactly is the function it is computing?

How is this different from Dennet’s pragmatism? What’s the idea that for any system we can characterize it as intentional or we don’t and whether we do or don’t is just about it being pragmatic to predict behavior. When you say, “The difference between the liver and the honeybee is that it isn’t helpful to call it cognitive because there’s no structure we look at it as an adder.” Do you think there are objective facts about the capabilities of a cognitive (or non-cognitive) system. A few weeks ago we have a conversation where you (Egan) if you gerrymander things you can get something that looks like an adder.

  • We attribute capacities to systems
  • Rocks and walls do not have the necessary structure to represent addition, unless you add indices for time.
  • Another way: go to the quantum level; there's lots of structure there.
  • How to understand the realization function: the way to think about that is specifying the causal …
  • To push this a bit further, remember that a gene is a certain physical structure in the body, and what's crucial is that it causes certain phenotypical effects; genes are responsible for producing the realization of the function.

So you diverge from Dennett in that you don't think that Dennett has some underlying … Two positions:

  1. There are objective facts about a system doing something cognitive?
    • Pretty much any system can be contrived to have systems which are doing something cognitive.
  • You slipped in "pragmatic": there are the phenomena we choose to explain; it seems to be doing something cognitive; I'm trying to interpret it; Searle's given me a big book on how to do this.
  • It’s important for us to develop a theory about what’s cognitive, that’s pragmatic.
    • But what isn’t is what a system needs to have the relevant features.

Facts are relevant once we fix interests.

  • Dennet doesn’t think this. The Dennet of the 70s is to systematically predict a system’s behavior on the basis of intetion.
    • The Dennet of the 80s and 90s has patterns in behavior, not patterns in the phenonmena.
      • He thinks that beliefs and desires are abstracta.
      • He denies the instrumentalist label, he rejects it.
      • This is what the computational theory is c

The facts are fixed once you have interests, and the relevant facts are causal structure.

  • Physics describes the fundamental way of things, but it cannot distinguish between those things that can think and those that cannot think.

Niko’s presentation

The Polemical Prologue

  • The general point is that Brooks believes that something has gone very wrong in the research of AI.
    • It might have had more success if it didn’t do this.
    • He’s discovered a better way.
  • In making his case, he does a lot of things; he:
    • Describes his robot
    • German words
    • Ant parable
A Dubious Parable
  • There are these artificial flight researchers from the 1890s, and they're transported to the 1980s in a commercial airliner for the duration of a flight.
  • Upon returning to the 1890s, they focus their efforts on replicating the seats and windows they observed on the airliner.
    • Because the task of replicating the airliner seems so overwhelming, they become specialists in different areas.
  • The groups of researchers do not communicate well.
    • They give up the study of aerodynamics entirely. Their project is doomed to fail – they will never replicate the airliner.
A Potted History
  • We have this proof of concept, namely ourselves.
  • But this kind of specialization, because it requires abstraction, is a form of "self-delusion."
    • AI researchers design programs which can only operate on highly abstracted input representations.
    • This approach ignores the really difficult problem of performing the abstractions in the first place.
    • Brooks illustrates the failure of this kind of approach with the story of work on the "blocks world" in the 1960s and 1970s.
    • The blocks world made it much easier for the machines to move around, but these robots would not function in real-life situations.
  • There is, moreover, an in-principle objection to abstraction: we have no reason to assume that the abstractions which seem most natural to us are anything like the abstractions which will be useful for a robotic system with very different sensory modalities; indeed, we have little reason to assume that our own cognition employs just the abstractions which seem most natural to us.

What’s the relatinship between theorizing and abstraction?

  • The way we come to the abstractions we use in designing these modules, he thinks, is basically by asking "What data is salient to us?" Given a raw visual data stream, we think, "Oh, it's salient that there's a chair and it's folded."
    • That this is introspectively relevant is important.
An Evolutionary Argument
  • Premise: Human-level intelligences have existed for a comparatively small proportion of the history of life, whereas simpler intelligences have not.
  • Conclusion: "This suggests that problem solving behavior, language, expert knowledge and application, and reason, are all pretty simple once the essence of being and reacting are available."

This is supposed to be relevant because designing low-level intelligence is important for investigating high-level systems.

A Diagnosis
  • The fundamental problem, Brooks suggests, is the approach to AI research by decomposition by function.
    • This approach is characterized by the "traditional notion of … a central system, with perceptual modules as inputs and action modules as outputs. The perceptual modules deliver a symbolic description of the world and the action modules take a symbolic description of desired action and make sure they happen in the world. The central system is then a symbolic information processor."

What’s a decomposition supposed to do? This is supposed to be a better way to do.

  • Ben: For him, isn’t he just trying to build intelligent creatures in building creatures that go around the office.
    • His goal in finding a good decomposition, he’s not trying to illuminate, he’s trying to build a system that is intelligent.
  • This model allows groups working on different problems to make different assumptions about the "shape of the symbolic interfaces," which in turn makes those interfaces "subject to intellectual abuse." Brooks thinks this approach is unlikely to result in the development of integrated intelligent systems.
Questions
  1. Does Brook’s history of AI research point on in-principle problem with the decomposition by function approach? If not, does it support his case? Is his parable, which seems almost fatalistic, entirely fair?

    How is it not going to scale up? It's not going to scale up to the messiness of the dynamic, changing world. This is the problem with traditional AI. So it's a disanalogy with the AI researchers.

    Maybe these guys are decomposing the functional subsystems badly.

  2. What of the alleged in-principle objection to the decomposition-by-function approach? Who cares whether there might be some intelligent systems which couldn't function using the abstractions most natural to us? Might it still be the case that some intelligent systems can, and that the decomposition-by-function approach could allow us to design them? Why think there might be some tight connection between sensory modalities and useful abstractions in the first place?
  3. Is the evolutionary argument any good? Compare: Space flight has existed for a comparatively small proportion of the history of Homo sapiens, whereas agriculture has not. This suggests spaceflight is easy once you have agriculture.
  4. What does any of this have to do with the presence or absence of representation?

Meet the Robots

Decomposition by Activity

This is what Brooks proposes. The basic idea is that the unit of investigation is a capacity of the cognitive system. He calls the capacities layers; one layer, for example, would allow the creature to avoid objects in its vicinity.

Instead of having representations sent to a CPU, you have a parallel architecture where there is no central computational system, just different capacities which interact with one another to some extent.

The hope is that you can implement one layer and continue to implement layers so it can do more and more complicated behavior without a CPU.

  • To replace the problematic approach of decomposition by function, Brooks suggests an approach of decomposition by activity.
    • The idea is that each activity of a system should be controlled by an independent subsystem.
    • One such layer might cause a robot to avoid colliding with objects.
  • Once one system is in place, further, independent systems are added which interact in such a way as to produce desirable behavior.
    • The systems are not entirely autonomous, since they have to coordinate the robot's overall behavior, but there is no central control system.
No Representations
  • Brooks claims that a robot designed according to his principles does not need to employ any representations.

    We do claim however, that there need be no explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.

Description of a Model Robot

Each layer is made of a bunch of FSMs, each of which operates semi-independently and has access to computational machinery, and the robot is arranged into three layers.

  1. The simplest
    • Avoids colliding with things
  2. Also looks at the data (actually not sure), but its purpose is to make the robot wander around.
  3. Seems to always override the second layer. This layer takes in the data from the sonar, finds places that are distant from the robot, and finds a path to the place; when there is an obstacle, the third layer recalculates a new path. The capacity of the third level is to explore.
  • The different layers of the model robot are composed of networks of FSMs.
    • Each such FSM has timers and access to computational machines "which can compute things such as vector sums."
  • The model robot has a ring of sonars around its circumference which serve as its sensors.
    • The lowest layer of the robot makes it avoid hitting objects.
      • It does this by running its sonars "and every second emitting an instantaneous map with the readings converted to polar coordinates."
    • The map gets passed on to some other machines, which determine whether an object is too close, and if so, cause the robot to move in the opposite direction.
  • The second layer of the robot makes it wander around by generating random headings and interfacing with the first layer.
    • The third layer makes the robot try to reach distant places by choosing locations and continually recalculating paths when the robot is forced to avoid an obstacle (a minimal sketch of this layered control follows below).
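A minimal, hypothetical sketch of the layered control just described (not Brooks's actual code; the behavior names, thresholds, and the simple priority rule are my own simplifications): each layer independently maps raw sonar readings to a motor command or stays silent, and one layer's output can subsume another's.

```python
# Toy subsumption-style controller: independent layers, no central world model.
# Each layer maps raw sonar readings (distances in metres) to a command or
# None; layers later in the list get the last word (they subsume the others).
# Hypothetical sketch of the idea only, not Brooks's implementation.
import random

def wander(sonar):
    """Layer 2: occasionally pick a random new heading."""
    if random.random() < 0.1:
        return f"heading_{random.randint(0, 359)}"
    return None

def explore(sonar):
    """Layer 3: head toward the direction of the most distant reading."""
    return f"toward_{sonar.index(max(sonar))}"

def avoid(sonar):
    """Layer 1: steer away whenever any reading says an obstacle is too close.
    Listed last so that, in this toy version, collision avoidance always wins."""
    if min(sonar) < 0.5:
        return "turn_away"
    return None

# explore subsumes wander (cf. "seems to always override the second layer");
# avoid gets the final say for safety in this simplified ordering.
LAYERS = [wander, explore, avoid]

def control(sonar):
    command = "forward"
    for layer in LAYERS:            # each layer may subsume the previous output
        out = layer(sonar)
        if out is not None:
            command = out
    return command

if __name__ == "__main__":
    print(control([2.0, 0.3, 1.5]))   # an obstacle is near: "turn_away"
    print(control([2.0, 3.1, 1.5]))   # nothing close: explore toward index 1
```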
Questions
  • Is Brook’s claim that his robots do not employ any explicit representations plausible? If he is correct, how can he help himself to descriptions of the layers as computing vecto sums, emitting a map witht e readins converted to polar coordinates, etc.?

Response

  • There has to be environmental richness.
  • There were two problems that stymied research:
    1. Knowledge-representation problem : If you're building a system for the real world, it's got to know a lot.
      • Last time we talked about Schank scripts: the device has a paradigm of what it's like to go to a restaurant, and it's supposed to answer questions about people in this scenario, and there was a lot that it couldn't answer, like that people sit on chairs and not on the floor; that wasn't in its knowledge base, and of course we know that.
      • The problem is one of structuring or representing in some way all of the knowledge that the system has to have to get around in the world, and the knowledge has to be represented in a way the system can get at it.
      • This is related to the problem of designing a machine that can pass the Turing test: any bit of esoteric information can become relevant by a shift in context, and the machine has to have this information accessible.
      • If you've got an artificial, well-defined domain, then scaling up to the real world requires a lot of information.
    2. Frame or relevance problem : The world changes from one minute to the next, but the system really only has to notice the relevant changes, and it would be overwhelmed if it took account of every possible change.
      For instance, the lighting in the room is affecting our shadows and we're affecting the ambient temperature, and the system has to separate relevant from irrelevant changes. We do this.
      
      Combinatorial explosion.
      
  • Brooks has some suggestions for building things in the real world. He does say that his robots have no representations at all. What seems to be the case is that there's no explicit, general, context-free representation; that's the kind of representation they don't have.
    • What is perceived (though he doesn't stress this too much in this paper): because each layer is intended to have its own proprietary sensory mechanisms, tied to the goals of the system, what's perceived by each layer are "opportunities for action."
      • The analogy here, the father of this idea that the system perceives things which invite doing certain things, is J. J. Gibson and his affordances.
      • He thought what the visual system detects are complex properties which are salient to specific kinds of organisms.
      • When you perceive a knife, you see something which can cut. Food, we see it as something which is ripe for eating.
      • Gibson thought that these were the objects for perception.
      • This idea is going to be pretty influential in embodied cognition, with people talking about representations that are action-oriented, not a passive condition of the world but as serving some sort of goal.

To be clear, we have this perceptual system, and one reason it makes the distinctions it does make is that it partitions the world in a way that is useful, or that we perceive things as purposeful, which is adding content to the perception. It's like asking, "Why do we lump objects in the ways we do?"

  • Gibson thought that with properties like affording cutting, as with the knife, what we see is an object for cutting, and this structures the light in certain ways. We literally perceive them; it isn't categorizing.
    • With food, for humans, there are higher-order invariances in the light: food structures the light in a certain way that our visual system was built to detect.
  • The idea is then that this is how these layers "cut things differently," and it's partly perceptual, partly cognitive.
    • What it's perceiving are affordances in the environment, opportunities to do something in the world.

How does this play out in terms of human beings using their fingers? Does this come up in Herbert? What in the machine he's talking about has both the perception and the cognition?

  • The lowest level

They have sensors which are "relevant for action"; this seems like too much of a disconnect, given the way we're talking about things being intermingled.

  • They’re sensing properties which are then processing in the FSMs, and this is directing behavior by the things detecting things by those sensors.

How tight and how exactly?

  • All I can do is repeat.
  • If you think about it, what is being perceived by these various sensors are properties of the thing that are relevant to the action that the layer is going to produce.
    • The bottom layer sensors are not going to sense the coke cans, because that’s not relevant to the activity of avoiding obstacles.
    • Each sense organ at a particular layer is going to detect things which are relevant to that layer.
  • The take-away message is that what is perceived is not bare properties of things but stuff that is intrinsically linked to opportunities for behavior.
  • Using the world as its own model (third point, unexpected conclusion)

March 12th, 2014 Reading

Notions of components, systems, and interfaces serve as a starting framework for discussion. These definitions specify a decomposition:

  1. What are the components?
  2. What are the interfaces?
  3. What are the differences between everyone's conceptions?

Clark and Chalmers, “The Extended Mind”;

Introduction

Where does the mind stop and rest of the world begin?

  • There are two replies
    • The “skin and skull.”
    • It “just ain’t in the head.”
  • Clark and Chalmers propose a very different sort of externalism, active externalism
    • It’s “based on the active role of the environment in driving cognitive processes.”

Extended Cognition

  • Consider three cases of human problem solving:
    1. A person mentally rotates two-dimensional shapes and is asked questions concerning fit.
    2. A person can mentally rotate the shape or press the rotate button.
    3. A person can mentally rotate the shape or let their cognitive implant do it.
  • How much cognition is present in these cases?
    • If the rotation in case (3) is cognitive, by what right do we count case (2) as fundamentally different?
  • We often rely on external computing resources.
    • Pen and paper long multiplication.
    • Physical re-arrangements of letter tiles to prompt word recall.
    • Instruments such as the nautical slide rule.
    • General paraphernalia of:
      • Language
      • Books
      • Diagrams
      • Culture
  • In fact, these cases can be very real. (1) and (2) are options in the computer game Tetris.
    • Two types of actions
      Epistemic action
      Alter the world so as to aid and augment cognitive processes such as recognition and search.
      Demands a spread of epistemic credit.
      Pragmatic action
      Alter the world because some physical change is desirable for its own sake.
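
The sketch referred to above: a toy rendering (mine, not Kirsh & Maglio’s task code; the function and piece names are invented) of why cases (1) and (2) look computationally on a par. Whether the rotation is performed on an inner copy or by the game in response to the rotate key, the same transformation gets done and the same fit gets read off.

```python
# Toy contrast between mental rotation and the epistemic action of pressing
# the rotate key. Names and shapes are made up for illustration.

def rotate_90(cells):
    """Rotate a piece, given as a set of (x, y) cells, by 90 degrees."""
    return {(y, -x) for x, y in cells}

def fits(piece, slot):
    return piece == slot

L_PIECE = {(0, 0), (0, 1), (0, 2), (1, 0)}
SLOT = rotate_90(L_PIECE)          # a gap shaped like the rotated piece

# Case (2), epistemic action: the game transforms the piece; the player
# merely perceives the result.
world_piece = rotate_90(L_PIECE)
print(fits(world_piece, SLOT))     # -> True

# Case (1), mental rotation: the same transformation run over an inner copy.
imagined = rotate_90(set(L_PIECE))
print(fits(imagined, SLOT))        # -> True
```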

Active Externalism

  • The human organism is linked with an external entity in a two-way interaction, creating a coupled system that can be seen as a cognitive system in its own right.
    • All the components in the system play an active causal role.

Aizawa, “Extended Cognition”

March 11th, 2014 Seminar

Set up of Clark & Chalmers

  • A lot of diversity, but this is only on Clark and Chalmers.
HEC
Cognitive processes literally extend into the environment surrounding the organism, and human cognitive states literally comprise elements of that environment. (Note: not merely can be.)
Cognitive processes are implemented/realized/constituted by processes in the brain, body, and world. (Note not can be.)
HEMC
Cognitive processes depend on external …

There is a difference between human cognition and cognition simpliciter.

  • Two things we need for discussion:
    1. A distinction between causation and constitution.
    2. Some understanding of “cognitive processes.”
  • A standard picture of cognition is that it’s among the factors that drive behavior.
    • Another part is physical capacities.
  • These guys are, in effect, claiming that behavior is extended, but of course everyone thinks that.
    • We want to know whether cognition is extended.
  • If by “cognitive” you mean “information processing”, then of course it’s extended as well, because you can gather information from your environment.

Why do you think that the game is over when cognition becomes behavior?

  • Take-away: this is not old-fashioned, bad methodology; the question is just what you mean when you talk about cognition.
  • There are two kinds of arguments:
    1. Cognitive equivalence arguments
    2. Coupling arguments
  1. What are psychologists doing?
    • Should they be doing that?
  2. What do the folk think?
  3. What are the metaphysical joints?

March 25th, 2014 Seminar

The proper object of study

  • Should we interview psychologists about what they’re up to? A sociology of psychology.
  • Or should we legislate from the armchair? Capture the real psychological joints.

April 1st, 2014 Reading

Andy Clark, “An Embodied Cognitive Science?”

Introduction

  • Embodiment and situatedness have become important in:
    • Philosophy
    • Psychology
    • Neuroscience
    • Robotics
    • Education
    • Cognitive anthropology
    • Linguistics
    • And dynamical systems approaches to behavior and thought
Fish
  • Bluefin tuna are too weak, by a factor of about 7, to swim as fast as they actually do unless some other factors are in play.
  • Those factors involve exploiting the environment: the tuna uses currents and applies pressure at certain points to gain speed.
  • The prodigious swimming capacities of the bluefin tuna thus belong to the “fish-as-embedded-in” its local environment.
Robots

…instead of thinking about [the] control system as a center for commands to be executed by actuators, the body and its movements are taken as a system with its own dynamic characteristics

  • To understand the robot’s “brain”, a shift towards an embodied perspective is required.
Vision
Pure vision
The idea that vision is largely a means of creating a world model rich enough to let us “throw the world away”, allowing reason and thought to be focused upon the inner model instead.
  • Under this model, real world action functions as a means of implementing solutions arrived at by pure cognition.
  • Economy and efficiency are purchased by:
    1. The use of cheap, easy-to-detect environmental cues;
    2. The use of active sensing;
    3. The use of repeated consultation of the outside world in place of rich, detailed inner models.
  • “Just-in-time-representation”
  • “The world is its own best model.”
Action and affordance
  • “Ecological psychology”:
    • Bodily movement
    • Ecological context
    • Action relevant information available in the perceptual array
  • For example, diving birds ingeniously close their wings at exactly the right moment before hitting the surface of the water.
  • This approach can also help explain how an outfielder in baseball positions themselves to catch the ball.
Traditional model
The brain takes the data, performs complex computation, and then solves the problem and instructs the body where to go.
Second model
The problem is not solved ahead of time. Instead, the task is to maintain, by multiple real-time adjustments to the run, a coordination between inner and outer worlds (see the sketch after this list).
  • Replacing the notion of rich internal representation with the notion of less expensive strategies.
  • Clark: Tuning to higher-order invariants can help explain a wide array of adaptive responses, such as:
    • Visually guided locomotion
    • Rhythmic movement
    • The capacity to grasp and wield objects
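
A toy illustration of the kind of optical cue the “second model” can exploit in the outfielder case. This is my own sketch with arbitrary numbers, based on one standard reconstruction of the strategy (sometimes called optical acceleration cancellation), not necessarily Clark’s own formulation: for a simple projectile, the rate at which tan(elevation angle) grows is constant exactly when the observer stands where the ball will land, speeds up if the ball will land behind them, and slows down if it will land in front. A fielder can therefore steer by real-time adjustments that keep this optical rate constant, without ever computing the trajectory.

```python
# Demo of the higher-order invariant: only an observer at the landing point
# sees tan(elevation angle) rise at a constant rate. All numbers are arbitrary.

G, VX, VY, DT = 9.8, 15.0, 20.0, 0.05     # gravity, ball velocity, time step
T_LAND = 2 * VY / G                       # time aloft
X_LAND = VX * T_LAND                      # where the ball comes down

def optical_rate_profile(observer_x):
    """Rates of change of tan(elevation) for a stationary observer."""
    rates, prev, t = [], None, DT
    while t < T_LAND - DT:
        bx = VX * t
        if observer_x - bx < 1.0:         # ball is (horizontally) on top of them
            break
        by = VY * t - 0.5 * G * t ** 2
        tan_elev = by / (observer_x - bx)
        if prev is not None:
            rates.append((tan_elev - prev) / DT)
        prev = tan_elev
        t += DT
    return rates

for label, x in [("at the landing point", X_LAND),
                 ("too shallow (ball lands behind)", X_LAND - 15),
                 ("too deep (ball lands in front)", X_LAND + 15)]:
    r = optical_rate_profile(x)
    print(f"{label:32s} early rate {r[0]:7.3f}   late rate {r[-1]:7.3f}")
```

Only the landing-point case prints an (approximately) constant rate; a fielder who keeps adjusting the run so that the rate stays constant ends up where the ball comes down, with no trajectory computation anywhere.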

Beyond adaptive coupling?

  • A number of unresolved questions remain for these approaches.

Can the embodied, embedded approach contribute to our understanding of so-called “representation-hungry” problem-solving?

Adaptive coupling
Occurs when a system evolves a mechanism that allows it to track the behavior of another system.
  • The sunflower has evolved to track the daily motion of the sun.
    • The sunflower reliably covaries with solar position, and this is what it is evolutionarily meant to do.

Does it have internal representations?

  • Intuition: No.
    • There is nothing cognitive occurring.
  • The mark of the cognitive, then, is the capacity to engage in something like “off-line reasoning”: reasoning in the absence of that which our thoughts concern.
    • TOTO, a robot, does something like this: sonar inputs are mapped onto locations and landmarks. METATOTO uses simulated sonar to explore a virtual world (see the sketch below).
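
A hypothetical toy, not the actual TOTO/METATOTO systems: the same landmark-mapping routine can be fed live sonar (coupled, on-line) or stored/simulated readings (off-line), which is the sense in which a system can “explore” an environment it is not currently coupled to. The signatures and positions are made up for the example.

```python
# One mapping routine, two sources of input: live versus simulated sonar.

def build_landmark_map(readings):
    """Turn (position, sonar_signature) pairs into a crude landmark map."""
    landmark_map = {}
    for position, signature in readings:
        if signature == "long_flat_echo":
            landmark_map[position] = "wall"
        elif signature == "open_echo":
            landmark_map[position] = "corridor"
    return landmark_map

live_sonar = [((0, 0), "long_flat_echo"), ((0, 1), "open_echo")]
simulated_sonar = [((5, 5), "open_echo"), ((5, 6), "long_flat_echo")]

online_map = build_landmark_map(live_sonar)        # coupled to the world
offline_map = build_landmark_map(simulated_sonar)  # reasoning in the absence
print(online_map)                                  # of what it concerns
print(offline_map)
```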

How different is this account from more traditional solutions? Will it work for all kinds of off-line reasoning or only some?

  • It treats reasoning as something that already exists on the other model.
  • Consider questions about “Should US gun manufacturers be held liable for manufacturing more than could be consumed by the legal gun market?”

Simple versus radical embodiment

Simple embodiment
Facts about embodiment constrain a theory of inner organization and processing.
Radical embodiment
Profoundly altering the subject matter and theoretical framework of cognitive science.
  • Radical embodiment theses all involve one or more of these claims:
    1. That understanding the complex interplay of brain, body, and world requires new analytic tools and methods, such as those of dynamical systems theory.
    2. That traditional notions of internal representation and computation are inadequate and unnecessary.
    3. That the typical decomposition of the cognitive system into a variety of inner neural or functional subsystems is often misleading, and blinds us to the possibility of alternative, and more explanatory, decompositions that cut across the traditional brain-body-world divisions.
  • Support for claims (1) and (2):
    • Work on infant motor development, adult motor actions, and mobile robotics.
    • Suspend judgement until further empirical advances arrive.
  • Clark: As tasks become more representation-hungry we will see more evidence of some kinds of internal representations.
  • Support for claim (3) can come from research that blends the two modes:
    • Ballard et al.’s use of the notion of “deictic pointers”.
    Pointer
    In artificial intelligence, an inner state which can act both as an object of computation and as a ‘key’ for retrieving additional data structures or information.
    • The external world is analogous to computer memory
    • Changing gaze is analogous to changing the memory reference in silicon computers (see the sketch below).
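
A toy rendering of the deictic-pointer idea (my sketch; the class and method names are invented, and Ballard et al.’s actual models are far richer): the agent stores only a binding from a role to a location and re-reads the world through that binding instead of keeping a rich internal copy of the scene, so changing gaze works like changing the address being read.

```python
# The environment plays the role of memory; the agent holds only a pointer.

world = {
    (0, 0): {"colour": "red", "shape": "block"},
    (3, 1): {"colour": "blue", "shape": "cup"},
}

class DeicticAgent:
    def __init__(self, world):
        self.world = world
        self.gaze = None               # the pointer: a location, not a copy

    def fixate(self, location):
        # Re-binding the pointer is the analogue of changing the memory address.
        self.gaze = location

    def attended_property(self, key):
        # Dereference the pointer: consult the world just in time, instead of
        # a stored internal model of the whole scene.
        return self.world[self.gaze][key]

agent = DeicticAgent(world)
agent.fixate((0, 0))
print(agent.attended_property("colour"))   # -> red
agent.fixate((3, 1))
print(agent.attended_property("shape"))    # -> cup
```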

Conclusions

  • Embodied and embedded approaches have a lot to offer.
  • The major challenge is representation-hungry problems.
  • The gulf between the embodied and embedded skills of the tuna and the de-coupled skills of the moralist and mathematician remains.

April 15th, 2014 Seminar

  • According to Noway, one of the proponents of intellectualism, there is some connection to what Gilbert Ryle is up to.
    • It takes rational deliberation to be the basic kind of cognitive operation.
    • The thesis of radical enactivism that Hutto is pushing in his paper is that basic minds and all basic cognitive activity are contentless, except those activities involving language.
  • Radical Enactivists are clear about what their thesis is not, but it isn’t clear what their thesis is.
    • It appeals to the same body of work we’ve been reading the last few weeks: Brooks, Beer, and similar work on “lower level cognitive activity.”
    • The proponents of these EEE movements focus on very low-level activities, not representation-hungry cognitive activity.
      • The pro-representationalists’ reply: this will not scale up.
      • “Damage control”: interestingly cognitive stuff will still require representation.
  • Central to the argument is the claim that cognitive processes will not decompose.
    • There will be no way to break the system up such that a computational, representational analysis is tractable or illuminating.
  • The point about the ant and the beach is the close coupledness of the ant, the beach, and its path.
    • There is some experimental evidence that bears on this (it came in after Haugeland wrote the paper) about how ants navigate when foraging for food sources.
    • The theory is that the ant finds its way by counting the number of steps (a toy path-integration sketch follows this list).
      • What’s interesting about this case is that it’s really low-level, but it’s representational.
  • Humans are really good at moving things and reaching for things and manipulating things with our hands.
    • “Manual skills and dexterity can be explained entirely by previous developments.”
  • What distinguishes representation from mere signifying, or some other information-carrying sense, is that representation can misrepresent: there are accuracy conditions.
    • Determinacy: not something “near”, but this thing here.
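
A toy path-integration sketch of the step-counting idea mentioned above (my illustration, not the cited experiments’ model; the fixed stride length and the example route are made up): a single running “home vector” computed from headings and step counts stands in for the whole outbound route, and it can misrepresent (for instance, if steps are miscounted), which is why even this low-level mechanism looks representational.

```python
# Dead reckoning by step counting: accumulate a home vector from the
# outbound legs, then read off the distance and bearing back to the nest.
import math

def home_vector(legs):
    """legs: list of (heading_in_radians, stride_count) for the outbound trip."""
    stride = 1.0                         # assume a constant stride length
    x = y = 0.0
    for heading, count in legs:
        x += math.cos(heading) * count * stride
        y += math.sin(heading) * count * stride
    # The single stored quantity: distance and bearing back to the nest.
    return math.hypot(x, y), math.atan2(-y, -x)

outbound = [(0.0, 30), (math.pi / 2, 40)]        # 30 strides east, 40 north
distance, bearing = home_vector(outbound)
print(f"home vector: {distance:.0f} strides at {math.degrees(bearing):.0f} degrees")
# -> 50 strides pointing back toward the nest; miscounted strides would
#    misrepresent the nest's location, hence the accuracy conditions.
```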

April 22nd, 2014 Seminar

Egan

  • Egan’s considered view:
    • IO contents are indispensable.
    • They characterize what the system is doing in cognitive terms.
    • This is all that IO contents are: a characterization of what the system is doing.
    • Theories need these.

    Big system or little subsystem?

    • Both.

    It might just be a biological account?

    • An explicit explanation of a cognitive capacity is indispensable to cognitive science.
  • What Egan is claiming is that all of those things together are enough to see why a system is successful at doing something cognitive.
    • These things facilitate our ability to characterize what is going on.
  • Why do we have this mechanism? This might require an evolutionary story.
    • What is it doing? This is part of the computational story of input/output representations. The computational story will say how it’s doing it.

Two possible moves:

  1. The neurological story does not count as explanans for the cognitive explanandum.
  2. It does count as an explanation, but there’s something distinctly cognitive as well.
  • Egan endorses (2): it is unifying or helpful.
  • Ramsey thinks that cognitive science has to trace a fine line between under-explaining and over-explaining.
    • It might be snuck in.
    • Egan suggested last week that we should not be concerned with over explaining.
      • It’ll look like it’s disappeared if it’s reduced, Egan thinks.
  • What’s Nagel’s point about reductive explanations?
    • Subjective vs. objective: Nagel argues that consciousness is necessarily subjective, and so there will never be an objective account of something necessarily subjective.
  • Very often, when we have reductions that span levels (“very interesting reductions”), we need a gloss.
    • Is the difference between the consciousness problem and the intentionality problem in the gloss?
  • Content doesn’t play a causal role; it’s abstract. Someone who claims to be a realist about content cannot be talking about anything causal.
    • They might be saying that the causally efficacious property of some state is the property that grounds the attribution of content.
    • The idea that content can “get its hands on the wheel” is a different sense of content.
      • They might think that a structure or property has a relation which grounds attribution of content.
  • What does it mean to be realist about content?
    • Lots of philosophers are trying to work out what’s going on with mental representation; it’s not clear what they’re claiming, but the question is whether content plays an individuative role.
    • Someone who’s a realist about content isn’t saying that witches or phlogiston are real.
    • Does content play an individuative role in a theory? Is content an intrinsic property of the system? Except for mathematical properties, Egan thinks not.
    • Mathematical content is a type of IO content …
    • Some structure gets posited and it gets assigned a content.
  • Egan thinks that, insofar as her view is about mechanisms, let’s postpone it …
    • Whether IO contents are individuative or not.
    • If the system or structure or state didn’t have that content, it would be something else.
  • Last week, there was some discussion about whether her view was realist about content, but maybe this isn’t the best way to characterize the positions.
    • Nobody, when they’re being careful, thinks that content is playing a causal role. So what is it to be a realist or anti-realist about content? It’s not the same as being a realist about, say, a scientific construct.
    • Egan is saying that thinking about views about content in realist and anti-realist terms is not really that useful. Who’s on the side that we might think of as realist?
      • Fodor, Millikan, Searle.
  • We talked about measurement construals.
    • In reductive cognitive science, content is not individuative.
    • Well, the measurement construal of attitudes and attributions
  • On that view, content isn’t individuative, it isn’t even essential in ordinary practice, but “put this aside.” Content isn’t essential.
  • To distinguish some questions:
    1. Is mental representation real? Well, yes, it is a capacity of people and organisms. A whole-system capacity.
    2. Are representations real?
    3. Does a particular theory or family of theories posit representations?
  • Do these things essentially have content?
    • They get at causally efficacious properties by attributing content to an underlying causal role.

Sprevak

Formulation

  • There are two points here:
    1. If you are a realist about mental representation that has its content essentially, then you had better have a story that naturalizes content, or at least a story on which content could be naturalized.
    2. Being an eliminativist about content.
  • Sprevak proposes a third option he calls Fictionalism.
Fictionalism
Fictionalism about a domain of discourse claims that while its propositions are fact-stating, they do not aim at truth; they aim at some other goal.
  • The key idea is that the fictionalist is not ontologically committed to anything.
    • Some examples from page 4:
      1. Mathematical objects, $2 + 2 = 4$
      2. Torture is wrong.
      3. The course of biological evolution could have been different.
      4. Phlogiston does not exist.
  • For Sherlock Holmes, the story is somehow a part of the world.
  • On page 5 he goes through possible uses of figurative language, reasons why you might be a fictionalist.
  • Turning to neurorepresentational fictionalism: cognitive science makes widespread use of neural representations. So this raises the question, on page 8, of how neural representations represent the external world. A couple of examples:
    1. Cannot be social conventions.
    2. Not mere response selectivity.

Objections

  • The whole motivation for going this route is to avoid the problem of naturalization, but he also concedes that it doesn’t even do this.
    • Allegedly, all forms of fictionalism …

Questions for next time

  1. If the best scientific theory of the mind, cognitive science, posits nothing recognizable as representations, then would folk psychology be false?
  2. Is folk psychology itself committed to internal representations?

April 29th, 2014 Seminar

Ramsey, W. M. (2007). “Implications of a Non-Representational Psychology.” In *Representation Reconsidered* (pp. 222–233). Cambridge: Cambridge University Press.

 


  1. Are the rules of logic the norms of reasoning? Are the rules of logic pre-existing intuitions systematized? 
  2. If we map the structural, syntactic, and semantic features of the way we talk about belief states, agents, and doxastic attitudes, the former is accessible to us and the latter is inaccessible. But if we find a feature of the former, we can reasonably map it onto the latter.

    Often logic is called “the norms of believing.” Does this pose a problem? If logic is a norm of reasoning and we commonly don’t reason well (affirming the consequent, for example), then maybe we can’t move from one to the other.

    If we discover a mental structure which conflicts with logical structure, like affirming the consequent being so common, should we affirm the consequent more often as a result? Alternatively, does it raise the perceived reliability of the seemingly invalid (from a logical point of view) mental structure? (The two schemas are spelled out below.) If folk psychology is generally valid and we want to vindicate it, and actual, empirical mental structures follow invalid logical syntax, does that mean we want to vindicate invalid logical syntax too?
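
    Schematically, the contrast at issue (a standard textbook rendering, added for reference):

    ```latex
    % Modus ponens (valid) versus affirming the consequent (invalid):
    \[
      P \to Q,\;\; P \;\;\therefore\;\; Q
      \qquad\textrm{vs.}\qquad
      P \to Q,\;\; Q \;\;\therefore\;\; P
    \]
    % Counterexample to the second schema: let P be false and Q true; both
    % premises are then true while the conclusion P is false.
    ```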

  3. I don’t understand when you’re “supposed to stop describing” for a closed system. I see that account (4) stops describing when the internal state of the computer comes to be required to continue explaining, but I don’t understand why in account (3), supposedly “closed”, you don’t say that bits come to be charged via electricity, for instance.

    I think that even accounts (1), (2), and (3) really require an arbitrarily large conjunction of state descriptions to be “closed.” 

  4. I want to know what a biological or neurological taxonomy of types would look like. Our ability to type artifacts and concepts seems sufficiently general that “anything can fit in the bin”; that is, we can store tokens of mountains and of transcendental idealism. If a token couldn’t be contained in any of our brain’s possible types, would we be in a position to know? 
  5. “We don’t care if you understand what representation is, we do, and that’s one.” When scientists do this, or where scientists could possibly do this in other fields, you can still ask them where the representation is and how it represents.

    The computer scientist can tell you that a letter on the screen is represented in the computer’s software as an ASCII character code (a number between 0 and 127) and that those numbers are realized in the computer’s hardware by many, many on-off switches. On-off switches do not equal represented numbers, and represented numbers are not represented letters, but they all share a resemblance that can be exploited (see the sketch at the end of this item).

    What I want to know is whether the cognitive scientist can take a cognizer’s utterance about some state, say “I am hungry”, and use folk psychology to say that the utterance represents the utterer’s inner state, such that if you look at the inner state you’ll find the processes by which the brain finds out and stores facts about the body (and more generally the non-mind world). 
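
    The sketch mentioned above: a concrete version of the software/hardware layering in the ASCII example (standard Python; nothing here is specific to any particular machine).

    ```python
    # The software-level code and the bit pattern behind an on-screen letter.
    letter = "A"
    code = ord(letter)             # the character code: 65 (ASCII/Unicode)
    bits = format(code, "08b")     # the pattern the on-off switches realize
    print(letter, code, bits)      # -> A 65 01000001
    ```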

  6. There’s a paradox of analysis here. On one hand, cognitive theorists and any competent user of English know what a representation is; we use representations all the time. There are only two possibilities: we know what representation is or we do not. If we do, then we just need to “check what’s in the head” and see if it’s a representation. If we don’t, what’s the use of writing a book on it? To clarify what we could possibly mean by a “mental model” or “mental representation”? We don’t know what representation is in general, never mind whether they can be in the head. 
  7. There is no “primitive”, I think, in geometry. A point can be expressed in terms of a line or a plane. A line can be expressed in terms of two points or a part of a plane. A plane can be expressed in terms of three lines or three points. 
  8. Only with regards to fitness
  9. I think that this may relate to debates in epistemology about brains in vats. So, here’s an argument for thinking that they are related issues, and then I’ll give an additional argument for how one might give insight to the other.

    Assume that the brain represents its environment. Cummins thinks that a representation can represent all content with which it is isomorphic. Ramsey thinks that the content that matters is what this brain is using now. What’s at issue in brain-in-vat skeptical arguments is that the brain’s internal representation is isomorphic with both what the realist wants to call objective reality and what the skeptic wants to call “brain-in-vat land” (or the images that the vat is giving the brain, or something). What’s problematic about the brain-in-vat skeptical concerns is that the internal representation is isomorphic with both pictures.

    So how is this insightful? Well. Actually. In hindsight, I’m not sure; I’d have to think about it. I think this successfully relates the issues. Ramsey’s solution to BIV challenges might be to appeal to the most explanatorily useful theory, because objective reality is what this brain is using now, or something … 

  10. It’s not that we don’t know what representation is or that there is no representation, but rather that representation with regard to cognizing is fuzzy and poorly understood. 
  11. I think what’s confusing here is that the isomorphism being exploited is not a relation between the data structures and human norms about restaurants, but rather between the input stories and the “background knowledge” data structures. 
  12. Of course (I think) it’s real and we’re not just assigning something to physics. You can raise skeptical or linguistic or Wittgensteinian worries about words and how we use them, and in fact I think that within the realm of those worries you might be right. But the present project is not trying to resolve worries about reductionism or scientism or realism. Just as someone who studies grains of sand can grant that there is some ambiguity about heaps and yet, when they see a heap of sand, know very well that it’s a heap of sand, by the same reasoning I’m okay with being unsure, with regard to the ontology of consciousness, whether there is a brain with a mind or just a series of atoms; when I’m reasoning about the mind I can put that worry aside. “Keep your metaphysics and linguistics out of my philosophy of mind.”

    (Doesn’t sound very strong, I know.)