# Thursday, January 25, 2024
- This class could also be considered an "intermediate philosophical introduction to cognitive science."
- Why "The Language Instinct"? Because one, it's a good book that covers all of the main topics we're going to cover in class. But also, because it's an older book (published 1994) and so it's outdated and Pinker's main thesis has been challenged since then.
- Pinker's main thesis is that language is primarily an innate process. (Hence, language _instinct_.) He tends towards nature rather than nurture.
- Most popular science / philosophy books tend to simplify things for a broader audience, ignoring or obscuring objections.
- A big part of this class will be about learning how to efficiently & critically engage with scientific & philosophical literature: critiquing & comparing them, and refining our beliefs.
# Hume's (1711-1776) empiricism
#### Monday, January 29, 2024
- **The influence of Newton and the scientific revolution**: Hume was responding to the intellectual environment of the time, which was heavily influenced by Isaac Newton and his laws of motion. To Hume, the mind was knowable in the same way the physical world was. That is, while we're limited in our understanding, one can observe and make inferences about the mind.
- **Empiricism as the Origin of Ideas**: Hume argued that *all knowledge originates in sensory experience*. Hume's empiricism was a response and challenge to the rationalism of Descartes and others. While Descartes emphasized innate ideas and deductive reasoning, Hume emphasized sensory experience and inductive reasoning.
- **Problem of Induction**: Hume argued that causation is never directly observable. Instead, we only observe certain sequences of events and form habits of expectation. For example, when a billiard ball hits another, we expect the other to move, but we don't directly observe any inherent causal force. We therefore cannot justify inductive inferences (like expecting the sun to rise tomorrow because it always has) with certainty, as they are based on habit and not logical necessity.
- **Innate mental faculties or algorithms**: Hume posited that the human mind came equipped with certain faculties or algorithms. He believed the mind naturally forms associations between ideas based on principles like similarity, contiguity, and cause and effect.
- **Theory of Passions**: Hume's work on emotions or passions is significant and explores how they arise in the mind and influence behavior. For Hume, emotions are impressions in the same way vision and hearing are. This contrasted with other philosophers who considered emotions as secondary to rational thought.
- **Hume's Fork**: Hume makes a distinction between 'relations of ideas' (necessary truths, like maths and logic) and 'matters of fact' (empirical knowledge about the world, which could be otherwise). For matters of fact, Hume introduces his problem of induction. This is a critical challenge to the certainty with which we can claim empirical knowledge, and it highlights the limits of human understanding.
- **Skepticism about Personal Identity**: Hume questioned the idea of a permanent, unchanging self. He proposed that what we call the 'self' is nothing more than a bundle of perceptions, which are in a constant state of flux.
# Rationalism: Plato & Descartes
#### Thursday, February 1, 2024
Rationalism: The most basic foundation of human knowledge is reasoning, not sensation.
Descartes (1596-1650)
- Sense perception is unreliable.
- Our reasoning ability must be innate.
- Examines and reasons about candle wax.
- I exist > I am a thinking thing > I can trust my "clear and distinct ideas" > God exists > I can (mostly) trust my senses
Plato (428 BC - 348 BC)
- Our senses can't tell us general and perfect geometric truths.
- We don't actually learn things, like math theorems, we "recollect" them. ("Learn things" means get things outside of ourselves rather than inside ourselves.)
- Socrates' square diagram helps to stimulate the slave boy's reasoning.
# Pinker: The language instinct & generative linguistics
#### Monday, 2/5/2024
#### Lecture notes
Generative Linguistics is often informally called Chomsky Linguistics.
- Linguistics is a branch of cognitive psychology.
- It is the study of a psychological system—a "mental organ" called the "language faculty."
- This system allows us to produce and understand complex and precise language.
- It's also the system that allows us to even acquire language.
- Pinker became well-known after writing *The Language Instinct* and went on to become a best-selling author who drifted from his specialty
- Chomsky calls it "Universal Grammar", Pinker just calls it "language instinct"
Pinker: Why should we believe in a specialized, innate language faculty?
- Natural language is universal to humans but absent from all other creatures.
- The rules governing language are incredibly more complicated than most rules small children can normally learn.
- Children acquire language in ways that are surprisingly independent of environmental factors. (In some cases they don't encounter any language, at most a proto-pidgin: Nicaraguan Sign Language, a fully developed creole, emerged spontaneously among deaf children in Nicaragua during the 1980s.)
- Language develops in children in ways that seem disconnected from their general-purpose learning abilities (e.g. the "critical period")
- Language is doubly dissociable from other aspects of cognition: someone can be a fluent speaker but mentally disabled in other ways, or have a selective linguistic impairment while being otherwise psychologically normal.
- Lesions to specific brain areas are associated with specific kinds of language impairment. (e.g. Broca's aphasia)
- Some specific genetic conditions are associated with specific kinds of language impairments.
Genie Wiley
Genie was discovered in 1970, severely isolated and abused for the first thirteen years of her life. Despite intensive language instruction, Genie never achieved full language mastery. This suggests that there is a critical period for language learning.
Poverty of Stimulus/Input (PoS) Arguments
- The general form: People come to know things, and behave in ways, that they could not have learned entirely from their senses / environment. *Therefore, at least some of what they come to know must be innate.*
- Many of Pinker's / Chomsky's arguments are instances of this form.
- Pinker gives many examples:
- Children and irregular verbs: "The child holded the baby rabbit." (This is an instance of children *over*-generating. They all do, at the same age.)
- Nicaraguan Sign Language, and in general, the signing of deaf children born to deaf parents who never fully learned sign language.
- Pidgins/creoles
- Chomsky gives many PoS arguments that depend on precise descriptions of specific grammatical phenomena.
- PoS arguments given by rationalists as well: Descartes & melting wax. Socrates & slave boy.
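The over-regularization pattern behind "The child holded the baby rabbit" can be sketched as a toy program, assuming nothing beyond what the notes say: a learner that has induced the regular "+ed" rule but hasn't yet memorized the exceptions will overgenerate. (The verb list and function names here are my own illustration, not Pinker's.)

```python
# Toy illustration of children's over-regularization of the past tense.
# A learner that knows only the regular "+ed" rule overgenerates on
# irregular verbs, producing forms like "holded" that it never heard.
# (The irregular list is a small hypothetical sample.)

IRREGULAR_PAST = {"hold": "held", "go": "went", "eat": "ate"}

def regular_rule(verb: str) -> str:
    """The general rule the child induces: past tense = verb + 'ed'."""
    return verb + "ed"

def child_past_tense(verb: str, knows_exceptions: bool) -> str:
    """Before memorizing exceptions, the child applies the rule everywhere."""
    if knows_exceptions and verb in IRREGULAR_PAST:
        return IRREGULAR_PAST[verb]
    return regular_rule(verb)

print(child_past_tense("hold", knows_exceptions=False))  # "holded" (over-regularized)
print(child_past_tense("hold", knows_exceptions=True))   # "held"
```

The interesting empirical point is that the over-regularized forms are evidence the child has a productive *rule*, not just a memorized list.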
#### Mind dump next day
Steven Pinker is a linguist and author of *The Language Instinct*, which argues that the language faculty is innate in humans. Pinker belongs to the generative grammar branch of linguistics, which was fathered by Noam Chomsky. Chomsky's work in linguistics, starting in the 1950s, was revolutionary and challenged the behaviorist school of thought that was popular at the time. Chomsky argued that humans have a built-in capacity for language, a Universal Grammar, which is reflected in the structure of all languages. His work inspired an entire generation of thinkers and even opened entire fields of academic inquiry. In *The Language Instinct*, Pinker presents an argument for the innateness of the human language faculty. In particular, Pinker offers the following arguments:
- Poverty of Stimulus/Input: Children learn language on their own despite a lack of formal instruction or complete examples. For example, starting in the late 1970s, deaf Nicaraguan children were brought together and, despite having virtually no prior language instruction nor a shared language, developed their own full language: Nicaraguan Sign Language.
- Universal Grammar: A full language has developed in every human society and each shares deep common structures, even despite great distances and relative isolation.
- Brain structure: People who have damage to specific areas of the brain, as in Broca's aphasia, have difficulty with language, while people with damage to other parts of the brain keep their full language faculty intact. Lastly, people with intellectual disabilities often retain advanced language abilities despite deficits in other intellectual abilities.
Pinker's work is heavily influenced by Chomsky but challenges and expands parts of it. In particular...
- Pinker is more convinced by a Darwinian evolutionary explanation for the language instinct.
- Pinker charges that Chomsky's work is academic and abstruse, lacking a more comprehensive approach. Pinker draws from other fields such as anthropology and biology.
# Language Instinct Cont.; Basic Research Skills
#### Thursday, 2/8/2024
#### Class Notes
- If language is innate, why do so many neurotypical people struggle with communication?
- Language is one part of communication. There are other cognitive resources like memory, planning, mind-reading, etc.
- When you play the "telephone game" with autistic people vs. allistic people, communication seems to go well within each group (intra-group) but poorly between the groups (inter-group).
- A good thing to do for assignment #1 is to find a case where Pinker exaggerates someone's research. **This is not hard to do.**
- Pinker says, "virtually every sentence that a person utters or understands is a brand-new combination of words..." This might seem unbelievable but once you get past the formulaic sentences and include things like intonation, body language, and so on, researchers find this is true.
- Sometimes we use the same string with different meaning: "Can you hand me *that*?"
- Pinker is trying to illustrate that there must be some mechanism, some recipe or program, rather than "a repertoire of responses," to figure out the meaning.
- Isn't Pinker under-emphasizing social learning? For example, learning conjugation and verb placement.
- Pinker is more of a descriptive linguist rather than a prescriptive linguist.
#### Class Notes on Assignment #1
- *The Language Instinct* has a "Notes" section where Pinker gives references.
- Unfortunately, the Kindle version of the book doesn't show the page numbers.
- Then, you can go to the "References" section to find more information.
- Then, you can start with simply Googling the source. That's often easiest.
- If you can't find it through Google, go to the Hunter College Library search portal.
- Go to Journals, type in the journal you're looking for, then it will send you to the journal page.
- Once you're there, you can search for the title of the resource.
- If that doesn't work, check sci-hub.
- Algorithm in brief:
1. Search sci-hub
2. Search Google
3. Search Hunter College library
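The algorithm in brief can be sketched as a generic fallback loop: try each search method in order and stop at the first hit. (The searcher functions below are hypothetical stand-ins, not real APIs.)

```python
# The source-hunting procedure from class as a simple fallback loop:
# try each search method in order, stop at the first hit.
# (The searcher functions are hypothetical placeholders, not real APIs.)

def find_source(title, searchers):
    """Return the first result any searcher finds, else None."""
    for search in searchers:
        result = search(title)
        if result is not None:
            return result
    return None

# Usage: plug in stand-ins for sci-hub, Google, and the library portal.
searchers = [
    lambda t: None,               # stand-in: sci-hub found nothing
    lambda t: f"google-hit:{t}",  # stand-in: Google found it
    lambda t: f"library-hit:{t}", # stand-in: never reached
]
print(find_source("The Language Instinct", searchers))  # google-hit:The Language Instinct
```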
# Elizabeth Spelke on Core Knowledge
#### Thursday, 2/15/2024
#### Class notes
- Spelke is trying to use precise experimental evidence to make concrete progress on exactly which things babies have in their minds and how they're able to build on that at later life stages.
- There is significant evidence that humans and some other animals possess systems of "core knowledge". Some properties of these systems:
- They give us extremely basic information about how the world works, which would be difficult to get without them.
- They are shared by adult humans, infants, and some animals.
- They seem to be innate: they are already functioning very early in life.
- They are "domain specific": they are used to think about specific kinds of things or information
- They are "task specific": some situations elicit their operations, but not others
- They are "encapsulated": they work largely independently of each other and of other parts of the mind
- They form part of the basis from which we develop more advanced, culturally specific kinds of non-innate knowledge.
- The Object System
- Allows us to represent objects that are "complete, connected, solid bodies that persist over occlusion and maintain their identity through time."
- Works even when objects change some of their superficial properties.
- Allows us to distinguish different objects from each other
- Allows us to do addition and subtraction on very small numbers (up to about 3).
- The Numerosity System
- "serves to represent approximate numerical magnitudes"
- Used to represent sets rather than individual things (and doesn't work for very small sets, which are handled by the object system).
- It allows us to distinguish large sets that stand in approximately 2:1 numerosity ratios, but not smaller ratios (e.g. 3:2).
- It works independently of the size and area of the objects, and independently of whether they are (e.g.) seen or heard.
- Learning to Count
- We start out using the object system to understand the meaning of "one" and the numerosity system for larger numbers
- 2, 3, 4, etc. (All start out meaning "some things")
- Slowly, with help from natural language, we learn to combine representations from both systems as we give each number its own meaning.
- This gives rise to a new body of knowledge of natural numbers
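The division of labor between the two systems can be sketched as a toy discrimination rule, using only the figures from the notes above (exact tracking up to ~3 items by the object system; a roughly 2:1 ratio threshold for the numerosity system). The function and constant names are my own illustration, not Spelke's.

```python
# Toy sketch of the two core number systems described by Spelke.
# Small sets (up to ~3) are tracked exactly by the object system;
# larger sets are compared by the approximate numerosity system,
# which only discriminates ratios of roughly 2:1 or greater.

OBJECT_SYSTEM_LIMIT = 3   # the object system handles sets this small exactly
RATIO_THRESHOLD = 2.0     # the numerosity system needs ~2:1 to discriminate

def can_discriminate(a: int, b: int) -> bool:
    """Can an infant tell a set of `a` items from a set of `b` items?"""
    if a == b:
        return False
    if a <= OBJECT_SYSTEM_LIMIT and b <= OBJECT_SYSTEM_LIMIT:
        return True                        # exact object-file comparison
    big, small = max(a, b), min(a, b)
    return big / small >= RATIO_THRESHOLD  # approximate magnitude comparison

print(can_discriminate(2, 3))    # True  (object system, exact)
print(can_discriminate(8, 16))   # True  (2:1 ratio)
print(can_discriminate(8, 12))   # False (3:2 ratio, too close)
```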
# Justin Garson: Nature/Nurture Incoherence; Proposing Robust/Plastic
#### Thursday, 2/22/2024
#### Lecture notes
- The question of whether a psychological trait is "innate" is meaningless. We should stop asking such questions.
- There is no good definition of "innate," and no good way to clarify questions about nature vs. nurture.
- The same goes for attempts to decide which traits are caused by genetics and which are caused by environment.
- These aren't just difficult in practice to separate, according to Garson. They are conceptually incoherent distinctions.
- We should shift to a different way of talking:
"To what extent is a trait robust or plastic with respect to given genetic and environmental factors?"
#### Lecture discussion
- The discussion of genes versus environment is further complicated when considering our ability to edit genetic information.
- The answers to the questions about the underlying causal mechanisms are really complicated and vary case by case. So Garson offers the robust/plastic distinction as an alternative: it describes a statistical phenomenon, and while it says nothing directly about mechanism, it still captures much of what we were trying to get at with the innate/acquired distinction.
# Muhammad Ali Khalidi: Innateness as a natural cognitive kind
#### Monday, 2/26/2024
#### Lecture notes
- Innateness is a "natural cognitive kind"
- A natural kind is a category or concept that is indispensable for scientific purposes.
- A natural cognitive kind is needed for cognitive science in particular.
- Innateness is a "cluster concept"
- A cluster concept is one that is made up of a cluster of other related concepts. ("polythetic" as opposed to "monothetic")
- It is true of an object if the object possesses enough of the properties in the cluster.
- Examples of cluster concepts that are natural kinds: cancer, species concepts (e.g. homo sapiens), many mental illness categories (e.g. depression).
- Specifically, innateness is a concept that clusters together the following properties. (Traits that have a lot of these can be considered innate.)
- Triggering (or more properly, triggerability). Can be acquired in conditions of relative informational impoverishment.
- Lack of learning. Need not be acquired as a result of processes such as inference, memorization, conditioning, association, exploration, experimentation, repeated observation, and imitation.
- Early onset. Acquired relatively early in ontogeny.
- Invariance. Acquired across a broad range of environments.
- Canalization. Buffered against environmental variation. (For example, by means of "cognitive impenetrability.")
- Pancultural. Present in all cultures, even though it may not be universal or monomorphic. (e.g. language)
- Informational encapsulation. Insulated from other cognitive content, functions independently of other cognitive systems. (e.g. language, aspects of visual perception)
- Cognitive impenetrability. Resists modification by other cognitive capacities.
- Critical period. Acquired only or most effectively within a developmental window.
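Khalidi's polythetic ("cluster") test can be sketched as a simple membership count: a trait counts as innate if it has enough of the clustered properties, not necessarily all of them. (The threshold of 5 and the example property sets below are illustrative assumptions of mine, not figures from the paper.)

```python
# Toy sketch of Khalidi's cluster concept of innateness: a trait is
# innate if it possesses "enough" of the clustered properties.
# The threshold and example traits are illustrative, not from the paper.

CLUSTER = {
    "triggering", "lack_of_learning", "early_onset", "invariance",
    "canalization", "pancultural", "encapsulation",
    "impenetrability", "critical_period",
}

def counts_as_innate(trait_properties: set, threshold: int = 5) -> bool:
    """Polythetic test: enough cluster properties, not all of them."""
    return len(trait_properties & CLUSTER) >= threshold

# Language plausibly has many of the properties from the list above:
language = {"triggering", "early_onset", "pancultural",
            "critical_period", "encapsulation", "canalization"}
print(counts_as_innate(language))          # True (6 of 9 properties)
print(counts_as_innate({"early_onset"}))   # False
```

Note how this differs from a monothetic definition: no single property is necessary, which is what lets the concept tolerate blurry boundary cases.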
#### Lecture discussion
- Khalidi is describing how things work at a more abstract level of science (biology, psychology). The lines between concepts there are often blurry (e.g. cancer, mental illnesses, species), so we need an account that accommodates that blurriness. Unfortunately, any such account will have limitations.
- Khalidi on natural kinds: "On this view, categories can defeasibly be considered to correspond to natural kinds if they pertain to our settled scientific theories, are projectable, and are genuinely explanatory."
- They correspond with our scientific theories and make predictions and explanations.
- "Canalized" basically means this trait is shielded from environmental variation.
- The diagram in Khalidi's paper: He thinks the things on the list are causally related to each other. They co-occur non-accidentally because some of them are responsible for the others.
- So for example, Canalization can be causally explained by the critical period, cognitive impenetrability, and/or information encapsulation.
# Benjamin Lee Whorf: The Relation of Habitual Thought and Behavior to Language
#### Monday, 3/4/2024
#### Lecture notes
- Lera Boroditsky (1976–) is a cognitive science professor at UC San Diego and a prominent neo-Whorfian. (She gave the TED talk, "How language shapes the way we think.")
**Whorf**
- "How we describe a situation" → "how we behave in it"
- e.g. Using the ambiguous word "empty" ("nothing inside" vs. "nothing solid or liquid inside") to describe some steel drums led to a fire that Whorf had investigated.
- Thinking of a glow heater as a "coat hanger" and its switch as a "light switch" led to another fire.
- Descriptions of time → understanding of the nature of time
- European languages use plural noun phrases to describe measurable units of time: "he stayed for *ten days*"
- Hopi doesn't allow this; a Hopi speaker would instead say: "he left *after the tenth day*"
- European languages use ordinary nouns to refer to times of day, year, etc: "They met in *spring*."
- Hopi speakers use something more like adverbs for this: "Springly, they met." (No exact translation.)
- For Europeans, "Concepts of time lose contact with the subjective experience of 'becoming later' and are objectified as counted quantities, especially as lengths, made up of units as a length can be visibly marked off into inches." Implied: it's not like this for speakers of Hopi.
- As a result of differences in grammar, SAE (Standard Average European) speakers think of the world as being made up of things, Hopi speakers think of it as being made up of "eventings" (207)
- This shows up in European vs. Hopi "habitual behavior" e.g. how we prepare for future events.
# Bloom and Keil: Thinking Through Language (A nuanced take on Sapir-Whorf)
#### Thursday, 3/14/2024
#### Lecture notes
- The idea that using language affects how we think is interesting and plausible.
- But we should be way more careful in how we think about it.
- And most of the evidence we currently have about particular claims is inconclusive.
- We should think of ourselves as still being in the early stages of this intellectual project.
- Still, Bloom and Keil think that the most dramatic versions of Sapir-Whorf are probably false, even if some less exciting variants will turn out to be true.
**Four important distinctions**
1. Language general effects vs. language-specific effects.
Is it having a language at all that matters, or do different languages also have specific effects?
2. Words vs Syntax?
Is the influence coming just from which words we have, or from the grammatical rules about how words are put together?
3. Big effects vs. small effects.
Are we talking about "7000 Universes" here, or just "slight changes in how subjects perform certain tasks"?
4. Trivial effects vs. interesting effects
For example, everyone agrees that people learn new things using language, and this is a way that language affects thought. But we don't need cognitive scientists to tell us that it happens.
**Another distinction, outside the text, worth mentioning**
5. On-line effect vs off-line effect
- Is it an on-line (in the moment) effect of using language to do the task itself?
- Or, is it the result of long-term training of how people think and perceive? (Off-line)
# Jerry Fodor, The Mind-Body Problem
#### Monday, 3/18/2024
#### Lecture notes
**The contents of our minds: some important distinctions**
- States vs processes
- Intentional (or representational) vs non-intentional
- Conscious vs non-conscious
- Folk psychological vs scientific posits
**Propositional Attitudes** (a.k.a intentional mental states, thoughts)
- E.g. beliefs, desires, intentions, hopes, fears,...
- Each one has an attitude (e.g. belief, hope, etc.) and a content (e.g., that Snoop Dogg will be elected president)
- Their content components are representations of a possible (but not necessarily actual) state of the world
- Their attitude components concern the agent's relationship to this possible state of the world
- They participate in various thought processes
- Question: What is the nature of these states? What is it for them to represent the world?
###### What are mental states and processes?
Logical Behaviorism (e.g. Carnap, Ryle, Wittgenstein?)
- A mental state is just a cluster of behavioral dispositions. Mental words ("belief", "pain") are just convenient abbreviations for complicated descriptions of behavioral dispositions.
- Advantages: No non-mental stuff, but we still get to use mental words.
- Problem: Nobody can give a single example of such a translation, and it doesn't make sense of mental processes, i.e. causal interactions between mental states.
Central-State Identity Theory (1950's, e.g. J.J.C. Smart, etc.)
- Each type of mental state is identical to a certain type of brain state.
- Focuses on "hardware" (brain)
- Advantage: Nothing non-physical, and it promises to capture the obvious connection between the mind and the brain.
- Problem: It seems that, at least in principle, a single mental state could be made out of different kinds of physical (or non-physical?) stuff, just as software can run on different hardware.
Functionalism (e.g. Fodor, most cognitive scientists)
- A mental state is defined by its causal role: it is caused by certain mental and non-mental things, and it has certain mental and non-mental effects.
- Focuses on "software"
- Advantage: Combines the advantages of identity theory and logical behaviorism
- Advantage: Predicts that the same mental state could be realized in different underlying physical systems (or non-physical systems, for that matter).
- Advantage: Fits with our most successful research in psychology.
- Problem: It predicts that a lot of things could have mental states.
- Problem: What about qualitative consciousness?
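Functionalism's "software" metaphor can be illustrated with a toy sketch of multiple realizability: the same causal-role specification (a transition table) realized by two different pieces of "hardware". All state and behavior names here are invented for illustration.

```python
# Toy illustration of functionalism's multiple realizability: a "mental
# state" is defined only by its causal role (what inputs take it to which
# state and behavior), so the same role-specification can be realized by
# different underlying "hardware". States and inputs here are invented.

ROLE_SPEC = {  # (state, input) -> (next_state, behavior)
    ("calm", "pinprick"): ("pain", "say 'ouch'"),
    ("pain", "aspirin"):  ("calm", "relax"),
}

def run(realizer, inputs):
    """Feed inputs to any realizer exposing step(); collect behaviors."""
    return [realizer.step(i) for i in inputs]

class DictBrain:                 # one "hardware": table lookup
    def __init__(self):
        self.state = "calm"
    def step(self, inp):
        self.state, behavior = ROLE_SPEC[(self.state, inp)]
        return behavior

class IfBrain:                   # different "hardware": branching code
    def __init__(self):
        self.state = "calm"
    def step(self, inp):
        if self.state == "calm" and inp == "pinprick":
            self.state = "pain"
            return "say 'ouch'"
        if self.state == "pain" and inp == "aspirin":
            self.state = "calm"
            return "relax"

# Both realizers occupy the same causal roles, hence (for the
# functionalist) they are in the same mental states:
print(run(DictBrain(), ["pinprick", "aspirin"]))  # ["say 'ouch'", 'relax']
print(run(IfBrain(), ["pinprick", "aspirin"]))    # the same behaviors
```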
# Daniel Dennett, The Intentional Stance
#### Thursday, 3/21/2024
#### Lecture notes
##### What are mental states and processes? Beliefs, for example?
###### Fodor and Pinker
- A belief is a line of code in the software that's running in our brain.
- It has linguistic structure. To have a belief, you have to have a language of thought.
- What makes it the belief that it is is its functional role: the relationships it has to our perceptual processes, our other thoughts, and our actions.
- It has these relations because of its linguistic structure.
- Eventually, we might be able to reverse engineer the brain well enough that we can find where the beliefs are and read them.
###### Dennett
- A belief is not (always) any particular thing in someone's mind. It needn't be "a sentence in the head".
- Rather, it is a holistic pattern of a complex system that shows up in the way that the system thinks and behaves.
- The beliefs of a system are only scrutable from the "intentional stance," which is to predict and explain the system's behavior by treating it as a rational agent.
- (Contrast the "physical stance" and the "design stance.")
- Any system toward which one can usefully take the intentional stance has beliefs and other mental states, not just humans or even animals. (e.g. corporations, legislatures, AI systems, etc.)
# Paul Churchland, Eliminative Materialism
#### Monday, 3/25/2024
#### Lecture notes
- Dennett and Fodor believe in beliefs because beliefs tend to be the best explanation for how our minds work, in the same way that atoms, even though we can't observe them directly, are the best explanation we have in physics.
- On the other hand, Paul Churchland doesn't believe beliefs ("folk psychology") are real.
- Folk psychology is a theory: a collection of generalizations ("laws") by means of which ordinary people derive explanations and predictions of their own and other people's thoughts and behavior.
- It is not a very good theory. Three arguments:
1. There are a lot of mental and behavioral phenomena that it doesn't tell us much of anything about, and so it's not doing its job. (e.g. sleep, mental illness, addiction)
2. Good theories get better over time as we collect new data, but folk psychology hasn't gotten better for thousands of years.
3. Good theories fit with our other good theories of related subject matters, but folk psychology doesn't fit with our best scientific theories of human beings.
- Just like other failed theories before it (including other folk theories), folk psychology should (and will) be abandoned in favor of a more scientific approach to understanding the mind and human behavior.
# Elisabeth Camp, Thinking with Maps
#### Monday, 4.1.2024
#### Lecture notes
- There are different kinds of representations, which differ in their "format"
- Examples: Sentences, images, maps, diagrams.
(Question: What other representational formats are there?)
- The standard argument for the language of thought doesn't distinguish between language-like and map-like representations.
- Maps are distinctive in that they represent things by combining both arbitrary conventional relations between symbols and contents (like language) and also spatial isomorphism (like pictures).
- Maps are useful for storing and reasoning about some kinds of information that it would be very inconvenient to store and reason with in language. (E.g. complex spatial relationships.)
- But language is also more convenient for storing and reasoning about some kinds of information (e.g. negations, disjunctions, conditionals, representations of what other people think, quantificational information).
- It is a plausible empirical hypothesis that humans and other animals use cartographic mental representations.
###### The Standard Argument for the Language of Thought Hypothesis (According to Camp)
1. There are systematic relations among the contents that a thinker can represent and reason about.
2. Systematic relations in content must be reflected by correlative structure in a thinker's representational and reasoning abilities.
3. Structured representational abilities require a system of representational vehicles which are composed of recurring discrete parts combined according to systematic rules.
4. Any system of representational vehicles composed of recurring discrete parts combined according to systematic rules is a language.
5. Therefore: there must be a language of thought.
# Steven Pinker on language processing
#### Thursday, 4.4.2024
#### Lecture notes
- What bridges the gap between hearing someone speak and understanding what they meant?
- Maybe we do language processing a bit like language-using artificial intelligences?
- Pinker gives some reasons to think not.
- (But this part of the chapter is hilariously out of date. Language-using AI has come a long way since 1994, and even since 2021, when I recorded the video lecture.)
- Instead, Pinker sketches the outline of the model of sentence processing that he and other generative psycholinguists helped to build.
- One crucial property of this kind of model is that it breaks the process down into steps, building a series of intermediate representations.
- Another property of this kind of model is that each processing step has limited background information and uses heuristics to build these representations.
- We can sometimes learn about these limitations by looking at some of the ways that the language-processing system tends to break down.
- One key piece of evidence of this kind has to do with how we interpret ambiguous speech.
- Accompanying [YouTube lecture](https://www.youtube.com/watch?v=3PHiQScLUgw&t=2s) by professor Harris.
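The staged model above can be caricatured in code: each stage builds an intermediate representation from the previous one, using a crude heuristic with limited background information. (These stand-in stages are my own toy illustration, not the actual psycholinguistic model.)

```python
# Caricature of the staged sentence-processing model: each step builds
# an intermediate representation using a limited-information heuristic.
# All stage functions are crude stand-ins, not the real model.

def phonological_stage(sound: str) -> list:
    """Sound -> word string (here: trivially split on spaces)."""
    return sound.split()

def syntactic_stage(words: list) -> tuple:
    """Words -> rough parse tree, via a naive heuristic:
    first word is the subject, second the verb, the rest the object."""
    subject, verb, *rest = words
    return (subject, (verb, tuple(rest)))

def semantic_stage(tree: tuple) -> dict:
    """Parse tree -> who-did-what-to-what representation."""
    subject, (verb, obj) = tree
    return {"agent": subject, "action": verb, "theme": " ".join(obj)}

def understand(sound: str) -> dict:
    rep = phonological_stage(sound)  # intermediate representation 1
    rep = syntactic_stage(rep)       # intermediate representation 2
    return semantic_stage(rep)       # final interpretation

print(understand("dog bites man"))
# {'agent': 'dog', 'action': 'bites', 'theme': 'man'}
```

A heuristic pipeline like this also shows why ambiguity trips the system up: a stage committed to the wrong intermediate representation can't easily back up.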
![[sequential-speech-processing.png|Sequential speech processing]]
| Ambiguity at syntactic processing | Ambiguity at semantic processing |
| -------------------------------------------------- | ------------------------------------------------- |
| ![[linguistic-syntactic-processing-ambiguity.png]] | ![[linguistic-semantic-processing-ambiguity.png]] |
# Alan Turing, Computing Machinery and Intelligence
#### Monday, 4.8.2024
#### Lecture notes
###### AI: Things to keep in mind
- The current limits of AI are changing fast, and most predictions about how AI will work tend to turn out wrong.
- Good Old-Fashioned AI (GOFAI) vs. Connectionism / Neural Networks / Machine Learning / Deep Learning (the latter terms all refer to basically the same thing.)
###### Three goals with AI ⭐️
1. To create artificial ways of doing things that we previously needed intelligent humans to do. ("Applied AI")
2. To study the human mind by creating computer simulations of its functions ("Weak AI")
3. To create artificial beings that would literally be thinking, conscious things. ("Strong AI")
###### Computing Machinery and Intelligence
- Opening Q: Can machines think?
- Turing suggests replacing this Q with a more precise one: Can a machine perform as well as a human at "the imitation game"?
- Turing thinks (in 1950) that this will be possible by around 2000.
- The machines he has in mind are digital computers.
- He gives an incredibly influential definition of a digital computer.
- We now refer to his imaginary device as a Turing Machine.
- We say that a computer system or programming language is "Turing complete" if it can do anything that a Turing machine can do (with enough time and memory).
- Turing then responds to a bunch of philosophical objections to the idea that his test measures genuine intelligence.
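The notion of a Turing machine mentioned above can be made concrete with a minimal simulator: a transition table mapping (state, symbol) pairs to (new state, symbol to write, head move). The bit-flipping machine below is a toy example of my own, not one from Turing's paper.

```python
# A minimal Turing machine simulator. The machine defined below is a
# toy example that flips every bit on the tape and halts; any program
# expressible as such a transition table can be run the same way.

def run_tm(tape, transitions, state="start", blank="_"):
    """transitions: (state, symbol) -> (new_state, write, move)."""
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("1011", flip_bits))  # "0100"
```

"Turing complete" then means: able to simulate any table you could feed to a machine like this, given enough time and memory.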
###### The Imitation Game
- The goal of C, the interrogator, is to discover which of A and B is human, and which is a computer.
- B's goal is to make C guess correctly.
- A's goal is to make C guess incorrectly.
- C can ask A and B questions, and they can answer however they want.
# John Searle, "Minds, Brains, and Programs"
#### Thursday, 4.11.2024
#### Lecture notes
- Searle's argument is entirely based on our intuitions about his "Chinese-Room" thought experiment.
- Strong AI is a misguided idea: even an excellent computer simulation of a mind would not itself be a mind.
- Thinking is something over and above manipulating symbols (which is what computers do).
- To think, you also have to understand the symbols, to know what they mean.
- The computational theory of mind (and functionalism) leaves out something important about the nature of our psychology.
- So what's the missing ingredient? Searle doesn't say, but seems to think that there's something special about brains, over and above their functional/computational properties, that makes them capable of genuine thought and understanding.
###### Some replies to Searle
- The systems reply: "Sure, the guy in the room doesn't understand Chinese, but the entire system (the room + the guy + the rule set) does understand Chinese."
- The robot reply: "If you built a system that could actually move around the world, make decisions, and do things with the information it got through language, then it would count as understanding."
- The brain simulator reply: "Surely if we built a machine that simulated the brain of a Chinese speaker, down to the neuron, that would understand Chinese!?"
- Ned Block: There are many examples in scientific history when our intuitions and imagination have led us to be incapable of believing scientific advances, and this might just be one of them.
# Gary Marcus, "AGI: Why Aren't We There Yet?"
#### Monday, 4.15.2024
YouTube: 3Blue1Brown, [But what is a neural network?](https://www.youtube.com/watch?v=aircAruvnKk)
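The video above describes a neural network as layers of neurons, each computing an activation of a weighted sum of its inputs. A minimal sketch of that idea, with hand-picked (not learned) weights and function names of my own invention: a two-layer network computing XOR, a function no single neuron can compute.

```python
# A minimal feedforward network, for illustration only: weights are
# hand-picked rather than trained, and the names are not from the video.

def step(x):
    # Threshold activation: the neuron fires (1) iff its input is positive.
    return 1 if x > 0 else 0

def layer(inputs, weights, biases):
    # Each output neuron: activation of (weighted sum of inputs + bias).
    return [step(sum(w * v for w, v in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(x1, x2):
    # Two hidden neurons detect "at least one input on" and "both on";
    # the output neuron fires for the first but not the second: XOR.
    hidden = layer([x1, x2], weights=[[1, 1], [1, 1]], biases=[-0.5, -1.5])
    (out,) = layer(hidden, weights=[[1, -2]], biases=[-0.5])
    return out

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # outputs 0, 1, 1, 0
```

In a real network the weights are found by training on data rather than set by hand, which is where the data-hunger question below comes in.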
###### Why do LLMs need so much more data?
- They're missing innate knowledge.
- They're missing the rich, multimodal sensory experience that normally accompanies human language learning
- They're missing the structured social interaction and feedback in the context of which human children learn.
Note: the interesting question isn't which of these things is the right explanation, but how much each one matters!
# David Chalmers, "Are LLMs Sentient?"
#### Thursday, 4.18.2024
What is sentience?
- Chalmers' definition: to be sentient is to have phenomenal consciousness. There is "something that it's like" to be you.
- This includes pain, pleasure, and other affective states, but also other kinds of subjective experiences.
- This does not require self-consciousness, human-level intelligence, or an advanced capacity for reasoning.
- Lots of sentient beings (animals, babies) don't have human-level intelligence.
Evidence for AI sentience?
- Self report: weak
- Seems sentient: weak
- Conversational ability: worth thinking about
- Domain-general abilities: worth thinking about
Evidence against AI sentience?
- Most current AIs differ from other sentient beings in lots of ways: biology, sensory perception, embodiment, possession of a world model, human-level reasoning, a global workspace, recurrent processing, and unified agency.
- The relevance of some of these to sentience is highly contentious (biology, long-term memory, global workspace, a capacity for higher-order thoughts).
- Others are already rapidly being built into newer AI systems, and so are just a matter of time (sensory perception, embodiment).
- Of these, Chalmers seems to think that unified agency is the most important, and the most lacking in current LLMs.
# Pinker on language evolution
#### Monday, 5.6.2024
###### Reasons to doubt Savage-Rumbaugh's conclusions
- Scientists usually aren't supposed to live with their subjects and treat them as family members.
- (Question: why might this matter?)
- Usually we want to do research on lots of subjects to make sure that any one case isn't a fluke.
- It's usually important that researchers' observations and results can be replicated by other researchers.
- We have seen some possible examples of Kanzi apparently combining symbols ("slow lettuce," "water bird"). But maybe these are coincidences?
- How often does Kanzi combine symbols in nonsensical ways?
- What exactly is the reason for thinking that Kanzi is combining lexigrams grammatically, as opposed to just saying two or three related things in a row?
- Even if Kanzi can do some grammar, it seems that certain other species can too, including some breeds of dogs, honey bees, etc. Should we think that bonobo abilities are homologous with human language in ways that these others presumably aren't?
###### Pinker, The Language Instinct, Ch.11, Main Theses:
- Language is unique to humans.
- At least some of the cognitive mechanisms required for genuine grammar emerged only after our most recent common ancestor with chimpanzees and bonobos.
- There is very little continuity between human language and anything in other great apes.
- All of the experiments purporting to show that other great apes can acquire some grammatical capabilities are misleading, bad science.
- Our capacity for language was not the result of a "saltational event" (such as a mutation), as Chomsky thinks.
- Rather, 6 million years is plenty of time for human language to have evolved gradually, by natural selection, in our pre-human ancestors.
- In fact, language isn't a single adaptation, but a collection of many adaptations that work together.
Pinker on the adaptive complexity of the language instinct:
>Every discussion in this book has underscored the adaptive complexity of the language instinct. It is composed of many parts: syntax, with its discrete combinatorial system building phrase structures; morphology, a second combinatorial system building words; a capacious lexicon; a revamped vocal tract; phonological rules and structures; speech perception; parsing algorithms; learning algorithms. Those parts are physically realized as intricately structured neural circuits, laid down by a cascade of precisely timed genetic events. What these circuits make possible is an extraordinary gift: the ability to dispatch an infinite number of precisely structured thoughts from head to head by modulating exhaled breath.