# Chapter 3: Mentalese, tl;dr
- Pinker argues against linguistic relativism, calling it a "conventional absurdity."
- Pinker aims to debunk the notion that language dramatically shapes thinking and to show that advances in cognitive science make linguistic relativism implausible.
- Pinker critiques Benjamin Lee Whorf's interpretation of language and thought, especially regarding color perception, the concept of time in the Hopi culture, and the exaggerated number of Eskimo words for snow.
- He points out methodological flaws and misunderstandings in Whorf's arguments, suggesting that Whorf's claims about linguistic influence on thought are overstated.
- Pinker states that decades of psychological research have shown little to no support for the Whorfian hypothesis.
- Pinker introduces the idea of "mentalese," an internal language of thought that operates independently of any spoken language, arguing that it's a more plausible explanation for how thought processes work.
- He discusses examples of thinking without language, including aphasia, inventions of language by deaf children, and non-verbal reasoning abilities in languageless beings and babies.
- He discusses philosophical debates on non-verbal thought and introduces the Turing machine as a model for how mentalese might operate, laying the groundwork for the computational theory of mind.
- Pinker presents the physical symbol system hypothesis or the computational theory of mind as foundational to cognitive science, arguing that thoughts are represented in the brain as symbolic arrangements.
- He concludes that the complexities, ambiguities, and context-dependencies of any spoken language, like English, make it unsuitable for direct internal computation, reinforcing the idea that thought and language operate on fundamentally different levels.
# Chapter 3: Mentalese, notes
Pinker opens chapter 3 by discussing George Orwell's 1949 cautionary novel _Nineteen Eighty-Four_, wherein Orwell describes how the language "Newspeak" was used to control citizens' thoughts. Pinker asks:
>Is thought dependent on words? Do people literally think in English, Cherokee, Kivunjo, or, by 2050, Newspeak? Or are our thoughts couched in some silent medium of the brain—a language of thought, or “mentalese”—and merely clothed in words whenever we need to communicate them to a listener? No question could be more central to understanding the language instinct.
Pinker notes the Sapir-Whorf hypothesis as the famous "scientific" basis for **linguistic determinism and its weaker version, linguistic relativity, both holding that language causes differences in the thoughts of its speakers.**
> \[...\]the famous Sapir-Whorf hypothesis of linguistic determinism, stating that people’s thoughts are determined by the categories made available by their language, and its weaker version, linguistic relativity, stating that differences among languages cause differences in the thoughts of their speakers. People who remember little else from their college education can rattle off the factoids: the languages that carve the spectrum into color words at different places, the fundamentally different Hopi concept of time, the dozens of Eskimo words for snow. The implication is heavy: the foundational categories of reality are not “in” the world but are imposed by one’s culture (and hence can be challenged, perhaps accounting for the perennial appeal of the hypothesis to undergraduate sensibilities).
But Pinker argues strongly that linguistic relativism is "wrong, all wrong."
>The idea that thought is the same thing as language is an example of what can be called a conventional absurdity: a statement that goes against all common sense but that everyone believes because they dimly recall having heard it somewhere and because it is so pregnant with implications.
Pinker calls linguistic relativism a "conventional absurdity" and counters it with everyday intuitions: the feeling of knowing what you meant to say as opposed to what you actually said, struggling to find words to express a thought, remembering only the gist of a passage we read, and "if thoughts depended on words, how could a new word ever be coined? How could a child learn a word to begin with?"
Pinker states that this chapter will show:
- **there is no scientific evidence that languages dramatically shape their speakers’ ways of thinking.**
- how **advances in cognitive science have clarified how the mind works and therefore made linguistic relativism implausible.**
Pinker goes on to critique Whorf.
>Take the story about the worker and the “empty” drum. The seeds of disaster supposedly lay in the semantics of empty, which, Whorf claimed, means both “without its usual contents” and “null and void, empty, inert.” The hapless worker, his conception of reality molded by his linguistic categories, did not distinguish between the “drained” and “inert” senses, hence, flick…boom! But wait. Gasoline vapor is invisible. A drum with nothing but vapor in it looks just like a drum with nothing in it at all. Surely this walking catastrophe was fooled by his eyes, not by the English language.
Pinker critiques Whorf's inferences about Apache psychology, which were drawn from Apache grammar alone. First, the argument is circular: Apaches speak differently, so they must think differently. And how do we know they think differently? Just listen to how they speak. Second, Whorf made clumsy, word-for-word translations. "Turning the tables, I could take the English sentence 'He walks' and render it 'As solitary masculinity, legedness proceeds.'"
Pinker argues against Whorf's idea that language constrains color perception. Yes, languages differ in where they draw boundaries between color words, but they do so in a consistent way that reflects the physiology and psychology of color perception, which is the same in every person regardless of native language.
Pinker challenges Whorf's claim that the Hopi language and culture lack a concept of time, including words or expressions for time and a perception of time as a continuum. Anthropologist Ekkehart Malotki's extensive research showed that the Hopi do use tense, have metaphors for time, recognize time units (days, seasons, lunar phases, etc.), and employ sophisticated methods for keeping track of time, including calendars and sundials.
Pinker calls out the "Great Eskimo Vocabulary Hoax," clarifying that Eskimos do not have dozens or hundreds of words for snow: one dictionary puts the figure at two, and even generous counts reach only about a dozen, roughly what English has. He notes how the claim grew like an urban legend, helped along by Whorf's embellishment.
Pinker discusses the irony that **linguistic relativity was meant to show that non-literate cultures are complex and sophisticated, but instead produced patronizing, exoticizing psychologizing.**
**Having dispensed with the anthropological anecdotes, Pinker states that thirty-five years of psychology research has likewise shown little evidence for the Whorfian hypothesis.**
>If the anthropological anecdotes are bunk, what about controlled studies? The thirty-five years of research from the psychology laboratory is distinguished by how little it has shown. Most of the experiments have tested banal “weak” versions of the Whorfian hypothesis, namely that words can have some effect on memory or categorization.
Pinker critiques an experiment in which people show slightly better memory for the color of paint chips that have readily available names in their language.
>All it shows is that subjects remembered the chips in two forms, a nonverbal visual image and a verbal label, presumably because two kinds of memory, each one fallible, are better than one.
And another experiment in which subjects must group two of three paint chips and often pick the two that share a name in their language.
>Again, no surprise. I can imagine the subjects thinking to themselves, “Now how on earth does this guy expect me to pick two chips to put together? He didn’t give me any hints, and they’re all pretty similar. Well, I’d probably call those two ‘green’ and that one ‘blue,’ and that seems as good a reason to put them together as any.”
Alfred Bloom's research suggested that the lack of a subjunctive to express counterfactuals in Chinese compared to English led Chinese speakers to struggle with understanding hypothetical scenarios. However, criticisms from cognitive psychologists highlighted methodological flaws in Bloom's experiments, such as stilted Chinese translations and overlooked ambiguities in the stories, which, when corrected, eliminated the observed differences between the groups.
Pinker describes various examples of thinking without language: Mr. Ford, the fully intelligent aphasic; deaf children who lack a language and invent one; and even deaf adults with no language at all (not even sign language).
There are also studies of languageless beings (babies, monkeys, and adult humans) who have shown reasoning ability. Psychologist Karen Wynn has shown that five-month-old babies can do a simple form of arithmetic. Vervet monkeys display an understanding of complex relationship dynamics, recognizing, for example, which infant's cry belongs to which mother. And many creative people, including Michael Faraday, Watson and Crick, and Albert Einstein, insist their most inspired thinking happened not in words but in mental images.
Philosophers in the first half of the 20th century argued against the idea of non-verbal thought, claiming that an internal representation would require a homunculus ("little man") to interpret it, which would require another homunculus inside it, leading to an infinite regress. But Alan Turing described a machine that could represent and solve any computable problem and, he conjectured, perhaps do anything that any physically embodied mind can do.
>Turing described a hypothetical machine that could be said to engage in reasoning. In fact this simple device, named a Turing machine in his honor, is powerful enough to solve any problem that any computer, past, present, or future, can solve. And it clearly uses an internal symbolic representation—a kind of mentalese—without requiring a little man or any occult processes. By looking at how a Turing machine works, we can get a grasp of what it would mean for a human mind to think in mentalese as opposed to English.
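To make the idea concrete, here is a minimal Turing machine simulator. The rule table, states, and tape alphabet are my own toy illustration, not an example from the book; the point is only that a device with a fixed set of reflexes can manipulate symbols with no homunculus anywhere.

```python
# A Turing machine in miniature: a tape, a read/write head, and a fixed
# rule table. States and symbols here are illustrative, not Pinker's.

def run_turing_machine(tape, rules, state="start", halt="halt"):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != halt:
        symbol = cells.get(pos, "_")                 # "_" = blank cell
        write, move, state = rules[(state, symbol)]  # blind reflex lookup
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Rule table: flip every bit, halt at the first blank.
# (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", rules))  # prints 0100_
```

The machine's arrangement of symbols can be said to represent something, yet every step is a mechanical reflex, which is all the computational theory of mind requires.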
The Turing-machine-like theory of thinking is fundamental to cognitive science—termed "the physical symbol system hypothesis" or the "computational" or "representational" theory of mind. The representations that one posits in the mind have to be arrangements of symbols, and the processor has to be a device with a fixed set of reflexes, period.
Pinker challenges the notion that our internal thought processes directly mirror the language we speak. He argues that the ambiguities, logical inconsistencies, and context-specific elements inherent in any spoken language, such as English, make it unsuitable as a direct medium for thought. Pinker highlights ambiguity, lack of logical explicitness, co-reference, context-dependency, and synonymy to illustrate that thought and language operate on fundamentally different levels, necessitating an internal representation system that transcends specific languages.
>English (or any other language people speak) is hopelessly unsuited to serve as our internal medium of computation.
>Say you start talking about an individual by referring to him as the tall blond man with one black shoe. The second time you refer to him in the conversation you are likely to call him the man; the third time, just him. But the three expressions do not refer to three people or even to three ways of thinking about a single person; the second and third are just ways of saving breath. Something in the brain must treat them as the same thing; English isn’t doing it.
>These examples (and there are many more) illustrate a single important point. The representations underlying thinking, on the one hand, and the sentences in a language, on the other, are in many ways at cross-purposes. Any particular thought in our head embraces a vast amount of information. But when it comes to communicating a thought to someone else, attention spans are short and mouths are slow. To get information into a listener’s head in a reasonable amount of time, a speaker can encode only a fraction of the message into words and must count on the listener to fill in the rest. But inside a single head, the demands are different. Air time is not a limited resource: different parts of the brain are connected to one another directly with thick cables that can transfer huge amounts of information quickly. Nothing can be left to the imagination, though, because the internal representations are the imagination.
Given everything discussed, we end up with the idea that people think in a universal language of thought, a mentalese.
>People do not think in English or Chinese or Apache; they think in a language of thought. This language of thought probably looks a bit like all these languages; presumably it has symbols for concepts, and arrangements of symbols that correspond to who did what to whom, as in the paint-spraying representation shown above. But compared with any given language, mentalese must be richer in some ways and simpler in others.
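Pinker's coreference point above is easy to sketch in code. Assuming a toy representation of my own devising (nothing here is the book's notation), three English surface forms can map to a single internal token, and a "who did what to whom" thought is just a predicate with role fillers:

```python
# Toy mentalese: one internal token per individual, however that
# individual is described in English. Purely illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    ident: int  # a single internal symbol

MAN_42 = Entity(42)

# Three ways of saving breath in English, one referent in mentalese:
surface_forms = {
    "the tall blond man with one black shoe": MAN_42,
    "the man": MAN_42,
    "him": MAN_42,
}

# A "who did what to whom" proposition: predicate plus role fillers.
thought = {"predicate": "spray", "agent": MAN_42, "patient": "paint"}

# Something in the brain treats all three expressions as the same thing:
assert surface_forms["the man"] is surface_forms["him"]
```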
# Chapter 7: Talking Heads, tl;dr
Pinker elucidates the inherent complexity of language understanding, emphasizing the remarkable capabilities of the human mind in navigating linguistic structures, ambiguities, and social nuances in real-time—a feat that remains a significant challenge for artificial intelligence. Through exploring parsing, memory, decision-making, and the pragmatic aspects of communication, "Talking Heads" offers insight into the sophisticated interplay between language and thought.
###### [Key takeaways provided by CourseHero](https://www.coursehero.com/lit/The-Language-Instinct-How-the-Mind-Creates-Language/chapter-6-summary/)
- The human brain comprehends speech not only effectively but also efficiently.
- Comprehension of sentences begins with parsing. The human brain has a "mental program" that analyzes sentence structure, referred to as a "parser." The brain parses speech, finding subjects, verbs, and other parts of speech, innately and unconsciously.
- Grammar is like a database that determines which sounds go with which meanings. Productive and receptive language share the same database, which facilitates communication.
- The parser is constrained by memory, specifically short-term memory, and the need for decision-making. Memory is easy for AI and hard for humans; decision-making is easy for humans and hard for AI.
- Sentences with multiple embedded elements pose difficulty for the human brain. The difficulty lies not in insufficient memory, but in the type of memory needed for effective analysis.
- The human brain uses a "depth-first strategy" for making meaning from ambiguous sentences. That is, the brain chooses a meaning and pursues it until the meaning doesn't fit, then starts over until meaning is constructed.
- Understanding of language is mediated less by a person's general knowledge than by the innate mechanisms for language use within the human brain.
- Understanding a sentence results from parsing and then making reasonable inferences based on context. Fragments of sentences must be integrated into the mental database, not as a list but as part of a complex framework.
- Communication is far more complex than a two-way transfer of information.
# Chapter 7: Talking Heads, notes
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted—recognizing a face, lifting a pencil, walking across a room, answering a question—in fact solve some of the hardest engineering problems ever conceived.
Understanding a sentence is one of these hard easy problems. To interact with computers we still have to learn their languages; they are not smart enough to learn ours. In fact, it is all too easy to give computers more credit at understanding than they deserve.
In fact, from a scientist’s perspective, people have no right to be as good at sentence understanding as they are. Not only can they solve a viciously complex task, but they solve it fast. Comprehension ordinarily takes place in “real time.” Listeners keep up with talkers; they do not wait for the end of a batch of speech and interpret it after a proportional delay, like a critic reviewing a book. And the lag between speaker’s mouth and listener’s mind is remarkably short: about a syllable or two, around half a second. Some people can understand and repeat sentences, shadowing a speaker as he speaks, with a lag of a quarter of a second!
How do we understand a sentence? The first step is to “parse” it.
Parsing does not mean the conscious sentence diagramming taught in school, but it does involve a similar process of finding subjects, verbs, objects, and so on, one that takes place unconsciously. Unless you are Woody Allen speed-reading War and Peace, you have to group words into phrases, determine which phrase is the subject of which verb, and so on. For example, to understand the sentence The cat in the hat came back, you have to group the words the cat in the hat into one phrase, to see that it is the cat that came back, not just the hat.
Grammar itself is a mere code or protocol, a static database specifying what kinds of sounds correspond to what kinds of meanings in a particular language. It is not a recipe or program for speaking and understanding. Speaking and understanding share a grammatical database (the language we speak is the same as the language we understand), but they also need procedures that specify what the mind should do, step by step, when the words start pouring in or when one is about to speak. The mental program that analyzes sentence structure during language comprehension is called the parser.
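As a sketch of what such a parser does, here is a toy top-down parse of Pinker's example sentence. The grammar, lexicon, and tuple-based trees are my own simplification, not the book's notation:

```python
# A toy parser for one sentence, to make "grouping words into phrases"
# concrete. Grammar and lexicon are invented for illustration.

LEXICON = {"the": "Det", "cat": "N", "hat": "N",
           "in": "P", "came": "V", "back": "Adv"}

def parse_np(words, i):
    det, noun = words[i], words[i + 1]                # Det N
    node, i = ("NP", det, noun), i + 2
    if i < len(words) and LEXICON[words[i]] == "P":   # optional PP modifier
        pp, i = parse_pp(words, i)
        node = ("NP", node, pp)
    return node, i

def parse_pp(words, i):
    np, j = parse_np(words, i + 1)
    return ("PP", words[i], np), j

def parse_s(words):
    np, i = parse_np(words, 0)       # the subject phrase
    vp = ("VP",) + tuple(words[i:])  # verb phrase = the rest
    return ("S", np, vp)

print(parse_s("the cat in the hat came back".split()))
# ('S', ('NP', ('NP', 'the', 'cat'), ('PP', 'in', ('NP', 'the', 'hat'))),
#  ('VP', 'came', 'back'))
```

The nested bracketing shows the crucial grouping: the PP in the hat is attached inside the subject NP, so it is the cat, not the hat, that came back.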
Why is it so hard to program a computer to do this? And why do people, too, suddenly find it hard to do this when reading bureaucratese and other bad writing? As we stepped our way through the sentence pretending we were the parser, we faced two computational burdens. One was memory: we had to keep track of the dangling phrases that needed particular kinds of words to complete them. The other was decision-making: when a word or phrase was found on the right-hand side of two different rules, we had to decide which to use to build the next branch of the tree. In accord with the first law of artificial intelligence, that the hard problems are easy and the easy problems are hard, it turns out that the memory part is easy for computers and hard for people, and the decision-making part is easy for people (at least when the sentence has been well constructed) and hard for computers.
What boggles the human parser is not the amount of memory needed but the kind of memory: keeping a particular kind of phrase in memory, intending to get back to it, at the same time as it is analyzing another example of that very same kind of phrase. Examples of these “recursive” structures include a relative clause in the middle of the same kind of relative clause, or an if…then sentence inside another if…then sentence. It is as if the human sentence parser keeps track of where it is in a sentence not by writing down a list of currently incomplete phrases in the order in which they must be completed, but by writing a number in a slot next to each phrase type on a master checklist. When a type of phrase has to be remembered more than once—so that both it (the cat that…) and the identical type of phrase it is inside of (the rat that…) can be completed in order—there is not enough room on the checklist for both numbers to fit, and the phrases cannot be completed properly.
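Here is one way to render that checklist analogy in code. The two memory schemes are my own toy gloss: a stack happily holds two incomplete phrases of the same type (the cat that… inside the rat that…), while a one-slot-per-phrase-type checklist chokes the moment a phrase type repeats.

```python
# Two toy memory schemes for incomplete phrases (my gloss on Pinker's
# checklist analogy). Only the checklist fails on center embedding.

def track(opened_phrases, memory="stack"):
    if memory == "stack":
        pending = []                       # ordered list, any depth
        for phrase in opened_phrases:
            pending.append(phrase)         # duplicates are no problem
        return pending
    checklist = {}                         # one slot per phrase type
    for phrase in opened_phrases:
        if phrase in checklist:
            raise MemoryError(f"slot for {phrase!r} already in use")
        checklist[phrase] = "incomplete"
    return checklist

embedded = ["RelClause", "RelClause"]      # a relative clause inside one
print(track(embedded))                     # stack copes: ['RelClause', 'RelClause']
try:
    track(embedded, memory="checklist")
except MemoryError as err:
    print("checklist fails:", err)         # the human-like breakdown
```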
Unlike memory, which people are bad at and computers are good at, decision-making is something that people are good at and computers are bad at. I contrived the toy grammar and the baby sentence we have just walked through so that every word had a single dictionary entry (that is, was at the right-hand side of only one rule). But all you have to do is open up a dictionary, and you will see that many nouns have a secondary entry as a verb, and vice versa. For example, dog is listed a second time—as a verb, for sentences like Scandals dogged the administration all year. Similarly, in real life hot dog is not only a noun but a verb, meaning “to show off.” And each of the verbs in the toy grammar should also be listed as nouns, because English speakers can talk of cheap eats, his likes and dislikes, and taking a few bites. Even the determiner one, as in one dog, can have a second life as a noun, as in Nixon’s the one. These local ambiguities present a parser with a bewildering number of forks at every step along the road.
These ambiguities are the rule, not the exception; there can be dozens or hundreds of possibilities to check at every point in a sentence. For example, after processing The plastic pencil marks…, the parser has to keep several options open: it can be a four-word noun phrase, as in The plastic pencil marks were ugly, or a three-word noun phrase plus a verb, as in The plastic pencil marks easily. In fact, even the first two words, The plastic…, are temporarily ambiguous: compare The plastic rose fell with The plastic rose and fell.
Computer parsers are too meticulous for their own good. They find ambiguities that are quite legitimate, as far as English grammar is concerned, but that would never occur to a sane person. One of the first computer parsers, developed at Harvard in the 1960s, provides a famous example. The sentence Time flies like an arrow is surely unambiguous if there ever was an unambiguous sentence (ignoring the difference between literal and metaphorical meanings, which have nothing to do with syntax). But to the surprise of the programmers, the sharp-eyed computer found it to have five different trees!
- Time proceeds as quickly as an arrow proceeds (the intended reading).
- Measure the speed of flies in the same way that you measure the speed of an arrow.
- Measure the speed of flies in the same way that an arrow measures the speed of flies.
- Measure the speed of flies that resemble an arrow.
- Flies of a particular kind, time-flies, are fond of an arrow.
How do people home in on the sensible analysis of a sentence, without tarrying over all the grammatically legitimate but bizarre alternatives? There are two possibilities. One is that our brains are like computer parsers, computing dozens of doomed tree fragments in the background, and the unlikely ones are somehow filtered out before they reach consciousness. The other is that the human parser somehow gambles at each step about the alternative most likely to be true and then plows ahead with that single interpretation as far as possible. Computer scientists call these alternatives “breadth-first search” and “depth-first search.”
At the level of individual words, it looks as if the brain does a breadth-first search, entertaining, however briefly, several entries for an ambiguous word, even unlikely ones.
Mental dictionary lookup, then, is quick and thorough but not very bright; it retrieves nonsensical entries that must be weeded out later.
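A toy model of that "quick and thorough but not very bright" lookup, with an invented mini-lexicon (the word senses and the filtering function are mine, not an experiment from the book): every entry is retrieved first, and context weeds out the nonsense afterward.

```python
# Breadth-first lexical access in miniature: retrieve every sense of an
# ambiguous word, filter by context later. Lexicon invented for illustration.

MENTAL_LEXICON = {"bug": ["insect", "listening device", "software defect"]}

def lookup(word):
    return MENTAL_LEXICON.get(word, [])        # all senses, however unlikely

def weed_out(senses, topic):
    return [s for s in senses if topic in s]   # the later, smarter filter

activated = lookup("bug")                      # all three, however briefly
print(weed_out(activated, "software"))         # ['software defect']
```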
- The mediocre are numerous, but the prime number few.
- Carbohydrates that people eat are quickly broken down, but fat people eat accumulates.
- JR Ewing had swindled one tycoon too many into buying useless properties. The tycoon sold the offshore oil tracts for a lot of money wanted to kill JR.
These are called garden path sentences, because their first words lead the listener “up the garden path” to an incorrect analysis. Garden path sentences show that people, unlike computers, do not build all possible trees as they go along; if they did, the correct tree would be among them. Rather, people mainly use a depth-first strategy, picking an analysis that seems to be working and pursuing it as long as possible; if they come across words that cannot be fitted into the tree, they backtrack and start over with a different tree. (Sometimes people can hold a second tree in mind, especially people with good memories, but the vast majority of possible trees are never entertained.) The depth-first strategy gambles that a tree that has fit the words so far will continue to fit new ones, and thereby saves memory space by keeping only that tree in mind, at the cost of having to start over if it bet on the wrong horse raced past the barn.
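A depth-first parser in miniature, using the prime number garden path from the examples above. The lexicon, the single sentence pattern, and the frequency ordering are all my own toy assumptions:

```python
# Depth-first parsing as a gamble: commit to the most frequent category
# for each ambiguous word, plow ahead, and backtrack only at a dead end.

LEXICON = {                      # categories listed most-frequent first
    "the": ["Det"], "prime": ["Adj", "N"], "number": ["N", "V"],
    "few": ["Adj"],
}

# One acceptable pattern: "the prime(N) number(V) few(Adj)"
GRAMMAR = [["Det", "N", "V", "Adj"]]

def parse(words, tags=()):
    if len(tags) == len(words):                # all words tagged: check fit
        return tags if list(tags) in GRAMMAR else None
    for cat in LEXICON[words[len(tags)]]:      # try the likeliest first
        print("trying:", *tags, cat)           # watch the garden path
        result = parse(words, tags + (cat,))
        if result:
            return result                      # success: stop searching
    return None                                # dead end: backtrack

print(parse("the prime number few".split()))  # ('Det', 'N', 'V', 'Adj')
```

The trace first commits to prime as an adjective, pursues that tree to the bitter end, fails, and only then backtracks to the noun reading: the single-interpretation gamble described above, in miniature.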
Words can also help by suggesting to the parser exactly which other words they tend to appear with inside a given kind of phrase. Though word-by-word transition probabilities are not enough to understand a sentence (Chapter 4), they could be helpful; a parser armed with good statistics, when deciding between two possible trees allowed by a grammar, can opt for the tree that was most likely to have been spoken. The human parser seems to be somewhat sensitive to word pair probabilities: many garden paths seem especially seductive because they contain common pairs like cotton clothing, fat people, and prime number.
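For instance, a parser armed with word-pair counts could score competing analyses of the fat people garden path above. The counts below are invented for illustration; the point is only that the seductive pairing wins on statistics:

```python
# Invented counts (illustrative numbers, not real corpus data) for word
# pairs under each candidate analysis of "...but fat people eat accumulates".

PAIR_COUNTS = {("fat", "people"): 900,      # "fat people" as one phrase
               ("people", "eat"): 800,
               ("fat", "accumulates"): 40}  # "fat" as subject of "accumulates"

def score(pairs):
    """Higher total count = analysis more likely to have been spoken."""
    return sum(PAIR_COUNTS.get(pair, 1) for pair in pairs)

garden_path = [("fat", "people"), ("people", "eat")]     # seductive reading
correct     = [("fat", "accumulates"), ("people", "eat")]
print(score(garden_path), score(correct))  # 1700 vs 840: the garden path wins
```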
Finally, people find their way through a sentence by favoring trees with certain shapes, a kind of mental topiary. One guideline is momentum: people like to pack new words into the current dangling phrase, instead of closing off the phrase and hopping up to add the words to a dangling phrase one branch up. This “late closure” strategy might explain why we travel the garden path in the sentence Flip said that Squeaky will do the work yesterday. The sentence is grammatical and sensible, but it takes a second look (or maybe even a third) to realize it.
I have been talking about trees, but a sentence is not just a tree. Since the early 1960s, when Chomsky proposed transformations that convert deep structures to surface structures, psychologists have used laboratory techniques to try to detect some kind of fingerprint of the transformation. After a few false alarms the search was abandoned, and for several decades the psychology textbooks dismissed transformations as having no “psychological reality.” But laboratory techniques have become more sophisticated, and the detection of something like a transformational operation in people’s minds and brains is one of the most interesting recent findings in the psychology of language.
Remarkably, every one of these mental processes can be measured. During the span of words between the moved phrase and the trace, people must hold the phrase in memory. The strain should be visible in poorer performance of any mental task carried out concurrently. And in fact, while people are reading that span, they detect extraneous signals (like a blip flashed on the screen) more slowly, and have more trouble keeping a list of extra words in memory. Even their EEGs (electroencephalograms, or records of the brain's electrical activity) show the effects of the strain.
Connecting phrases with traces is a hairy computational operation. The parser, while holding the phrase in mind, must constantly be checking for the trace, an invisible and inaudible little nothing.
Comprehension uses the semantic information recovered from a tree as just one premise in a complex chain of inference to the speaker’s intentions. Why is this so? Why is it that even honest speakers rarely articulate the truth, the whole truth, and nothing but the truth? The first reason is air time. Conversation would bog down if one had to refer to the United States Senate Select Committee on the Watergate Break-In and Related Sabotage Efforts by uttering that full description every time.
The efficiency, though, depends on the participants’ sharing a lot of background knowledge about the events and about the psychology of human behavior. They must use this knowledge to cross-reference the names, pronouns, and descriptions with a single cast of characters, and to fill in the logical steps that connect each sentence with the next. If background assumptions are not shared—for example, if one’s conversational partner is from a very different culture, or is schizophrenic, or is a machine—then the best parsing in the world will fail to deliver the full meaning of a sentence. Some computer scientists have tried to equip programs with little “scripts” of stereotyped settings like restaurants and birthday parties to help their programs fill in the missing parts of texts while understanding them. Another team is trying to teach a computer the basics of human common sense, which they estimate to comprise about ten million facts.
Understanding, then, requires integrating the fragments gleaned from a sentence into a vast mental database. For that to work, speakers cannot just toss one fact after another into a listener’s head. Knowledge is not like a list of facts in a trivia column but is organized into a complex network. When a series of facts comes in succession, as in a dialogue or text, the language must be structured so that the listener can place each fact into an existing framework. Thus information about the old, the given, the understood, the topic, should go early in the sentence, usually as the subject, and information about the new, the focus, the comment, should go at the end.
The study of how sentences are woven into a discourse and interpreted in context (sometimes called “pragmatics”) has made an interesting discovery, first pointed out by the philosopher Paul Grice and recently refined by the anthropologist Dan Sperber and the linguist Deirdre Wilson. The act of communicating relies on a mutual expectation of cooperation between speaker and listener. The speaker, having made a claim on the precious ear of the listener, implicitly guarantees that the information to be conveyed is relevant: that it is not already known, and that it is sufficiently connected to what the listener is thinking that he or she can make inferences to new conclusions with little extra mental effort. Thus listeners tacitly expect speakers to be informative, truthful, relevant, clear, unambiguous, brief, and orderly. These expectations help to winnow out the inappropriate readings of an ambiguous sentence, to piece together fractured utterances, to excuse slips of the tongue, to guess the referents of pronouns and descriptions, and to fill in the missing steps of an argument. (When a receiver of a message is not cooperative but adversarial, all of this missing information must be stated explicitly, which is why we have the tortuous language of legal contracts with their “party of the first part” and “all rights under said copyright and all renewals thereof subject to the terms of this Agreement.”)
Human communication is not just a transfer of information like two fax machines connected with a wire; it is a series of alternating displays of behavior by sensitive, scheming, second-guessing, social animals. When we put words into people’s ears we are impinging on them and revealing our own intentions, honorable or not, just as surely as if we were touching them. Nowhere is this more apparent than in the convoluted departures from plain speaking found in every society that are called politeness. Taken literally, the statement “I was wondering if you would be able to drive me to the airport” is a prolix string of incongruities. Why notify me of the contents of your ruminations? Why are you pondering my competence to drive you to the airport, and under which hypothetical circumstances? Of course the real intent—“Drive me to the airport”—is easily inferred, but because it was never stated, I have an out. Neither of us has to live with the face-threatening consequences of your issuing a command that presupposes you could coerce my compliance. Intentional violations of the unstated norms of conversation are also the trigger for many of the less pedestrian forms of nonliteral language, such as irony, humor, metaphor, sarcasm, putdowns, ripostes, rhetoric, persuasion, and poetry.
Metaphor and humor are useful ways to summarize the two mental performances that go into understanding a sentence. Most of our everyday expressions about language use a “conduit” metaphor that captures the parsing process. In this metaphor, ideas are objects, sentences are containers, and communication is sending. We “gather” our ideas to “put” them “into” words, and if our verbiage is not “empty” or “hollow,” we might “convey” or “get” these ideas “across” “to” a listener, who can “unpack” our words to “extract” their “content.” But as we have seen, the metaphor is misleading. The complete process of understanding is better characterized by the joke about the two psychoanalysts who meet on the street. One says, “Good morning”; the other thinks, “I wonder what he meant by that.”