# Thoughts
I really enjoyed this thought-provoking talk given by Chalmers. I have a lot of thoughts but will try to be concise. :grin:
Firstly, I appreciate that Chalmers pointed out that there's no common definition of sentience. He states that he's going to use sentience as a synonym for consciousness or subjective experience; or, in the vein of Nagel, a being is conscious if there is something it's like to be that being. He then lists the different _kinds_ of conscious experience: sensory experience, affective experience, cognitive experience, and agentive experience.
>But it has some standard uses. I think perhaps the most common, certainly the one which I'm going to use here myself, that I'm most attracted to, is basically seeing sentience as a synonym for consciousness or what sometimes is called phenomenal consciousness by philosophers or subjective experience.
Chalmers then divides the talk into two broad sections: 1) examining reasons for thinking LLMs are sentient and 2) examining reasons for thinking LLMs are not sentient. I think it's important to note that the extraordinary claim, and thus the burden of evidence, lies with the side supporting AI sentience. Secondly, evidence for one side or the other may not be conclusive on its own; instead it may shift the probability we assign to a belief. For example, while the biology argument against AI sentience isn't conclusive, it's completely reasonable to be _more_ suspicious that AI lacks sentience than that biological systems do.

Ultimately, we need evidence to believe that AI is having subjective experiences, but it seems like we really don't know how to measure that. I think that's the case for at least a couple of reasons. First, we just don't understand the (human) mind well enough to have a clear understanding of the causal mechanisms behind sentience. Second, the hard problem of consciousness looms here. Given this state of affairs, the best heuristic for judging whether a being is conscious seems to be whether it has developmental and structural patterns similar to our own.

Lastly, I think there's a moral dimension at play here. In a certain sense, it's easy to say, "AI is sentient." But does one _really_ believe that? If one genuinely thinks AI is sentient, that should be reflected in one's moral calculations and actions. However, I doubt many pro-AI-sentience advocates are seriously updating either of these. This should bring into question just how seriously they take their own claims.
Something else I appreciated was Chalmers pointing out that human-level intelligence is an overly high bar for sentience. I think it's common for people to falsely conflate intelligence and sentience. It's important to point this out because 1) there are many less intelligent beings that we consider sentient (an easy example is an infant), and 2) denying people sentience based on intelligence screams of ableism.
# Timestamped notes
#### 00:07:11
- Google engineer Blake Lemoine said the LaMDA LLM has a soul.
- Gary Marcus responded by stating LLMs aren’t sentient.
- Chalmers is interested in examining both sides of the issue.
#### 00:09:00
- What is LaMDA? LaMDA is based on an LLM but tuned to hold conversation.
- A language model is a system that assigns probabilities to sequences of text, thereby predicting and generating text completions.
- **LLMs are giant artificial neural networks, almost all using a transformer architecture with multi-head self-attention.** (A toy sketch of these mechanics follows this list.)
- Trained on data from all over the internet
- Up to 500+ billion parameters
- Transformer, BERT, GPT-2, GPT-3, PaLM
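To make the two bulleted points above concrete (a language model assigns probabilities to sequences of text, and transformers do this with self-attention), here is a toy Python sketch. Everything in it is hypothetical: the weights are random, the vocabulary is made up, and it uses a single attention head rather than the multi-head attention of real transformers. It only illustrates the mechanics of turning a token sequence into a probability distribution over the next token.

```python
# Toy sketch (not a real LLM): one transformer-style self-attention step
# followed by a softmax over a tiny vocabulary, to illustrate the idea of
# "assigning probabilities to sequences of text." All weights are random,
# so the output distribution is meaningless; only the mechanics matter here.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat"]    # hypothetical 5-word vocabulary
d_model = 8                                   # tiny embedding size, for illustration

# Random stand-ins for "learned" parameters.
E   = rng.normal(size=(len(vocab), d_model))  # token embedding matrix
W_q = rng.normal(size=(d_model, d_model))     # query projection
W_k = rng.normal(size=(d_model, d_model))     # key projection
W_v = rng.normal(size=(d_model, d_model))     # value projection
W_o = rng.normal(size=(d_model, len(vocab))ая)  # projection to vocabulary logits

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def next_token_probs(tokens):
    """Return a probability distribution over the vocabulary for the next token."""
    ids = [vocab.index(t) for t in tokens]
    X = E[ids]                                # (seq_len, d_model) input embeddings

    # Scaled dot-product self-attention (single head, no causal mask, for brevity).
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(d_model)       # how much each position attends to each other
    attn = softmax(scores, axis=-1)
    H = attn @ V                              # contextualized representations

    logits = H[-1] @ W_o                      # logits for the token after the last position
    return softmax(logits)

probs = next_token_probs(["the", "cat", "sat"])
for word, p in zip(vocab, probs):
    print(f"P(next = {word!r}) = {p:.3f}")
```

A real LLM does the same kind of computation with many stacked layers, multiple attention heads per layer, and hundreds of billions of learned (not random) parameters.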
#### 00:12:00
- There are now also many **LLM+ models** which add further capacities to LLMs (a toy sketch follows this list).
- Vision-language models
- Language-action models
- Add database queries, simulations
- Fine-tune with reinforcement learning
- LaMDA 2 is an LLM+ with database access and more
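As a rough, hypothetical illustration of the "LLM+" idea above, here is a sketch of a wrapper that augments a language model with a database lookup before answering. Both `fake_llm` and `knowledge_base` are made-up stand-ins, not any real model or API; the point is only the shape of the architecture: retrieve, put the result into the prompt, then generate.

```python
# Hypothetical sketch of an "LLM+" wrapper: a base language model augmented
# with an external capacity (here, a toy key-value "database" lookup).
# fake_llm and knowledge_base are stand-ins, not a real model or API.

knowledge_base = {
    "capital of france": "Paris",
    "boiling point of water": "100 °C at sea level",
}

def fake_llm(prompt: str) -> str:
    """Stand-in for a call to a real language model."""
    return f"[model answer to: {prompt}]"

def llm_plus(prompt: str) -> str:
    """Look up relevant facts first, then let the 'LLM' answer with them in context."""
    lookup = knowledge_base.get(prompt.lower().rstrip("?"))
    if lookup is not None:
        # Feed the retrieved fact back into the model's context, as LLM+ systems do.
        prompt = f"Using the fact '{lookup}', answer: {prompt}"
    return fake_llm(prompt)

print(llm_plus("Capital of France?"))        # retrieval path
print(llm_plus("Why is the sky blue?"))      # plain LLM path
```

Vision-language and language-action models extend the same basic idea with image inputs or action outputs instead of a database.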
#### 00:14:53
- Why does AI sentience matter?
- Moral treatment
- AI would matter morally. Maybe not as much as humans. Sort of how we treat animals morally.
- For self-interest
- If we want to upload our consciousness
- For a valuable future
- Thinking about a future of populations of AI systems and humans. Many see sentient AI systems as having more value.
#### 00:17:19
- “Sentientism”: some see sentience as the key to mattering morally
- Are LLMs sentient?
- My intuition says no, but I don’t think it’s obvious.
- **My aim in this talk is to evaluate the reasons for and against thinking LLMs are sentient.**
#### 00:18:50
- **Plan for talk:**
1. Clarify sentience.
2. Examine reasons for thinking LLMs are sentient.
3. Examine reasons for thinking LLMs are not sentient.
4. Draw conclusions.
#### 00:19:11
- What is sentience?
- **“Sentience” has no single definition, but it has some standard uses.**
- Central use: sentience = consciousness = subjective experience
- "But \[sentience\] has some standard uses. I think perhaps the most common, certainly the one which I'm going to use here myself, that I'm most attracted to, is basically seeing sentience as a synonym for consciousness or what sometimes is called phenomenal consciousness by philosophers or subjective experience."
- So, a being is conscious if there is something it’s like to be that being.
- Thomas Nagel: “What is it like to be a bat?”
- Conscious experience can be divided into kinds:
- A mental state is conscious if there’s something it is like to have that state
- Sensory experience e.g. seeing red
- Affective experience e.g. feeling pain, pleasure
- Cognitive experience e.g. thinking hard
- Agentive experience e.g. deciding to act
- Other uses
- Sentience = affective consciousness = valenced subjective experience (happiness, pleasure, pain, etc.)
- This is popular among animal rights advocates
- Sentience = self-consciousness
- Sentience = reasoning, agency
#### 00:24:09
- What sentience is not
- Sentience (subjective experience) is not the same as intelligence (sophisticated behavior).
- Sentience != goal-directed behavior
- Sentience != human-level intelligence
- Sentience vs. human-level intelligence
- **Sentience is not the same as human-level intelligence (artificial general intelligence), and does not require it.**
- Most theorists hold: fish are sentient, as are newborn babies.
- This lowers the bar for sentient AI, compared to human-level AGI.
#### 00:27:24
- **I’ll focus on sentience as subjective experience**, but some of what I say will apply to the other notions.
- It won’t turn on accepting the hard problem of consciousness, panpsychism, etc.
- I’m agnostic about panpsychism, but only something like 8% of philosophers in the PhilPapers survey are sympathetic to panpsychism.
- 60+ percent of philosophers accept that there’s a hard problem of consciousness.
- We’ll assume consciousness is real and ignore illusionist views.
#### 00:29:00
- Now we’re moving on to section two: examining reasons for thinking LLMs are sentient.
- Challenge for proponents
- If you think LLMs are sentient, articulate a feature X such that
1. LLMs have X
2. If a system has X it probably is sentient
- And give good reasons for 1 and 2.
- X = Self-report
- This is what Lemoine did. He asked LaMDA if it was sentient and it replied that it is and sometimes feels happy or sad.
- This evidence seems weak!
- X = Seems sentient
- On interacting with LaMDA, Lemoine found it to be sentient.
- But we know the human mind tends to attribute sentience where it’s not present. (E.g. primitive AI systems like Eliza.)
- So this reaction is little evidence: what matters is the behavior that prompts the reaction.
- This also seems like weak evidence!
- X = Conversational ability
- LLMs display remarkable conversational abilities
- They give the appearance of coherent thinking and reasoning, with especially impressive causal/explanatory analyses.
- Current LLMs don’t pass the Turing Test, but they’re not far away.
- X = Domain-general abilities
- LLMs show signs of domain-general intelligence, reasoning about many domains.
- Domain-general use of information is often regarded as a sign of consciousness, in animals for example.
- This evidence seems… not bad!
- Prima facie evidence
- Two decades ago, we'd probably have taken LLM abilities as evidence that the system is sentient.
- Maybe that evidence can be defeated by something else we know (e.g. the LLMs’ architecture, behavior, or training), but it’s at least some prima facie evidence.
#### 00:38:45
- Overall: Looking at evidence for AI sentience…
- **I don’t think there’s remotely conclusive evidence that LLMs are sentient.**
- **But their impressive general abilities give at least limited prima facie evidence.**
- If there are no strong reasons against (and/or defeaters for the evidence), we should take the sentience hypothesis seriously.
- If an AI were as smart as a mouse, and we assume mice are sentient, then we would have to assume the AI is sentient as well, unless we knew of some relevant piece of evidence to believe otherwise.
- So what about reasons against?
#### 00:40:30
- Now section three: Examining reasons for thinking LLMs are not sentient.
- Challenge for opponents of LLM sentience
- If you think LLMs are not sentient, articulate a feature X such that
1. LLMs lack X
2. If a system lacks X it probably is not sentient
- And give good reasons for 1 and 2.
- There are _many_ candidates for X: biology, sensory perception, embodiment, world-model, human-level reasoning, global workspace, recurrent processing, unified agency.
- X = Biology
- Biological theory of consciousness: consciousness requires biology.
- I’ve addressed this and other general reasons against AI consciousness elsewhere (e.g. neuron-replacement arguments for substrate neutrality).
- This is kind of old news and has been argued over already.
- This evidence is… highly contentious!
- X = Sensory perception
- Reasons for AI not being sentient, summarized:
- X = biology — contentious
- X = sensory perception — contentious, temporary
- X = embodiment — weakish, temporary
- X = world-model — unobvious, temporary
- X = human-level reasoning — overly high bar
- X = global workspace — unobvious, temporary
- X = recurrent processing — strongish, temporary
- X = unified agency - strongish, temporary