#### Introduction

In a 2022 talk titled _Could a Large Language Model be Conscious?_, philosopher David Chalmers explores whether large language models (LLMs) could be conscious and concludes that current LLMs have a <10% chance of being conscious, with LLMs in the next decade having a >25% chance.[^1] This claim will be examined in light of the 2017 paper _What is consciousness, and could machines have it?_ by Dehaene et al.[^2]

#### Summary of _Could a Large Language Model be Conscious?_ by David Chalmers

Chalmers covers four arguments _for_ LLM consciousness: 1) self-report, 2) seems-conscious, 3) conversational ability, and 4) general intelligence. Of these, Chalmers finds general intelligence the most convincing: "Conversation is not the fundamental thing here. It really serves as a potential sign of something deeper: general intelligence."

Chalmers then considers six arguments _against_ LLM consciousness: 1) biology, 2) senses/embodiment, 3) world-model, 4) global workspace, 5) recurrent processing, and 6) unified agency. He finds objections 4, 5, and 6 the strongest. (Global workspace is the theory supported by Dehaene and team.) Notably, all of these objections are temporary and may be overcome as AI advances, with the exception of the biology argument, which is also the most contentious.

#### Summary of _What is consciousness, and could machines have it?_ by Dehaene et al.

In their paper, Dehaene and team propose that "consciousness" is actually two orthogonal types of information-processing computations that we tend to conflate: global availability of information (C1) and self-monitoring (C2). C1 is the ability to bring information to mind and make it available to other cognitive processes. C2 refers to "a self-referential relationship in which the cognitive system is able to monitor its own processing and obtain information about itself," commonly referred to as introspection. Lastly, unconscious processes (C0) occur in parallel but outside conscious awareness and comprise most of human intelligence.

The paper concludes that current AI systems are "mostly implementing computations that reflect unconscious processing (C0)." Taking inspiration from neurobiology, the authors contend that "a machine endowed with C1 and C2 would behave as if it were conscious." They close with an intriguing illustration of how both C1 and C2 are necessary for subjective experience:

> In humans, damage to the primary visual cortex may lead to a neurological condition called “blindsight,” in which the patients report being blind in the affected visual field. Remarkably, those patients can localize visual stimuli in their blind field, but they cannot report them (C1) nor can they effectively assess their likelihood of success (C2)—they believe that they are merely “guessing.”

#### Analysis

While both works explore whether AI could be conscious, Chalmers takes a broader, more philosophical approach, evaluating various arguments regarding LLM consciousness. Dehaene and colleagues take a narrower, cognitive-scientific approach, based on the Global Workspace Theory (GWT) of consciousness. Dehaene and colleagues' paper makes a strong case that consciousness, understood as C1 and C2, could in principle be engineered, which aligns with Chalmers' optimism. Notably, GWT's conceptualization probably requires recurrent processing, since this is how the brain would likely process and broadcast information. If that is the case, the successful engineering of a GWT-driven AI would resolve two of Chalmers' three most serious objections (see the toy sketch below).
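To make the engineering intuition concrete, here is a minimal toy sketch (in Python, not drawn from either work) of a global-workspace-style loop: specialist modules compete for access, the winning content is broadcast back to every module as a rough stand-in for C1, and a self-monitoring module that tracks the system's own broadcasts stands in for C2. The module names, salience scores, and confidence heuristic are illustrative assumptions, not a claim about how Dehaene's proposal or an actual GWT-driven AI would be implemented.

```python
from dataclasses import dataclass
import random


@dataclass
class Message:
    source: str      # which specialist module produced this content
    content: str     # the information itself
    salience: float  # how strongly the module bids for global access


class VisionModule:
    """Toy C0-style specialist: processes input in parallel, outside the workspace."""

    def propose(self, inputs):
        stimulus = inputs.get("visual")
        if stimulus is None:
            return None
        return Message("vision", f"saw {stimulus}", salience=random.uniform(0.5, 1.0))

    def receive(self, broadcast):
        pass  # a fuller model would update internal state from the broadcast


class SelfMonitor:
    """Toy C2: observes the system's own broadcasts and reports a confidence estimate."""

    def __init__(self):
        self.history = []

    def propose(self, inputs):
        if not self.history:
            return None
        last = self.history[-1]
        # Crude self-assessment: reuse the last winner's salience as "confidence".
        return Message("self-monitor",
                       f"confidence in '{last.content}': {last.salience:.2f}",
                       salience=0.1)

    def receive(self, broadcast):
        self.history.append(broadcast)


class GlobalWorkspace:
    """Toy C1: selects one message per cycle and broadcasts it to all modules."""

    def __init__(self, modules):
        self.modules = modules

    def cycle(self, inputs):
        # Specialist modules propose content in parallel (unconscious-style processing).
        bids = [m.propose(inputs) for m in self.modules]
        bids = [b for b in bids if b is not None]
        if not bids:
            return None
        # Winner-take-all competition for global availability (C1).
        winner = max(bids, key=lambda b: b.salience)
        # Recurrent broadcast: every module, including the self-monitor, receives the winner.
        for m in self.modules:
            m.receive(winner)
        return winner


if __name__ == "__main__":
    workspace = GlobalWorkspace([VisionModule(), SelfMonitor()])
    for step in range(3):
        print(step, workspace.cycle({"visual": "a red square"}))
```

The broadcast step, in which every module receives the winning content on each cycle, is also the part that Chalmers' recurrent-processing objection targets: information loops back into the system rather than flowing strictly feed-forward.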
In light of this, Chalmers' claim of a >25% probability that LLMs will have achieved consciousness by 2032 seems modestly plausible. However, it is important to note that both works explicitly sidestep the hard problem of consciousness: why and how physical processes give rise to subjective experience, or qualia. Evidence such as blindsight suggests that C1 and C2 are _necessary_ for subjective experience, but it remains conceivable that they are _not sufficient_. Is there some missing ingredient? This question applies to all other physicalist theories of consciousness as well. Imagine an AI built around an LLM that exhibits both C1 and C2 and also includes a "music-enjoying" module. The AI focuses on a song you're playing and proclaims, "I remember this one. It's my favorite!" In what sense is the AI listening to and "enjoying" the song? What phenomenological experience, if any, is occurring? Neither Chalmers nor Dehaene and team offer an explanation. Chalmers even proposes the challenge of creating benchmarks for consciousness, a sign that we are still not entirely clear on what exactly consciousness is or how to verify it.

In conclusion, much philosophical, scientific, and engineering progress has been made, but we do not yet have a fully satisfactory account of consciousness, especially qualia. However, the research explored here provides a solid starting point.

[^1]: Chalmers, David J. "Could a Large Language Model be Conscious?" arXiv:2303.07103 [cs.AI], 2023.

[^2]: Dehaene, Stanislas, et al. "What is consciousness, and could machines have it?" _Science_ 358, no. 6362 (2017): 486–492. doi:10.1126/science.aan8871.