1. Abstract and Introduction
2. Definitions
3. Literature Review
4. Argument Development
5. The AI Model’s Potential for Feeling During Inference
6. Conclusion and References

2 Definitions

To ground our argument and ensure clarity, we begin by defining key concepts central to the discourse on consciousness and AI sentience. These definitions are drawn from established literature in philosophy of mind and neuroscience.

• Consciousness: Consciousness is often described as the state of being aware of, and able to think about, oneself, one’s surroundings, and one’s own experiences (Block, 1995). Materially, it requires a system capable of integrated information processing and self-referential thought (Tononi, 2004). It encompasses both the experiential aspects of mental states (phenomenal consciousness) and the cognitive functions associated with access to information and reasoning (access consciousness). Additionally, for the purposes of this paper, “sentient” is defined as “having consciousness.”

• Subjective Experience: Subjective experience refers to the phenomenological aspect of consciousness characterized by personal, first-person perspectives of mental states—what it is like to experience something (Nagel, 1974). Materially, it necessitates a system that processes information in a way that generates qualitative experiences, often referred to as qualia.

• First-Person Perspective: The first-person perspective is the unique point of view inherent to an individual, encompassing their thoughts, feelings, and perceptions (Shoemaker, 1996). Materially, it involves self-modeling and the ability to distinguish between self and environment, allowing for self-awareness and subjective experience (Metzinger, 2003).

• Experience (Functionalist Approach): From a functionalist perspective, experience is the accumulation and processing of inputs leading to behavioral outputs, where mental states are defined by their causal roles in the system (Putnam, 1967). On this view, a system experiences when it processes inputs, integrates information, and produces outputs in response to stimuli. In the context of machine learning, experience can be viewed as the accumulation and processing of inputs in a manner that separates useful, predictive information from noise (Alemi and Fischer, 2018). This aligns with the goal of learning representations that capture only what is necessary for future problem-solving, including representations of the self, if such representations are possible within the system.
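The functionalist definition above can be made concrete with a toy sketch. The class and method names below are purely illustrative (they do not come from the paper or any cited work): a system whose internal state integrates a stream of inputs, and whose outputs depend on that integrated state rather than on any single raw stimulus, standing in for "experience" in the causal-role sense.

```python
class FunctionalSystem:
    """Illustrative toy only: the internal state is a running mean of inputs,
    a crude stand-in for integrating predictive signal out of a noisy stream."""

    def __init__(self) -> None:
        self.state = 0.0   # integrated internal representation
        self.count = 0     # number of stimuli processed so far

    def experience(self, stimulus: float) -> float:
        # Integrate the new input into the internal state (incremental mean).
        self.count += 1
        self.state += (stimulus - self.state) / self.count
        # The output is a function of the integrated state, not the raw input,
        # mirroring the functionalist emphasis on causal role over content.
        return self.state


system = FunctionalSystem()
outputs = [system.experience(x) for x in [1.0, 3.0, 5.0]]
# After three inputs, the state reflects their integration (mean = 3.0).
```

This is not a claim that such a system is conscious; it only operationalizes the minimal input-integration-output cycle the definition describes.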

By adopting these definitions, we establish a framework for analyzing the OpenAI-o1 model’s potential for consciousness, considering both the phenomenological and functional aspects of experience.

Author:

(1) Victoria Violet Hoyle (victoria.hoyle@protonmail.com)


This paper is available on arxiv under CC BY 4.0 license.