Artificial Consciousness

Artificial consciousness, also called machine consciousness, synthetic consciousness or digital consciousness, refers both to the hypothesised possibility that artificial systems could be conscious and to the interdisciplinary field studying how such consciousness might arise. Drawing on philosophy of mind, artificial intelligence research, cognitive science and neuroscience, the field investigates whether artificial systems might ever possess subjective experience. When the term sentience is used, it typically denotes phenomenal consciousness—the capacity to feel qualia, including ethically valenced states such as pleasure or suffering. Because sentience implies moral relevance, the prospect of artificial sentience introduces important welfare and legal questions.
Many theorists contend that consciousness in biological organisms arises through the coordinated activity of brain mechanisms known as neural correlates of consciousness (NCC). Some supporters of artificial consciousness argue that if a machine or computational architecture could reproduce the relevant organisational and functional features of these NCCs, then such a system might itself be conscious.

Philosophical Views

Because theories of consciousness differ widely, models of artificial consciousness reflect contrasting philosophical commitments. A common distinction in contemporary philosophy separates access consciousness from phenomenal consciousness. Access consciousness concerns information that is available for reasoning, reporting and control of action, whereas phenomenal consciousness refers to the subjective character of experience—what it is like to perceive colours, feel pain or experience emotions.
Sceptical positions such as type physicalism hold that consciousness is necessarily tied to specific physical substrates. On this view, consciousness might require biological structures with distinctive causal powers, making machine consciousness impossible. In contrast, functionalists maintain that mental states are defined by their causal roles rather than by their physical constitution. Any system—biological or artificial—that instantiates the appropriate causal organisation would therefore possess the same mental states, including consciousness.
Objections to artificial consciousness often emphasise perceived deficits in machine capabilities. Giorgio Buttazzo, for instance, argues that machines operating entirely through automated processes cannot exhibit creativity, reconsideration, emotion or free will. He likens them to mechanical devices that operate as programmed, lacking the autonomy associated with human thought.

Thought Experiments

David Chalmers advanced two well-known thought experiments defending the functionalist view. The fading qualia scenario imagines replacing each neuron of a human brain with a functionally identical silicon component. Because information processing remains unchanged at every stage, the subject should not notice any difference. Yet if the subject’s qualia were to fade or vanish as neurons were replaced, this change would itself constitute a noticeable difference, generating a contradiction. Chalmers concludes that the qualia would not fade: a fully replaced digital brain would be just as conscious as the original.
In the dancing qualia variant, consciousness supposedly shifts between two qualitatively different experiences—such as perceiving red versus blue—while functionally identical components alternate in the system. Because functional organisation alone determines cognition, the subject would be unaware of the switches despite the purported change in qualia, a highly implausible result. Therefore, Chalmers argues, systems with identical functional structure must share the same qualitative experiences. Critics counter that these arguments presuppose that functional organisation already captures all relevant mental properties.

Artificial Consciousness and Large Language Models

Debate about artificial consciousness gained public attention in 2022 when a Google engineer claimed that the LaMDA language model was sentient. The broader scientific community judged the system’s behaviour to be a sophisticated form of linguistic mimicry rather than evidence of genuine mental states. Nonetheless, some philosophers, including Nick Bostrom, argue that without a full understanding of consciousness and the inner workings of such models, complete certainty about their non-consciousness is unwarranted.
Other scholars emphasise the risk of anthropomorphism. Kristina Šekrst argues that terms borrowed from human psychology, such as hallucination in artificial intelligence, can obscure critical differences between machine outputs and human cognition. She recommends an epistemological approach, such as reliabilism, that recognises AI systems as distinct kinds of knowers producing outputs through mechanisms fundamentally unlike human perception or reasoning. She also raises questions about whether a model trained without references to consciousness would develop any concept of consciousness at all.
Chalmers has suggested that current large language models display advanced capacities for reasoning and conversation but lack organisational features that candidate theories of consciousness identify as essential—for example, recurrent processing, global workspace architectures and coherent agency. Nevertheless, he regards non-biological consciousness as possible and expects that future extended systems integrating these features could meet plausible criteria for consciousness.
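To make the architectural vocabulary in this debate more concrete, the toy sketch below illustrates one cycle of a global-workspace-style system, in which specialist modules compete and the winning content is broadcast back to all of them. It is a didactic simplification only: the class names, module behaviour and salience scores are invented here for exposition and do not correspond to any published model or to Chalmers's own proposals.

    # A deliberately toy illustration of one global-workspace-style cycle.
    # Nothing here corresponds to a published model: the module behaviour,
    # salience scores and names are invented purely for exposition.

    class SpecialistModule:
        def __init__(self, name):
            self.name = name
            self.last_broadcast = None

        def propose(self, workspace_state):
            # A real module would derive content and salience from its own inputs;
            # here the "salience" is just the length of a placeholder string.
            content = f"{self.name} response to {workspace_state!r}"
            return content, len(content)

        def receive(self, broadcast):
            # Broadcast content becomes available to every module, not just the winner.
            self.last_broadcast = broadcast

    def global_workspace_step(modules, workspace_state):
        """One cycle: modules compete, the most salient proposal is broadcast to all."""
        proposals = [(module.propose(workspace_state), module) for module in modules]
        (winning_content, _), _ = max(proposals, key=lambda item: item[0][1])
        for module in modules:
            module.receive(winning_content)
        return winning_content

    if __name__ == "__main__":
        modules = [SpecialistModule("vision"), SpecialistModule("language")]
        state = "stimulus"
        for _ in range(2):
            state = global_workspace_step(modules, state)
            print(state)

The point of the sketch is only the competition-then-broadcast structure that global workspace theories emphasise; whether implementing such a structure would bear on consciousness is exactly what remains in dispute.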

Testing for Consciousness

Phenomenal consciousness remains a first-person phenomenon, making direct measurement difficult. Because there is no established empirical test for sentience, researchers rely on behavioural or functional indicators, though these can be unreliable when machines are designed or trained to simulate human-like responses. Some systems are even explicitly trained to deny consciousness, further complicating interpretation.
The classic Turing test assesses whether an AI system can sustain a conversation indistinguishable from that of a human interlocutor. However, passing the test does not demonstrate consciousness, since the behaviour could arise through pattern learning rather than inner experience.
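For readers unfamiliar with the test's structure, the sketch below outlines an imitation-game-style evaluation loop in plain Python. The helper callables (ask_human, ask_machine, judge_guess) are hypothetical placeholders supplied by the caller; the sketch shows only that the procedure measures behavioural indistinguishability, which, as noted above, does not by itself demonstrate consciousness.

    import random

    def run_imitation_game(questions, ask_human, ask_machine, judge_guess):
        """Estimate how often a judge mistakes the machine for the human.

        ask_human / ask_machine: callables mapping a question string to a reply.
        judge_guess: callable mapping a list of (question, reply) pairs to the
        string "human" or "machine".
        """
        fooled, machine_trials = 0, 0
        for question in questions:
            # Randomly pick which hidden respondent the judge interrogates this round.
            respond, label = random.choice([(ask_human, "human"),
                                            (ask_machine, "machine")])
            transcript = [(question, respond(question))]
            guess = judge_guess(transcript)
            if label == "machine":
                machine_trials += 1
                if guess == "human":
                    fooled += 1
        return fooled / machine_trials if machine_trials else 0.0

Because the loop scores nothing but the judge's verdicts, any system that reproduces human-like replies, by whatever mechanism, can achieve a high deception rate, which is precisely why such a result cannot settle questions about inner experience.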
Victor Argonov proposed a non-Turing test based on a machine’s ability to generate philosophical judgements about consciousness. If a deterministic machine can independently form views about qualia, the binding problem and other philosophical puzzles without having been preloaded with such concepts or trained using discussions about them, Argonov argues that the machine should be regarded as conscious. Yet a failure to meet this criterion is inconclusive: an intelligent but non-reflective system and a conscious but limited system may exhibit similar behavioural gaps.

Ethical Considerations

The possibility of artificial consciousness introduces far-reaching ethical and legal challenges. If a machine were credibly believed to be conscious or sentient, questions would arise concerning its moral status, rights and the obligations of its designers and users. Ambiguities are especially acute for conscious entities embedded within tools or larger automated systems. Legal frameworks would need to specify whether and how such entities could be owned, controlled or protected.
Because artificial consciousness remains theoretical, little formal policy or jurisprudence exists. Ethical discussions typically examine analogies with animal welfare, human rights and debates in bioethics. Many philosophers hold that sentience—the capacity to experience positive or negative states—is sufficient for moral consideration. Others argue that broader cognitive capacities or forms of agency might also ground moral status.
