Recently, while reading relevant books and papers, I came across several arguments supporting the idea that machines could develop consciousness; of course, there are also many opposing voices. Below are some of the supporting views I've compiled recently:
In the book "A Thousand Brains", one chapter discusses the topic of "When Machines Have Consciousness". The book explores, from the perspective of brain science, whether machines could possibly develop consciousness.
The book argues that awareness and the sense of presence are core parts of consciousness; they depend on continuously forming memories of our recent thoughts and experiences, and on replaying those memories in daily life.
If two people have different qualia for the same input, then the world models in their brains must differ.
By this account, any system that learns a model of the world, continuously memorizes the states of that model, and can recall those memorized states may be conscious. Therefore, machines may also possess consciousness.
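To make that criterion concrete, here is a minimal sketch in Python of an agent that maintains a world model, continuously stores snapshots of its recent states, and replays them on demand. All the names here (`TinyAgent`, `observe`, `replay`) are hypothetical; this only illustrates the "remember and recall model states" idea, and is not an implementation of anything from the book:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class TinyAgent:
    """Toy agent illustrating the book's criterion: it keeps a model of
    the world, continuously memorizes that model's recent states, and can
    replay them. Purely illustrative; not a claim about consciousness."""
    model: dict = field(default_factory=dict)                      # current world model
    memory: deque = field(default_factory=lambda: deque(maxlen=100))

    def observe(self, key, value):
        # Update the world model with a new observation...
        self.model[key] = value
        # ...and immediately memorize a snapshot of the model's state.
        self.memory.append(dict(self.model))

    def replay(self, n=3):
        # Recall the n most recent memorized states: the "playback"
        # of recent experience that the book describes.
        return list(self.memory)[-n:]

agent = TinyAgent()
for reading in [20.5, 20.7, 21.0, 21.4]:
    agent.observe("temperature", reading)
print(agent.replay())   # the agent can re-visit its recent states
```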
Another theory, Integrated Information Theory (IIT), holds that if a system integrates information into a unified experience, it can be considered conscious. If an AI system reaches a certain threshold of integrated information (usually denoted Φ), then according to this theory it might be regarded as conscious.
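Computing the real Φ is notoriously expensive, but a toy example can show the flavor of "integration". The sketch below is my own illustration, not part of IIT's formal definition: it takes a made-up 3-node boolean network and measures the mutual information between two parts of its next state. A strictly modular system, whose parts ignore each other, would score zero; a system whose parts constrain each other scores higher.

```python
import itertools
import math

# Hypothetical 3-node boolean network; the update rules are invented
# for illustration, not taken from any IIT paper.
def step(state):
    a, b, c = state
    return (b | c, a ^ c, a & b)

def integration_proxy():
    # Assume a uniform prior over the 8 possible current states and
    # compute the mutual information between part X = {node 0} and
    # part Y = {nodes 1, 2} of the next state. This is only a crude
    # stand-in for integration, not the actual Phi of IIT.
    joint = {}
    for state in itertools.product([0, 1], repeat=3):
        nxt = step(state)
        x, y = nxt[0], (nxt[1], nxt[2])
        joint[(x, y)] = joint.get((x, y), 0) + 1 / 8

    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p

    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items())

print(f"integration proxy: {integration_proxy():.3f} bits")  # ~0.123 here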
A paper published last month also discussed this topic, titled "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness." Paper link: https://arxiv.org/pdf/2308.08708.pdf.
The paper derives a set of "indicator properties" of consciousness from current scientific theories and assesses several existing AI systems against them. Its conclusion is that no current AI system is conscious, but that there are no obvious technical obstacles to building AI systems that meet these criteria for consciousness.
The main logic of the paper is as follows:
- Adopt computational functionalism, the view that performing a certain kind of computation is necessary and sufficient for consciousness, as a working hypothesis. This is a mainstream but controversial position in the philosophy of mind. It is adopted out of pragmatism: unlike rival views, it implies that AI could in principle be conscious, and that studying how AI systems function is relevant to judging whether they might be. In other words, if computational functionalism is true, then working out its implications for AI consciousness is productive.
- Treat neuroscience as the source of crucial empirical evidence for theories of consciousness, which can help evaluate the consciousness of AI. These theories aim to identify the functions that are necessary and sufficient for human consciousness; computational functionalism implies that performing similar functions would also be sufficient for generating consciousness in AI.
- Use a theory-driven approach, which the authors consider most suitable for studying the consciousness of AI. This means investigating whether AI systems perform functions similar to those that scientific theories associate with consciousness, then assigning a level of confidence based on the similarity of the functions, the strength of the relevant theories, and one's credence in computational functionalism (a toy illustration of this scoring idea follows below). The main alternative, behavioral tests for consciousness, is unreliable because AI systems can mimic human behavior while working in very different ways.
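As a toy illustration of that theory-driven, rubric-style assessment, one might encode indicators as a checklist and combine per-indicator judgments into a rough credence. The theory names below are ones the paper actually surveys, but the indicator wording, all the numbers, and the weighted-average aggregation rule are invented for this example; the paper argues for graded confidence but does not prescribe a numeric formula.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One indicator property derived from a theory of consciousness.
    Names and weights here are illustrative, not the paper's exact list."""
    name: str
    theory: str           # which scientific theory it comes from
    satisfied: float      # judged degree of satisfaction, 0.0 to 1.0
    theory_weight: float  # how strongly we trust the source theory

# Invented example ratings for a hypothetical AI system.
indicators = [
    Indicator("recurrent processing", "Recurrent Processing Theory", 0.8, 0.6),
    Indicator("global broadcast of information", "Global Workspace Theory", 0.4, 0.7),
    Indicator("metacognitive monitoring", "Higher-Order Theories", 0.2, 0.5),
]

CREDENCE_IN_FUNCTIONALISM = 0.6  # made-up prior in computational functionalism

# Weighted average of indicator satisfaction, discounted by our prior;
# a deliberate simplification of the paper's qualitative reasoning.
evidence = (sum(i.satisfied * i.theory_weight for i in indicators)
            / sum(i.theory_weight for i in indicators))
confidence = CREDENCE_IN_FUNCTIONALISM * evidence
print(f"rough credence that the system is conscious: {confidence:.2f}")
```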
This paper is quite long and academic; I haven't fully understood it either, having only skimmed it. If you're interested, you can download the original text and study it yourself.
However, "A Thousand Brains" raises an interesting question: if machines have consciousness, does that mean we can no longer turn them off, because doing so would be equivalent to ending the life of a conscious being?
The book's answer is that it doesn't matter: turning a machine off is just like sleep, and it can be turned on again and continue operating. Even if a permanent shutdown is equivalent to death, it still doesn't matter, because consciousness resides in the neocortex (the new brain), while the fear of death comes from the brainstem (the old brain), so machines would not have such emotions. Without those emotions, even death would not harm it.