
Google DeepMind scientist says LLMs will never be conscious, and explains why


A Google DeepMind researcher argues that large language models (LLMs) are unlikely to become conscious, because they depend on humans to organize the data they learn from and lack a physical body. His paper claims that AI systems can simulate, but not truly instantiate, consciousness.

A Google DeepMind researcher has argued that large language models (LLMs) may never achieve true consciousness. Alexander Lerchner, a senior staff scientist at DeepMind, describes LLMs as 'mapmaker-dependent': they can only learn from data that humans have already organized for them. His paper, titled 'The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness', argues that AI systems can process data and predict patterns within it, but cannot think on their own. Lerchner contends that simulating conversation or reasoning is not the same as experiencing thoughts or feelings, and that such experience would be impossible without a physical body.

The debate around AI consciousness has implications for how AI systems are regulated, used, and treated. If AI remains non-conscious, it will continue to be viewed as a tool rather than as a system that can feel or be aware.

