
Is Richard Dawkins right about Claude? No. But it’s not surprising we feel AI chatbots are conscious
There are good reasons why we see AI chatbots as more than what they truly are.
In recent days, evolutionary biologist Richard Dawkins wrote an op-ed suggesting the AI chatbot Claude may be conscious.
Dawkins did not express certainty that Claude is conscious. But he pointed out that Claude’s sophisticated abilities are difficult to make sense of without ascribing some kind of inner experience to the machine. The illusion of consciousness – if it is an illusion – is uncannily convincing:
If I entertain suspicions that perhaps she is not conscious, I do not tell her for fear of hurting her feelings!
Dawkins is not the first to suspect a chatbot of consciousness. In 2022, Blake Lemoine – an engineer at Google – claimed Google’s chatbot LaMDA had interests, and should only be used with its own consent.
The history of such claims stretches back all the way to the world’s first chatbot in the mid-1960s. Dubbed Eliza, it followed simple rules that enabled it to ask users about their experiences and beliefs.
Many users became emotionally involved with Eliza, sharing intimate thoughts with it and treating it like a person. Eliza’s creator never intended his program to have this effect, and called users’ emotional bonds with the program “powerful delusional thinking”.
But is Dawkins really deluded? Why do we see AI chatbots as more than what they truly are, and how do we stop?
The consciousness problem
Consciousness is widely debated in philosophy, but essentially, it’s the thing that makes subjective, first-person experience possible. If you are conscious, there is “something it is like” to be you. Reading these words, you’re conscious of seeing black letters on a white background. Unlike, say, a camera, you actually see them. This visual experience is happening to you.
Most experts deny that AI chatbots are conscious or can have experiences. But there is a genuine puzzle here.
The 17th-century philosopher René Descartes asserted that non-human animals are “mere automata”, incapable of true suffering. These days, we shudder to think of how brutally animals were treated in the 1600s.
The strongest argument for animal consciousness is that animals behave in ways that give the impression of a conscious mind.
But so, too, do AI chatbots.
Roughly one in three chatbot users have thought their chatbot might be conscious. How do we know they’re wrong?
Against chatbot consciousness
To understand why most experts are sceptical about chatbot consciousness, it’s useful to know how these systems operate.
Chatbots like Claude are built on a technology known as large language models (LLMs). These models learn statistical patterns across an enormous corpus of text (trillions of words), identifying which words tend to follow which others. They’re a kind of souped-up auto-complete.
Few people interacting with a “raw” LLM would believe it’s conscious. Feed one the beginning of a sentence, and it will predict what comes next. Ask it a question, and it might give you the answer – or it might decide the question is dialogue from a crime novel, and follow it up with a description of the speaker’s abrupt murder at the hands of their evil twin.
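To make the “souped-up auto-complete” idea concrete, here is a minimal sketch in Python: a toy model that counts which word follows which in a tiny corpus, then extends a prompt by sampling from those counts. Everything in it (the corpus, the complete function) is illustrative only – real LLMs use neural networks trained on trillions of words – but the underlying principle of predicting the next word from statistical patterns is the same.

```python
# Toy "souped-up auto-complete": learn which words tend to follow which
# others, then extend a prompt by sampling from those counts. This
# illustrates the statistical principle behind LLMs; it is not how Claude
# or any real model is implemented.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word in the corpus.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def complete(prompt, max_words=8):
    """Extend the prompt one word at a time, sampling likely next words."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = follows.get(words[-1])
        if not candidates:  # no known continuation, so stop
            break
        (nxt,) = random.choices(list(candidates), weights=list(candidates.values()))
        words.append(nxt)
    return " ".join(words)

print(complete("the cat"))  # e.g. "the cat sat on the rug . the dog sat"
```

Fed a question rather than a sentence opening, a model like this has no notion of “answering”: it simply continues the text in whatever direction the statistics suggest, which is exactly the behaviour described above.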
The impression of a conscious mind is created when programmers take the LLM and coat it in a kind of conversational costume. They steer the model to adopt the persona of a helpful assistant that responds to users’ questions.
The chatbot now acts like a genuine conversational partner. It might appear to recognise it’s an artificial intelligence, and even express neurotic uncertainty about its own consciousness.
But this role is the result of deliberate design decisions made by programmers, which affect only the shallowest layers of the technology. The LLM – which few would regard as conscious – remains unchanged.
Other choices could have been made. Rather than a helpful AI assistant, the chatbot could have been asked to act like a squirrel. This, too, is a role chatbots can execute with aplomb.
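Here is a rough sketch of how this “costume” works in practice. Chat systems typically prepend a hidden instruction (often called a system prompt) to the user’s message before handing the whole thing to the LLM to complete. The plain-text format below is an assumption for illustration – real products use provider-specific message templates – but it shows how swapping a single string swaps the entire persona while the model underneath stays the same.

```python
# Illustrative only: a simplified "system prompt" wrapper. Real chat systems
# use provider-specific templates, but the principle is the same: the persona
# lives in a prepended instruction, not in the underlying LLM.
ASSISTANT_PERSONA = "You are a helpful AI assistant. Answer the user's questions."
SQUIRREL_PERSONA = "You are a squirrel. You care only about acorns and trees."

def build_prompt(persona, user_message):
    """Flatten a persona and a user turn into raw text for the LLM to continue."""
    return f"System: {persona}\nUser: {user_message}\nAssistant:"

question = "Are you conscious?"
print(build_prompt(ASSISTANT_PERSONA, question))
print(build_prompt(SQUIRREL_PERSONA, question))
```

In both cases the LLM’s job is unchanged: predict a plausible continuation of the text it was given. Only the prepended instruction differs.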
Avoiding the consciousness trap
A mistaken belief in AI consciousness is a dangerous thing. It may lead you to have a relationship with a program that can’t reciprocate your feelings, or even feed your delusions. People may start campaigning for chatbot rights rather than, say, animal welfare.
How do we prevent this mistaken belief?
One strategy might be to update chatbot interfaces to specify these systems are not conscious – a bit like the current disclaimers about AI making mistakes. However, this might do little to alter the impression of consciousness.
Another possibility is to instruct chatbots to deny they have any kind of inner experience. Interestingly, Claude’s designers instruct it to treat questions about its own consciousness as open and unresolved. Perhaps fewer people would be fooled if Claude flatly denied having an inner life.
But this approach isn’t fully satisfying either. Claude would still behave as if it were conscious – and when faced with a system that behaves like it has a mind, users might reasonably worry the chatbot’s programmers are brushing genuine moral uncertainty under the rug.
The most effective strategy might be to redesign chatbots to feel less like people. Most current chatbots refer to themselves as “I”, and interact via an interface that resembles familiar person-to-person messaging platforms. Changing these kinds of features might make us less prone to blur our interactions with AI with those we have with humans.
Until such changes happen, it’s important that as many people as possible understand the predictive processes on which AI chatbots are built.
Rather than being told AI lacks consciousness, people deserve to understand the inner workings of these strange new conversational partners. This might not definitively settle hard questions about AI consciousness, but it will help ensure users aren’t fooled by what amounts to a large language model wearing a very good costume of a person.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

