MICHAEL POLLAN
THE BIG STORY
FEB 24, 2026
AI Will Never Be Conscious
In his new book, A World Appears, Michael Pollan argues that artificial intelligence can do many things—it just can’t be a person.
PHOTO-ILLUSTRATION: WIRED STAFF; GETTY IMAGES
THE BLAKE LEMOINE incident is remembered today as a high-water mark of AI hype. It thrust the whole idea of conscious AI into public awareness for a news cycle or two, but it also launched a conversation, among both computer scientists and consciousness researchers, that has only intensified in the years since. While the tech community continues to publicly belittle the whole idea (and poor Lemoine), in private it has begun to take the possibility much more seriously. A conscious AI might lack a clear commercial rationale (how do you monetize the thing?) and create sticky moral dilemmas (how should we treat a machine capable of suffering?). Yet some AI engineers have come to think that the holy grail of artificial general intelligence—a machine that is not only supersmart but also endowed with a human level of understanding, creativity, and common sense—might require something like consciousness to attain. In the tech community, what had been an informal taboo surrounding conscious AI—as a prospect that the public would find creepy—suddenly began to crumble.
The turning point came in the summer of 2023, when a group of 19 leading computer scientists and philosophers posted an 88-page report titled “Consciousness in Artificial Intelligence,” informally known as the Butlin report. Within days, it seemed, everyone in the AI and consciousness science community had read it. The draft report’s abstract offered this arresting sentence: “Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious barriers to building conscious AI systems.”
The authors acknowledged that part of the inspiration behind convening the group and writing the report was “the case of Blake Lemoine.” “If AIs can give the impression of consciousness,” a coauthor told Science magazine, “that makes it an urgent priority for scientists and philosophers to weigh in.”
But what caught everyone’s attention was that single statement in the abstract of the preprint: “no obvious barriers to building conscious AI systems.” When I read those words for the first time, I felt like some important threshold had been crossed, and it was not just a technological one. No, this had to do with our very identity as a species.
What would it mean for humanity to discover one day in the not-so-distant future that a fully conscious machine had come into the world? I’m guessing it would be a Copernican moment, abruptly dislodging our sense of centrality and specialness. We humans have spent a few thousand years defining ourselves in opposition to the “lesser” animals. This has entailed denying animals such supposedly uniquely human traits as feelings (one of Descartes’s most flagrant errors), language, reason, and consciousness. In the last few years, most of these distinctions have disintegrated as scientists have demonstrated that plenty of species are intelligent and conscious, have feelings, and use language and tools, in the process challenging centuries of human exceptionalism. This shift, still underway, has raised thorny questions about our identity, as well as about our moral obligations to other species.
With AI, the threat to our exalted self-conception comes from another quarter entirely. Now we humans will have to define ourselves in relation to AIs rather than other animals. As computer algorithms surpass us in sheer brainpower—handily beating us at games like chess and Go and various forms of “higher” thought like mathematics—we can at least take solace in the fact that we (and many other animal species) still have to ourselves the blessings and burdens of consciousness, the ability to feel and have subjective experiences. In this sense, AI may serve as a common adversary, drawing humans and other animals closer together: us against it, the living versus the machines. This new solidarity would make for a heartwarming story and might be good news for the animals invited to join Team Conscious. But what happens if AI begins to challenge the human—or animal, I should say—monopoly on consciousness? Who will we be then?
I find this a deeply unsettling prospect, though I’m not entirely sure why. I’m getting comfortable with the idea of sharing consciousness with other animals (and possibly even with plants, in my case) and I’d be happy to admit them into an expanding circle of moral consideration. But machines?
It could be that my discomfort with the idea stems from my background and education. I have been slow-cooked in the warm broth of the humanities, especially literature and history and the arts, and these have always held up human consciousness as something exceptional that is worth defending. Just about everything we value about civilization is the product of human consciousness: the arts and the sciences, high culture and low, architecture, philosophy, religion, government, law, and ethics and morality, not to mention the very idea of value itself. I suppose it is possible that conscious computers could add something new and as yet unimagined to the stock of these glories. We can hope so. To date, poetry written by AIs isn’t much better than doggerel; the absence of consciousness might explain why it lacks even a spark of originality or fresh insight. But how will we feel if (when?) conscious AIs start producing really good poetry?
Why should we assume that conscious machines would be any more virtuous than conscious humans?
As a humanist, I struggle with the possibility that the animal monopoly on consciousness might fall. But I have now met other types of humans (some of whom call themselves transhumanists) who are more sanguine about this future. Some AI researchers endorse the effort to build conscious machines because, as entities with feelings of their own, conscious machines are more likely to develop empathy than computers that are merely intelligent. Building a conscious AI is a moral imperative, as both a neuroscientist and an AI researcher sought to convince me. Why? Because the alternative is the blazingly smart but unfeeling AI that will be ruthless in pursuit of its objectives, because it will lack all of the moral constraints that have arisen from our consciousness and shared vulnerabilities. Only a conscious AI is apt to develop empathy and therefore spare us. I am not exaggerating; this is the argument.
One has to wonder if these people have ever read Frankenstein! Dr. Frankenstein gives his creation the gift of not only life but also consciousness, and therein lies the rub. Mary Shelley’s novel chronicles “the creation of a sensitive and rational animal,” and it is the combination of those two qualities that determines the monster’s fate. It is not the monster’s rationality but his emotional injury that spurs him to seek revenge and turn homicidal.
“Everywhere I see bliss, from which I alone am irrevocably excluded,” the monster complains to Dr. Frankenstein after being driven out of human society. “I was benevolent and good; misery made me a fiend.” The monster’s ability to reason surely helped him realize his demonic scheme, but it was his consciousness—his feelings—that supplied the motive. Why should we assume that conscious machines would be any more virtuous than conscious humans?
REMARKABLY ENOUGH, THE Butlin report on artificial consciousness represents something of a consensus view in the field; most of the computer scientists I interviewed endorsed its conclusions. Yet the more time I spent reading it (and interviewing one of its coauthors), the more I began to question its conclusion that artificial consciousness is right around the corner. To their credit, the authors are scrupulous about setting forth their assumptions and methods, both of which make me wonder if they haven’t erected their bold conclusion atop a dubious foundation.
Right on page one, these computer scientists and philosophers set forth their guiding assumption: “We adopt computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness, as a working hypothesis.” Computational functionalism takes as its starting point the idea that consciousness is essentially a kind of software running on the hardware of what could be a brain or a computer—the theory is completely agnostic. But is computational functionalism true? The authors aren’t quite prepared to nail themselves to that claim, only to say that it is “mainstream—although disputed.” Even so, they will proceed on the assumption that it is true for “pragmatic reasons.”
The candor is admirable, but the approach demands a tremendous leap of faith that I’m not sure we should make.
For the purposes of the report, the “material substrate” of the system—that is, whether it is a brain or a computer—“does not matter for consciousness … It can exist in multiple substrates, not just in biological brains.” Any substrate that can run the necessary algorithm will do. “We tentatively assume that computers as we know them are in principle capable of implementing algorithms sufficient for consciousness,” the authors state, “but we do not claim that this is certain.” The acknowledgment of uncertainty doesn’t go nearly far enough. Unquestioned in the report is the metaphor that brains are computers—the hardware on which the software of consciousness is run. Here, we meet a metaphor parading as fact. Indeed, the whole paper and its conclusions hinge on the validity of this metaphor.
Metaphors can be powerful tools for thinking, but only as long as we don’t forget they are metaphors—imperfect or partial analogies likening one thing to another. The differences between the two things are as important as the similarities, but these differences seem to have gotten lost in the enthusiasm surrounding AI. As cyberneticists Arturo Rosenblueth and Norbert Wiener noted years ago, “The price of metaphor is eternal vigilance.” Beyond the authors of this report, the whole field of AI appears to have let down its guard on this one.
Consider the sharp distinction between hardware and software. The beauty of separating hardware from software in computers is that a great many different programs can run on the same machine; the software and the knowledge it encodes survive the “death” of the hardware. The separation also speaks to our folk intuition that dualism is true—that, following Descartes, we can draw a bright line between mental stuff and physical stuff. But the distinction between hardware and software simply doesn’t exist in brains; there, software is hardware and vice versa. A memory is a physical pattern of connection among neurons in the brain, neither hardware nor software but both.
Indeed, everything that happens to you—everything you experience or learn or remember—changes the physical structure of your brain, permanently rewiring its connections. (In this sense, there is no dualism in the brain; mental stuff can never be completely disentangled from physical stuff.) The idea that the same consciousness algorithm can be run on a variety of different substrates makes no sense when the substrate in question—a brain—is continually being physically reconfigured by whatever information (or “algorithm of consciousness”) is run on it. Your brain is materially different from mine precisely because it has been shaped, literally, by different life experiences—that is, by consciousness itself. Brains are simply not interchangeable, neither with computers nor with other brains.
Just about anyplace you push on it, the computer-as-brain metaphor breaks down. Computer scientists treat neurons in a brain as though they are transistors on a chip, switched on or off by pulses of electricity. That analogy has some truth to it, but it is complicated by the fact that electricity is not the only factor influencing the firing of neurons. Brains are also awash in chemicals, including neuromodulators and hormones that powerfully influence the behavior of neurons, not just whether or not they fire but how strongly. This is why psychoactive drugs can profoundly alter consciousness (and have no discernible effect on computers). The activity of neurons is also influenced by oscillations that traverse the brain in wavelike patterns; the different frequencies of these oscillations correlate with different mental operations, such as consciousness and its absence, focused attention and dreaming (as well as other stages of sleep).
To liken neurons to transistors is to grossly underestimate their complexity. Compared with transistors on a chip, neurons in the brain are massively interconnected, each one communicating directly with as many as 10,000 others in a network so intricate that we are still decades away from being able to draw even the crudest map of its connections. In computer science, much has been made about the advent of “deep artificial neural networks”—a type of machine-learning architecture, supposedly modeled on the brain’s, that layers a mind-boggling number of processors in such a way that the network can process and learn from vast troves of data. Impressive, for sure, yet a recent study demonstrated that a single cortical neuron can do everything an entire deep artificial neural network can.
Yes, there are plenty of ways in which computers do resemble brains, and computer science has made great strides by simulating various aspects and operations of the brain. But the idea that brains and computers are in any way interchangeable—the premise of computational functionalism—is surely a stretch. And yet this is the premise upon which stands not only the Butlin report but also most of the field. It’s not hard to see why. If brains are computers, then sufficiently powerful computers should be able to do whatever brains do, including becoming conscious. The premise all but guarantees the conclusion. Put another way, it is the authors themselves who have removed the biggest “barrier” to building a conscious AI—the barrier that says brains differ from computers in crucial ways.
To liken neurons to transistors is to grossly underestimate their complexity.
There is a second aspect of the report that makes me wonder how seriously to take its conclusion, and that is the standard it proposes for deciding if an AI is actually conscious or not. This is a serious challenge. Citing the Lemoine incident (fairly or not), the authors point out that AIs can easily dupe humans into believing they are conscious when they are not. (It’s probably more accurate to say that we dupe ourselves into this belief, thanks to our weakness for anthropomorphism and magic.) “Reportability” (philosophical jargon for just asking the AI itself) won’t work when the AI has been trained on pretty much everything that’s been said and written about consciousness. One approach to this dilemma would be to remove all references to consciousness (and presumably feeling and emotion as well) from the dataset on which the AI has been trained and then see if it can still speak convincingly about being conscious.
Instead, the authors propose that we look for “indicators” of AI consciousness that match the predictions of the various theories of consciousness in play. So, for example, if the design of an AI included a workspace that brought together various streams of information, but only after those streams had competed to enter it, that would look a lot like global workspace theory and so might qualify as conscious. The report reviewed a half-dozen theories of consciousness, identifying the “indicators” that an AI would have to exhibit to satisfy each of them and, by doing so, be deemed potentially conscious.
The problem here (well, one of them) is this: None of the theories of consciousness that it proposes we measure AIs against are even remotely close to being proved to anyone’s satisfaction. So what kind of standard of proof is that? What’s more, many of these theories can be simulated in the design of an AI, which should come as no surprise, because they’re all based on the idea that consciousness is a matter of computation. Round and round we go.
By the time I finished digesting the Butlin report, the Copernican moment I’d worried about seemed more distant than the report’s bold conclusion had led me to believe. After reviewing the half-dozen or so theories of consciousness covered by the report, it seemed clear that all of them stacked the deck by taking for granted that consciousness could be reduced to some kind of algorithm.
I was also struck by what was missing from the theories under consideration. None of them had anything to say about embodiment—the idea that consciousness might depend on having both a body and a brain—or, for that matter, anything remotely biological. Nor did the theories have anything to say about the conscious subject. Who or what, exactly, is the recipient of the information that is broadcast in the global workspace? Or the information that is integrated in integrated information theory (IIT)? And what about the role of feelings in rendering experience conscious?
This last point was not lost on the authors, who noted the absence of “affect” from most current theories and recommended that the field pay more attention to the issue of whether conscious machines would have “real” feelings, because if it turns out they do, we will have a moral and ethical crisis on our hands. “Any entity which is capable of conscious suffering deserves moral consideration,” the report states. (But isn’t suffering always conscious?) “This means that if we fail to recognize the consciousness of conscious AI systems,” the report continued, “we may risk causing or allowing morally significant harms.” What would we owe machines that can suffer? And do we really want to bring any more suffering into the world?
Apart from this sort of highly speculative discussion of feeling (as a troublesome by-product of making machines conscious), in the AI community, the conversation about consciousness is as relentlessly abstract—as bloodless, bodiless, and utterly oblivious to biology—as one would expect. When I posed the suffering-computer conundrum to a researcher seeking to build a conscious AI, he waved away the problem, explaining it could be offset with a simple fix to the algorithm: “There’s no reason we couldn’t just turn up the dial on joy.”
Adapted from A World Appears: A Journey into Consciousness by Michael Pollan. Copyright © 2026 by Michael Pollan. Published by arrangement with Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC.