<cite id="ffb66"></cite><cite id="ffb66"><track id="ffb66"></track></cite>
      <legend id="ffb66"><li id="ffb66"></li></legend>
      色婷婷久,激情色播,久久久无码专区,亚洲中文字幕av,国产成人A片,av无码免费,精品久久国产,99视频精品3
      網(wǎng)易首頁 > 網(wǎng)易號(hào) > 正文 申請(qǐng)入駐

      深度故事|新手老師AI時(shí)代的課堂手記

      0
      分享至

      教師大戰(zhàn)聊天機(jī)器人:我在人工智能時(shí)代的課堂之旅


      我當(dāng)時(shí)是個(gè)新手,第一次要應(yīng)對(duì)所有課堂上常見的難題。把人工智能也一股腦兒地塞進(jìn)來,感覺就像在恐慌發(fā)作時(shí)猛灌一杯咖啡。

      彼得·C·貝克

      2026年3月3日星期二 05:00 GMT

      兩年前,39歲的我開始接受教師培訓(xùn)。我想教英語——幫助年輕人成為更優(yōu)秀的讀者、作家和思考者,并與他們建立更深層次的文學(xué)聯(lián)系。在做了15年的自由撰稿人和小說家之后,我自信自己能夠有所貢獻(xiàn)。但隨著培訓(xùn)的深入,我卻越來越感到迷茫。有一個(gè)問題始終縈繞在我心頭,讓我不知所措:我們?cè)撊绾螒?yīng)對(duì)人工智能?

      眼前的難題是:所有學(xué)生現(xiàn)在都能免費(fèi)使用在線聊天機(jī)器人,并能按需生成流暢且相當(dāng)復(fù)雜的文章,這對(duì)英語教學(xué)意味著什么?這個(gè)問題其實(shí)是一系列由來已久的教學(xué)難題中最棘手的一個(gè):我們?cè)趯W(xué)校里究竟想做什么?我們應(yīng)該如何去做?我們?nèi)绾闻袛嗍欠癯晒Γ课耶?dāng)時(shí)是個(gè)新手,第一次面對(duì)這一切。把人工智能引入教學(xué),感覺就像在恐慌發(fā)作時(shí)猛灌一杯咖啡。

      我開始瘋狂地搜尋關(guān)于人工智能和英語課堂的各種觀點(diǎn):教育學(xué)播客、教育學(xué)Substacks、教育學(xué)YouTube頻道。我的算法推送捕捉到了我的興趣,并開始迎合我的需求,為我提供源源不斷的資訊——其中也包括科技公司鋪天蓋地的廣告——這些內(nèi)容都聲稱能幫助我思考這些迫切的問題,并確保我能為我的學(xué)生盡到應(yīng)盡的責(zé)任。

      我很快意識(shí)到,這是一個(gè)充滿激烈爭論、甚至常常充滿敵意的世界。一方(簡單來說)是人工智能的反對(duì)者:教師和教育專家們認(rèn)為,人工智能無異于貪婪的科技公司對(duì)課堂教學(xué)核心活動(dòng)的生存威脅。他們認(rèn)為,學(xué)生需要學(xué)習(xí)如何克服困難:閱讀復(fù)雜的文本,構(gòu)建復(fù)雜的論點(diǎn)。他們需要明白,這些過程充滿摩擦和不確定性,他們需要學(xué)會(huì)接受這一事實(shí),而不是逃避。而一鍵式寫作工具的出現(xiàn),讓逃避變得太容易了。

      人工智能反對(duì)者分享了一些令人震驚的故事:學(xué)生們提交的論文竟然是人工智能生成的,連最簡單的問題都回答不了,或者引用了聊天機(jī)器人“憑空想象”出來的根本不存在的資料。他們還發(fā)布了一些研究,表明使用聊天機(jī)器人會(huì)削弱學(xué)生的推理能力,甚至阻礙大腦的生理發(fā)育。他們提出了諸多倫理方面的擔(dān)憂,包括人工智能對(duì)環(huán)境的影響、聊天機(jī)器人對(duì)受版權(quán)保護(hù)的文字的依賴,以及大型科技公司的寡頭政治傾向。對(duì)大多數(shù)反對(duì)者來說,解決方案是構(gòu)建一個(gè)人工智能無法觸及的課堂。他們討論了轉(zhuǎn)向課堂論文寫作,或許可以采用手寫的方式。他們還探討了恢復(fù)口試和測(cè)驗(yàn)的可行性。

      另一邊則是人工智能的擁護(hù)者。我說的不是那些瘋狂的科技高管,他們大多是男性,歇斯底里地宣稱人工智能很快就會(huì)終結(jié)我們所知的學(xué)校教育,或者已經(jīng)意味著讀書是浪費(fèi)時(shí)間。我說的是那些教師和專家,他們常常熱情洋溢地論證,盡管人工智能存在諸多教學(xué)風(fēng)險(xiǎn),但也蘊(yùn)藏著巨大的潛力。聊天機(jī)器人并非作弊機(jī)器,而是強(qiáng)大的助教,能夠同時(shí)與課堂上的每一位學(xué)生互動(dòng),確保每個(gè)人都能在需要時(shí)獲得個(gè)性化的反饋,并巧妙地引導(dǎo)每位學(xué)生沿著最適合自己的學(xué)習(xí)路徑,實(shí)現(xiàn)最佳學(xué)習(xí)效果。在擁護(hù)者看來,那些反對(duì)者本能地排斥人工智能工具,反映出他們對(duì)人工智能的潛力缺乏了解;這也對(duì)他們的學(xué)生造成了損害,因?yàn)閷W(xué)生們畢業(yè)時(shí)并沒有掌握任何能在大學(xué)和未來職業(yè)生涯中發(fā)揮作用的技術(shù)技能。

      當(dāng)我努力理清反對(duì)者和支持者之間的爭論,試圖解讀他們各自援引的統(tǒng)計(jì)數(shù)據(jù)和學(xué)術(shù)研究時(shí),我的焦慮與日俱增。我注意到教師群體(包括我自己)的一個(gè)共同點(diǎn):因?yàn)槲覀儗?duì)自己的職責(zé)無比認(rèn)真,所以我們常常害怕做錯(cuò)事:使用無效或已被證偽的教學(xué)策略,未能給予學(xué)生他們真正需要的東西。我們常常從經(jīng)驗(yàn)中體會(huì)到,優(yōu)秀的教師能夠改變?nèi)松晃覀円仓溃愀獾慕處熗瑯訒?huì)留下深刻的印記,尤其是在英語教學(xué)領(lǐng)域,他們往往是教師兼作家凱莉·加拉格爾所說的“閱讀殺手”——扼殺學(xué)生對(duì)閱讀的美好感受——的罪魁禍?zhǔn)住N覀兛释粴w入正確的行列,卻又害怕被歸入錯(cuò)誤的行列。

      我認(rèn)為,在這種恐懼之下,隱藏著一種更為根本的恐懼:害怕被視為——更不用說害怕自己真的就是——與時(shí)代脫節(jié)的失敗者,躲在教室里和孩子們待在一起,因?yàn)樵谶@個(gè)瞬息萬變的成人世界里,我們無處容身。我對(duì)這種恐懼感同身受。我決心不被科技炒作所蒙蔽,但我也不想因?yàn)榫芙^考慮可能有用的新工具而讓自己陷入困境。

      我需要的只是一個(gè)初步的結(jié)論。我不需要決定人工智能究竟是邪惡的騙局,還是未來的發(fā)展方向。我也不需要決定人工智能對(duì)教育的未來意味著什么。我需要決定的是,人工智能對(duì)我即將執(zhí)教的高中英語課意味著什么。我忐忑不安地下載了更多播客,在郵箱里塞滿了Substacks的郵件,還看了更多YouTube視頻,希望通過吸收更多相關(guān)資料,能提高我做出正確判斷的幾率,或者至少能減輕我對(duì)犯錯(cuò)的恐懼。

      去年春天,我開始每周花15個(gè)小時(shí)在芝加哥郊區(qū)一所大型學(xué)校觀摩一位資深英語老師的教學(xué)。這所學(xué)校是那種很多家庭專門為了“好學(xué)校”而搬遷的地方。我的觀摩老師——我們姑且稱她為艾米麗——教兩個(gè)年齡段的學(xué)生:一個(gè)是剛上高中的14歲學(xué)生,另一個(gè)是即將畢業(yè)的18歲學(xué)生。我在她的課堂上看到的景象,立刻讓我下定決心要加入那些反對(duì)她教學(xué)的行列。

      我親眼目睹了人工智能與課堂相關(guān)文章中所描述的種種顛覆性影響:完全由人工智能生成的論文;人工智能臆造的引言;師生之間就“究竟什么才是可證明的”展開的緊張對(duì)話。我陪著艾米麗批改作業(yè),和她一起為那些模棱兩可的案例而焦慮,試圖區(qū)分學(xué)生的胡言亂語和人工智能的胡言亂語,區(qū)分學(xué)生的進(jìn)步和人工智能的潤色。

      我之所以成為一名教師,很大程度上是因?yàn)槲蚁牖〞r(shí)間與年輕人的寫作相處,認(rèn)真傾聽并給予他們應(yīng)有的尊重。在艾米麗的指導(dǎo)下,我看到了人工智能的存在(甚至是潛在的存在)是如何干擾這一過程的。我體會(huì)到了面對(duì)一篇論文時(shí),那種獨(dú)特的絕望感——不是努力尋找最佳的回應(yīng)方式,而是試圖探究其背后的原因。我還發(fā)現(xiàn),教師們自身也時(shí)刻受到人工智能輔助工具的狂轟濫炸,這些工具不僅來自電子郵件和社交媒體廣告,更重要的是,它們還集成在學(xué)校的電子郵件和成績管理軟件中。

      艾米麗的學(xué)生都配備了學(xué)校發(fā)的筆記本電腦,她的電腦上裝了一個(gè)程序,可以讓她監(jiān)控每個(gè)學(xué)生的屏幕內(nèi)容;所有學(xué)生的屏幕內(nèi)容同時(shí)顯示在屏幕上,排列成網(wǎng)格狀,就像一排閉路電視監(jiān)控器。使用這個(gè)程序總是讓人感到不安——老大哥就在我身邊——但也總是讓人著迷。有些學(xué)生完全不用人工智能,至少在課堂上不用。而另一些學(xué)生則一有機(jī)會(huì)就用,幾乎是下意識(shí)地把正在做的問題都輸入進(jìn)去。至少有一個(gè)學(xué)生習(xí)慣把每個(gè)新科目都輸入到ChatGPT里,讓它生成筆記,以便在被點(diǎn)名時(shí)可以查閱。我經(jīng)常看到,即使學(xué)生們并非有意使用人工智能,他們也會(huì)不知不覺地被引導(dǎo)到人工智能的使用中。我習(xí)慣了看著學(xué)生在谷歌上搜索某個(gè)主題(比如“羅密歐與朱麗葉的關(guān)鍵主題”),閱讀現(xiàn)在大多數(shù)谷歌搜索結(jié)果頂部顯示的AI生成的答案,然后點(diǎn)擊“在AI模式下深入探索”——突然間,他們就開始和谷歌的聊天機(jī)器人Gemini聊天了,而Gemini總是樂于推銷自己的功能。“我應(yīng)該詳細(xì)闡述其中一個(gè)或多個(gè)主題嗎?我應(yīng)該為一篇關(guān)于這個(gè)主題的文章起草一個(gè)開頭段落嗎?”

      艾米莉告訴我,她現(xiàn)在布置的大部分閱讀作業(yè)都必須在課堂上完成,而且她會(huì)朗讀很多內(nèi)容,尤其是在學(xué)年伊始。我感到震驚。沒錯(cuò),我讀過無數(shù)關(guān)于“當(dāng)代閱讀危機(jī)”的報(bào)紙專題報(bào)道,但親眼目睹青少年閱讀水平的普遍下降,仍然讓我感到沮喪。當(dāng)初我決定成為一名教師時(shí),我的腦海中充滿了浪漫的憧憬:我?guī)ьI(lǐng)學(xué)生們(“哦,船長,我的船長! ”)與文學(xué)的復(fù)雜性及其與生活的聯(lián)系展開較量。在我的憧憬中,閱讀本身大多發(fā)生在課堂之外,在教室的圍墻之外。我的許多學(xué)生似乎缺乏自主閱讀的能力——而且,到了寫作的時(shí)候,他們中的許多人都會(huì)下意識(shí)地求助于人工智能——這對(duì)我的教師抱負(fù)意味著什么?我沮喪地想,我是否選擇了一項(xiàng)注定會(huì)被歷史的不可阻擋的力量摧毀的事業(yè)。

      但當(dāng)我看到艾米莉給全班朗讀時(shí),我的心情頓時(shí)好了起來。對(duì)于一個(gè)作家來說,描述所謂的課堂魔力有點(diǎn)像描述性愛;很多時(shí)候,這種嘗試只會(huì)寫出讓人尷尬又缺乏說服力的句子。然而:我還是覺得有必要告訴你,朗讀時(shí)間有時(shí)確實(shí)充滿魔力。

      我到校后不久,低年級(jí)的學(xué)生們就開始讀《西線無戰(zhàn)事》。起初,學(xué)生們都難以置信:我們真的又要讀一整本書嗎?后來,在艾米莉的幫助下,他們逐漸理解了這本書的內(nèi)容:第一次世界大戰(zhàn)、年輕的德國士兵、塹壕戰(zhàn)、天真無邪的喪失、每日與死亡擦肩而過帶來的心理創(chuàng)傷、與后方的隔絕。筆記本電腦和手機(jī)都被收了起來。(按照學(xué)校規(guī)定,它們都放在教室門口的袋子里。)每個(gè)人都知道,他們可以隨時(shí)舉手提問或發(fā)表意見。有時(shí),艾米莉會(huì)停下來,指出她懷疑學(xué)生們感到困惑但又不敢承認(rèn)的地方,或者他們自己都沒意識(shí)到的誤讀,又或者那些可以有多種解讀的句子。日復(fù)一日,在不易察覺的細(xì)微變化中,這本書從一本晦澀難懂的巨著變成了一個(gè)熟悉的伙伴。

      不知何時(shí),學(xué)生們不再抱怨,而是開始投入其中:他們渴望知道故事的結(jié)局,為跌宕起伏的情節(jié)驚嘆不已,情不自禁地、充滿感情地思考著書中人物的行為動(dòng)機(jī)。埃里希·瑪麗亞·雷馬克為何要如此描寫?然后,有一天,奇跡發(fā)生了:一群2025年的美國14歲少年,仿佛置身于一部講述1910年代德國19歲青年故事的書中,他們既透過自身生活的視角審視這本書,又透過書中的視角審視自己的生活。我能真切地感受到:教室里彌漫著一種微妙的能量流動(dòng),這種能量在學(xué)生、老師以及近一個(gè)世紀(jì)前寫在紙上的文字之間交織碰撞。

      我親眼目睹的人工智能種種惡作劇令人沮喪,而我親眼目睹的不使用人工智能的教學(xué)卻令人振奮。在觀察期結(jié)束前,艾米麗讓我親自帶領(lǐng)一些閱讀活動(dòng),有好幾次我都感到無比暢快。我感覺自己恨不得大聲宣告:我拒絕人工智能——而且我為此感到自豪!

      然而,整個(gè)夏天,我的疑慮又悄然襲來。盡管在艾米麗的課堂上閱讀課令人振奮,但我知道它并沒有真正解答我關(guān)于人工智能和課堂教學(xué)的所有(甚至任何)疑問。我知道秋季我將重返校園,這次是以實(shí)習(xí)教師的身份,承擔(dān)大部分的備課和批改作業(yè)的責(zé)任。我需要做出更多決定,其中最核心的是關(guān)于寫作的。考慮到我對(duì)聊天機(jī)器人的擔(dān)憂,我應(yīng)該讓學(xué)生寫些什么?何時(shí)寫?又該如何寫?

      因?yàn)槲医佑|過(并且還在繼續(xù)接觸)大量關(guān)于人工智能和教學(xué)的內(nèi)容,所以我能夠在腦海中進(jìn)行一場截然不同的觀點(diǎn)之間的辯論。

      我:“全班一起閱讀,沒有任何人工智能或電子設(shè)備的輔助,這種感覺很棒。我對(duì)此非常肯定。我想以此為起點(diǎn)。”

      我也是:“但是學(xué)生們到底學(xué)到了什么?你怎么知道?”

      我:“嗯,我得以實(shí)時(shí)聆聽他們的想法演變過程。”

      我也是:“但是每個(gè)學(xué)生都參與了嗎?”

      我:“嗯,沒有。但是之后他們都做了很多寫作練習(xí)——在教室里,用手寫——我能夠讀懂那些練習(xí)。”

      我也是:“讀了他們寫的之后,你真的認(rèn)為每個(gè)學(xué)生都學(xué)到了他們理論上應(yīng)該學(xué)到的所有東西嗎?他們都學(xué)到了你想讓他們學(xué)到的所有內(nèi)容嗎?”

      我:“嗯……我想不是。不是全部。不是所有。”

      我也是:“如果學(xué)生們?cè)陂喿x和討論之后,坐下來寫作時(shí),每個(gè)人都能使用一個(gè)人工智能聊天機(jī)器人,該機(jī)器人可以根據(jù)他們現(xiàn)有的理解水平和學(xué)習(xí)風(fēng)格提供量身定制的反饋,那會(huì)怎么樣?如果作為老師的你能夠訓(xùn)練這個(gè)聊天機(jī)器人,使其行為與你對(duì)作業(yè)和整個(gè)課程的目標(biāo)完全一致,那又會(huì)怎么樣?”

      我:“嗯,那本來就是我的工作——給他們提供個(gè)性化的反饋。”

      我也會(huì)說:“但是你到底有多少時(shí)間做這些?你真的能在每次需要的時(shí)候都介入嗎?學(xué)生在家寫作的時(shí)候怎么辦?作業(yè)截止前一天晚上,他們卻完全搞錯(cuò)了怎么辦?你為什么不想讓他們知道呢?”

      我:(大汗淋漓)

      為了盡職調(diào)查,我開始試用人工智能聊天機(jī)器人,包括那些專為課堂設(shè)計(jì)的,或者帶有某種“學(xué)生模式”的機(jī)器人。首先,我評(píng)估了它們完成最糟糕任務(wù)的能力:我布置了一份作業(yè),并添加了一些簡單的指令——“這篇作業(yè)應(yīng)該像一個(gè)15歲學(xué)生寫的”、“請(qǐng)?zhí)砑右恍┏R姷钠磳懞驼Z法錯(cuò)誤”、“不要寫得太流暢”——然后生成了一篇我無法區(qū)分學(xué)生寫作水平的文章。在2023年的美好時(shí)光里,人們普遍認(rèn)為老師可以立即識(shí)別出機(jī)器寫作。但如今,無論好壞,情況已經(jīng)完全不同了。

      接下來,我測(cè)試了這些聊天機(jī)器人一些不那么明顯的惡意用途,比如對(duì)草稿進(jìn)行評(píng)論,或者回答關(guān)于作業(yè)的疑問。不同機(jī)器人的表現(xiàn)參差不齊,但有些表現(xiàn)非常出色。事實(shí)上,它們的表現(xiàn)讓我印象深刻,以至于我開始偶爾把自己的雜志文章草稿發(fā)給這些機(jī)器人,時(shí)不時(shí)地就能收到一些真正有用的即時(shí)反饋。坐在電腦前,我仿佛看到一群啦啦隊(duì)員在我身后聚集,準(zhǔn)備慶祝勝利。

      我反復(fù)回想起在艾米麗課堂上一起閱讀的時(shí)光,試圖分析究竟是什么讓我覺得如此特別。我想,部分原因在于這項(xiàng)活動(dòng)如何集中了每個(gè)人的注意力。因?yàn)樗械墓P記本電腦和手機(jī)都被收起來了,所以每個(gè)人都全神貫注。這真是令人驚嘆。

      我開玩笑的。那是在學(xué)校。班上同學(xué)的注意力時(shí)而分散,時(shí)而又集中在青少年們不得不考慮的各種事情上:下一節(jié)課的考試;周末的計(jì)劃,或者令人擔(dān)憂的無所事事;暗戀對(duì)象是否也喜歡自己;昨晚聽到父母吵架;還有移民執(zhí)法人員在附近巡邏。但是,多虧了閱讀時(shí)間的安排,集中注意力總是觸手可及。學(xué)生總能找到重新集中注意力的方法,而不會(huì)被明亮、可滾動(dòng)的屏幕誘惑——那永遠(yuǎn)開啟的通往更多干擾的入口——所干擾。

      我確信,在學(xué)習(xí)和科技誘惑之間人為地劃清界限是件好事。我本能地想要盡可能地在他們的寫作過程中也做到這一點(diǎn)。是否有可能設(shè)計(jì)出一個(gè)能夠提供可靠且有用的寫作反饋的聊天機(jī)器人?也許可以。能否控制聊天機(jī)器人反饋的頻率,使其不至于成為一種依賴?大概率可以。能否命令聊天機(jī)器人不向?qū)W生提供一鍵修改功能?當(dāng)然可以。但是,每個(gè)高中生——忙碌、壓力巨大、對(duì)寫作感到焦慮、渴望在晚上或周末結(jié)束學(xué)校作業(yè)——都知道,在公共互聯(lián)網(wǎng)上,這些省力的選擇只需輕輕點(diǎn)擊一下鼠標(biāo)即可獲得。

      我無法將聊天機(jī)器人從他們的世界中徹底抹去,就像我無法刪除手機(jī)里的內(nèi)容一樣。我所能做的,就是決定在多大程度上引導(dǎo)學(xué)生使用聊天機(jī)器人,又在多大程度上引導(dǎo)他們體驗(yàn)其他事物。

      我:“所以……我想秋季我會(huì)盡量減少人工智能的使用。我認(rèn)為學(xué)生最需要的是持續(xù)的閱讀和寫作體驗(yàn)——包括這些過程中涉及的所有摩擦和不確定性——而不受科技干擾。”

      我也是:“但學(xué)會(huì)應(yīng)對(duì)科技帶來的干擾是生活的一部分。而且,他們未來肯定需要人工智能來增強(qiáng)思維能力,成為更有競爭力的勞動(dòng)者。”

      我:“也許吧。但是,如果你還沒學(xué)會(huì)如何思考,你能增強(qiáng)你的思考能力嗎?我不是經(jīng)常看到硅谷高管的采訪,他們都在嚴(yán)格限制自己的孩子上網(wǎng)和使用電子屏幕嗎?”

      我也是:“你是不是把自己浪費(fèi)太多時(shí)間在網(wǎng)上這件事,以及你希望如果有人能幫你關(guān)掉這些網(wǎng)站,你就能成為更優(yōu)秀、更成功的作家的想法,投射到別人身上了?”

      我:“是的,有可能。”

      弗洛伊德認(rèn)為,教師是“不可能的職業(yè)”之一。你永遠(yuǎn)無法宣稱自己取得了完全的成功,甚至無法確切地知道自己所做的事情會(huì)產(chǎn)生怎樣的全部效果。(更糟糕的是:“你甚至可以預(yù)先確定自己會(huì)得到不盡如人意的結(jié)果。”)整個(gè)秋季,我每天都提醒自己這一點(diǎn),試圖讓自己感覺好一些,因?yàn)槲覍?duì)幾乎所有事情都感到深深的不確定。

      當(dāng)我把課堂時(shí)間用來閱讀時(shí),感覺很棒。但隨后我又擔(dān)心,正因?yàn)楦杏X這么好,我是不是做得太多了,這就好比老師為了健康只吃菠菜一樣。當(dāng)我讓學(xué)生完全在課堂上完成論文時(shí),我感覺自己很了不起,因?yàn)槲肄饤壛丝萍脊灸切└g大腦的捷徑。(伊恩·麥克萊恩飾演的甘道夫,面對(duì)著高大可怕的炎魔,堅(jiān)定地咆哮著“你不能通過!”的畫面,成了我的一個(gè)標(biāo)志性畫面。)

      然后,到了晚上,回想白天的種種挑戰(zhàn),我會(huì)擔(dān)心,把寫作作業(yè)限制在課堂上,是不是讓學(xué)生錯(cuò)過了我最珍視的寫作體驗(yàn):那種反復(fù)推敲、修改重組文字的挫敗感與樂趣交織,從草稿到最終定稿的迭代過程,以及作品與生活其他部分相互影響、彼此交融的感受。當(dāng)我布置更具挑戰(zhàn)性的作業(yè),并給予學(xué)生完成這些作業(yè)所需的額外時(shí)間——包括必要的自主學(xué)習(xí)時(shí)間——我又會(huì)感到欣慰。然而,我的腦海中又會(huì)浮現(xiàn)出學(xué)生們?cè)诩遥盐业淖鳂I(yè)要求粘貼到 ChatGPT、Gemini、Claude、Copilot 和 Grammarly 等各種軟件里的畫面。

      我花了很多時(shí)間試圖想出一些跳出固有思維的寫作作業(yè),這些作業(yè)結(jié)構(gòu)精巧、引人入勝,完全不像過去那些僵化的公式化作文,讓學(xué)生們沒有理由逃避它們。

      想象一下你在好萊塢工作:我們剛剛讀完的這本書要被拍成電影,你需要選擇電影配樂;解釋哪些歌曲與哪些場景相配以及原因,并通過這樣做來表明你理解這些場景的基調(diào)以及它們?cè)谡麄€(gè)故事中的作用。

      請(qǐng)用你認(rèn)為經(jīng)常被誤解的、對(duì)你來說很重要的事物來代替Binyavanga Wainaina 的諷刺散文《如何描寫非洲》 ,寫出你自己的版本,以此來展示你對(duì) Wainaina 修辭選擇的理解。

      我喜歡閱讀這些作業(yè)。我喜歡了解學(xué)生們?nèi)绾卫斫馕覀兯x的內(nèi)容。我喜歡聽他們的音樂。我喜歡了解他們對(duì)性別的看法、他們的文化背景、他們居住的社區(qū),并記錄我的感想。但這份喜愛并沒有讓我停止擔(dān)憂。

      誰知道呢——也許聊天機(jī)器人能幫上忙。我確信在某些情況下它們確實(shí)起到了作用。每次布置作業(yè),我都能發(fā)現(xiàn)有人用聊天機(jī)器人作弊。當(dāng)我提出這個(gè)問題時(shí),作弊者往往立刻承認(rèn),聲稱是時(shí)間緊迫加上沒理解我的要求。我懇求他們:如果你們不明白,就告訴我!但我忍不住想:如果我訓(xùn)練一個(gè)聊天機(jī)器人,讓它以我認(rèn)可的方式回答他們的問題呢?會(huì)不會(huì)少一些人作弊?(我甚至知道到底有多少人作弊了嗎?)他們的寫作水平會(huì)不會(huì)提高得更快?或者,會(huì)不會(huì)有更多人,在作弊這條路上,興高采烈地走下去?我想信任他們;但我確信我必須設(shè)定界限。這些決定感覺是不可能的,令人稍感安慰的是,一位喜歡可卡因的奧地利精神分析學(xué)家在 1937 年也表達(dá)了同樣的觀點(diǎn)。

      除了閱讀之外,還有一種課堂活動(dòng)讓人感覺相對(duì)安全,不受這種揮之不去的疑慮影響。那就是我們直接討論人工智能的時(shí)候——我會(huì)嘗試解釋我對(duì)這個(gè)主題的看法(包括我的疑慮),同時(shí)也會(huì)征求同學(xué)們的想法。我給高年級(jí)的學(xué)生發(fā)放了人工智能問卷,引導(dǎo)他們描述自己使用哪些人工智能工具,用于什么用途,使用時(shí)長以及感受。一些學(xué)生告訴我,他們從未使用過人工智能,也從未想過要使用——因?yàn)檫@讓他們感到毛骨悚然。一些學(xué)生表達(dá)了對(duì)人工智能未來就業(yè)前景的擔(dān)憂。還有一些學(xué)生描述了他們?nèi)绾问褂昧奶鞕C(jī)器人生成學(xué)習(xí)卡片和復(fù)習(xí)題,獲取穿搭建議,編輯社交媒體帖子,用聊天機(jī)器人代替谷歌搜索,獲取烹飪建議、運(yùn)動(dòng)訓(xùn)練建議、健康建議以及寵物健康建議。

      幾乎所有填寫問卷的人都表達(dá)了某種擔(dān)憂(或者至少意識(shí)到了這一點(diǎn)),即人工智能可能會(huì)削弱他們的獨(dú)立思考能力。我明白,他們中的一些人可能察覺到了我的抵觸情緒,所以說了些我愛聽的話。我也知道,他們中的一些人可能隱瞞了一些他們不想告訴我的事情,比如他們使用聊天機(jī)器人來緩解孤獨(dú)感。盡管如此,他們對(duì)自身認(rèn)知能力的擔(dān)憂依然讓我感到真誠。

      然而,學(xué)生們是否真正理解原創(chuàng)思維的本質(zhì),從而意識(shí)到這種思維方式何時(shí)被繞過,這一點(diǎn)并不總是顯而易見的。不止一位學(xué)生曾堅(jiān)定地表示要培養(yǎng)自己的思考能力——但幾行之后,他們又分享了一些“負(fù)責(zé)任的”人工智能使用案例,在我看來,這些案例恰恰破壞了他們?cè)鞠M囵B(yǎng)的能力。比如,我會(huì)讓人工智能給我一個(gè)論點(diǎn),然后我自己寫論文;我會(huì)讓人工智能給我?guī)讉€(gè)論點(diǎn),然后我選一個(gè),讓人工智能幫我擬定提綱;我會(huì)讓人工智能寫一個(gè)初稿,然后我自己修改,讓它變成原創(chuàng)作品。

      只有一位學(xué)生表示,他使用人工智能完成了他不想做的作業(yè)。他解釋說,他并非有意冒犯我,只是生活繁忙,“有些老師”習(xí)慣布置重復(fù)性作業(yè),他覺得這些作業(yè)不值得花費(fèi)時(shí)間。這位學(xué)生的父親在家長會(huì)上找到我,告訴我他理解我制定人工智能政策的初衷,但也感到擔(dān)憂。在他自己的職業(yè)生涯中,他看到雇主在招聘和晉升討論中非常重視人工智能技能。難道他兒子的教育不應(yīng)該鼓勵(lì)他掌握這項(xiàng)技能嗎?

      我明顯感覺到,即使是那些最常使用人工智能的學(xué)生,對(duì)這項(xiàng)技術(shù)的背景知識(shí)也極其匱乏。有一天,我突發(fā)奇想,提出給任何能用通俗易懂的語言(不看屏幕)解釋聊天機(jī)器人如何生成文本的人一大筆額外的加分。結(jié)果無人能做到。我還分享了一封我收到的來自美國作家協(xié)會(huì)的郵件,郵件解釋了如何確定我是否有資格從一起代表圖書作家對(duì)人工智能公司Anthropic提起的集體訴訟中獲得賠償。Anthropic開發(fā)了Claude,一些作家認(rèn)為Claude是他們最喜歡的聊天機(jī)器人。我問,Anthropic憑什么要賠償像我這樣的作家?一片沉默。

      所以我試著談?wù)撨@件事。感覺有點(diǎn)尷尬。我很快意識(shí)到,自己用淺顯易懂的語言解釋聊天機(jī)器人文本來源,在分享之后并沒有我想象中那么簡單明了。但感覺也不錯(cuò)。我感覺到,隨著我們探討關(guān)于世界以及我們?cè)谑澜缰械奈恢玫膯栴},學(xué)生們的注意力——坦白說,也包括我自己的——都更加集中了。

      我預(yù)感未來我會(huì)尋找更多機(jī)會(huì)將人工智能引入課堂,但同時(shí)對(duì)于人工智能工具的使用仍會(huì)保持高度謹(jǐn)慎。我希望學(xué)生們能夠更好地思考文學(xué)作品,沒錯(cuò)——但也要更好地思考他們接觸到的所有語言,包括廣告、政治演講、報(bào)紙?jiān)u論和社交媒體內(nèi)容。如果這些語言機(jī)器將成為他們與世界互動(dòng)的重要組成部分,我希望他們能夠就這些機(jī)器提出問題。我希望他們能夠解釋人工智能公司的商業(yè)模式,這些商業(yè)模式對(duì)聊天機(jī)器人的行為有何影響,以及低收入工人在聊天機(jī)器人輸出中扮演的角色。我希望學(xué)生們了解并回應(yīng)那些因與聊天機(jī)器人互動(dòng)而導(dǎo)致自殘、精神錯(cuò)亂甚至自殺的人的經(jīng)歷。我希望他們知道,許多人工智能高管都曾公開預(yù)測(cè),人工智能的發(fā)展最終將導(dǎo)致地球表面大部分被數(shù)據(jù)中心覆蓋,我想聽聽他們對(duì)此的看法。

      實(shí)習(xí)的最后一天,我留下來批改了一大堆低年級(jí)學(xué)生的作業(yè)。我們花了幾個(gè)星期閱讀短篇小說,探討人類與老師、導(dǎo)師和榜樣之間錯(cuò)綜復(fù)雜的關(guān)系。我沒有讓他們寫作文,而是讓他們從單元學(xué)習(xí)的內(nèi)容中挑選人物,構(gòu)思原創(chuàng)的情節(jié),將這些人物聯(lián)系起來,并使其與單元的主題相呼應(yīng)。

      我允許這些學(xué)生在課外時(shí)間創(chuàng)作這些故事,并以電子方式提交。但我也讓他們?cè)谡n堂上繼續(xù)創(chuàng)作,并要求他們與我見面,解釋他們的創(chuàng)作思路。據(jù)我觀察,只有一兩個(gè)學(xué)生明顯把這項(xiàng)任務(wù)交給了聊天機(jī)器人(如果你好奇的話,聊天機(jī)器人做得相當(dāng)不錯(cuò))。

      總的來說,我對(duì)學(xué)生們故事的創(chuàng)意和質(zhì)量,以及他們對(duì)其他作家作品的深刻理解感到非常欣喜。令我驚訝的是,他們中的許多人都借鑒了一篇在課堂上被普遍認(rèn)為“太怪異”而遭到冷落的故事:馬克·吐溫的《神秘的陌生人》。在我們讀的版本中(吐溫至少修改過三次),一群年輕人被一個(gè)名叫撒旦的天使所蠱惑——他向他們保證,這不是那個(gè)撒旦,那是他的叔叔。這個(gè)撒旦,不管他是誰,都精通各種酷炫的魔法,起初男孩們覺得這些魔法妙不可言。然而,最終,這卻是一個(gè)恐怖故事。盡管撒旦表面上魅力十足,但他看待人類的態(tài)度卻冷漠、蔑視,甚至充滿敵意。年輕人與他接觸越多,就越有可能在不知不覺中吸收他類似的態(tài)度。

      很多學(xué)生筆下的撒旦都表現(xiàn)得極其相似,簡直就是最新聊天機(jī)器人的翻版,這一點(diǎn)顯而易見。撒旦會(huì)主動(dòng)幫角色做作業(yè),潤色他們已經(jīng)完成的作品,讓他們騰出時(shí)間去做更輕松愉快的事情。我發(fā)誓,這一切都是他們自發(fā)進(jìn)行的,完全沒有我的任何提示。盡管我一向比較保守,但我之前從未想到過馬克·吐溫筆下的撒旦還可以有這樣的解讀。

      閱讀那些故事的時(shí)光令我無比愉悅,也基本擺脫了整個(gè)學(xué)期以來一直縈繞在我心頭的AI焦慮。這種愉悅最大的威脅,來自我文字處理軟件、郵箱和作業(yè)管理工具中嵌入的AI工具源源不斷地發(fā)出請(qǐng)求。我是否希望機(jī)器為我學(xué)生的作文打分?幫我評(píng)分?還是根據(jù)它檢測(cè)到的相似之處進(jìn)行分類?

      我沒有。我想讀讀學(xué)生們寫的東西。整個(gè)學(xué)期我都在跟他們說,寫作是人類創(chuàng)造的禮物,是我們跨越時(shí)空了解自己和彼此的一種方式。如果說了這么多,最后我卻把點(diǎn)評(píng)他們文章的任務(wù)交給算法,那又意味著什么呢?我把剩下的故事打印出來,關(guān)掉了電腦。

      我是否記錄下了人工智能作弊的每一個(gè)例子?我肯定沒有,而且我肯定現(xiàn)在有些老師——無論是反對(duì)者還是支持者——都在搖頭嘆息我的天真。但我了解我的學(xué)生;這不就是我的工作嗎?我在課堂上觀察過他們的草稿進(jìn)展;我讓他們當(dāng)面解釋他們的故事——那些奇特的、滑稽的、感人的故事。這一切肯定都有意義。我意識(shí)到自己可能是在自欺欺人。但我卻感到出奇的平靜。我做了我認(rèn)為在這個(gè)學(xué)期里最合適的事情。在未來的學(xué)期里,教學(xué)方法肯定會(huì)發(fā)生我無法預(yù)料的變化。那也是我的工作。我拿起筆,從那堆稿子里抽出下一篇,開始閱讀。

      Teacher v chatbot: my journey into the classroom in the age of AI


      Illustration: Jack Purling/The Guardian

      I was a newcomer, negotiating all of usual classroom difficulties for the first time. Throwing AI into the mix felt like downing a coffee in the middle of a panic attack

      Peter C Baker

      Tue 3 Mar 2026 05.00 GMT

      Share

      Two years ago, at the age of 39, I began training to be a school teacher. I wanted to teach English – to help young people become stronger readers, writers and thinkers, with a deeper connection to literature. After 15 years of working as a freelance writer and as a novelist, I felt confident that I had something to offer. But the further I progressed in my training, the more uncertain I felt. One particular question taunted me for my lack of an answer. What to do about artificial intelligence?

      The immediate dilemma: what does it mean for English instruction that all pupils now have access to free online chatbots that can produce fluid, fairly complex prose on demand? This question sits atop a teetering pile of timeless pedagogical quandaries: What are we actually trying to do in school? How should we go about doing it? How do we know if we’ve succeeded? I was a newcomer, negotiating all of this for the first time. Throwing AI into the mix felt like downing a coffee in the middle of a panic attack.

      I started frantically seeking out perspectives on AI and the English classroom wherever I could find them: pedagogy podcasts, pedagogy Substacks, pedagogy YouTube channels. My algorithmic feeds picked up on this interest and started catering to it, serving me an apparently endless supply of content – including endless advertising from tech companies – that promised to help me think through these urgent questions and ensure I did right by my students.

      I quickly learned that this was a world of heated, often acrimonious, debate. On one side (to simplify a bit) were the AI rejectionists: teachers and education pundits for whom AI was nothing less than an existential assault by rapacious tech companies on the defining activities of the classroom. What students needed, they argued, was to learn how to push themselves through difficulty: to read complex texts and develop complex arguments. They needed to learn that these were processes full of friction and uncertainty, and they needed to learn how to embrace that fact, rather than running away from it. Access to a one-click writing machine made it too easy to run away.

      AI rejectionists shared horror stories of students handing in AI-generated papers about which they couldn’t answer the simplest questions, or citing nonexistent sources their chatbots had “hallucinated”. They posted studies suggesting that chatbot use dulled students’ reasoning faculties, or even impeded the physical development of their brain. They raised ethical concerns, including AI’s environmental costs, chatbots’ reliance on copyrighted writing, and the oligarchal leanings of big tech companies. For most rejectionists, the solution was to build a classroom that AI couldn’t touch. They talked about shifting toward in-class essays, perhaps written by hand. They debated the feasibility of reviving oral tests and quizzes.

      On the other side were the AI cheerleaders. I’m not talking about their crazy uncles, the mostly male tech execs who spoke maniacally about how AI would soon mean the end of schooling as we knew it, or already meant that reading books was a waste of time. I’m talking about teachers and pundits who argued – often quite passionately – that, for all AI’s pedagogical risks, it also carried great potential. Instead of cheating machines, chatbots could be powerful assistant teachers, able to engage with every student in a classroom simultaneously, making sure everyone got personalised feedback exactly when needed, carefully nudging each student down their particular path to maximum learning. From the cheerleaders’ perspective, the rejectionists’ instinct to shun AI tools represented a lack of understanding about their possibilities; it also did a disservice to their students, who would leave school without having acquired tech skills they could use to their advantage at university and in their future careers.

      As I waded through arguments between the rejectionists and the cheerleaders, attempting to parse their duelling deployment of statistics and academic studies, my anxiety increased. I’ve noticed something about teachers, including myself. Because we take our responsibilities so seriously, we often fear doing the “wrong” thing: using ineffective or discredited teaching strategies, failing to give our students what they need. We believe, often from experience, that good teachers can change people’s lives; we know really bad teachers can leave a mark, too, especially in English, where they are often a culprit in what the teacher and writer Kelly Gallagher calls “readicide”: the killing off of good feelings about reading. We long to be in the right category, and dread being in the wrong one.

      Beneath this fear, I think, is a more fundamental one: the fear of being seen as – not to mention the fear of actually being – out-of-touch losers, hiding with children in the classroom because there’s nowhere else in the ever-changing adult world we quite fit. I know this fear well. I was resolved not to get suckered by tech hype, but I also didn’t want to sucker myself by refusing to even consider a potentially useful new tool.

      All I needed was a provisional ruling. I didn’t need to decide if AI was an evil scam or the future of everything. I didn’t need to decide what AI meant for the future of education, writ large. What I had to decide was what AI meant for the high-school English classes I was on the verge of teaching. I nervously downloaded more podcasts, clogged my inbox with still more Substacks and watched more YouTube videos, hoping that by absorbing more materials on the subject I could increase my chances of getting it right, or at least tamp down my terror of getting it all wrong.

      Last spring I started spending 15 hours a week observing a veteran English teacher in a large school in a Chicago suburb: the type of place that families move to specifically “for the schools”. My host teacher – let’s call her Emily – taught two age groups: 14-year-olds just starting high school and 18-year-olds almost done with it. What I saw in her classroom immediately disposed me to join the rejectionists.

      I witnessed all the disruptive effects you read about in articles about AI and the classroom: fully AI-generated papers; AI-hallucinated quotes; tense student-teacher conversations about what exactly was provable. I sat with Emily while she marked papers and joined her in stressing over ambiguous cases, trying to sort student nonsense from AI nonsense, student improvement from AI-powered polish.

      I’d become a teacher in large part because I wanted to spend time with young people’s writing, honouring it with close attention. Watching over Emily’s shoulder, I saw how AI’s presence (or even its potential presence) interfered with this process. I became acquainted with the unique variety of despair produced by looking at a paper and, rather than figuring out how to best respond to it, trying to divine its origins. I also saw how teachers are themselves constantly bombarded with offers of AI assistance, not just via email and social media advertisements, but also – more, actually – from AI tools integrated into their schools’ email and gradekeeping software.

      Emily’s students all had school-issued laptops, and her computer had a program that allowed her to surveil the content of every one of her students’ screens; they all appeared on the screen simultaneously, in a grid that recalled a bank of CCTV monitors. Using this program was always discomfiting – Big Brother, c’est moi – and always transfixing. Some students didn’t use AI at all, at least in class. Others turned to it every chance they got, feeding in whatever question they were working on almost as a reflex. At least one student was in the habit of putting every new subject into ChatGPT, having it generate notes that he could refer to if called on. Often, I saw students getting funnelled toward AI use even when they hadn’t necessarily been looking for it. I got used to watching a student Google a subject (“key themes in Romeo and Juliet”), read the AI-generated answer that now appears atop most Google search results, click “Dive deeper in AI mode” – and suddenly be chatting with Gemini, Google’s chatbot, which was always ready to advertise its own capabilities. “Should I elaborate on one or more of these themes? Should I draft a first paragraph for an essay on the subject?”

      Emily told me that most of the reading she assigned now had to happen in class and that she read much of it aloud, especially toward the beginning of the year. I was shocked. Yes, I’d read countless newspaper features on the “contemporary reading crisis” but it was still dismaying to encounter the diminished baseline state of teen reading in the wild. When I decided to become a teacher, my head had been filled with romantic visions in which I led students (“O captain, my captain!”) into battle with literary complexity and its connections to life. In these visions, the reading itself took place mostly off-camera, beyond the walls of the classroom. What did it mean for my teacherly ambitions that so many of my students appeared unequipped to read on their own – and that, when it came time to write, so many of them turned reflexively to AI? I wondered, depressively, if I’d signed up for something that unstoppable forces of history were on the brink of wiping out.

      But then I watched Emily read to the class and my spirits lifted. For a writer, describing alleged classroom magic is a bit like describing sex; so often, the attempt produces sentences that are both cringe-inducing and unconvincing. And yet: I feel obliged to tell you that reading time was sometimes magic.

      Shortly after I’d arrived, the younger classes started All Quiet on the Western Front. Students began by expressing disbelief: We’re really reading another whole book? Then, with Emily’s help, they got their bearings: first world war, young German soldiers, trench warfare, the loss of innocence, the psychological toll of daily proximity to death, the disconnect from the home front. Laptops were away, as were phones. (Per school policy, they were in pouches by the classroom door.) Everyone knew they could raise a hand any time to ask for clarification or make an observation. Sometimes, Emily stopped to highlight moments that she suspected were producing confusion that students might be afraid to admit to, or misreadings they weren’t even conscious of, or sentences ripe with multiple possibilities for interpretation. Day by day, and mostly in imperceptible micro-movements, the book transformed from an imposing monolith into a familiar companion.

      At some point the students stopped complaining and started getting into it: expressing a desire to know how it all turned out, gasping at dramatic turns, wondering aloud, and with feeling, why characters were doing what they were doing. Why had Erich Maria Remarque written it like that? And then, one day, it happened: a room full of American 14-year-olds in 2025 was inside a story about German 19-year-olds in the 1910s, simultaneously viewing the book through the lens of their lives and their lives through the lens of the book. I could feel it on my skin: the room quietly crackling with the crisscrossing lines of energy between students and teacher and words first committed to paper almost a century before.

      The AI shenanigans I’d witnessed had been depressing: the AI-free teaching I’d witnessed had been inspiring. Before my observation period ended, Emily let me lead some of the readings myself, and a couple of times I experienced a full-body high. I felt ready to scream it from the rooftops: I’m an AI rejectionist – and proud of it!

      Over the summer, though, my doubts came creeping back. As stirring as reading time in Emily’s classroom had been, I knew it hadn’t actually answered all (or any) of my questions about AI and the classroom. I knew that in the fall I would be returning, this time as a student teacher, taking most of the responsibility for lesson planning and marking. I had more decisions to make, centrally about writing. What, given my concerns about chatbots, would I have students write? And when, and how?

      Because I’d consumed – and was continuing to consume – so much content devoted to AI and teaching, I was capable of staging an internal debate, in my head, between radically different takes.

      Me: “Reading together as a class without any AI or devices felt great. I know that for sure. I want to use that as my starting point.”

      Also me: “But what did the students really learn? How do you know?”

      Me: “Well, I got to hear their thoughts evolving in real time.”

      Also me: “But did every single student participate?”

      Me: “Well, no. But they all did a lot of writing afterward – in the classroom, by hand – and I was able to read that.”

      Also me: “Having read what they wrote, do you really think every student learned as much as they theoretically could have? Did they all learn everything you wanted them to?”

      Me: “Well … I guess not. Not all of them. Not everything.”

      Also me: “What if, after your AI-free reading and discussion, when students sat down to write, they each had access to an AI chatbot that could give them feedback tailored exactly to their existing comprehension level and learning style? What if you, the teacher, could train that chatbot, aligning its behaviour precisely to your goals for the assignment and the class overall?”

      Me: “Well, that’s already my job – to give them personalised feedback.”

      Also me: “But how much time do you have for that? Can you really intervene every single time it would be useful? What about when your students are writing at home? What about when it’s the night before an assignment is due and they’re off to a completely wrong start? Why wouldn’t you want them to know that?”

      Me: [sweating profusely]

      In the name of due diligence, I started playing around with AI chatbots, including those designed specifically for classrooms, or with some kind of “student mode” included. First, I evaluated their ability to do the Worst Thing: take one of my assignments, add a few simple instructions – “This should sound like was written by a 15-year-old student”, “Please insert a realistic sprinkling of common typos and grammatical errors”, “Don’t make it too smooth” – and generate something I could not distinguish from student writing. In the halcyon days of 2023, it was a reassuring article of faith that machine writing was instantly detectable by a teacher. I can report that, for better or worse, that’s simply no longer the case.

      Next I tested these chatbots on less obviously poisonous uses, such as making comments on drafts, or answering clarifying questions about assignments. Performance varied from bot to bot, but some were very good at it. In fact, I was impressed enough that I started occasionally feeding these same bots drafts of my own magazine pieces, now and then getting instant feedback that felt truly useful. Sitting at my computer, I felt an imaginary squad of cheerleaders gathering behind me, ready to claim a victory.

      I kept returning to my memories of reading time in Emily’s classroom, trying to analyse what had felt so special. Part of it, I decided, had to do with how the activity structured everyone’s attention. Because all the laptops and phones were away, everyone was fully engaged at all times. It was truly astonishing to see.

      I’m kidding. It was school. Some shifting amount of the class’s collective attention was on all the things teenagers have to think about. Next period’s test. Their plans for the weekend, or worrisome lack thereof. Whether their crush liked them back. The fight they heard their parents having the night before. The presence of ICE officers in the neighbourhood. But, thanks to the architecture of reading time, the possibility of paying attention was always close at hand. A student could find their way back to it without being waylaid en route by the temptations of a bright, scrollable screen, an always-on portal to more distractions.

      It was good – I was sure of it – to have some enforced separation between the learning and the temptations of tech. My reflex was to enforce, to the extent possible, that same separation on their writing processes. Is it possible to design a chatbot that gives reliably useful writing feedback? Maybe. Can the frequency of chatbot feedback be regulated so that it doesn’t become a crutch? Probably. Can a chatbot be ordered not to offer students one-click rewrites? Yes. But every high-school student – busy, overwhelmed, nervous about writing, eager to be done with school work for the night or weekend – knows that, on the public internet, these labour-saving options sit a mere click away.

      I couldn’t wipe chatbots from their world, any more than I can wipe phones. All I could do was decide how much I would steer students toward them and how much I would nudge them toward other experiences.

      Me: “So … I think in the fall I’ll try making things as AI-free as possible. I think what the students need most are sustained experiences of reading and writing – with all the friction and uncertainty those processes involve – without tech distractions in the mix.”

      Also me: “But learning to deal with tech distractions is part of life. And surely they’ll need AI, in the future, to supercharge their thinking and be competitive workers.”

      Me: “Maybe. But can you supercharge your thinking when you haven’t learned how to think yet? Aren’t I always reading interviews with Silicon Valley executives where they describe strictly limiting their own kids’ access to the web and screens?”

      Also me: “Any chance you’re projecting some of your own concerns about how much time you waste online, and what a better, more successful writer you want to think you’d be if someone would just turn them off on your behalf?”

      Me: “That’s possible, yes.”

      Teaching, according to Freud, is one of the “impossible professions”. It is never possible to declare total success, or even know for sure the full effects of what you are doing. (Worse: “One can be sure beforehand of achieving unsatisfying results.”) Through the fall I reminded myself of this idea daily, trying to make myself feel better about how profoundly unsure I felt about almost everything I did.

      When I devoted class time to reading, it felt great. But then I worried that because it felt so great I was doing too much of it, the teacherly equivalent of trying to be healthy by eating only spinach. When I had students write their essays entirely in class, I felt virtuous for having banished big tech’s brain-rotting shortcut machine. (The image of Ian McKellen-as-Gandalf, standing firm in the face of the monstrous, towering Balrog, bellowing “YOU SHALL NOT PASS!” became a companion.)

      Then, at night, looking over the battles of the day, I would worry that, by confining work for written assignments to class time, I wasn’t exposing students to the very aspects of writing that I valued most: the intertwined frustrations and pleasures of picking apart what you’ve written and reassembling it, the movement from draft to draft, the experience of living with a piece over time, your engagement with it colouring and being coloured by the rest of your life. When I set more ambitious assignments, and gave students the extra time that ambition required – including, by necessity, unsupervised time – I would feel virtuous again. Then my mind’s eye would be invaded by visions of my students at home, pasting my instructions into ChatGPT, into Gemini, into Claude, into Copilot, into Grammarly.

      I spent a lot of time trying to come up with outside-the-box writing assignments that were so well constructed – so damn interesting, so not the rigidly formulaic essays of yesteryear – that students would feel no desire to skip them.

      Imagine you work in Hollywood: the book we’ve just read is being made into a movie and you have to select the soundtrack; explain which songs go with which scenes and why, and by doing so demonstrate that you understand those scenes’ tone and role in the overarching story.

      Write your own version of Binyavanga Wainaina’s satirical essay How to Write About Africa, replacing “Africa” with something important to you that you feel is often misrepresented, and by doing so demonstrate your understanding of Wainaina’s rhetorical choices.

      I loved reading these assignments. I loved learning how students understood what we were reading. I loved hearing their music. I loved learning about their relationships to gender, their cultural backgrounds, their neighbourhoods, making notes about my responses. But this love didn’t stop me from worrying.

      And who knows – maybe chatbots could have helped. I’m sure in a few cases they did. For every assignment, I caught a few people using them to cheat. When I floated the question, the culprits tended to admit it right away, claiming a combination of time pressure and failure to understand what I’d asked them to do. I implored them: when you don’t understand, just let me know! But I couldn’t help thinking: what if I’d trained a chatbot to answer their questions in ways that I approved? Might fewer of them have done the Worst Thing? (Did I even know how many actually had?) Might their writing have got better, faster? Or would more of them, set at the foot of the garden path to full-blown cheating, have merrily traipsed down it? I wanted to trust them; I felt sure I had to set limits. The decisions felt impossible, and it was of limited consolation that an Austrian psychoanalyst with a fondness for cocaine had said as much in 1937.

      Besides reading, there was one other type of classroom activity that felt relatively safe from this hovering cloud of doubts. These were the times when we talked directly about AI – when I tried to explain my thinking on the subject (including my uncertainty) and also to solicit the class’s thoughts. I gave my older students AI questionnaires, prompting them to describe what AI tools they used for what, how long they’d been using them, and how they felt about it. A few of them told me they’d never used AI and never wanted to – that it creeped them out. Some expressed concern about what it meant for jobs. Others described using chatbots to generate flash cards and test review questions, to get advice on what to wear, to edit their social media posts, as a replacement for Google searches, to get cooking advice, to get athletic training advice, to get health advice, and to get health advice for their pets.

      Almost everyone who filled out the questionnaire expressed some fear (or at least recognition) that AI could erode their capacity for original thought. I recognise that some of them, having intuited my rejectionist leanings, might have been telling me what they thought I wanted to hear. I also knew some of them were likely leaving out things they understandably didn’t want to tell me, such as that they used chatbots to alleviate loneliness. Still, their concerns about their own cognitive lives felt genuine.

      It wasn’t always clear, though, that the students understood the nature of original thinking well enough to understand when it was being bypassed. More than one expressed firm resolve to develop their own thinking abilities – then, a few lines later, shared examples of “responsible” AI usage that, from my perspective, trashed exactly what they were hoping to cultivate. I’ll have AI give me a thesis statement, but then I’ll write the paper. I’ll have AI give me a few thesis statements, then I’ll pick one and have AI do the outline. I’ll have AI write a first draft, then go in and change things to make it original.

      Only one student said that he used AI to complete, start to finish, assigned writing that he didn’t want to do. He meant no offence to me personally, he explained, but his life was busy and “some teachers” were in the habit of giving repetitive assignments that he felt confident weren’t worth his time. This same student’s father approached me at a parents’ night to tell me that, while he understood where I was coming from with my AI policies, he was also worried. In his own professional life, he saw how much employers emphasised AI fluency in discussions about hirings and promotion. Shouldn’t his son’s education be encouraging that fluency?

      I got a distinct sense that, even among students who used AI the most, contextual knowledge about the technology was extremely low. One day, I spontaneously offered a much-too-large heap of extra credit to anyone who could produce (without looking at a screen) a plain-language account of how chatbots generate text. No one could. I also shared an email I’d received from the US Authors Guild, explaining how to determine my eligibility for compensation from a class-action lawsuit brought on behalf of book writers against the AI firm Anthropic, creator of Claude, a chatbot some of them had identified as their favourite. On what grounds, I asked, might Anthropic owe writers like me money? Silence.

      So I tried to talk about it. It felt a little awkward. My own plain-language explanation of chatbot text provenance was, I quickly realised upon sharing it, not as plain as I’d hoped. But it also felt good. I sensed my students’ attention – and, frankly, my own – slipping into higher gear as we took on questions about the world and our place in it.

      I suspect that in the future I’ll be seeking out more opportunities to bring the subject of AI into the classroom, even as I maintain an extreme caution about doing the same with AI tools. I want students to get better at thinking about literature, yes – but also about all the language they encounter, including in advertisements, politicians’ speeches, newspaper op-eds and social media content. If these language machines are going to be a major part of how they’re interfacing with the world, I want them to be able to ask questions about the machinery. I want them to be able to explain the business models of AI companies, what those business models can mean for how chatbots behave, and the role played in chatbot outputs by low-wage workers. I want students to know about, and respond to, the experience of people for whom chatbot interactions end in self-harm, psychosis and suicide. I want them to know that multiple AI executives have openly predicted that AI growth will eventually result in the surface of our planet being mostly covered by data centres, and I want to hear what they think about it.

      On my last day of student teaching, I stayed late, grading a pile of my younger students’ work. We’d spent several weeks reading short stories on the complicated relationships we humans have with our teachers, mentors and role models. In place of essays, I’d asked them to write short stories where they plucked characters from across the unit and came up with original scenarios that brought them together in ways that reflected the unit’s themes.

      I’d allowed these students to work on these stories outside class, and to submit them digitally. But I also had them work on them in class time and made them meet me to describe their choices. Only one or two, that I could tell, had obviously tossed the task over to chatbots (which, if you’re wondering, did a pretty serviceable job).

      Overall, I was delighted by the inventiveness and quality of my students’ stories, and the depth of understanding of other authors’ work that they demonstrated. To my surprise, many of them drew on a story that, in class, had been widely dismissed as “too weird”: Mark Twain’s The Mysterious Stranger. In the version we read (Twain re-wrote it at least three times), a group of young men falls under the sway of an angel named Satan – not thatSatan, he assures them; that’s his uncle. This Satan, whoever he is, knows all kinds of cool magic, which at first the boys find totally delightful. In the end, though, it’s a horror story. For all Satan’s surface charms, he is revealed to view humanity with a combination of indifference, scorn and hostility. The more the young men interact with him, the more they risk unthinkingly absorbing a similar attitude.

      Multiple students had their Satans act in ways that, it was impossible to miss, mirrored the behaviour of the latest chatbots. Satan offered to do characters’ homework, to take work they’d done and make it more polished, to free up their time for more immediately pleasurable activities. They did this, I swear, without any prompting from me. Despite my rejectionist inclinations, this way of looking at Twain’s Satan had never occurred to me.

      The hours I spent reading those stories were a joy, and mostly uncomplicated by the AI anxieties that had colonised my mind for so much of the semester. The biggest threat to this joy was the steady stream of solicitations from the AI tool embedded in my word processing software, from the AI tool embedded in my email inbox, and the AI tool embedded in my digital assignment-management tool. Did I want the machine to give me notes on my students’ stories? To grade them for me? To put them in categories based on similarities it detected among them?

      I didn’t. I wanted to read what my students had written. I’d been telling them all semester that writing was a gift humanity had made for itself, a way for us to know ourselves and each other across space and time. What would it mean if, after all that, I gave over the task of responding to their writing to an algorithm? I printed the remaining stories out and shut my computer.

      Did I clock every single instance of AI cheating? I’m sure I didn’t, and I’m sure some teachers out there – rejectionists and cheerleaders alike – are shaking their heads right now at my naivety. But I knew my students; that was the job, wasn’t it? I’d watched their drafts’ progress in class; I’d made them explain their stories – their weird, hilarious, touching stories – to my face. Surely all that counted for something. I was aware of the possibility that I was fooling myself. But I felt surprisingly at peace. I’d done what I thought was right for the semester. In future semesters, the approach will surely change in ways I can’t yet predict. That, too, is the job. I picked up my pen, grabbed the next story from the pile, and began to read.

      Listen to our podcasts here and sign up to the long read weekly email here.

      特別聲明:以上內(nèi)容(如有圖片或視頻亦包括在內(nèi))為自媒體平臺(tái)“網(wǎng)易號(hào)”用戶上傳并發(fā)布,本平臺(tái)僅提供信息存儲(chǔ)服務(wù)。

      Notice: The content above (including the pictures and videos if any) is uploaded and posted by a user of NetEase Hao, which is a social media platform and only provides information storage services.

      相關(guān)推薦
      熱點(diǎn)推薦
      居民貸款腰斬!大家還在拼命還債!

      居民貸款腰斬!大家還在拼命還債!

      櫻桃大房子
      2026-04-15 21:38:09
      “天生的壞種,典型的倀鬼”,小學(xué)生地鐵站一個(gè)動(dòng)作,被全網(wǎng)討伐

      “天生的壞種,典型的倀鬼”,小學(xué)生地鐵站一個(gè)動(dòng)作,被全網(wǎng)討伐

      妍妍教育日記
      2026-04-16 09:10:09
      錢賺夠了,名聲沒了,謝娜開演唱會(huì)迎來全網(wǎng)罵潮,劉燁當(dāng)初沒說謊

      錢賺夠了,名聲沒了,謝娜開演唱會(huì)迎來全網(wǎng)罵潮,劉燁當(dāng)初沒說謊

      洲洲影視娛評(píng)
      2026-04-15 23:09:10
      為什么你一定會(huì)老死?因?yàn)槟愕腄NA在故意殺掉你

      為什么你一定會(huì)老死?因?yàn)槟愕腄NA在故意殺掉你

      半解智士
      2026-04-12 13:31:31
      多家寺廟陸續(xù)宣布關(guān)門,并非維修也非裝修,知情人透露真實(shí)原因!

      多家寺廟陸續(xù)宣布關(guān)門,并非維修也非裝修,知情人透露真實(shí)原因!

      番外行
      2026-04-16 08:14:05
      天后麥當(dāng)娜:一生交往一百多猛男,與自己保鏢歡愛視頻,還被拍賣

      天后麥當(dāng)娜:一生交往一百多猛男,與自己保鏢歡愛視頻,還被拍賣

      七阿姨愛八卦
      2026-04-09 09:32:37
      CBA最令人失望球隊(duì)!10天前還是聯(lián)賽第4,如今連第8都快保不住了

      CBA最令人失望球隊(duì)!10天前還是聯(lián)賽第4,如今連第8都快保不住了

      后仰大風(fēng)車
      2026-04-16 07:55:08
      馬筱梅吐槽婆婆別墅沒處下腳,背刺蘭姐,可見,她是真的壞

      馬筱梅吐槽婆婆別墅沒處下腳,背刺蘭姐,可見,她是真的壞

      魔都姐姐雜談
      2026-04-16 04:42:28
      周繼紅出手!跳水隊(duì)大換血:2人下桌2人換桌,全紅嬋陳芋汐在列

      周繼紅出手!跳水隊(duì)大換血:2人下桌2人換桌,全紅嬋陳芋汐在列

      阿鳧愛吐槽
      2026-04-16 02:15:51
      小鵬最貴SUV預(yù)售39.98萬!L4架構(gòu)3000TOPS算力,座椅支持“三折疊”

      小鵬最貴SUV預(yù)售39.98萬!L4架構(gòu)3000TOPS算力,座椅支持“三折疊”

      車東西
      2026-04-16 00:39:40
      4-3慘敗!皇馬客場崩盤遭拜仁淘汰,四大皆空+主帥下課倒計(jì)時(shí)?

      4-3慘敗!皇馬客場崩盤遭拜仁淘汰,四大皆空+主帥下課倒計(jì)時(shí)?

      阿晞體育
      2026-04-16 11:16:37
      許家印,KTV豪情終成妄念

      許家印,KTV豪情終成妄念

      哲空空
      2026-04-15 11:40:51
      “中方要求兩大航運(yùn)公司立即停止巴拿馬港口運(yùn)營”

      “中方要求兩大航運(yùn)公司立即停止巴拿馬港口運(yùn)營”

      觀察者網(wǎng)
      2026-04-15 16:28:22
      76人晉級(jí)獲4大喜訊!再戰(zhàn)老對(duì)手賽程出爐 恩比德受困傷病恐難復(fù)出

      76人晉級(jí)獲4大喜訊!再戰(zhàn)老對(duì)手賽程出爐 恩比德受困傷病恐難復(fù)出

      錢說體育
      2026-04-16 12:33:23
      超2400萬伊朗人注冊(cè)“為伊朗捐軀”網(wǎng)站

      超2400萬伊朗人注冊(cè)“為伊朗捐軀”網(wǎng)站

      看看新聞Knews
      2026-04-15 23:00:11
      別再瞎整容了!《蜜語紀(jì)》朱珠李夢(mèng)同框,原生臉與科技高下立判

      別再瞎整容了!《蜜語紀(jì)》朱珠李夢(mèng)同框,原生臉與科技高下立判

      阿鳧愛吐槽
      2026-04-15 20:03:11
      1920年,林徽因和父親林長民,在倫敦寓所里吃西餐,顏值太高了!

      1920年,林徽因和父親林長民,在倫敦寓所里吃西餐,顏值太高了!

      云霄紀(jì)史觀
      2026-04-14 02:18:38
      燃盡了!賽季最佳皇馬,敗于一個(gè)選擇

      燃盡了!賽季最佳皇馬,敗于一個(gè)選擇

      足球周刊
      2026-04-16 11:15:30
      1952年,抗日名將柏輝章被押往刑場,曾在淞滬會(huì)戰(zhàn)立下赫赫戰(zhàn)功

      1952年,抗日名將柏輝章被押往刑場,曾在淞滬會(huì)戰(zhàn)立下赫赫戰(zhàn)功

      磊子講史
      2026-02-03 12:47:11
      52歲何靜近況曝光!經(jīng)歷兩段失敗婚姻,如今與女兒何彥琳相依為命

      52歲何靜近況曝光!經(jīng)歷兩段失敗婚姻,如今與女兒何彥琳相依為命

      代軍哥哥談娛樂
      2026-04-15 10:33:12
      2026-04-16 12:51:00
      科學(xué)的歷程 incentive-icons
      科學(xué)的歷程
      吳國盛、田松主編
      3181文章數(shù) 15015關(guān)注度
      往期回顧 全部

      科技要聞

      39.98萬!小鵬GX預(yù)售“純電增程同價(jià)”

      頭條要聞

      24歲抗癌博主去世媽媽和姐姐也病逝 一家五口只剩兩人

      頭條要聞

      24歲抗癌博主去世媽媽和姐姐也病逝 一家五口只剩兩人

      體育要聞

      很快,亞洲籃球要有自己的NCAA了?

      娛樂要聞

      絲芭傳媒創(chuàng)始人王子杰去世,享年63歲

      財(cái)經(jīng)要聞

      一季度GDP,5.0%!

      汽車要聞

      空間大五個(gè)乘客都滿意?體驗(yàn)嵐圖泰山X8

      態(tài)度原創(chuàng)

      藝術(shù)
      親子
      健康
      房產(chǎn)
      公開課

      藝術(shù)要聞

      張大千『 花菓薈萃冊(cè)』

      親子要聞

      帶喜娜醬學(xué)舞蹈,小家伙一點(diǎn)不怯場,在我眼皮底下長大了

      干細(xì)胞抗衰4大誤區(qū),90%的人都中招

      房產(chǎn)要聞

      業(yè)主狂喜!海口二手房價(jià),終于漲了!

      公開課

      李玫瑾:為什么性格比能力更重要?

      無障礙瀏覽 進(jìn)入關(guān)懷版