Does ChatGPT write like a student? Engagement markers in argumentative essays

Feng Kevin Jiang, Ken Hyland

Research output: Contribution to journal › Article › peer-review


Abstract

ChatGPT has created considerable anxiety among teachers concerned that students might turn to Artificial Intelligence (AI) programmes to write their assignments. This AI-powered large language model is able to create grammatically accurate and coherent texts, thus potentially enabling cheating and undermining literacy and critical thinking skills. This study explores the extent to which AI can mimic human-produced texts by comparing essays written by ChatGPT and student writers. By analysing 145 essays from each group, we focus on how writers relate to their readers with respect to the positions they advance in their texts, examining the frequency and types of engagement markers. The findings reveal that student essays are significantly richer in the quantity and variety of engagement features, producing a more interactive and persuasive discourse. The ChatGPT-generated essays exhibited fewer engagement markers, particularly questions and personal asides, indicating the model's limitations in building interactional arguments. We attribute the patterns in ChatGPT's output to the language data used to train the model and its underlying statistical algorithms. The study suggests a number of pedagogical implications for incorporating ChatGPT into writing instruction.

Keywords: ChatGPT; argumentative writing; reader engagement; academic interaction
Original language: English
Journal: Written Communication
Publication status: Accepted/In press - 15 Oct 2024
