Abstract
ChatGPT has created considerable anxiety among teachers concerned that students might turn to large language models (LLMs) to write their assignments. Many of these models are able to produce grammatically accurate and coherent texts, potentially enabling cheating and undermining literacy and critical thinking skills. This study explores the extent to which LLMs can mimic human-produced texts by comparing essays written by ChatGPT with essays written by students. Analyzing 145 essays from each group, we examine the frequency and types of engagement markers to investigate how writers relate to their readers with respect to the positions they advance in their texts. The findings reveal that student essays are significantly richer in both the quantity and variety of engagement features, producing a more interactive and persuasive discourse. The ChatGPT-generated essays exhibited fewer engagement markers, particularly questions and personal asides, indicating the model's limitations in constructing interactional arguments. We attribute the patterns in ChatGPT's output to the language data used to train the model and to its underlying statistical algorithms. The study suggests a number of pedagogical implications for incorporating ChatGPT into writing instruction.
| Original language | English |
| --- | --- |
| Journal | Written Communication |
| Early online date | 30 Apr 2025 |
| DOIs | |
| Publication status | E-pub ahead of print - 30 Apr 2025 |
Keywords
- ChatGPT
- academic interaction
- argumentative writing
- reader engagement