Abstract
The advent of ChatGPT, an AI-powered language model able to create grammatically accurate and coherent texts, has generated considerable concern among educationalists, who worry about its potential to enable cheating among students and to undermine the development of critical thinking, problem-solving and literacy skills. The similarities and differences between ChatGPT texts and human writing, however, remain underexplored. This study aims to bridge this gap by comparing the use of 3-word bundles in A-level argumentative essays written by British students with those in essays generated by ChatGPT. Our findings show that ChatGPT essays contain a lower frequency of bundles, but these have a higher type/token ratio, suggesting that its bundles are more rigid and formulaic. We also found that noun- and preposition-based bundles are more prevalent in ChatGPT texts, employed for abstract description and to provide transitional and structuring cues. Student essays, in contrast, are characterized by more epistemic stance markers and a stronger authorial presence, both crucial in persuasive argumentation. We attribute these distinct patterns in ChatGPT’s output to its processing of vast training data and its underlying statistical algorithms. The study points to pedagogical implications for incorporating ChatGPT into writing instruction.
| Original language | English |
| --- | --- |
| Journal | Applied Linguistics |
| Early online date | 20 Aug 2024 |
| DOIs | |
| Publication status | E-pub ahead of print - 20 Aug 2024 |