Effects of interaction with AI-assisted writing evaluation on EFL students’ writing performance
Abstract
This study aimed to measure the effect of a newly developed automated written evaluation feedback instrument, EditGPT, on the writing performance of 30 Omani EFL learners, and to gauge their perceptions of the tool's usefulness in learning English as a foreign language (EFL). The learners were divided into three groups: a control group (receiving general feedback on writing), experimental group A (receiving writing feedback from the researchers), and experimental group B (receiving writing feedback from EditGPT). To collect the required data, a pretest and a posttest of writing, based on narrative and compare-and-contrast tasks, were conducted to measure students' progress. A questionnaire originally developed by Huang and Renandya (2020) was adopted, translated, and administered to elicit students' perceptions. The results showed that experimental group A performed considerably better than the other two groups, while experimental group B, which received feedback only from EditGPT, showed more progress than the control group. These results demonstrate the usability of EditGPT as well as the continuing importance of the teacher's role in such learning contexts.
This work is licensed under a Creative Commons Attribution 4.0 License.
Laboratory for Knowledge Management & E-Learning, The University of Hong Kong