Yamada Laboratory, Kyushu University

ChatGPT vs. Teacher: Which is More Useful as a Source for Help-Seeking? — The Case of English Writing Instruction

February 26, 2026

Hello everyone.

In this article, I would like to introduce a paper I read in our most recent English Literature Seminar and share my thoughts on it.

  • Paper Title: Unpacking help-seeking process through multimodal learning analytics: A comparative study of ChatGPT vs Human expert

  • Journal: Computers & Education

  • Year of Publication: 2024

  • Authors: Angxuan Chen, Mengtong Xiang, Junyi Zhou, Jiyou Jia, Junjie Shang, Xinyu Li, Dragan Gašević, Yizhou Fan

Help-seeking is an important strategy in self-regulated learning: when learners face challenges, seeking appropriate assistance leads to better learning outcomes. Effective help-seeking requires active engagement in cognitive and metacognitive processes. In actual learning situations, however, learners do not always seek help appropriately, and problems such as help abuse and help avoidance have been reported. Moreover, generative AI such as ChatGPT, which has emerged in recent years, differs in character from the support conventionally provided by teachers. While generative AI enables natural, human-like dialogue, concerns have also been raised that its use may erode learning habits, weaken metacognitive skills, and increase the risk of academic dishonesty.

In conventional theory, the help-seeking process is explained in five stages (Nelson-Le Gall, 1981): the learner first identifies the need for help, then decides to seek help, identifies potential help sources, executes a strategy to elicit help, and finally evaluates the response to the help-seeking attempt. The model thus assumes that learners recognize the need for support when facing a problem, select an appropriate helper and method, and judge the usefulness of the support they receive. Various factors have been reported to influence this process. For example, learners with insufficient metacognition or a lack of prior knowledge often fail to recognize the need for help in the first place (Nelson & Fyfe, 2019). Perceived benefits and costs also play a major role in the decision to seek help: while academic help-seeking is related to mastery-approach goals, help-seeking avoidance has been shown to correlate positively with performance-avoidance goals and negatively with mastery-approach goals (Roussel et al., 2011).

This study conducted an experiment with 38 Chinese university students (mean age 22.7 years) to investigate differences in help-seeking toward ChatGPT versus human experts. Participants were randomly assigned to an AI group (n = 18) or a human expert group (n = 20). Each participant first drafted a 300–400-word English essay on the future of education, based on three provided learning materials and a rubric. After watching a training video and receiving an explanation of the available help sources, participants spent one hour revising the essay. The AI group used ChatGPT 4.0, while the human expert group interacted with teachers experienced in English writing instruction. During the experiment, the researchers collected behavioral log data (mouse movements, tool clicks, and page navigation), gaze data from a Tobii Nano Pro eye tracker, and the dialogue text exchanged with ChatGPT or the human teacher. These multimodal data were integrated via screen recordings and analyzed in synchronized form.
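To give a concrete sense of what analyzing such multimodal data "in synchronized form" involves, the sketch below merges separately timestamped event streams into a single chronological timeline. This is only a minimal illustration under my own assumptions — the event labels, timestamps, and stream names are hypothetical, and the actual study integrated the streams via screen recordings rather than code like this.

```python
import heapq

# Hypothetical timestamped event streams (seconds, event label);
# in the study these came from mouse/click logs, the eye tracker,
# and the dialogue with ChatGPT or the teacher.
mouse_events = [(1.0, "click:editor"), (5.2, "scroll")]
gaze_events = [(0.5, "fixation:rubric"), (4.8, "fixation:essay")]
chat_events = [(3.1, "prompt:ask_feedback")]

def merge_streams(*streams):
    """Merge already-sorted event streams into one chronological timeline."""
    return list(heapq.merge(*streams, key=lambda event: event[0]))

timeline = merge_streams(mouse_events, gaze_events, chat_events)
# timeline interleaves all three modalities in time order, so a
# fixation on the rubric can be related to the click that follows it.
```

Once the streams are interleaved like this, stage coding (e.g., "this fixation plus this chat prompt constitutes a help request") can be applied to the unified timeline.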

Behavioral pattern mining of the multimodal data revealed clearly different help-seeking patterns in the AI group and the human teacher group. A distinctive feature of the AI group was that its process was non-linear. In particular, many learners skipped the “problem diagnosis” stage assumed in the theoretical model and posed questions directly to ChatGPT; for instance, many students asked “What are the problems with my essay?” before checking the essay thoroughly themselves. The AI group also transitioned frequently between the “help request” and “help processing” stages and often omitted the “help evaluation” stage. This tendency may be related to ChatGPT’s instantaneous answers: in an environment where a new question can be asked immediately, learners may prefer moving on to the next question over evaluating the help they just received. The human teacher group, by contrast, showed a linear process closer to the theoretical model, progressing step by step from diagnosing the problem, to asking a question, to evaluating and then processing the help received. Particularly noteworthy is that the “help evaluation” stage was clearly visible in the human teacher group, whose members tended to actively give feedback on the advice they received.
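The kind of behavioral pattern mining described above can be illustrated by counting first-order transitions between coded stages in a learner's action sequence. The sketch below is my own simplification, not the authors' method: the stage labels follow the four stages named in the paragraph above, but the example sequences are invented to mimic the reported patterns (the AI group cycling between request and processing, the human group including evaluation).

```python
from collections import Counter

def transition_counts(sequence):
    """Count first-order transitions between consecutive coded stages."""
    return Counter(zip(sequence, sequence[1:]))

# Hypothetical coded stage sequences (not data from the paper)
ai_sequence = ["request_help", "process_help", "request_help",
               "process_help", "request_help", "process_help"]
human_sequence = ["diagnose_problem", "request_help", "evaluate_help",
                  "process_help", "diagnose_problem", "request_help",
                  "evaluate_help", "process_help"]

ai_transitions = transition_counts(ai_sequence)
human_transitions = transition_counts(human_sequence)

# The AI-style sequence cycles request → process with no evaluation,
# while the human-expert-style sequence passes through evaluation.
assert ai_transitions[("request_help", "evaluate_help")] == 0
assert human_transitions[("request_help", "evaluate_help")] == 2
```

Aggregating such transition counts across learners is one simple way to make the "non-linear vs. linear" contrast between the two groups visible.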

Differences also emerged in the analysis of the two groups’ activities. The AI group showed strong “executive help-seeking,” that is, a tendency to seek direct answers, with many requests such as “Please rewrite this paragraph.” This likely reflects how easily instantaneous support can now be obtained thanks to advances in information and communication technology. The human teacher group, in contrast, tended to carefully review the help they had already received and to build new questions on it, which likely reflects social costs: out of concern about “appearing ignorant” to the teacher, they prepared before seeking help. These findings carry important implications for educational practice. First, when ChatGPT is used as a learning support tool, a mechanism that fosters metacognitive skills appears necessary; for example, it may be effective to deliberately build in activities that prompt learners to evaluate the help they receive. Furthermore, an integrated support environment should be designed that exploits the strengths of human teachers while also incorporating the benefits of ChatGPT.

The following are my personal thoughts. I find the study’s approach highly instructive, as it goes beyond testing ChatGPT’s functions to empirically examine how it differs from, and how it should be positioned relative to, support from teachers. The research background and literature review are also presented systematically and in detail, and the argument develops logically and is easy to follow. On the other hand, regarding how the effectiveness of help-seeking is measured, I think closer examination is needed of the important question of how learning effects should be evaluated: the study focuses on comparing processes, and a deeper analysis of the relationship to learning outcomes would be desirable. It should also be considered that the help learners need from, and receive from, generative AI and human teachers may be essentially different. In language learning in particular, feedback on linguistic content such as writing revisions is a crucial perspective, so I felt that a more detailed analysis of differences in the content and quality of the help provided by generative AI versus teachers is required. I believe these issues will be important research questions when considering an effective division of roles between generative AI and human teachers.

By: Geng Xuewang
