Yamada Laboratory, Kyushu University

Participated in the Spring 2024 National Conference of the Japan Society for Educational Technology

June 7, 2024

Hello, everyone.

On March 2nd and 3rd, I attended the Spring 2024 National Conference of the Japan Society for Educational Technology (44th Annual Conference), held at Kumamoto University. This was my first time visiting Kumamoto and my first time attending a JSET conference. Overall, it was an extremely valuable experience for me.

Previously, I had only attended the Information Processing Society of Japan Special Interest Group on Computers and Education (IPSJ SIG-CLE) workshops, but this time I experienced a different atmosphere. Multiple sessions were held simultaneously, allowing me to choose presentations that matched my interests and preferences. Notably, there were many studies on generative AI, and I encountered several intriguing research projects.

Among them, two studies were particularly impressive. The first, titled “An LLM Chatbot in Minecraft with Educational Applications,” was presented in Student Session (1). This research focuses on building and creative activities within Minecraft’s Creative Mode, which offers a free environment for virtually unlimited imagination and creativity, appealing to those who enjoy architectural design and creative expression. In this study, instead of building by hand in the usual way, users create structures by entering commands through a chatbot powered by a large language model (LLM). From a technical perspective, the combination of Minecraft and a chatbot is very intriguing. Having some experience with Minecraft myself, I know that coming up with building ideas can be challenging, so this approach could enhance the game’s appeal. From an educational standpoint, however, I am curious whether this method fosters imagination and creativity more effectively than conventional building, and what implications it holds for future work.

The second study that stood out, titled “Proposal of a Learning Reflection Support Method Using Multimodal Generative AI with Photos,” was presented in the Educational Learning Support Systems and AI (8) session. Multimodal generative AI can process multiple data types, such as text and images, and can classify images containing specific objects. This research uses GPT-4 with Vision to understand which photos learners select and how they reflect on their learning. The approach involves recording learners’ activities in multiple photos and, after the activities are completed, having the learners choose photos and explain their reasons for choosing them. The selected photos are analyzed by the generative AI, and the analysis results are output as text from three perspectives: “things well done,” “things learned,” and “both well done and learned.” Based on the output text, the classified images are marked with different colors corresponding to the three perspectives. This method is expected to help educators understand learners’ reflections. However, a future challenge lies in how to effectively utilize the results obtained from the generative AI.
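The workflow described above — sending a learner-selected photo plus the learner’s explanation to a vision-capable model, having it pick one of the three reflection perspectives, and color-coding the image accordingly — might be sketched roughly as below. This is only my own illustrative sketch, not the presenters’ actual implementation: the model name (`gpt-4o`), the prompt wording, and the perspective-to-color mapping are all my assumptions.

```python
import base64

# The three reflection perspectives from the study, mapped to
# illustrative highlight colors (the colors are my own assumption).
PERSPECTIVE_COLORS = {
    "things well done": "green",
    "things learned": "blue",
    "both well done and learned": "purple",
}


def color_for(perspective: str) -> str:
    """Return the highlight color for a classified perspective."""
    return PERSPECTIVE_COLORS.get(perspective, "gray")


def classify_reflection(image_path: str, learner_comment: str, client) -> str:
    """Ask a vision-capable model which perspective a photo reflects.

    `client` is assumed to be an OpenAI-style chat client; this is an
    untested sketch of how such a call could look.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Classify this learner's reflection photo as exactly "
                          "one of: 'things well done', 'things learned', "
                          "'both well done and learned'. "
                          f"Learner's explanation: {learner_comment}")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip().lower()
```

In this sketch, the model’s text answer is mapped back to a display color with `color_for`, which is where the “marking images with different colors” step of the study would happen.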

Through these studies, I was able to see the multifaceted ways AI can be utilized in the field of education to enrich the learning process. Particularly, the research on multimodal generative AI demonstrated how AI can be applied not only to text but also to a broader context including images. This has greatly influenced my own research and data analysis approaches, providing inspiration to explore new research directions.

Overall, participating in this conference was a highly valuable experience as it allowed me to learn about the latest research on applying AI in education. I look forward to continuing to follow how AI can contribute to education and the learning process in the future.

As for my own presentation, it was part of Student Session (4), where I presented my research on a system that uses VR technology to support Japanese language learners in learning mimetic words. Using “iraira” (irritated) as an example, I demonstrated how the developed VR system helps learners experience and understand emotions.

During the Q&A session, I received important questions from two people. The first question was about why VR technology was necessary for emotional experiences — couldn’t similar effects be achieved through real interpersonal interactions? I responded that while real-life interactions can indeed evoke emotions, those experiences are one-off and hard to reproduce. In contrast, the VR system provides scenes that can be replayed at any time, allowing learners to study autonomously according to their needs. Reflecting on my response, I feel I should have further emphasized that the high level of immersion provided by VR is a crucial factor in inducing strong emotional experiences, making it indispensable for learning mimetic words.

The second question asked how to determine whether learners actually experienced the intended emotions, and whether a VR system containing a lot of Japanese text would be difficult for beginners in Japanese. These issues are central to my research, and I could not respond effectively at the time. However, I recently conducted an experiment to evaluate this aspect: by asking learners about the specific emotions they felt after experiencing the scenarios through various methods, I believe their emotional experiences can be captured clearly.

During the discussion session, there were also interesting questions about why watching videos alone might not help in understanding emotions and why foreign learners of Japanese might find it difficult to intuitively understand mimetic words. Native Japanese speakers can grasp the emotional nuances of mimetic words from the tone of voice, even if they encounter them for the first time. However, Japanese learners lack this intuition, making it challenging for them to understand the emotional nuances of mimetic words. This is one of the reasons I developed this VR system. As a Japanese learner myself, I have struggled with understanding abstract meanings, and through this system, I aim to support learners in overcoming such difficulties.

Although my presentation time was limited, the experience made me aspire to present my research findings on a larger stage in the future.

By: Tang Li (M2 student)
