An Explainable Knowledge Tracing for open-ended answer using Large Language Model
- Subject (keywords): deep learning, knowledge tracing, explainable AI, education, personalized learning
- Subject (DDC): 006.31
- Institution: Ajou University Graduate School
- Advisor: Young-June Choi
- Year of publication: 2024
- Degree conferred: February 2024
- Degree: Master's
- Department and major: Graduate School, Department of Artificial Intelligence
- URI: http://www.dcollection.net/handler/ajou/000000033621
- Language: English
- Copyright: Ajou University theses are protected by copyright.
Abstract
Knowledge tracing (KT) involves predicting learners' understanding of concepts over time, and fusion networks have emerged as effective tools for integrating diverse data sources into learning models. This study focuses on understanding the necessity and influence of the various components within such a fusion network. Traditional KT models have limitations in capturing the nuanced and evolving nature of a student's understanding: they primarily track learners' conceptual understanding while neglecting factors such as guess and slip, which can significantly affect learning. Additionally, for educational content that requires open-ended answers, KT considers only the binary correctness label and discards the student's text. In this paper, we propose AEKT, a novel model designed to enhance the prediction of student knowledge states by incorporating answer-text information. Building on the DKVMN model, AEKT integrates the BERT language model to achieve state-of-the-art performance in knowledge tracing and to examine the necessity and influence of the answer text. The model was compared with existing knowledge tracing models on two ASSISTments datasets. The results demonstrate that AEKT outperformed all other models, achieving a 10% improvement in the area under the receiver operating characteristic curve (AUC). In addition, the study applies the method of integrated gradients to interpret correctness predictions for open-ended answers and to understand the influence of students' responses.

Keywords: deep learning, knowledge tracing, explainable AI, education, personalized learning
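The abstract mentions using integrated gradients to attribute a correctness prediction to the student's answer. As a minimal sketch of how integrated gradients work in general (the toy linear-sigmoid "correctness" model, the zero baseline, and all variable names below are illustrative assumptions, not the thesis's AEKT model): attributions are the input-minus-baseline deltas weighted by the path integral of the gradient, approximated here with a midpoint Riemann sum.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def integrated_gradients(f_grad, x, baseline, steps=64):
    """Approximate integrated gradients along the straight-line path
    from `baseline` to `x` using a midpoint Riemann sum."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        # Gradient of the model at an interpolated point on the path.
        total += f_grad(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy "correctness" model: p = sigmoid(w . x) over a 4-dim answer embedding.
# (Illustrative only -- in the thesis the inputs would be BERT answer features.)
w = np.array([0.8, -1.2, 0.5, 2.0])
f = lambda x: sigmoid(w @ x)
f_grad = lambda x: sigmoid(w @ x) * (1 - sigmoid(w @ x)) * w  # analytic gradient

x = np.array([1.0, 0.3, -0.7, 0.9])      # hypothetical answer embedding
baseline = np.zeros_like(x)              # all-zeros reference input
attr = integrated_gradients(f_grad, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

The completeness property (attributions summing to the difference between the prediction at the input and at the baseline) is what makes the per-feature scores interpretable as each feature's share of the predicted correctness.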
Table of Contents
1. Introduction 1
2. Related Works 5
2.1 Knowledge Tracing 5
2.2 Bidirectional Encoder Representations from Transformers (BERT) 7
2.3 Integrated Gradients 8
3. Proposed Model 9
3.1 Correlation weight 12
3.2 Read process 12
3.3 Write process 14
3.4 Optimization 17
4. Experiment 18
4.1 Experiment setup 18
4.2 Results and Discussion 19
4.3 Visualization of AEKT 24
5. Conclusion 27