
Improved Object Detection Using Self Distillation Based on YOLO Architecture

Abstract

Self-distillation is a technique that improves a deep-learning model by transferring knowledge within a single network, without the separate teacher-student framework of conventional knowledge distillation. Traditional knowledge distillation often suffers from long training times and high resource consumption because of the large teacher model. To address these issues, this paper applies self-distillation to object detection. Specifically, we apply self-distillation to the YOLOv6 model to improve its generalization and accuracy: a distillation loss is computed between the features of the Backbone and the Neck and added to the overall training loss. The Neck, which aggregates multi-scale information, serves as the teacher, while the Backbone learns as the student, improving the model's generalization. The proposed method performs especially well on small object detection and achieves significant gains without a substantial increase in training time.
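
To make the idea concrete, the sketch below expresses the backbone-as-student, neck-as-teacher feature distillation described above as a simple PyTorch module. It is a minimal illustration under stated assumptions, not the thesis's exact implementation: the 1x1 channel-alignment projection, the MSE criterion, the bilinear resizing, and all names (SelfDistillLoss, lambda_kd) are assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfDistillLoss(nn.Module):
    """Minimal sketch of neck-to-backbone self-distillation.

    The neck feature (teacher) is detached so gradients flow only into
    the backbone (student). A 1x1 conv aligns channel counts; this
    projection and the MSE criterion are assumptions for the sketch,
    not the thesis's confirmed design.
    """

    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        self.align = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, backbone_feat: torch.Tensor, neck_feat: torch.Tensor) -> torch.Tensor:
        student = self.align(backbone_feat)
        teacher = neck_feat.detach()  # teacher signal: no gradient into the neck
        # Match spatial sizes in case the paired pyramid levels differ.
        if student.shape[-2:] != teacher.shape[-2:]:
            student = F.interpolate(
                student, size=teacher.shape[-2:], mode="bilinear", align_corners=False
            )
        return F.mse_loss(student, teacher)

# Usage sketch: the distillation term is added to the detection loss.
# `det_loss`, `backbone_feats`, `neck_feats`, `kd_losses`, and `lambda_kd`
# are placeholders, not names from the thesis.
#
# distill = sum(kd(b, n) for kd, b, n in zip(kd_losses, backbone_feats, neck_feats))
# total_loss = det_loss + lambda_kd * distill
```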


Table of Contents

Ⅰ. Introduction
Ⅱ. Related Work
A. Self-Distillation in Classification
B. Self-Distillation in Real-Time Object Detectors
Ⅲ. Proposed Method
A. Architecture of YOLOv6's Neck
B. Network Design
C. Loss Function
Ⅳ. Experimental Results
A. Experimental Setup
B. Implementation Details
C. Comparison with YOLOv6 3.0
D. Ablation Study on Distillation Method
E. Ablation Study on Weight Decay
Ⅴ. Conclusion
Ⅵ. Limitations and Future Work
Ⅶ. References
