Object detection for emergency steering control
- Keywords: object detection, deep learning, focal loss, yolo
- Subject (DDC): 006.31
- Institution: Ajou University
- Advisor: Wonjun Hwang (황원준)
- Publication year: 2021
- Degree conferred: August 2021
- Degree: Master's
- Department: Graduate School, Department of Artificial Intelligence
- URI: http://www.dcollection.net/handler/ajou/000000031241
- Language: Korean
- Copyright: Ajou University theses are protected by copyright.
Abstract
Traffic accidents cause extensive damage every year due to the negligence of drivers or pedestrians. In a sudden collision situation, a car activates its Autonomous Emergency Braking (AEB) system to avoid the collision. However, situations in which a collision cannot be avoided by braking alone occur frequently. In this paper, we present a surrounding-environment recognition methodology that enables such accidents to be avoided through Automatic Emergency Steering (AES). The road-environment recognition method presented in this paper first defines scenarios of possible accidents. To learn the defined scenarios, instead of the existing two-step approach of lane recognition [1] followed by vehicle recognition, we simply use YOLO v2 [2]. This reduces the dependence on lane markings and yields a simpler method than the conventional two-step training procedure. For an autonomous driving system that requires high-speed situational judgment, we demonstrate that this approach is more efficient than conventional methods. In addition, focal loss [3] is used to resolve the class imbalance between foreground and background: we apply a modulating factor to the standard cross-entropy loss [4] so that the weight of easily predicted examples is reduced and training focuses on hard examples.
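As an illustration of the modulating-factor idea described above, here is a minimal sketch of the alpha-balanced binary focal loss from [3]; the parameter values (gamma = 2, alpha = 0.25) are the defaults from that paper, not values taken from this thesis:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single prediction.

    p: predicted foreground probability in (0, 1); y: ground-truth label (0 or 1).
    The modulating factor (1 - p_t) ** gamma down-weights well-classified
    (easy) examples, so the loss concentrates on hard, misclassified ones.
    With gamma = 0 and alpha = 1 this reduces to plain cross-entropy.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confidently correct (easy) example contributes far less loss
# than a confidently wrong (hard) one:
easy = focal_loss(0.95, 1)   # easy foreground example
hard = focal_loss(0.10, 1)   # hard foreground example
```

Because the easy example's modulating factor is (1 - 0.95)^2 = 0.0025, its loss is several orders of magnitude smaller than the hard example's, which is the behavior the abstract relies on to handle foreground/background imbalance.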
Table of Contents
Ⅰ. Introduction
Ⅱ. Related Work
Ⅲ. Proposed Method
A. Baseline Model
B. Proposed Method
C. Loss Function
Ⅳ. Datasets & Experiments
A. Datasets
B. Implementation Details
Ⅴ. Experimental Results
A. Experimental results (mAP) for each scenario
B. Object Detection
C. Performance evaluation in real dangerous situations
D. Focal loss
Ⅵ. Conclusion
Ⅶ. References