
Stringent Multiple Quality of Services Enabled Resource Management in Vehicular Networks

Abstract

Vehicular networks provide users with a variety of services, both safety-related and non-safety-related. Resource management in existing IEEE 802.11p and 5G systems is designed to meet the demands of these services. In general, different services have different quality of service (QoS) requirements in terms of reliability, latency, throughput, and energy, and each requirement may demand a different level of performance, such as stringent delay, delay tolerance, or best-effort service. This dissertation therefore proposes two schemes that address open problems in resource allocation, ranging from the demands of a single ultra-reliable low-latency communication (URLLC) service to multiple QoS requirements at different levels.

Among these services, the dissertation first addresses the essential basic safety message (BSM) broadcasting service. Existing solutions either incur heavy overhead to reduce access collisions and to broadcast collision notifications, or assume an ideal channel. They have not sufficiently investigated the "merging collision" that arises when two or more collision domains overlap due to vehicle mobility. In this study, we examine the effect of merging collisions in terms of packet delivery ratio (PDR), overhead, and inter-BSM delay. To mitigate this effect, we propose a lightweight distributed time division multiple access (TDMA) protocol in which the order of vehicles on the road and their relative distances determine each vehicle's dedicated slot. Remaining collisions are resolved using in-band signaling from surrounding vehicles. Simulation studies show that the proposed scheme improves the PDR by 10% over the comparative protocol, achieving 97% and 95% PDR in highway and urban environments, respectively, while substantially reducing overhead and inter-BSM delay.

The proposed relative-distance-based MAC protocol handles BSM broadcasting successfully; however, such closed-form solutions cannot meet the varied requirements of upper-layer services. State-of-the-art studies apply machine learning to only a single service, at most combined with energy consumption, and handle the throughput requirement through power control, which becomes a limitation when the throughput requirement is high. In this study, a deep Q-network (DQN) based intelligent resource allocation (I-RA) scheme is proposed, because reinforcement learning has the potential to handle more complex tasks. To address multiple QoS requirements, we first formulate the problem and then design a state-action framework, different from existing designs, for the resource allocation task in vehicular networks. Different services are tested through simulations, and the results show that the proposed I-RA learns successful transmission patterns that fulfill multiple QoS requirements at different levels.
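To make the first idea concrete, below is a minimal sketch of relative-distance-based TDMA slot selection. It is an illustration under assumed parameters, not the dissertation's actual algorithm: the frame size, slot spacing, and helper names (select_slot, detect_merging_collision) are hypothetical, and it assumes every vehicle computes the same shared origin from a common neighbor set.

```python
# Sketch: map each vehicle's position relative to the rearmost vehicle in its
# neighborhood to a TDMA slot, so ordered vehicles occupy distinct slots
# without exchanging slot-reservation overhead.

FRAME_SLOTS = 100        # hypothetical number of slots per TDMA frame
SLOT_SPACING_M = 10.0    # hypothetical road distance mapped to one slot

def select_slot(my_position_m, neighbor_positions_m,
                frame_slots=FRAME_SLOTS, spacing=SLOT_SPACING_M):
    """Derive a dedicated slot from the vehicle's relative distance.

    With a shared origin, any two vehicles more than `spacing` meters apart
    land in distinct slots, avoiding access collisions by construction.
    """
    origin = min(neighbor_positions_m + [my_position_m])
    relative_distance_m = my_position_m - origin
    return int(relative_distance_m / spacing) % frame_slots

def detect_merging_collision(my_slot, slots_heard_by_neighbors):
    """When two collision domains overlap (a merging collision), a surrounding
    vehicle hears two transmitters in the same slot and can signal it in-band."""
    return my_slot in slots_heard_by_neighbors

# Example: three vehicles at 0 m, 35 m, and 80 m each derive a distinct slot.
positions = [0.0, 35.0, 80.0]
for p in positions:
    others = [q for q in positions if q != p]
    print(f"vehicle at {p:5.1f} m -> slot {select_slot(p, others)}")
```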
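For the second scheme, the following is a minimal sketch, assuming PyTorch, of the kind of DQN agent the abstract describes: the state encodes channel and queue observations together with per-service QoS targets, and each discrete action selects a resource block and a transmit power level. All dimensions, network sizes, and hyperparameters are illustrative assumptions, not the dissertation's actual state-action design.

```python
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 8     # hypothetical: channel gains, queue length, QoS targets
NUM_RB = 4        # hypothetical number of resource blocks
NUM_POWER = 3     # hypothetical discrete transmit power levels
NUM_ACTIONS = NUM_RB * NUM_POWER

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, NUM_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # stores (state, action, reward, next_state)

def act(state, epsilon=0.1):
    """Epsilon-greedy action; the index decodes to (resource block, power)."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=32, gamma=0.99):
    """One DQN update: regress Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = map(torch.as_tensor, zip(*random.sample(replay, batch_size)))
    s, s2, r = s.float(), s2.float(), r.float()
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    target = r + gamma * q_net(s2).max(dim=1).values.detach()
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example: decode one action into its (resource block, power level) pair.
state = [0.5] * STATE_DIM               # placeholder observation
rb, power = divmod(act(state), NUM_POWER)
```

The reward function (omitted here) is where the multiple QoS requirements would enter, for instance as weighted penalties for violating each service's reliability, latency, or throughput target.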


Table of Contents

Chapter 1 Introduction 1
1.1 Background 3
1.1.1 802.11p based V2X 4
1.1.2 Cellular based V2X 6
1.2 Problem Definition 8
1.2.1 Access Collision 8
1.2.2 Merging Collision 9
1.2.3 Hidden Terminal Problem 10
1.2.4 URLLC Broadcasting 10
1.2.5 Multiple QoS Requirements 11
1.3 Contributions 12
1.3.1 Relative Distance based Slot Assignment 12
1.3.2 Reinforcement Learning based Resource Allocation 12
1.4 Organization 13
Chapter 2 Related Works 15
2.1 DSRC MAC 15
2.1.1 Contention-based MAC 15
2.1.2 TDMA based MAC 17
2.2 Cellular Resource Allocation 19
2.3 Multiple Interfaces 21
2.4 Machine Learning based V2X 22
2.5 Summary 26
Chapter 3 Relative Distance based Channel Assignment 27
3.1 The Proposed Scheme 29
3.1.1 Effect of Merging Collision 29
3.1.2 System Model 30
3.1.3 The Overall Design 31
3.1.4 Slot Selection Process 31
3.1.5 Collision Notification Process 35
3.1.6 Coexistence with Congestion Control 37
3.2 Theoretical Analysis 39
3.3 Simulation Experiments 42
3.3.1 Highway Scenario 43
3.3.2 Urban Scenario 46
3.3.3 Parameters Analysis 50
3.4 Summary 51
Chapter 4 Reinforcement Learning based Resource Management 52
4.1 Problem Formulation 54
4.1.1 System Model 54
4.1.2 MDP 57
4.2 I-RA 60
4.2.1 Overview 60
4.2.2 Distributed DQN based I-RA 61
4.3 Simulation Experiments 65
4.3.1 URLLC 66
4.3.2 Multiple QoS Requirements 70
4.3.3 Delay Stringent Multiple QoS Requirements 73
4.3.4 Parameter Tuning 76
4.4 Summary 78
Chapter 5 Final Remark 80
Bibliography 83
