Hierarchical Quantized Domain Adaptive Network Pruning

Abstract

Centrifugal pumps (CPs) in industrial environments face diverse operational conditions that shift the data distribution over time. Traditional fault diagnosis models trained on a single domain often fail under new or changing conditions, and large, complex models are unsuitable for deployment on resource-constrained industrial systems. This study proposes a lightweight fault diagnosis methodology, Hierarchical Quantized Domain Adaptive Network Pruning (HQ-DANP), that adapts to varying conditions via domain adaptation while remaining deployable on embedded systems for real-time monitoring. Using Maximum Classifier Discrepancy (MCD) for unsupervised domain adaptation, the approach mitigates discrepancies between training and deployment environments, ensuring robust performance. Quantization reduces the model's computational and memory footprint, while sensitivity analysis and layer-wise pruning further compress the model without sacrificing diagnostic accuracy. The resulting HQ-DANP model is lightweight, efficient, and capable of real-time fault diagnosis, making it well suited to industrial applications. This work advances scalable health management strategies for CPs, meeting modern industrial demands.

Key Words: Centrifugal Pumps, Fault Diagnosis, Domain Adaptation, Quantization Aware Training, Pruning
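
As context for the MCD term mentioned in the abstract: in Maximum Classifier Discrepancy training, two classifier heads share one feature extractor, and their disagreement on unlabeled target-domain samples is alternately maximized (by the classifiers) and minimized (by the feature extractor). The sketch below, assuming the common L1 form of the discrepancy, is illustrative only and is not the thesis's implementation; the function names are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over class logits.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mcd_discrepancy(logits_c1, logits_c2):
    # L1 distance between the two classifiers' class-probability
    # outputs on the same unlabeled target-domain batch.
    p1 = softmax(logits_c1)
    p2 = softmax(logits_c2)
    return float(np.mean(np.abs(p1 - p2)))

# Toy batch: 4 target samples, 3 fault classes, two classifier heads.
rng = np.random.default_rng(0)
logits_a = rng.normal(size=(4, 3))
logits_b = rng.normal(size=(4, 3))
d = mcd_discrepancy(logits_a, logits_b)  # >= 0; 0 only when heads agree exactly
```

In the full adversarial scheme this scalar serves as a loss: one step updates the classifier heads to increase it (exposing target samples near the decision boundary), and the next updates the feature extractor to decrease it, pulling target features toward the source support.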


Table of Contents

Chapter 1. Introduction 1
1.1 Background 1
1.2 Problem Definition 3
1.3 Research Motivation 4
1.4 Research Objectives 6
1.5 Research Contributions 6
1.6 Thesis Structure Overview 7
Chapter 2. Literature Review 8
2.1 Overview of Fault Diagnosis in Centrifugal Pumps 8
2.2 Domain Adaptation in Fault Diagnosis 11
2.3 Quantization Techniques in Deep Learning 12
2.4 Pruning and Sensitivity Analysis 13
2.5 Gaps in Existing Research 15
Chapter 3. Proposed Methodology 17
3.1 Research Framework 17
3.2 Model Architecture 18
3.3 Quantization Aware Training (QAT) 24
3.4 Sensitivity Analysis and Pruning 26
3.5 Summary of Methodology 28
Chapter 4. Implementation and Experimentation 30
4.1 Datasets Description 30
4.2 Data Preprocessing & Model Training Strategy 46
4.3 Experimental Setup 52
4.4 Insights into Training Dynamics and Adaptation 55
4.5 Summary of Experiments 57
Chapter 5. Results & Analysis 59
5.1 Quantitative Results 59
5.2 Model Efficiency Comparison 61
5.3 Impact of Sensitivity Analysis and Pruning 63
5.4 Summary of Results 65
Chapter 6. Conclusion & Future Works 66
References 67
