
Requirements Engineering Framework for Provenance-based Trust Awareness in Self-Adaptive System

Abstract

The development of artificial intelligence (AI) technology has opened up new opportunities for the creation of intelligent systems. An intelligent system is a system that can adapt to its environment and work together with various other systems. Such systems have become increasingly essential in domains including transportation, healthcare, and finance. In the process of cooperation, trust is considered one of the most important quality attributes. Trust is a social concept: the assumption that working with a selected system will lead to the positive outcomes we expect. It involves the belief that a system will behave in a predictable and reliable manner and will not cause harm or negative consequences. In the context of AI systems, trust is essential for ensuring safety and reliability, especially as these systems become more complex and autonomous. However, trust is not a static concept; it can be influenced by various factors. For example, a system's behavior may change over time, and it may interact with other systems that are not trustworthy. Therefore, it is important to design AI systems that can recognize trustworthy cooperation partners without human intervention. To address this challenge, trust must be considered during the requirements engineering phase of system development. Requirements engineering is the process of eliciting, analyzing, and specifying the requirements of a system. By considering trust during this phase, system engineers can ensure that the system is designed to meet the trust requirements of its intended users and stakeholders. In addition to requirements engineering for trust, we also propose an approach for building a novel trust evidence model. This model can be used to evaluate the trustworthiness of a system based on various types of evidence, which can be derived from sources such as user feedback, system logs, and sensor data.
To achieve this goal, we propose a provenance-based trust-aware requirements engineering framework for self-adaptive systems. This framework allows system engineers to design trust-aware goal models from user requirements by analyzing trust in the requirements engineering process. A method for defining a provenance-based trust evidence model for a specific domain is also proposed to evaluate trust. Using this framework, system engineers can treat trust as an emerging factor from the requirements engineering perspective and understand the factors affecting trust in a standardized format. The proposed approach has been evaluated theoretically and empirically in the context of a crowd navigation system for unmanned vehicles and a review verification system. The evaluation results show that the proposed approach can effectively capture and evaluate trust requirements and can help system engineers design more trustworthy and reliable self-adaptive systems.
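To illustrate the idea of evaluating trustworthiness from heterogeneous evidence sources (user feedback, system logs, sensor data), the sketch below aggregates normalized evidence into a single trust score. This is a minimal illustrative example, not the thesis's actual model: the `TrustEvidence` structure, the source names, and the weighted-average scheme are all assumptions for exposition.

```python
from dataclasses import dataclass

@dataclass
class TrustEvidence:
    """One piece of provenance-derived evidence about a cooperation partner.

    Fields are illustrative assumptions, not the thesis's actual schema.
    """
    source: str    # e.g. "user_feedback", "system_log", "sensor_data"
    score: float   # normalized trust value in [0.0, 1.0]
    weight: float  # relative importance assigned to this evidence source

def trust_score(evidence: list[TrustEvidence]) -> float:
    """Weighted average of evidence scores; 0.0 when no evidence exists."""
    total_weight = sum(e.weight for e in evidence)
    if total_weight == 0:
        return 0.0
    return sum(e.score * e.weight for e in evidence) / total_weight

# Hypothetical evidence collected about one candidate cooperation partner.
partner_evidence = [
    TrustEvidence("user_feedback", 0.9, 2.0),
    TrustEvidence("system_log",    0.6, 1.0),
    TrustEvidence("sensor_data",   0.8, 1.0),
]
print(round(trust_score(partner_evidence), 3))  # 0.8
```

A self-adaptive system could recompute such a score as new provenance records arrive and select cooperation partners whose scores exceed a domain-specific threshold.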


Table of Contents

CHAPTER. I Introduction 1
A. Background and Motivation 1
B. Challenges 2
C. Contribution 3
D. Scope 4
E. Thesis Organization 5
CHAPTER. II Related Work 6
A. Requirements Engineering for SASs and Trust 6
B. Trust Evidence Models for Evaluation 9
CHAPTER. III Proposed Approach 15
A. Phase 1: Trust-aware Requirements Modeling 17
1. Step 1: Requirements Analysis 18
2. Step 2: Partial Goal Model Analysis 20
3. Step 3: Trust-aware Goal Analysis 23
4. Step 4: Goal Integration 25
B. Phase 2: Provenance-based Trust Evaluation 29
1. Step 1: Provenance Model Analysis 30
2. Step 2: Provenance-based Trust Evaluation 34
3. Step 3: Cooperation Pattern Analysis 35
CHAPTER. IV Case Study Design 39
CHAPTER. V Theoretical Evaluation 44
A. Domain 1: CrowdNav-UV 44
B. Domain 2: Reviewer Verification Service 60
CHAPTER. VI Empirical Evaluation 74
CHAPTER. VII Evaluation Result 80
CHAPTER. VIII Conclusion and Future Work 90
REFERENCES 94
Appendix 100
