Trustworthy and Causal Artificial Intelligence in Environmental Decision Making. Suleyman Uslu (18403641), 03 June 2024
<p dir="ltr">We present a framework for Trustworthy Artificial Intelligence (TAI) that dynamically assesses trust and scrutinizes past decision-making, aiming to characterize both individual and community behavior. The behavioral model incorporates two proposed concepts, trust pressure and trust sensitivity, laying the foundation for predicting future decision-making in terms of community behavior, consensus level, and decision-making duration. The framework includes the development and mathematical modeling of trust pressure and trust sensitivity, drawing on social validation theory in the context of environmental decision-making. To substantiate the approach, we conduct experiments encompassing (i) dynamic trust sensitivity, revealing the impact of actors learning between decision-making rounds, (ii) multi-level trust measurements, capturing disruptive ratings, and (iii) different distributions of trust sensitivity, emphasizing the significance of individual as well as overall progress.</p><p dir="ltr">Additionally, we introduce two TAI metrics, trustworthy acceptance and trustworthy fairness, designed to evaluate the acceptance and the fairness of decisions proposed by AI or humans. The framework's dynamic trust management allows these metrics to discern support for decisions among individuals with varying levels of trust. We propose both the metrics and their measurement methodology as contributions to the standardization of trustworthy AI.</p><p dir="ltr">Furthermore, our trustability metric combines reliability, resilience, and trust to evaluate systems with multiple components. We present experiments showing the effects of different trust declines on the overall trustability of the system. Notably, we characterize the trade-off between trustability and cost as a net utility, which facilitates decision-making in systems and cloud security. This represents a pivotal step toward an artificial control model involving multiple agents engaged in negotiation.</p><p dir="ltr">Lastly, the dynamic management of trust and trustworthy acceptance, particularly under varying criteria, serves as a foundation for causal AI by providing inference methods. We outline a mechanism and present an experiment on human-driven causal inference, in which participant discussions act as interventions, enabling counterfactual evaluations once actor and community behavior are modeled.</p>
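To make the two quantitative ideas above concrete, here is a minimal sketch of (a) an acceptance metric that weights each actor's vote by their current trust score, and (b) a net-utility trade-off between trustability and cost. The specific formulas (a simple trust-weighted fraction, and a weighted linear mix of reliability, resilience, and trust minus cost) are illustrative assumptions, not the thesis's actual definitions.

```python
def trustworthy_acceptance(trust, accepts):
    """Fraction of total trust held by actors who accept the proposed decision.

    trust:   list of non-negative trust scores, one per actor.
    accepts: list of booleans, one per actor (True = accepts the decision).
    """
    total = sum(trust)
    if total == 0:
        return 0.0
    return sum(t for t, a in zip(trust, accepts) if a) / total


def net_utility(reliability, resilience, trust, cost, weights=(1/3, 1/3, 1/3)):
    """Trustability as a weighted mix of reliability, resilience, and trust,
    minus the cost of achieving it (all inputs assumed normalized to [0, 1])."""
    w_rel, w_res, w_tru = weights
    trustability = w_rel * reliability + w_res * resilience + w_tru * trust
    return trustability - cost


# Usage: three actors with trust 0.9, 0.5, 0.2; the first two accept,
# so 1.4 of 1.6 total trust supports the decision.
acc = trustworthy_acceptance([0.9, 0.5, 0.2], [True, True, False])
print(round(acc, 3))  # 0.875
```

Weighting votes by trust, rather than counting heads, is what lets the metric distinguish a decision backed by highly trusted actors from one backed by the same number of low-trust actors.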