31 |
Intelligent Data and Potential Analysis in the Mechatronic Product Development. Nüssgen, Alexander January 2024 (has links)
This thesis explores the imperative of intelligent data and potential analysis in the realm of mechatronic product development. The persistent challenges of synchronization and efficiency underscore the need for advanced methodologies. Leveraging the substantial advancements in Artificial Intelligence (AI), particularly in generative AI, presents unprecedented opportunities. However, significant challenges, especially regarding robustness and trustworthiness, remain unaddressed. In response to this critical need, a comprehensive methodology is introduced, examining the entire development process through the illustrative V-Model and striving to establish a robust AI landscape. The methodology explores acquiring suitable and efficient knowledge, along with methodical implementation, addressing diverse requirements for accuracy at various stages of development. As the landscape of mechatronic product development evolves, integrating intelligent data and harnessing the power of AI not only addresses current challenges but also positions organizations for greater innovation and competitiveness in the dynamic market landscape.
|
32 |
Trustworthy and Causal Artificial Intelligence in Environmental Decision Making. Suleyman Uslu (18403641) 03 June 2024 (has links)
<p dir="ltr">We present a framework for Trustworthy Artificial Intelligence (TAI) that dynamically assesses trust and scrutinizes past decision-making, aiming to identify both individual and community behavior. The modeling of behavior incorporates proposed concepts, namely trust pressure and trust sensitivity, laying the foundation for predicting future decision-making regarding community behavior, consensus level, and decision-making duration. Our framework involves the development and mathematical modeling of trust pressure and trust sensitivity, drawing on social validation theory within the context of environmental decision-making. To substantiate our approach, we conduct experiments encompassing (i) dynamic trust sensitivity to reveal the impact of actors learning between decision-making rounds, (ii) multi-level trust measurements to capture disruptive ratings, and (iii) different distributions of trust sensitivity to emphasize the significance of individual progress as well as overall progress.</p><p dir="ltr">Additionally, we introduce two TAI metrics, trustworthy acceptance and trustworthy fairness, designed to evaluate the acceptance of decisions proposed by AI or humans and the fairness of such proposed decisions. The dynamic trust management within the framework allows these TAI metrics to discern support for decisions among individuals with varying levels of trust. We propose both the metrics and their measurement methodology as contributions to the standardization of trustworthy AI.</p><p dir="ltr">Furthermore, our trustability metric incorporates reliability, resilience, and trust to evaluate systems with multiple components. We illustrate experiments showcasing the effects of different trust declines on the overall trustability of the system. Notably, we depict the trade-off between trustability and cost, resulting in net utility, which facilitates decision-making in systems and cloud security. 
This represents a pivotal step toward an artificial control model involving multiple agents engaged in negotiation.</p><p dir="ltr">Lastly, the dynamic management of trust and trustworthy acceptance, particularly in varying criteria, serves as a foundation for causal AI by providing inference methods. We outline a mechanism and present an experiment on human-driven causal inference, where participant discussions act as interventions, enabling counterfactual evaluations once actor and community behavior are modeled.</p>
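The trustability and net-utility ideas in the abstract above can be sketched in a few lines of code. This is a hedged illustration only: the geometric-mean aggregation, the per-component averaging, and the linear cost term are assumptions made for the sketch, not the dissertation's actual formulation.

```python
# Illustrative sketch: a trustability score combining reliability,
# resilience, and trust for a multi-component system, and the
# trustability-cost trade-off yielding net utility. Aggregation and
# cost model are assumptions, not the published metric.

def component_trustability(reliability: float, resilience: float, trust: float) -> float:
    """Aggregate three [0, 1] scores; the geometric mean penalizes
    any single weak dimension."""
    return (reliability * resilience * trust) ** (1 / 3)

def system_net_utility(components, cost_per_component: float) -> float:
    """System trustability is the mean over components; net utility
    subtracts a linear operating cost."""
    scores = [component_trustability(*c) for c in components]
    trustability = sum(scores) / len(scores)
    cost = cost_per_component * len(components)
    return trustability - cost

# Example: three components with progressively steeper trust declines.
components = [(0.99, 0.9, 0.8), (0.95, 0.85, 0.6), (0.9, 0.7, 0.4)]
print(system_net_utility(components, cost_per_component=0.05))
```

A steeper trust decline in any component lowers the system score, and the cost term lets one compare configurations by net utility rather than trustability alone.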
|
33 |
Perceptions of Word-of-Mouth Referral Programs on Recruiting Clients. Goers, Jean Louise 01 January 2018 (has links)
Abstract
Word-of-mouth (WOM) personal referrals are more efficient and influential than other forms of
advertising; however, there is a lack of information regarding the value of referral programs. The
purpose of this qualitative case study was to explore the perceptions of business owners, staff,
and customers of alternative health care organizations in a Midwestern U.S. state about efficient
referral strategies, measuring the effect of those strategies, and motivations of consumers to
make referrals. Maslow's hierarchy of needs theory of motivation and customer decision-making
theories provided the conceptual framework. The research questions addressed how industry
leaders perceived and ranked referral strategies and addressed customers' perceptions and
motivations to make personal referrals. Data collection consisted of semistructured interviews
with 4 business owners, 2 staff members, and 10 client participants. Data were analyzed using
constant comparative analysis methods, and member checking enhanced the accuracy of the
findings. Results indicated that participants viewed WOM personal referrals as the most efficient
nontraditional strategy to make or receive referrals, and they perceived referrals from impartial
and trustworthy sources as the most valued information. This research has implications for
positive social change. Findings may be used to enhance business owners' understanding of the
value of personal referrals in their marketing mix, and of the motivation for customers to make
referrals. WOM personal referrals may be used as a marketing strategy to increase sales and
lower costs of formal advertising, which may contribute to the growth of the business.
|
34 |
Transparent ML Systems for the Process Industry : How can a recommendation system perceived as transparent be designed for experts in the process industry? Fikret, Eliz January 2023 (has links)
Process monitoring is a field that can greatly benefit from the adoption of machine learning solutions like recommendation systems. However, for domain experts to embrace these technologies within their work processes, clear explanations are crucial. Therefore, it is important to adopt user-centred methods for designing more transparent recommendation systems. This study explores this topic through a case study in the pulp and paper industry. By employing a user-centred and design-first adaptation of the question-driven design process, this study aims to uncover the explanation needs and requirements of industry experts, as well as formulate design visions and recommendations for transparent recommendation systems. The results of the study reveal five common explanation types that are valuable for domain experts while also highlighting limitations in previous studies on explanation types. Additionally, nine requirements are identified and utilised in the creation of a prototype, which domain experts evaluate. The evaluation process leads to the development of several design recommendations that can assist HCI researchers and designers in creating effective, transparent recommendation systems. Overall, this research contributes to the field of HCI by enhancing the understanding of transparent recommendation systems from a user-centred perspective.
|
35 |
Guidelines for Designing Trustworthy AI Services in the Public Sector. Drobotowicz, Karolina January 2020 (has links)
Artificial Intelligence (AI) is a popular topic in many areas of the world today, so it is natural that its use is being considered in the public sector. AI brings many opportunities for public institutions and citizens, such as more attractive, accessible, and flexible services. However, existing cases also show that unethical or opaque use of AI can significantly reduce citizens' trust in the responsible public institutions. As it is important to maintain such trust, trustworthy AI services are gaining more and more interest. This work aims to answer the question of what needs to be taken into consideration when designing trustworthy public sector AI services. The study was conducted in Finland. A design process was used as the study method, consisting of qualitative interviews, a design workshop, and validation through user testing. Altogether, more than 30 Finnish residents participated in the study. Currently, there are more positive than negative voices about the use of AI in the public sector; however, the number of negative voices is significant. The most negative voices came from older people with low levels of education and from younger AI specialists. Moreover, strong trust exists in the public sector. Nevertheless, citizens voiced multiple concerns, such as security and privacy. It is important to keep public sector services transparent in order to maintain trust in the public sector and build trust in AI. Citizens need to know when AI is used, how, and for what purpose, as well as what data is used and why they receive specific results. Citizens' needs and concerns, as well as ethical requirements, ought to be addressed in the design and development of trustworthy public sector AI services. These include, for example, mitigating discrimination risks, providing citizens with control over their data, and having a person involved in AI processes. 
Designers and developers of trustworthy public sector AI services should aim to understand citizens and assure them that their needs and concerns are being met, through transparent services and a positive experience of using those services.
|
36 |
網路招募廣告的負向訊息比例與重要性對組織吸引力之影響及其相關中介效果 / The effects of proportion and importance of negative information of webpage recruitment advertisements on organizational attractiveness. 蔡志明 Unknown Date (has links)
The purpose of this study was to examine, in a web recruitment context, the effects of the proportion and importance of negative information in advertisements with a realistic job preview (RJP) function on organizational attractiveness, and to investigate the mediating effects of applicants' job expectations and trust toward the organization, as well as the moderating effect of applicants' perceived market competitiveness, on this relationship. A two-factor between-subjects experimental design was used. The manipulated independent variables were the proportion of negative information in the recruitment advertisement (10%, 20%, 30%, 40%, or 50%) and the importance of the negative information (high vs. low); the dependent variable was organizational attractiveness.
A recruiting webpage for a fictitious organization was set up on the Internet, and college seniors and graduate students who were actively seeking jobs took part in the experiment online, yielding 466 valid responses. The results showed that different proportions of negative information had significantly different effects on overall organizational attractiveness and its subscales, with a non-linear effect on overall organizational attractiveness, positive affect toward the organization, and job attractiveness: advertisements with 20% negative information produced the strongest effect. The importance of the negative information also affected overall organizational attractiveness and its subscales, with more important negative information leading to higher organizational attractiveness.
ANCOVA was used to test the mediating role of job expectations. Overall job expectations, job content expectations, and general expectations mediated the relationship between the proportion of negative information and organizational attractiveness, whereas a mediating effect of trust toward the organization was not supported. The importance of negative information affected organizational attractiveness through the mediation of job content expectations, but whether trust toward the organization played a mediating role could not be confirmed. A two-way analysis of variance was used to test the moderating effect of perceived market competitiveness, which significantly moderated only the relationship between company expectations and organizational attractiveness. The results are discussed, possible explanations are offered, and the limitations and contributions of the study are described.
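The mediation logic described in this abstract can be illustrated with a small numerical sketch. The thesis tested mediation with ANCOVA; the sketch below instead uses the common product-of-coefficients (a × b) regression approach, with fabricated toy data, purely to show what an indirect effect is: X stands for proportion of negative information, M for job expectation, and Y for organizational attractiveness.

```python
# Hedged illustration of an indirect (mediated) effect via the
# product-of-coefficients approach. Data and variable roles are
# fabricated; this is not the thesis's ANCOVA procedure.

def slope(x, y):
    """OLS slope of y on a single predictor x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def partial_slopes(x, m, y):
    """Coefficients of y ~ x + m, via the 2x2 normal equations."""
    mx, mm, my = sum(x) / len(x), sum(m) / len(m), sum(y) / len(y)
    cx = [a - mx for a in x]
    cm = [a - mm for a in m]
    cy = [a - my for a in y]
    sxx = sum(a * a for a in cx)
    smm = sum(a * a for a in cm)
    sxm = sum(a * b for a, b in zip(cx, cm))
    sxy = sum(a * b for a, b in zip(cx, cy))
    smy = sum(a * b for a, b in zip(cm, cy))
    det = sxx * smm - sxm ** 2
    b_x = (sxy * smm - smy * sxm) / det   # direct effect of x on y
    b_m = (smy * sxx - sxy * sxm) / det   # effect of m on y, given x
    return b_x, b_m

# Toy data in which x drives m, and y depends only on m (full mediation).
x = [0.0, 1.0, 2.0, 3.0, 4.0]
m = [0.5, 0.8, 2.5, 2.9, 4.3]
y = [2 * v for v in m]
a = slope(x, m)                   # path x -> m
direct, b = partial_slopes(x, m, y)
indirect = a * b                  # indirect effect of x on y through m
```

With this construction the direct effect of x on y is essentially zero while the indirect path a × b carries the whole relationship, which is the pattern the abstract reports for job expectations.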
|
37 |
Trustworthy AI: Ensuring Explainability and Acceptance. Davinder Kaur (17508870) 03 January 2024 (has links)
<p dir="ltr">In the dynamic realm of Artificial Intelligence (AI), this study explores the multifaceted landscape of Trustworthy AI with a dedicated focus on achieving both explainability and acceptance. The research addresses the evolving dynamics of AI, emphasizing the essential role of human involvement in shaping its trajectory.</p><p dir="ltr">A primary contribution of this work is the introduction of a novel "Trustworthy Explainability Acceptance Metric", tailored for the evaluation of AI-based systems by field experts. Grounded in a versatile distance acceptance approach, this metric provides a reliable measure of acceptance value. Practical applications of this metric are illustrated, particularly in a critical domain like medical diagnostics. Another significant contribution is the proposal of a trust-based security framework for 5G social networks. This framework enhances security and reliability by incorporating community insights and leveraging trust mechanisms, presenting a valuable advancement in social network security.</p><p dir="ltr">The study also introduces an artificial conscience-control module model built around the novel concept of "Artificial Feeling." This model is designed to enhance AI system adaptability based on user preferences, ensuring controllability, safety, reliability, and trustworthiness in AI decision-making. This innovation contributes to fostering increased societal acceptance of AI technologies. Additionally, the research conducts a comprehensive survey of foundational requirements for establishing trustworthiness in AI. Emphasizing fairness, accountability, privacy, acceptance, and verification/validation, this survey lays the groundwork for understanding and addressing ethical considerations in AI applications. The study concludes by exploring quantum alternatives, offering fresh perspectives on algorithmic approaches in trustworthy AI systems. 
This exploration broadens the horizons of AI research, pushing the boundaries of traditional algorithms.</p><p dir="ltr">In summary, this work significantly contributes to the discourse on Trustworthy AI, ensuring both explainability and acceptance in the intricate interplay between humans and AI systems. Through its diverse contributions, the research offers valuable insights and practical frameworks for the responsible and ethical deployment of AI in various applications.</p>
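The "distance acceptance" idea behind the metric described above can be sketched roughly as follows. This is a hypothetical illustration: the linear distance decay, the trust weighting, and the parameter names are assumptions for the sketch, not the metric's published definition.

```python
# Hypothetical sketch of a distance-based acceptance measure: each field
# expert rates the AI's proposed decision, acceptance decays with the
# distance between the expert's preferred value and the proposal, and
# experts are weighted by their trust scores. All modeling choices here
# are assumptions for illustration.

def explainability_acceptance(proposal: float, expert_values, trusts, max_dist: float) -> float:
    """Trust-weighted mean of per-expert acceptance, in [0, 1]."""
    per_expert = [max(0.0, 1.0 - abs(proposal - v) / max_dist) for v in expert_values]
    weighted = sum(t * a for t, a in zip(trusts, per_expert))
    return weighted / sum(trusts)

# Example: a diagnostic confidence proposal of 0.7 evaluated by three
# experts with differing trust levels.
print(explainability_acceptance(0.7, [0.65, 0.8, 0.4], trusts=[0.9, 0.7, 0.5], max_dist=1.0))
```

Because high-trust experts dominate the weighted sum, the same set of ratings can yield different acceptance values as trust evolves, which is the dynamic behavior the abstract emphasizes.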
|
38 |
ANALYSIS OF LATENT SPACE REPRESENTATIONS FOR OBJECT DETECTION. Ashley S Dale (8771429) 03 September 2024 (links)
<p dir="ltr">Deep Neural Networks (DNNs) successfully perform object detection tasks, and the Convolutional Neural Network (CNN) backbone is a commonly used feature extractor before secondary tasks such as detection, classification, or segmentation. In a DNN model, the relationship between the features learned by the model from the training data and the features leveraged by the model during test and deployment has motivated the area of feature interpretability studies. The work presented here applies equally to white-box and black-box models and to any DNN architecture. The metrics developed do not require any information beyond the feature vector generated by the feature extraction backbone. These methods are therefore the first methods capable of estimating black-box model robustness in terms of latent space complexity and the first methods capable of examining feature representations in the latent space of black-box models.</p><p dir="ltr">This work contributes the following four novel methodologies and results. First, a method for quantifying the invariance and/or equivariance of a model using the training data shows that the representation of a feature in the model impacts model performance. Second, a method for quantifying an observed domain gap in a dataset using the latent feature vectors of an object detection model is paired with pixel-level augmentation techniques to close the gap between real and synthetic data. This results in an improvement in the model’s F1 score on a test set of outliers from 0.5 to 0.9. Third, a method for visualizing and quantifying similarities of the latent manifolds of two black-box models is used to correlate similar feature representations with increased success in the transferability of gradient-based attacks. 
Finally, a method for examining the global complexity of decision boundaries in black-box models is presented, where more complex decision boundaries are shown to correlate with increased model robustness to gradient-based and random attacks.</p>
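The second contribution, quantifying a domain gap from latent feature vectors alone, can be illustrated with a minimal sketch. Comparing centroid distance normalized by within-set spread is an illustrative choice made here for the sketch, not the thesis's actual metric; it only shares the property stated above of needing nothing beyond the backbone's feature vectors.

```python
# Minimal sketch: quantify a domain gap between two datasets using only
# the latent feature vectors produced by a detection backbone. The
# centroid-distance-over-spread score is an assumption for illustration.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def domain_gap(real_feats, synth_feats) -> float:
    """Distance between set centroids, scaled by the mean within-set
    spread so the score is comparable across feature spaces."""
    c_r, c_s = centroid(real_feats), centroid(synth_feats)
    spread = (
        sum(euclidean(v, c_r) for v in real_feats) / len(real_feats)
        + sum(euclidean(v, c_s) for v in synth_feats) / len(synth_feats)
    ) / 2
    return euclidean(c_r, c_s) / spread if spread > 0 else float("inf")

# Toy example: synthetic features clearly shifted away from real ones.
real = [[0.0, 0.1], [0.1, 0.0], [-0.1, -0.1]]
synth = [[1.0, 1.1], [1.1, 0.9], [0.9, 1.0]]
print(domain_gap(real, synth))
```

A score near zero suggests the two feature distributions overlap; a large score flags a gap that augmentation (as in the abstract) would then try to close.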
|
39 |
Transparent and Scalable Knowledge-based Geospatial Mapping Systems for Trustworthy Urban Studies. Hunsoo Song (18508821) 07 May 2024 (has links)
<p dir="ltr">This dissertation explores the integration of remote sensing and artificial intelligence (AI) in geospatial mapping, specifically through the development of knowledge-based mapping systems. Remote sensing has revolutionized Earth observation by providing data that far surpasses traditional in-situ measurements. Over the last decade, significant advancements in inferential capabilities have been achieved through the fusion of geospatial sciences and AI (GeoAI), particularly with the application of deep learning. Despite its benefits, the reliance on data-driven AI has introduced challenges, including unpredictable errors and biases due to imperfect labeling and the opaque nature of the processes involved.</p><p dir="ltr">The research highlights the limitations of solely using data-driven AI methods for geospatial mapping, which tend to produce spatially heterogeneous errors and lack transparency, thus compromising the trustworthiness of the outputs. In response, it proposes novel knowledge-based mapping systems that prioritize transparency and scalability. This research has developed comprehensive techniques to extract key Earth and urban features and has introduced a 3D urban land cover mapping system, including a 3D Landscape Clustering framework aimed at enhancing urban climate studies. The developed systems utilize universally applicable physical knowledge of targets, captured through remote sensing, to enhance mapping accuracy and reliability without the typical drawbacks of data-driven approaches.</p><p dir="ltr">The dissertation emphasizes the importance of moving beyond mere accuracy to consider the broader implications of error patterns in geospatial mappings. It demonstrates the value of integrating generalizable target knowledge, explicitly represented in remote sensing data, into geospatial mapping to address the trustworthiness challenges in AI mapping systems. 
By developing mapping systems that are open, transparent, and scalable, this work aims to mitigate the effects of spatially heterogeneous errors, thereby improving the trustworthiness of geospatial mapping and analysis across various fields. Additionally, the dissertation introduces methodologies to support urban pathway accessibility and flood management studies through dependable geospatial systems. These efforts aim to establish a robust foundation for informed urban planning, efficient resource allocation, and enriched environmental insights, contributing to the development of more sustainable, resilient, and smart cities.</p>
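The knowledge-based, transparent mapping idea in this abstract can be made concrete with a toy sketch: classifying a raster cell's land cover from physically meaningful measurements using fixed rules rather than learned weights. The thresholds, measurements, and class set below are assumptions invented for the sketch, not the dissertation's actual system.

```python
# Hedged sketch of knowledge-based land-cover classification: every
# decision traces to a stated physical rule (height above ground, a
# vegetation index), rather than opaque learned parameters. Thresholds
# and classes are illustrative assumptions.

def classify_cell(height_m: float, veg_index: float) -> str:
    """Transparent rule-based land-cover label for one raster cell."""
    if veg_index > 0.3:
        return "tree" if height_m > 2.0 else "low_vegetation"
    return "building" if height_m > 2.0 else "ground"

# Example cells: (height above ground in meters, NDVI-like index).
cells = [(12.0, 0.6), (0.3, 0.5), (9.0, 0.05), (0.1, 0.1)]
print([classify_cell(h, v) for h, v in cells])
```

Unlike a data-driven classifier, the error behavior of such rules is predictable and spatially uniform wherever the physical measurements hold, which is the trustworthiness property the dissertation argues for.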
|