  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Adversarial Attacks and Defense Mechanisms to Improve Robustness of Deep Temporal Point Processes

Khorshidi, Samira 08 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Temporal point processes (TPPs) are mathematical approaches for modeling asynchronous event sequences by considering the temporal dependency of each event on past events and its instantaneous rate. Temporal point processes can model various problems, from earthquake aftershocks, trade orders, gang violence, and reported crime patterns, to network analysis, infectious disease transmission, and virus spread forecasting. In each of these cases, the entity's behavior and the corresponding information are recorded over time as an asynchronous event sequence, and the analysis is done using temporal point processes, which provide a means to define the generative mechanism of the sequence of events and ultimately to predict events and investigate causality. Among point processes, the Hawkes process, a stochastic point process, can model a wide range of contagious and self-exciting patterns. One of the Hawkes process's well-known applications is predicting the evolution of viral processes on networks, an important problem in biology, the social sciences, and the study of the Internet. In existing works, mean-field analysis based upon degree distribution is used to predict viral spreading across networks of different types. However, it has been shown that degree distribution alone fails to predict the behavior of viruses on some real-world networks. Recent attempts have been made to use assortativity to address this shortcoming. This thesis illustrates how the evolution of such a viral process is sensitive to the underlying network's structure. In Chapter 3, we show that adding assortativity does not fully explain the variance in the spread of viruses for a number of real-world networks. We propose using the graphlet frequency distribution combined with assortativity to explain variations in the evolution of viral processes across networks with identical degree distributions.
Using a data-driven approach that couples predictive modeling with viral process simulation on real-world networks, we show that simple regression models based on graphlet frequency distribution can explain over 95% of the variance in virality on networks with the same degree distribution but different network topologies. Our results highlight the importance of graphlets and identify a small collection of graphlets that may have the most significant influence over the viral processes on a network. Due to the flexibility and expressiveness of deep learning techniques, several neural network-based approaches have recently shown promise for modeling point process intensities. However, there is a lack of research on possible adversarial attacks and on the robustness of such models to adversarial attacks and natural shocks to systems. Furthermore, while neural point processes may outperform simpler parametric models on in-sample tests, how these models perform when encountering adversarial examples or sharp non-stationary trends remains unknown. In Chapter 4, we propose several white-box and black-box adversarial attacks against deep temporal point processes. Additionally, we investigate the transferability of white-box adversarial attacks against point processes modeled by deep neural networks, which poses a more elevated risk. Extensive experiments confirm that neural point processes are vulnerable to adversarial attacks. This vulnerability is illustrated both in terms of predictive metrics and in terms of the attacks' effect on the underlying point process's parameters. Specifically, adversarial attacks successfully shift the temporal Hawkes process regime from sub-critical to super-critical and manipulate the modeled parameters, which is a particular risk for parametric modeling approaches.
Additionally, we evaluate the vulnerability and performance of these models in the presence of non-stationary abrupt changes, using crime and COVID-19 pandemic datasets as examples. Despite the success of deep learning techniques in modeling temporal point processes, the security vulnerability of deep-learning models, including deep temporal point processes, to adversarial attacks makes it essential to ensure the robustness of the deployed algorithms. In Chapter 5, we study the robustness of deep temporal point processes against the proposed adversarial attacks from the adversarial defense viewpoint. Specifically, we investigate the effectiveness of adversarial training using universal adversarial samples in improving the robustness of deep point processes. Additionally, we propose a general point process domain-adapted (GPDA) regularization, which applies specifically to temporal point processes, to reduce the effect of adversarial attacks and obtain an empirically robust model. Unlike other computationally expensive approaches, this approach requires no additional back-propagation in the training step and no further network. Ultimately, we propose an adversarial detection framework that is trained in the Generative Adversarial Network (GAN) manner and solely on clean training data. Finally, in Chapter 6, we discuss the implications of the research and future research directions.
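The sub-critical/super-critical regime shift discussed in this abstract is governed by the Hawkes branching ratio. As a minimal illustrative sketch (not code from the thesis; all parameter values are invented), a univariate Hawkes process with exponential kernel can be simulated by Ogata's thinning algorithm, and its branching ratio α/β read off directly:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate a univariate Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))
    using Ogata's thinning algorithm."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        # Between events the intensity only decays, so the current
        # intensity is a valid upper bound for thinning.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)
    return events

# Branching ratio alpha/beta < 1: sub-critical (stable regime);
# >= 1: super-critical (explosive) -- the shift the attacks induce.
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, horizon=50.0)
print(f"{len(events)} events, branching ratio = {0.8 / 2.0}")
```

With α/β = 0.4 the sequence stays sub-critical; pushing α above β means each event spawns, on average, more than one offspring event, which is the explosive regime the adversarial attacks are said to induce.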
2

A Model-Based Systems Engineering Approach to Refueling Satellites

Rochford, Elizabeth 05 June 2023
No description available.
3

Capabilities and Processes to Mitigate Risks Associated with Machine Learning in Credit Scoring Systems : A Case Study at a Financial Technology Firm / Förmågor och processer för att mitigera risker associerade med maskininlärning inom kreditvärdering : En fallstudie på ett fintech-bolag

Pehrson, Jakob, Lindstrand, Sara January 2022
Artificial intelligence and machine learning have become an important part of society, and today businesses compete in a new digital environment. However, scholars and regulators are concerned with these technologies' societal impact, as their use does not come without risks, such as those stemming from transparency and accountability issues. The potential for misuse of these technologies has led to guidelines and future regulations on how they can be used in a trustworthy way. However, these guidelines are argued to lack practicality, and they have sparked concern that they will hamper organisations' digital pursuit of innovation and competitiveness. This master's thesis aims to contribute to this field by studying how teams can work to mitigate the risks associated with machine learning. The scope was set on capturing insights into the perceptions of employees, on what they consider to be important and challenging about machine learning risk mitigation, and then putting these in relation to research to develop practical recommendations. The master's thesis specifically focused on the financial technology sector and the use of machine learning in credit scoring. To achieve the aim, a qualitative single case study was conducted. The master's thesis found that a combination of processes and capabilities is perceived as important in this work. Moreover, current barriers were also found in the single case. The findings indicate that strong responsiveness is important, and this is achieved in the single case by having separation of responsibilities and strong team autonomy. Moreover, standardisation is argued to be needed for higher control, but it should be implemented in a way that allows for flexibility. Furthermore, monitoring and validation are important processes for mitigating machine learning risks.
Additionally, the capability of extracting as much information from data as possible is an essential component of daily work, both to create value and to mitigate risks. One barrier in this work is that the needed knowledge takes time to develop and that knowledge transfer is sometimes restricted by resource allocation. However, knowledge transfer is argued to be important for long-term sustainability. Organisational culture and societal awareness are also indicated to play a role in machine learning risk mitigation.
4

NOVEL APPROACHES TO MITIGATE DATA BIAS AND MODEL BIAS FOR FAIR MACHINE LEARNING PIPELINES

Taeuk Jang (18333504) 28 April 2024
<p dir="ltr">Despite the recent advancement and exponential growth in the utility of deep learning models across various fields and tasks, we are confronted with emerging challenges. Among them, one prevalent issue is the biases inherent in deep models, which often mimic stereotypical or subjective behavior observed in data, potentially resulting in negative societal impact or disadvantaging certain subpopulations based on race, gender, etc. This dissertation addresses the critical problem of fairness and bias in machine learning from diverse perspectives, encompassing both data biases and model biases.</p><p dir="ltr">First, we study the multifaceted nature of data biases to comprehensively address the challenges. Specifically, the proposed approaches include the development of a generative model for balancing data distribution with counterfactual samples to address data skewness. In addition, we introduce a novel feature selection method aimed at eliminating sensitive-relevant features that could potentially convey sensitive information, e.g., race, considering the interrelationships between features. Moreover, we present a scalable thresholding method to appropriately binarize model outputs or regression data under fairness constraints for fairer decision-making, extending fairness beyond categorical data.</p><p dir="ltr">However, addressing the fairness problem solely by correcting data bias often encounters several challenges. In particular, establishing fairness-curated data demands substantial resources and may be restricted by legal constraints, while explicitly identifying the biases is non-trivial due to their intertwined nature. Further, it is important to recognize that models may interpret data differently depending on their architectures or downstream tasks. In response, we propose a line of methods to address model bias, on top of addressing the data bias mentioned above, by learning fair latent representations. 
These methods include fair disentanglement learning, which projects a latent subspace independent of sensitive information by employing conditional mutual information, and a debiased contrastive learning method for fair self-supervised learning without sensitive attribute annotations. Lastly, we introduce a novel approach to debias the multimodal embeddings of pretrained vision-language models (VLMs) without requiring downstream annotated datasets, retraining, or fine-tuning of the large model, considering the constrained resources of research labs.</p>
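The fair-thresholding idea mentioned in this abstract can be illustrated with a toy sketch (an invented example, not the dissertation's method or data): choose one score cutoff per demographic group so that every group's positive-decision rate matches a shared target, i.e., demographic parity on the binarized outputs.

```python
import numpy as np

def parity_thresholds(scores, groups, target_rate):
    """Pick a per-group score cutoff so that each group's rate of
    positive (score >= cutoff) decisions matches target_rate."""
    thresholds = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])           # ascending order
        k = int(round((1.0 - target_rate) * len(s)))
        k = min(max(k, 0), len(s) - 1)
        thresholds[g] = s[k]                       # top target_rate share passes
    return thresholds

# Toy demo with synthetic scores for two groups.
rng = np.random.default_rng(0)
scores = rng.random(1000)
groups = np.array([0] * 500 + [1] * 500)
cutoffs = parity_thresholds(scores, groups, target_rate=0.3)
for g, c in cutoffs.items():
    rate = float(np.mean(scores[groups == g] >= c))
    print(f"group {g}: cutoff {c:.3f}, positive rate {rate:.2f}")
```

A per-group quantile like this equalizes selection rates by construction; the dissertation's scalable method additionally handles fairness constraints beyond this simple parity criterion.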
5

Finding differences in perspectives between designers and engineers to develop trustworthy AI for autonomous cars

Larsson, Karl Rikard, Jönelid, Gustav January 2023
In the context of designing and implementing ethical Artificial Intelligence (AI), varying perspectives exist regarding developing trustworthy AI for autonomous cars. This study sheds light on the differences in perspectives and provides recommendations to minimize such divergences. By exploring the diverse viewpoints, we identify key factors contributing to the differences and propose strategies to bridge the gaps. This study goes beyond the trolley problem to visualize the complex challenges of trustworthy and ethical AI. Three pillars of trustworthy AI have been defined: transparency, reliability, and safety. This research contributes to the field of trustworthy AI for autonomous cars, providing practical recommendations to enhance the development of AI systems that prioritize both technological advancement and ethical principles.
6

Taking Responsible AI from Principle to Practice : A study of challenges when implementing Responsible AI guidelines in an organization and how to overcome them

Hedlund, Matilda, Henriksson, Hanna January 2023
The rapid advancement of AI technology emphasizes the importance of developing practical and ethical frameworks to guide its evolution and deployment in a responsible manner. In light of more complex AI and its capacity to influence society, AI researchers and other prominent individuals are now indicating that AI evolution has to be regulated to a greater extent. This study examines the practical implementation of Responsible AI guidelines in an organization by investigating the challenges encountered and proposing solutions to overcome them. Previous research has primarily focused on conceptualizing Responsible AI guidelines, resulting in a tremendous number of abstract and high-level recommendations. However, there is an emerging demand to shift the focus toward studying the practical implementation of these guidelines. This study addresses the research question: ‘How can an organization overcome challenges that may arise when implementing Responsible AI guidelines in practice?’. The study utilizes the guidelines produced by the European Commission’s High-Level Expert Group on AI as a reference point, considering their influence on shaping future AI policy and regulation in the EU. The study is conducted in collaboration with the telecommunications company Ericsson, henceforth referred to as 'the case organization’, which has a large global workforce and is headquartered in Sweden. Specific focus is placed on the department that develops AI internally for other units with the purpose of simplifying operations and processes, henceforth referred to as 'the AI unit'. Through an inductive interpretive approach, data from 16 semi-structured interviews and organization-specific documents were analyzed through a thematic analysis.
The findings reveal challenges related to (1) understanding and defining Responsible AI, (2) technical conditions and complexity, (3) organizational structures and barriers, as well as (4) inconsistent and overlooked ethics. Proposed solutions include (1) education and awareness, (2) integration and implementation, (3) governance and accountability, and (4) alignment and values. The findings contribute to a deeper understanding of Responsible AI implementation and offer practical recommendations for organizations navigating the rapidly evolving landscape of AI technology.
7

Intelligent Data and Potential Analysis in the Mechatronic Product Development

Nüssgen, Alexander January 2024
This thesis explores the imperative of intelligent data and potential analysis in the realm of mechatronic product development. The persistent challenges of synchronization and efficiency underscore the need for advanced methodologies. Leveraging the substantial advancements in Artificial Intelligence (AI), particularly in generative AI, presents unprecedented opportunities. However, significant challenges, especially regarding robustness and trustworthiness, remain unaddressed. In response to this critical need, a comprehensive methodology is introduced, examining the entire development process through the illustrative V-Model and striving to establish a robust AI landscape. The methodology explores acquiring suitable and efficient knowledge, along with methodical implementation, addressing diverse requirements for accuracy at various stages of development. As the landscape of mechatronic product development evolves, integrating intelligent data and harnessing the power of AI not only addresses current challenges but also positions organizations for greater innovation and competitiveness in the dynamic market landscape.
8

Transparent ML Systems for the Process Industry : How can a recommendation system perceived as transparent be designed for experts in the process industry?

Fikret, Eliz January 2023
Process monitoring is a field that can greatly benefit from the adoption of machine learning solutions like recommendation systems. However, for domain experts to embrace these technologies within their work processes, clear explanations are crucial. Therefore, it is important to adopt user-centred methods for designing more transparent recommendation systems. This study explores this topic through a case study in the pulp and paper industry. By employing a user-centred and design-first adaptation of the question-driven design process, this study aims to uncover the explanation needs and requirements of industry experts, as well as formulate design visions and recommendations for transparent recommendation systems. The results of the study reveal five common explanation types that are valuable for domain experts while also highlighting limitations in previous studies on explanation types. Additionally, nine requirements are identified and utilised in the creation of a prototype, which domain experts evaluate. The evaluation process leads to the development of several design recommendations that can assist HCI researchers and designers in creating effective, transparent recommendation systems. Overall, this research contributes to the field of HCI by enhancing the understanding of transparent recommendation systems from a user-centred perspective.
9

Trustworthy AI: Ensuring Explainability and Acceptance

Davinder Kaur (17508870) 03 January 2024
<p dir="ltr">In the dynamic realm of Artificial Intelligence (AI), this study explores the multifaceted landscape of Trustworthy AI with a dedicated focus on achieving both explainability and acceptance. The research addresses the evolving dynamics of AI, emphasizing the essential role of human involvement in shaping its trajectory.</p><p dir="ltr">A primary contribution of this work is the introduction of a novel "Trustworthy Explainability Acceptance Metric", tailored for the evaluation of AI-based systems by field experts. Grounded in a versatile distance acceptance approach, this metric provides a reliable measure of acceptance value. Practical applications of this metric are illustrated, particularly in a critical domain like medical diagnostics. Another significant contribution is the proposal of a trust-based security framework for 5G social networks. This framework enhances security and reliability by incorporating community insights and leveraging trust mechanisms, presenting a valuable advancement in social network security.</p><p dir="ltr">The study also introduces an artificial conscience-control module model, innovating with the concept of "Artificial Feeling." This model is designed to enhance AI system adaptability based on user preferences, ensuring controllability, safety, reliability, and trustworthiness in AI decision-making. This innovation contributes to fostering increased societal acceptance of AI technologies. Additionally, the research conducts a comprehensive survey of foundational requirements for establishing trustworthiness in AI. Emphasizing fairness, accountability, privacy, acceptance, and verification/validation, this survey lays the groundwork for understanding and addressing ethical considerations in AI applications. The study concludes by exploring quantum alternatives, offering fresh perspectives on algorithmic approaches in trustworthy AI systems. 
This exploration broadens the horizons of AI research, pushing the boundaries of traditional algorithms.</p><p dir="ltr">In summary, this work significantly contributes to the discourse on Trustworthy AI, ensuring both explainability and acceptance in the intricate interplay between humans and AI systems. Through its diverse contributions, the research offers valuable insights and practical frameworks for the responsible and ethical deployment of AI in various applications.</p>
