1

Adversarial Attacks and Defense Mechanisms to Improve Robustness of Deep Temporal Point Processes

Khorshidi, Samira 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Temporal point processes (TPP) are mathematical approaches for modeling asynchronous event sequences by considering the temporal dependency of each event on past events and its instantaneous rate. Temporal point processes can model various problems, from earthquake aftershocks, trade orders, gang violence, and reported crime patterns to network analysis, infectious disease transmission, and virus spread forecasting. In each of these cases, the entity's behavior and the corresponding information are recorded over time as an asynchronous event sequence, and the analysis is done using temporal point processes, which provide a means to define the generative mechanism of the sequence of events and ultimately to predict events and investigate causality. Among point processes, the Hawkes process, a stochastic point process, can model a wide range of contagious and self-exciting patterns. One of the Hawkes process's well-known applications is predicting the evolution of viral processes on networks, an important problem in biology, the social sciences, and the study of the Internet. In existing works, mean-field analysis based upon degree distribution is used to predict viral spreading across networks of different types. However, it has been shown that degree distribution alone fails to predict the behavior of viruses on some real-world networks. Recent attempts have been made to use assortativity to address this shortcoming. This thesis illustrates how the evolution of such a viral process is sensitive to the underlying network's structure. In Chapter 3, we show that adding assortativity does not fully explain the variance in the spread of viruses for a number of real-world networks. We propose using the graphlet frequency distribution combined with assortativity to explain variations in the evolution of viral processes across networks with identical degree distributions.
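The self-exciting Hawkes process described above has, for the common exponential kernel, the conditional intensity λ(t) = μ + Σ_{t_i<t} α·exp(−β(t−t_i)). As a minimal illustration (the thesis's own models and parameter values are not given here; μ, α, β below are made up), such a process can be simulated with Ogata's thinning algorithm:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Simulate a univariate Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))
    via Ogata's thinning algorithm."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < t_max:
        # With an exponential kernel, the intensity is non-increasing between
        # events, so the intensity at the current time is a valid upper bound.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)  # candidate inter-arrival time
        if t >= t_max:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)  # accept the candidate event
    return events

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, t_max=100.0)
print(len(events), "events")
```

Each accepted event raises the intensity, making further events more likely in its wake — the self-exciting behavior the abstract refers to.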
Using a data-driven approach that couples predictive modeling with viral process simulation on real-world networks, we show that simple regression models based on graphlet frequency distribution can explain over 95% of the variance in virality on networks with the same degree distribution but different network topologies. Our results highlight the importance of graphlets and identify a small collection of graphlets that may have the most significant influence over the viral processes on a network. Due to the flexibility and expressiveness of deep learning techniques, several neural network-based approaches have recently shown promise for modeling point process intensities. However, there is a lack of research on possible adversarial attacks and the robustness of such models against adversarial attacks and natural shocks to systems. Furthermore, while neural point processes may outperform simpler parametric models on in-sample tests, how these models perform when encountering adversarial examples or sharp non-stationary trends remains unknown. In Chapter 4, we propose several white-box and black-box adversarial attacks against deep temporal point processes. Additionally, we investigate the transferability of white-box adversarial attacks against point processes modeled by deep neural networks, which pose an elevated risk. Extensive experiments confirm that neural point processes are vulnerable to adversarial attacks. This vulnerability is illustrated both in terms of predictive metrics and in the effect of attacks on the underlying point process's parameters. Specifically, adversarial attacks successfully shift the temporal Hawkes process regime from sub-critical to super-critical and manipulate the modeled parameters, which is a risk to parametric modeling approaches.
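The sub-critical versus super-critical regimes mentioned above are governed by the branching ratio — the expected number of offspring events each event triggers. For an exponential kernel this is n = α/β: below 1 cascades die out, above 1 they grow without bound. A small illustrative check (parameter values invented for the sketch):

```python
def branching_ratio(alpha, beta):
    # Integral of the exponential kernel alpha * exp(-beta * s) over s >= 0
    # gives the expected offspring count per event: alpha / beta.
    return alpha / beta

def regime(alpha, beta):
    n = branching_ratio(alpha, beta)
    if n < 1:
        return "sub-critical"
    if n > 1:
        return "super-critical"
    return "critical"

print(regime(0.8, 1.2))  # sub-critical: each event triggers < 1 offspring on average
print(regime(1.5, 1.2))  # super-critical: event cascades grow without bound
```

An attack that nudges the fitted α upward past β therefore changes the qualitative behavior the model predicts, not just its point estimates.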
Additionally, we evaluate the vulnerability and performance of these models in the presence of non-stationary abrupt changes, using crime and Covid-19 pandemic datasets as examples. Despite the success of deep learning techniques in modeling temporal point processes, the security vulnerability of deep-learning models, including deep temporal point processes, to adversarial attacks makes it essential to ensure the robustness of the deployed algorithms. In Chapter 5, we study the robustness of deep temporal point processes against several proposed adversarial attacks from the adversarial defense viewpoint. Specifically, we investigate the effectiveness of adversarial training using universal adversarial samples in improving the robustness of deep point processes. Additionally, we propose a general point process domain-adopted (GPDA) regularization, which is strictly applicable to temporal point processes, to reduce the effect of adversarial attacks and acquire an empirically robust model. In this approach, unlike other computationally expensive approaches, there is no need for additional back-propagation in the training step, and no further network is required. Ultimately, we propose an adversarial detection framework that is trained in a Generative Adversarial Network (GAN) manner and solely on clean training data. Finally, in Chapter 6, we discuss the implications of the research and future research directions.
2

A Model-Based Systems Engineering Approach to Refueling Satellites

Rochford, Elizabeth 05 June 2023 (has links)
No description available.
3

Learning-Based Planning for Connected and Autonomous Vehicles: Towards Information Fusion and Trustworthy AI

Jiqian Dong (18505497) 08 May 2024 (has links)
Motion planning for Autonomous Vehicles (AVs) and Connected Autonomous Vehicles (CAVs) involves the crucial task of translating road environmental data obtained from sensors and connectivity devices into a sequence of executable vehicle actions. This task is critical for AVs and CAVs because the efficacy of their driving decisions and overall performance depends on the quality of motion planning.

In the context of motion planning technologies, several fundamental questions and challenges remain despite the widespread adoption of advanced learning-based methods, including deep learning (DL) and deep reinforcement learning (DRL). In this regard, the following critical questions need to be answered: 1) How to design suitable DL architectures to comprehensively understand the driving scenario by integrating data from diverse sources, including sensors and connectivity devices? 2) How to effectively use the fused information to make improved driving decisions, accounting for various optimality criteria? 3) How to leverage vehicle connectivity to generate cooperative decisions for multiple CAVs, in a manner that optimizes system-wide utility? 4) How to address the inherent interpretability limitations of DL-based methods to enhance user trust in AVs and CAVs? 5) Is it possible to extend learning-based approaches to operational-level decisions in a way that overcomes the inherent disadvantages of low explainability and the lack of safety guarantees?

In an effort to address these questions and expand the existing knowledge in this domain, this dissertation introduces several learning-based motion planning frameworks tailored to different driving scenarios of AVs and CAVs. Technically, these efforts target developing trustworthy AI systems with a focus on information fusion, explainable AI (XAI), and safety-critical AI. From a computational perspective, these frameworks introduce new learning-based models with state-of-the-art (SOTA) structures, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Graph Neural Networks (GNN), Attention networks, and Transformers. They also incorporate reinforcement learning (RL) agents, such as Deep Q Networks (DQN) and model-based RL. From an application standpoint, the developed frameworks can be deployed directly in AVs and CAVs at Level 3 and above, enhancing AV/CAV performance in terms of individual and system-level metrics, including safety, mobility, efficiency, and driving comfort.
4

Trustworthy Soft Sensing in Water Supply Systems using Deep Learning

Sreng, Chhayly 22 May 2024 (has links)
In many industrial and scientific applications, accurate sensor measurements are crucial. Instruments such as nitrate sensors are vulnerable to environmental conditions, calibration drift, high maintenance costs, and degradation. Researchers have turned to advanced computational methods, including mathematical modeling, statistical analysis, and machine learning, to overcome these limitations. Deep learning techniques have shown promise in outperforming traditional methods in many applications by achieving higher accuracy, but they are often criticized as 'black-box' models due to their lack of transparency. This thesis presents a framework for deep learning-based soft sensors that can quantify the robustness of soft sensors by estimating predictive uncertainty and evaluating performance across various scenarios. The framework facilitates comparisons between hard and soft sensors. To validate the framework, I conduct experiments using data generated by AI and Cyber for Water and Ag (ACWA), a cyber-physical system water-controlled environment testbed. Afterwards, the framework is tested on real-world environment data from Alexandria Renew Enterprise (AlexRenew), establishing its applicability and effectiveness in practical settings. / Master of Science / Sensors are essential in various industrial systems and offer numerous advantages. Essential to measurement science and technology, they allow reliable, high-resolution, low-cost measurement and impact areas such as environmental monitoring, medical applications, and security. The importance of sensors extends to the Internet of Things (IoT) and large-scale data analytics. In these areas, sensors are vital to the generation of data used in industries such as health care, transportation, and surveillance. Big Data analytics processes this data for a variety of purposes, including health management and disease prediction, demonstrating the growing importance of sensors in data-driven decision-making.
In many industrial and scientific applications, precision and trustworthiness in measurements are crucial for informed decision-making and maintaining high-quality processes. Instruments such as nitrate sensors are particularly susceptible to environmental conditions, calibration drift, high maintenance costs, and a tendency to become less reliable over time due to aging. The lifespan of these instruments can be as short as two weeks, posing significant challenges. To overcome these limitations, researchers have turned to advanced computational methods, including mathematical modeling, statistical analysis, and machine learning. Traditional methods have had some success, but they often struggle to fully capture the complex dynamics of natural environments. This has led to increased interest in more sophisticated approaches, such as deep learning techniques. Deep learning-based soft sensors have shown promise in outperforming traditional methods in many applications by achieving higher accuracy. However, they are often criticized as "black-box" models due to their lack of transparency. This raises questions about their reliability and trustworthiness, making it critical to assess these aspects. This thesis presents a comprehensive framework for deep learning-based soft sensors. The framework will quantify the robustness of soft sensors by estimating predictive uncertainty and evaluating performance across a range of contextual scenarios, such as weather conditions, flood events, and water parameters. These evaluations will help define the trustworthiness of the soft sensor and facilitate comparisons between hard and soft sensors. To validate the framework, we will conduct experiments using data generated by ACWA, a cyber-physical system water-controlled environment testbed we developed. This will provide a controlled environment to test and refine our framework. Subsequently, we will test the framework on real-world environment data from AlexRenew. 
This will further establish its applicability and effectiveness in practical settings, providing a robust and reliable tool for sensor data analysis and prediction. Ultimately, this work aims to contribute to the broader field of sensor technology, enhancing our ability to make informed decisions based on reliable and accurate sensor data.
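One common way to estimate the predictive uncertainty this framework quantifies is ensemble disagreement: several models predict the same input, and the spread of their predictions serves as an epistemic-uncertainty estimate. The thesis's actual method is not specified here, so this toy ensemble of linear soft-sensor surrogates (weights invented) is only an illustration:

```python
import statistics

def predictive_uncertainty(models, x):
    """Return (mean prediction, sample standard deviation) over an
    ensemble of models; the std serves as an uncertainty estimate."""
    preds = [m(x) for m in models]
    return statistics.mean(preds), statistics.stdev(preds)

# Toy ensemble: three soft-sensor surrogates with slightly different weights,
# standing in for independently trained networks.
ensemble = [lambda x, w=w: w * x + 0.1 for w in (1.9, 2.0, 2.1)]
mean, std = predictive_uncertainty(ensemble, 3.0)
print(round(mean, 2), round(std, 2))
```

A large std flags inputs where the soft sensor should be trusted less, which is exactly the kind of signal needed for the hard-versus-soft sensor comparisons described above.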
5

Capabilities and Processes to Mitigate Risks Associated with Machine Learning in Credit Scoring Systems : A Case Study at a Financial Technology Firm / Förmågor och processer för att mitigera risker associerade med maskininlärning inom kreditvärdering : En fallstudie på ett fintech-bolag

Pehrson, Jakob, Lindstrand, Sara January 2022 (has links)
Artificial intelligence and machine learning have become an important part of society, and today businesses compete in a new digital environment. However, scholars and regulators are concerned with these technologies' societal impact, as their use does not come without risks, such as those stemming from transparency and accountability issues. The potential misuse of these technologies has led to guidelines and future regulations on how they can be used in a trustworthy way. However, these guidelines are argued to lack practicality, and they have sparked concern that they will hamper organisations' digital pursuit of innovation and competitiveness. This master's thesis aims to contribute to this field by studying how teams can work with the mitigation of risks associated with machine learning. The scope was set on capturing insights into employees' perceptions of what they consider important and challenging about machine learning risk mitigation, and then putting these in relation to research to develop practical recommendations. The master's thesis specifically focused on the financial technology sector and the use of machine learning in credit scoring. To achieve the aim, a qualitative single case study was conducted. The master's thesis found that a combination of processes and capabilities is perceived as important in this work. Moreover, current barriers were also found in the single case. The findings indicate that strong responsiveness is important, and this is achieved in the single case by having separation of responsibilities and strong team autonomy. Moreover, standardisation is argued to be needed for higher control, but it should be implemented in a way that allows for flexibility. Furthermore, monitoring and validation are important processes for mitigating machine learning risks.
Additionally, the capability of extracting as much information from data as possible is an essential component of daily work, both to create value and to mitigate risks. One barrier in this work is that the needed knowledge takes time to develop and that knowledge transfer is sometimes restricted by resource allocation. However, knowledge transfer is argued to be important for long-term sustainability. Organisational culture and societal awareness are also indicated to play a role in machine learning risk mitigation.
6

NOVEL APPROACHES TO MITIGATE DATA BIAS AND MODEL BIAS FOR FAIR MACHINE LEARNING PIPELINES

Taeuk Jang (18333504) 28 April 2024 (has links)
Despite the recent advancement and exponential growth in the utility of deep learning models across various fields and tasks, we are confronted with emerging challenges. Among them, one prevalent issue is the biases inherent in deep models, which often mimic stereotypical or subjective behavior observed in data, potentially resulting in negative societal impact or disadvantaging certain subpopulations based on race, gender, etc. This dissertation addresses the critical problem of fairness and bias in machine learning from diverse perspectives, encompassing both data biases and model biases.

First, we study the multifaceted nature of data biases to comprehensively address the challenges. Specifically, the proposed approaches include the development of a generative model for balancing the data distribution with counterfactual samples to address data skewness. In addition, we introduce a novel feature selection method aimed at eliminating sensitive-relevant features that could potentially convey sensitive information, e.g., race, considering the interrelationships between features. Moreover, we present a scalable thresholding method to appropriately binarize model outputs or regression data under fairness constraints for fairer decision-making, extending fairness beyond categorical data.

However, addressing the fairness problem solely by correcting data bias often encounters several challenges. In particular, establishing fairness-curated data demands substantial resources and may be restricted by legal constraints, while explicitly identifying the biases is non-trivial due to their intertwined nature. Further, it is important to recognize that models may interpret data differently depending on their architectures or downstream tasks. In response, we propose a line of methods to address model bias, on top of addressing the data bias mentioned above, by learning fair latent representations. These methods include fair disentanglement learning, which projects a latent subspace independent of sensitive information by employing conditional mutual information, and a debiased contrastive learning method for fair self-supervised learning without sensitive attribute annotations. Lastly, we introduce a novel approach to debias the multimodal embeddings of pretrained vision-language models (VLMs) without requiring downstream annotated datasets, retraining, or fine-tuning of the large model, considering the constrained resources of research labs.
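The fairness-constrained thresholding idea described above can be illustrated with a toy demographic-parity sweep: binarize scores at several candidate thresholds and prefer the one with the smallest positive-rate gap between groups. The dissertation's actual method is more sophisticated; the scores, groups, and candidate thresholds here are invented for the sketch:

```python
def demographic_parity_gap(scores, groups, threshold):
    """Gap in positive-prediction rates between groups after
    binarizing scores at the given threshold."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = sum(scores[i] >= threshold for i in idx) / len(idx)
    vals = list(rates.values())
    return max(vals) - min(vals)

scores = [0.9, 0.6, 0.4, 0.8, 0.3, 0.7]
groups = ["a", "a", "a", "b", "b", "b"]
# Sweep candidate thresholds and keep the one with the smallest parity gap.
best = min([0.2, 0.35, 0.5, 0.65],
           key=lambda t: demographic_parity_gap(scores, groups, t))
print(best)
```

In practice such a sweep would also trade the parity gap off against accuracy, rather than optimizing fairness alone.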
7

Finding differences in perspectives between designers and engineers to develop trustworthy AI for autonomous cars

Larsson, Karl Rikard, Jönelid, Gustav January 2023 (has links)
In the context of designing and implementing ethical Artificial Intelligence (AI), varying perspectives exist regarding developing trustworthy AI for autonomous cars. This study sheds light on the differences in perspectives and provides recommendations to minimize such divergences. By exploring the diverse viewpoints, we identify key factors contributing to the differences and propose strategies to bridge the gaps. This study goes beyond the trolley problem to visualize the complex challenges of trustworthy and ethical AI. Three pillars of trustworthy AI have been defined: transparency, reliability, and safety. This research contributes to the field of trustworthy AI for autonomous cars, providing practical recommendations to enhance the development of AI systems that prioritize both technological advancement and ethical principles.
8

Taking Responsible AI from Principle to Practice : A study of challenges when implementing Responsible AI guidelines in an organization and how to overcome them

Hedlund, Matilda, Henriksson, Hanna January 2023 (has links)
The rapid advancement of AI technology emphasizes the importance of developing practical and ethical frameworks to guide its evolution and deployment in a responsible manner. In light of more complex AI and its capacity to influence society, AI researchers and other prominent individuals are now indicating that AI evolution has to be regulated to a greater extent. This study examines the practical implementation of Responsible AI guidelines in an organization by investigating the challenges encountered and proposing solutions to overcome them. Previous research has primarily focused on conceptualizing Responsible AI guidelines, resulting in a tremendous number of abstract and high-level recommendations. However, there is an emerging demand to shift the focus toward studying the practical implementation of these. This study addresses the research question: 'How can an organization overcome challenges that may arise when implementing Responsible AI guidelines in practice?'. The study utilizes the guidelines produced by the European Commission's High-Level Expert Group on AI as a reference point, considering their influence on shaping future AI policy and regulation in the EU. The study is conducted in collaboration with the telecommunications company Ericsson (henceforth referred to as 'the case organization'), which possesses a large global workforce and is headquartered in Sweden. Specific focus is delineated to the department that develops AI internally for other units with the purpose of simplifying operations and processes (henceforth referred to as 'the AI unit'). Through an inductive interpretive approach, data from 16 semi-structured interviews and organization-specific documents were analyzed through a thematic analysis.
The findings reveal challenges related to (1) understanding and defining Responsible AI, (2) technical conditions and complexity, (3) organizational structures and barriers, as well as (4) inconsistent and overlooked ethics. Proposed solutions include (1) education and awareness, (2) integration and implementation, (3) governance and accountability, and (4) alignment and values. The findings contribute to a deeper understanding of Responsible AI implementation and offer practical recommendations for organizations navigating the rapidly evolving landscape of AI technology.
9

Intelligent Data and Potential Analysis in the Mechatronic Product Development

Nüssgen, Alexander January 2024 (has links)
This thesis explores the imperative of intelligent data and potential analysis in the realm of mechatronic product development. The persistent challenges of synchronization and efficiency underscore the need for advanced methodologies. Leveraging the substantial advancements in Artificial Intelligence (AI), particularly in generative AI, presents unprecedented opportunities. However, significant challenges, especially regarding robustness and trustworthiness, remain unaddressed. In response to this critical need, a comprehensive methodology is introduced, examining the entire development process through the illustrative V-Model and striving to establish a robust AI landscape. The methodology explores acquiring suitable and efficient knowledge, along with methodical implementation, addressing diverse requirements for accuracy at various stages of development. As the landscape of mechatronic product development evolves, integrating intelligent data and harnessing the power of AI not only addresses current challenges but also positions organizations for greater innovation and competitiveness in the dynamic market landscape.
10

Transparent ML Systems for the Process Industry : How can a recommendation system perceived as transparent be designed for experts in the process industry?

Fikret, Eliz January 2023 (has links)
Process monitoring is a field that can greatly benefit from the adoption of machine learning solutions like recommendation systems. However, for domain experts to embrace these technologies within their work processes, clear explanations are crucial. Therefore, it is important to adopt user-centred methods for designing more transparent recommendation systems. This study explores this topic through a case study in the pulp and paper industry. By employing a user-centred and design-first adaptation of the question-driven design process, this study aims to uncover the explanation needs and requirements of industry experts, as well as formulate design visions and recommendations for transparent recommendation systems. The results of the study reveal five common explanation types that are valuable for domain experts while also highlighting limitations in previous studies on explanation types. Additionally, nine requirements are identified and utilised in the creation of a prototype, which domain experts evaluate. The evaluation process leads to the development of several design recommendations that can assist HCI researchers and designers in creating effective, transparent recommendation systems. Overall, this research contributes to the field of HCI by enhancing the understanding of transparent recommendation systems from a user-centred perspective.
