  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Adversarial Attacks and Defense Mechanisms to Improve Robustness of Deep Temporal Point Processes

Khorshidi, Samira 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Temporal point processes (TPPs) are mathematical approaches for modeling asynchronous event sequences by considering the temporal dependency of each event on past events and its instantaneous rate. Temporal point processes can model various problems, from earthquake aftershocks, trade orders, gang violence, and reported crime patterns to network analysis, infectious disease transmission, and virus spread forecasting. In each of these cases, the entity's behavior and the corresponding information are recorded over time as an asynchronous event sequence, and the analysis is done using temporal point processes, which provide a means to define the generative mechanism of the sequence of events and, ultimately, to predict events and investigate causality. Among point processes, the Hawkes process, a stochastic point process, can model a wide range of contagious and self-exciting patterns. One of the Hawkes process's well-known applications is predicting the evolution of viral processes on networks, an important problem in biology, the social sciences, and the study of the Internet. In existing work, mean-field analysis based upon degree distribution is used to predict viral spreading across networks of different types. However, it has been shown that degree distribution alone fails to predict the behavior of viruses on some real-world networks. Recent attempts have been made to use assortativity to address this shortcoming. This thesis illustrates how the evolution of such a viral process is sensitive to the underlying network's structure. In Chapter 3, we show that adding assortativity does not fully explain the variance in the spread of viruses for a number of real-world networks. We propose using the graphlet frequency distribution combined with assortativity to explain variations in the evolution of viral processes across networks with identical degree distributions.
Using a data-driven approach that couples predictive modeling with viral process simulation on real-world networks, we show that simple regression models based on graphlet frequency distribution can explain over 95% of the variance in virality on networks with the same degree distribution but different network topologies. Our results highlight the importance of graphlets and identify a small collection of graphlets that may have the most significant influence over the viral processes on a network. Due to the flexibility and expressiveness of deep learning techniques, several neural network-based approaches have recently shown promise for modeling point process intensities. However, there is a lack of research on possible adversarial attacks and on the robustness of such models to adversarial attacks and natural shocks to systems. Furthermore, while neural point processes may outperform simpler parametric models on in-sample tests, how these models perform when encountering adversarial examples or sharp non-stationary trends remains unknown. In Chapter 4, we propose several white-box and black-box adversarial attacks against deep temporal point processes. Additionally, we investigate the transferability of white-box adversarial attacks against point processes modeled by deep neural networks, which poses a more elevated risk. Extensive experiments confirm that neural point processes are vulnerable to adversarial attacks. This vulnerability is illustrated both in terms of predictive metrics and in terms of the effect of attacks on the underlying point process's parameters. Specifically, adversarial attacks successfully shift the temporal Hawkes process regime from sub-critical to super-critical and manipulate the modeled parameters, which is considered a risk for parametric modeling approaches.
Additionally, we evaluate the vulnerability and performance of these models in the presence of non-stationary abrupt changes, using crime and Covid-19 pandemic datasets as examples. Considering the security vulnerability of deep-learning models, including deep temporal point processes, to adversarial attacks, it is essential to ensure the robustness of the deployed algorithms despite the success of deep learning techniques in modeling temporal point processes. In Chapter 5, we study the robustness of deep temporal point processes against several proposed adversarial attacks from the adversarial defense viewpoint. Specifically, we investigate the effectiveness of adversarial training using universal adversarial samples in improving the robustness of deep point processes. Additionally, we propose a general point process domain-adopted (GPDA) regularization, applicable specifically to temporal point processes, to reduce the effect of adversarial attacks and obtain an empirically robust model. In this approach, unlike other computationally expensive approaches, there is no need for additional back-propagation in the training step, and no further network is required. Ultimately, we propose an adversarial detection framework that is trained in a Generative Adversarial Network (GAN) manner and solely on clean training data. Finally, in Chapter 6, we discuss the implications of the research and future research directions.
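The sub-critical and super-critical regimes mentioned in this abstract are governed by the Hawkes process's branching ratio. The sketch below is a minimal illustration, assuming an exponential excitation kernel and hypothetical parameter values (none taken from the thesis):

```python
import math

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
    i.e., a baseline rate plus exponentially decaying excitation from past events."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

def branching_ratio(alpha, beta):
    """Expected number of offspring events per event. The process is sub-critical
    (stationary) when this is < 1 and super-critical (explosive) when > 1."""
    return alpha / beta

# Hypothetical event times and parameters
events = [0.5, 1.2, 3.0]
print(hawkes_intensity(4.0, events, mu=0.2, alpha=0.8, beta=1.0))
print(branching_ratio(0.8, 1.0))  # 0.8 < 1: sub-critical regime
print(branching_ratio(1.5, 1.0))  # 1.5 > 1: super-critical regime
```

An attack of the kind the abstract describes would, in these terms, perturb the event sequence so that the fitted alpha/beta crosses 1.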
12

A Model-Based Systems Engineering Approach to Refueling Satellites

Rochford, Elizabeth 05 June 2023 (has links)
No description available.
13

Interpretation, Verification and Privacy Techniques for Improving the Trustworthiness of Neural Networks

Dethise, Arnaud 22 March 2023 (has links)
Neural Networks are powerful tools used in Machine Learning to solve complex problems across many domains, including biological classification, self-driving cars, and automated management of distributed systems. However, practitioners' trust in Neural Network models is limited by their inability to answer important questions about their behavior, such as whether they will perform correctly or if they can be entrusted with private data. One major issue with Neural Networks is their "black-box" nature, which makes it challenging to inspect the trained parameters or to understand the learned function. To address this issue, this thesis proposes several new ways to increase the trustworthiness of Neural Network models. The first approach focuses specifically on Piecewise Linear Neural Networks, a popular flavor of Neural Networks used to tackle many practical problems. The thesis explores several different techniques to extract the weights of trained networks efficiently and use them to verify and understand the behavior of the models. The second approach shows how strengthening the training algorithms can provide guarantees that are theoretically proven to hold even for the black-box model. The first part of the thesis identifies errors that can exist in trained Neural Networks, highlighting the importance of domain knowledge and the pitfalls to avoid with trained models. The second part aims to verify the outputs and decisions of the model by adapting the technique of Mixed Integer Linear Programming to efficiently explore the possible states of the Neural Network and verify properties of its outputs. The third part extends the Linear Programming technique to explain the behavior of a Piecewise Linear Neural Network by breaking it down into its linear components, generating model explanations that are both continuous on the input features and without approximations. 
Finally, the thesis addresses privacy concerns by using Trusted Execution and Differential Privacy during the training process. The techniques proposed in this thesis provide strong, theoretically provable guarantees about Neural Networks, despite their black-box nature, and enable practitioners to verify, extend, and protect the privacy of expert domain knowledge. By improving the trustworthiness of models, these techniques make Neural Networks more likely to be deployed in real-world applications.
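The linear-component decomposition described in the third part of this abstract can be illustrated in miniature: fixing an input's ReLU activation pattern yields the exact affine function the network computes on that input's region. The sketch below is a toy pure-Python illustration with made-up weights, not the thesis's MILP-based tooling:

```python
def matmul(W, A):
    """Plain-Python matrix product W (m x k) times A (k x n)."""
    return [[sum(W[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(W))]

def matvec(W, v):
    return [sum(W[i][k] * v[k] for k in range(len(v))) for i in range(len(W))]

def local_affine(x, layers):
    """Return (A, c) such that the ReLU network (given as [(W, b), ...])
    equals y -> A y + c on the linear region containing x: compose the
    layers' affine maps, zeroing rows of units that are inactive at x."""
    n = len(x)
    A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    c = [0.0] * n
    h = x
    for i, (W, b) in enumerate(layers):
        A = matmul(W, A)
        c = [ci + bi for ci, bi in zip(matvec(W, c), b)]
        z = [zi + bi for zi, bi in zip(matvec(W, h), b)]
        if i < len(layers) - 1:          # ReLU on hidden layers only
            for j, zj in enumerate(z):
                if zj <= 0:              # inactive unit: zero its row
                    A[j] = [0.0] * len(A[j])
                    c[j] = 0.0
                    z[j] = 0.0
        h = z
    return A, c

# Hypothetical 2-2-1 ReLU network
layers = [([[1.0, -1.0], [0.5, 0.5]], [0.0, -0.2]),
          ([[2.0, 1.0]], [0.1])]
x = [1.0, 0.3]
A, c = local_affine(x, layers)
y = [sum(a * xi for a, xi in zip(row, x)) + ci for row, ci in zip(A, c)]
print(A, c, y)  # y == [1.95], the network's exact output at x
```

Because the recovered map is exact on the whole region (not a local approximation), explanations derived this way are continuous in the input features, as the abstract notes.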
14

The Role of AI in IoT Systems : A Semi-Systematic Literature Review

Anyonyi, Yvonne Ivakale, Katambi, Joan January 2023 (has links)
The Internet of Things (IoT) is a network of interconnected devices and objects that have various functions, such as sensing, identifying, computing, providing services, and communicating. It is estimated that by the year 2030 there will be approximately 29.42 billion IoT devices globally, facilitating extensive data exchange among them. In response to this rapid growth of IoT, Artificial Intelligence (AI) has become a pivotal technology in automating key aspects of IoT systems, including decision-making and predictive data analysis. The widespread use of AI across various industries has brought about significant transformations in business ecosystems. Despite its immense potential, IoT systems still face several challenges, including concerns related to privacy and security, data management, standardization, and trust. In light of these challenges, AI emerges as an essential enabler, enhancing the intelligence and sophistication of IoT systems. Its diverse applications offer effective solutions to the inherent challenges within IoT systems, leading to the optimization of processes and the development of more intelligent IoT systems. This thesis presents a semi-systematic literature review (SSLR) that aims to explore the role of AI in IoT systems. A systematic search was performed on three databases (Scopus, IEEE Xplore, and the ACM Digital Library); 29 scientific, peer-reviewed studies published between 2018 and 2022 were selected and examined to answer the research questions. This study also encompasses an additional study on AI and trustworthiness in IoT systems, user acceptance within IoT systems, and AIoT's impact on sustainable economic growth across industries. This thesis also presents the DIMACERI framework, which encompasses eight dimensions of IoT challenges, and concludes with recommendations for stakeholders in AIoT systems.
AI is strategically integrated across the DIMACERI dimensions to create reliable, secure and efficient IoT systems.
15

Optimizing Programmable Logic Design Security Strategies

Graf, Jonathan Peter 10 June 2019 (has links)
A wide variety of design security strategies have been developed for programmable logic devices, but less work has been done to determine which are optimal for any given design and any given security goal. To address this, we consider not only metrics related to the performance of the design security practice, but also the likely action of an adversary given their goals. We concern ourselves principally with adversaries attempting to make use of hardware Trojans, although we also show that our work can be generalized to adversaries and defenders using any of a variety of microelectronics exploitation and defense strategies. Trojans are inserted by an adversary in order to accomplish an end. This goal must be considered and quantified in order to predict the adversary's likely action. Our work here builds upon a security economic approach that models the adversary and defender motives and goals in the context of empirically derived countermeasure efficacy metrics. The approach supports formation of a two-player strategic game to determine optimal strategy selection for both adversary and defender. A game may be played in a variety of contexts, including consideration of the entire design lifecycle or only a step in product development. As a demonstration of the practicality of this approach, we present an experiment that derives efficacy metrics from a set of countermeasures (defender strategies) when tested against a taxonomy of Trojans (adversary strategies). We further present a software framework, GameRunner, that automates not only the solution to the game but also enables mathematical and graphical exploration of "what if" scenarios in the context of the game. GameRunner can also issue "prescriptions," sets of commands that allow the defender to automate the application of the optimal defender strategy to their circuit of concern. We also present how this work can be extended to adjacent security domains. 
Finally, we include a discussion of future work, including additional software, a more advanced experimental framework, and the application of irrationality models to account for players who make subrational decisions. / Doctor of Philosophy / We present a security economic model that informs the optimal selection of programmable logic design security strategies. Our model accurately represents the economics and effectiveness of available design security strategies and accounts for the varieties of available exploits. Paired with game-theoretic analysis, this model informs microelectronics designers and associated policy makers of optimal defensive strategies. Treating the adversary and defender as opponents in a two-player game, our security economic model tells us how either player will play if it is known in advance how their opponent plays. The additional use of game theory allows us to determine the optimal play of both players simultaneously without prior knowledge other than models of the players' beliefs.
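The two-player strategic game described above can be illustrated with a toy pure-strategy maximin computation. The payoff values below are hypothetical stand-ins for empirically derived efficacy metrics, and a full solver such as the GameRunner framework mentioned in the abstract would also consider mixed strategies:

```python
def maximin_pure(payoff):
    """Defender's pure-strategy security level: pick the countermeasure (row)
    whose worst-case utility against any adversary strategy (column) is highest.
    payoff[i][j] = defender utility of countermeasure i vs Trojan class j."""
    best_row, best_val = None, float("-inf")
    for i, row in enumerate(payoff):
        worst = min(row)          # adversary picks the most damaging Trojan
        if worst > best_val:
            best_row, best_val = i, worst
    return best_row, best_val

# Hypothetical efficacy-derived payoffs: 3 countermeasures vs 3 Trojan classes
payoff = [[0.9, 0.2, 0.5],
          [0.6, 0.7, 0.4],
          [0.5, 0.6, 0.8]]
print(maximin_pure(payoff))  # (2, 0.5): countermeasure 2 guarantees at least 0.5
```

The point of the security-economic framing is that the "best" countermeasure in isolation (row 0, with a 0.9 entry) is not the best once the adversary's likely response is taken into account.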
16

Learning-Based Planning for Connected and Autonomous Vehicles: Towards Information Fusion and Trustworthy AI

Jiqian Dong (18505497) 08 May 2024 (has links)
Motion planning for Autonomous Vehicles (AVs) and Connected Autonomous Vehicles (CAVs) involves the crucial task of translating road environmental data obtained from sensors and connectivity devices into a sequence of executable vehicle actions. This task is critical for AVs and CAVs because the efficacy of their driving decisions and overall performance depend on the quality of motion planning.
In the context of motion planning technologies, several fundamental questions and challenges remain despite the widespread adoption of advanced learning-based methods, including deep learning (DL) and deep reinforcement learning (DRL). In this regard, the following critical questions need to be answered: 1) How can suitable DL architectures be designed to comprehensively understand the driving scenario by integrating data from diverse sources, including sensors and connectivity devices? 2) How can the fused information be used effectively to make improved driving decisions, accounting for various optimality criteria? 3) How can vehicle connectivity be leveraged to generate cooperative decisions for multiple CAVs in a manner that optimizes system-wide utility? 4) How can the inherent interpretability limitations of DL-based methods be addressed to enhance user trust in AVs and CAVs? 5) Is it possible to extend learning-based approaches to operational-level decisions in a way that overcomes the inherent disadvantages of low explainability and lack of safety guarantees?
In an effort to address these questions and expand the existing knowledge in this domain, this dissertation introduces several learning-based motion planning frameworks tailored to different driving scenarios of AVs and CAVs. Technically, these efforts target the development of trustworthy AI systems, with a focus on information fusion, explainable AI (XAI), and safety-critical AI.
From a computational perspective, these frameworks introduce new learning-based models with state-of-the-art (SOTA) structures, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Graph Neural Networks (GNN), attention networks, and Transformers. They also incorporate reinforcement learning (RL) agents, such as Deep Q Networks (DQN) and model-based RL. From an application standpoint, the developed frameworks can be deployed directly in AVs and CAVs at Level 3 and above, enhancing AV/CAV performance in terms of individual and system-level metrics, including safety, mobility, efficiency, and driving comfort.
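The reinforcement-learning side of such frameworks can be illustrated with a tabular Q-learning toy, a minimal stand-in for the DQN agents mentioned above. The three-segment highway environment below is entirely made up for illustration and is not a model from the dissertation:

```python
import random

# Hypothetical toy highway: states are road segments 0..3; segment 3 is the goal.
# A slow vehicle blocks segment 1: staying in lane there costs -5 (hard braking),
# while a lane change costs only -1. Elsewhere a lane change is needless (-1).
ACTIONS = ["keep", "change"]
GOAL = 3

def step(s, a):
    r = (-5.0 if a == "keep" else -1.0) if s == 1 else (0.0 if a == "keep" else -1.0)
    s2 = s + 1
    if s2 == GOAL:
        r += 10.0                      # reward for reaching the goal
    return s2, r

def q_learning(episodes=2000, alpha=0.2, gamma=0.95, eps=0.2, seed=0):
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = {s: {a: 0.0 for a in ACTIONS} for s in range(GOAL)}
    for _ in range(episodes):
        s = 0
        while s < GOAL:
            a = rng.choice(ACTIONS) if rng.random() < eps else max(Q[s], key=Q[s].get)
            s2, r = step(s, a)
            target = r + (gamma * max(Q[s2].values()) if s2 < GOAL else 0.0)
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = {s: max(Q[s], key=Q[s].get) for s in Q}
print(policy)  # learns to change lanes only at the blocked segment 1
```

A DQN replaces the table `Q` with a neural network, which is what makes the fused sensor and connectivity representations discussed above usable as state inputs.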
17

Trustworthy Embedded Computing for Cyber-Physical Control

Lerner, Lee Wilmoth 20 February 2015 (has links)
A cyber-physical controller (CPC) uses computing to control a physical process. Example CPCs can be found in self-driving automobiles, unmanned aerial vehicles, and other autonomous systems. They are also used in large-scale industrial control systems (ICSs) for manufacturing and utility infrastructure. CPC operations rely on embedded systems having real-time, high-assurance interactions with physical processes. However, recent attacks like Stuxnet have demonstrated that CPC malware is not restricted to networks and general-purpose computers; embedded components are targeted as well. General-purpose computing and network approaches to security are failing to protect embedded controllers, which can have the direct effect of process disturbance or destruction. Moreover, as embedded systems increasingly grow in capability and find application in CPCs, embedded leaf-node security is gaining priority. This work develops a root-of-trust design architecture that provides process resilience to cyber attacks on, or from, embedded controllers: the Trustworthy Autonomic Interface Guardian Architecture (TAIGA). We define five trust requirements for building a fine-grained trusted computing component. TAIGA satisfies all requirements and addresses all classes of CPC attacks using an approach distinguished by adding resilience to the embedded controller, rather than seeking to prevent attacks from ever reaching the controller. TAIGA provides an on-chip, digital version of classic mechanical safety interlocks. This last line of defense monitors all of the communications of a controller using configurable or external hardware that is inaccessible to the controller processor. The interface controller is synthesized from C code, formally analyzed, and permits run-time-checked, authenticated updates to certain system parameters but not code.
TAIGA overrides any controller actions that are inconsistent with system specifications, including prediction and preemption of latent malware's attempts to disrupt system stability and safety. This material is based upon work supported by the National Science Foundation under Grant Number CNS-1222656. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We are grateful for donations from Xilinx, Inc. and support from the Georgia Tech Research Institute. / Ph. D.
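The interlock behavior described above can be sketched in miniature: a guardian that passes through specification-consistent commands and overrides the rest with a known-safe fallback. All limits here are hypothetical, and TAIGA itself implements this logic in formally analyzed hardware, not Python:

```python
# Hypothetical safety envelope for a process actuator (e.g., valve position in %)
SAFE_MIN, SAFE_MAX = 0.0, 80.0   # absolute limits from the system specification
MAX_STEP = 10.0                  # maximum allowed change per control cycle
FALLBACK = 50.0                  # known-safe setpoint used on override

def guard(previous, command):
    """Return (actuated_value, overridden). Commands outside the envelope,
    or changing faster than the specification allows, are overridden."""
    if not (SAFE_MIN <= command <= SAFE_MAX):
        return FALLBACK, True
    if abs(command - previous) > MAX_STEP:
        return FALLBACK, True
    return command, False

print(guard(50.0, 55.0))  # (55.0, False): consistent command passes through
print(guard(50.0, 95.0))  # (50.0, True): out of range, overridden
print(guard(50.0, 30.0))  # (50.0, True): too-rapid change, overridden
```

The key design point, as in TAIGA, is that the guardian sits between controller and plant and cannot be bypassed by the (possibly compromised) controller processor.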
18

Trustworthy Soft Sensing in Water Supply Systems using Deep Learning

Sreng, Chhayly 22 May 2024 (has links)
In many industrial and scientific applications, accurate sensor measurements are crucial. Instruments such as nitrate sensors are vulnerable to environmental conditions, calibration drift, high maintenance costs, and degradation. Researchers have turned to advanced computational methods, including mathematical modeling, statistical analysis, and machine learning, to overcome these limitations. Deep learning techniques have shown promise in outperforming traditional methods in many applications by achieving higher accuracy, but they are often criticized as 'black-box' models due to their lack of transparency. This thesis presents a framework for deep learning-based soft sensors that quantifies the robustness of soft sensors by estimating predictive uncertainty and evaluating performance across various scenarios. The framework facilitates comparisons between hard and soft sensors. To validate the framework, I conduct experiments using data generated by AI and Cyber for Water and Ag (ACWA), a cyber-physical, water-controlled environment testbed. Afterwards, the framework is tested on real-world environment data from Alexandria Renew Enterprise (AlexRenew), establishing its applicability and effectiveness in practical settings. / Master of Science / Sensors are essential in various industrial systems and offer numerous advantages. Essential to measurement science and technology, they enable reliable, high-resolution, low-cost measurement and impact areas such as environmental monitoring, medical applications, and security. The importance of sensors extends to the Internet of Things (IoT) and large-scale data analytics. In these areas, sensors are vital to the generation of data used in industries such as health care, transportation, and surveillance. Big Data analytics processes this data for a variety of purposes, including health management and disease prediction, demonstrating the growing importance of sensors in data-driven decision-making.
In many industrial and scientific applications, precision and trustworthiness in measurements are crucial for informed decision-making and maintaining high-quality processes. Instruments such as nitrate sensors are particularly susceptible to environmental conditions, calibration drift, high maintenance costs, and a tendency to become less reliable over time due to aging. The lifespan of these instruments can be as short as two weeks, posing significant challenges. To overcome these limitations, researchers have turned to advanced computational methods, including mathematical modeling, statistical analysis, and machine learning. Traditional methods have had some success, but they often struggle to fully capture the complex dynamics of natural environments. This has led to increased interest in more sophisticated approaches, such as deep learning techniques. Deep learning-based soft sensors have shown promise in outperforming traditional methods in many applications by achieving higher accuracy. However, they are often criticized as "black-box" models due to their lack of transparency. This raises questions about their reliability and trustworthiness, making it critical to assess these aspects. This thesis presents a comprehensive framework for deep learning-based soft sensors. The framework will quantify the robustness of soft sensors by estimating predictive uncertainty and evaluating performance across a range of contextual scenarios, such as weather conditions, flood events, and water parameters. These evaluations will help define the trustworthiness of the soft sensor and facilitate comparisons between hard and soft sensors. To validate the framework, we will conduct experiments using data generated by ACWA, a cyber-physical, water-controlled environment testbed we developed. This will provide a controlled environment to test and refine our framework. Subsequently, we will test the framework on real-world environment data from AlexRenew.
This will further establish its applicability and effectiveness in practical settings, providing a robust and reliable tool for sensor data analysis and prediction. Ultimately, this work aims to contribute to the broader field of sensor technology, enhancing our ability to make informed decisions based on reliable and accurate sensor data.
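The abstract does not specify how predictive uncertainty is estimated; one common technique is an ensemble, where the spread of the members' predictions serves as an uncertainty proxy. The sketch below uses toy linear "models" standing in for independently trained networks; all values are illustrative, nothing is taken from the thesis:

```python
import statistics

def ensemble_predict(models, x):
    """Aggregate an ensemble's predictions: the mean is the soft-sensor
    estimate, and the sample standard deviation is a simple proxy for
    predictive uncertainty (low spread = higher trust in the estimate)."""
    preds = [m(x) for m in models]
    return statistics.mean(preds), statistics.stdev(preds)

# Hypothetical ensemble: linear models with perturbed weights, standing in
# for neural networks trained from different initializations.
models = [lambda x, w=w, b=b: w * x + b
          for w, b in [(2.0, 0.1), (2.1, 0.0), (1.9, 0.2), (2.05, 0.05)]]

mean, std = ensemble_predict(models, 3.0)
print(mean, std)  # mean == 6.125; std is small because the members agree
```

In a trustworthiness framework like the one described, this `std` is what would be compared across contextual scenarios (weather, flood events, water parameters) to flag conditions under which the soft sensor should not be trusted.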
19

Creativity & Leadership : The introduction of creative internal communication practices in organizations

Vétillart, Guillaume January 2014 (has links)
This thesis investigates the impacts of introducing creative experiences in a rigid organization. Based on the methodology suggested by Strauss and Corbin (1998), I conducted a qualitative study through eight semi-structured interviews with heterogeneous profiles in an organization where I worked for two years as an apprentice. Specific creative experiences were introduced in order to improve internal communication, facilitate an organizational change transition, and sustain a better social climate. I aimed at understanding the impacts of experiencing such activities at both an individual and an organizational level. My findings reveal positive categories (well-being, corporate affiliation, and organizational change facilitation) and negative categories (individual irritations and a lack of coherence with the corporate identity). I conclude by discussing possible reasons for the unexpected negative results, stating that trustworthy leadership and the corporate culture are essential when introducing such collaborative activities. My thesis may contribute to discussions of creative problem-solving for the sake of communication and of the value added by creative interventions in organizations.
20

EU’s Proposed AI Regulation in the context of Fundamental Rights : Analysing the Swedish approach through the lens of the principles of good administration

Yıldız, Melih Burak January 2021 (has links)
AI has become one of the most powerful drivers of social change, transforming economies, impacting politics and wars, and reshaping how citizens live and interact. Nevertheless, the implementation of AI can have adverse effects on people's lives. This dissertation first examines the relationship between artificial intelligence and public law, mainly in two domains: administrative law and criminal law. It also provides a clear insight into the potential impact of AI applications on fundamental rights in the legal context of the European Union. Four selected fundamental rights are examined: Human Dignity; Data Protection and the Right to Privacy; Equality and Non-discrimination; and Access to Justice. The dissertation further explores the European Commission's new AI regulation, proposed in April 2021. The proposal aims to put forward a risk-based approach for harmonized EU legislation, considering the ethical and human dimensions without unnecessarily restricting the development of AI technologies. The study focuses on examples from Sweden throughout and, lastly, examines the Swedish approach in the context of the principles of good administration.
