  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Defining and Measuring Robustness in Wireless Sensor Communication for Telemedicine

Bhattarai, Sudha 02 September 2008 (has links)
No description available.

Blind Deconvolution Based on Constrained Marginalized Particle Filters

Maryan, Krzysztof S. 09 1900 (has links)
This thesis presents a new approach to blind deconvolution algorithms. The proposed method is a combination of a classical blind deconvolution subspace method and a marginalized particle filter. It is shown that the new method provides better performance than just a marginalized particle filter, and better robustness than the classical subspace method. The properties of the new method make it a candidate for further exploration of its potential application in acoustic blind dereverberation. / Thesis / Master of Applied Science (MASc)
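To make the particle-filter building block of this approach concrete, here is a minimal bootstrap particle filter for a scalar random-walk state. This is an illustrative sketch only: the thesis's method is a constrained *marginalized* particle filter combined with a subspace method, which is substantially more involved than this plain variant.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500, process_std=0.1,
                              obs_std=0.5, rng=None):
    """Estimate a scalar random-walk state from noisy observations.

    A plain bootstrap particle filter, shown only to illustrate the
    particle-filtering ingredient; the marginalized, constrained
    variant in the thesis is considerably more sophisticated.
    """
    rng = np.random.default_rng(rng)
    particles = rng.normal(0.0, 1.0, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for y in observations:
        # Propagate particles through the (assumed) random-walk dynamics.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Reweight by the Gaussian observation likelihood.
        weights *= np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        estimates.append(float(np.sum(weights * particles)))
        # Resample to avoid weight degeneracy.
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return estimates
```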


Sarada Krithivasan (14276069) 20 December 2022 (has links)
<p>Deep Neural Networks (DNNs) have greatly advanced several domains of machine learning including image, speech and natural language processing, leading to their usage in several real-world products and services. This success has been enabled by improvements in hardware platforms such as Graphics Processing Units (GPUs) and specialized accelerators. However, recent trends in state-of-the-art DNNs point to enormous increases in compute requirements during training and inference that far surpass the rate of advancements in deep learning hardware. For example, image-recognition DNNs require tens to hundreds of millions of parameters for reaching competitive accuracies on complex datasets, resulting in billions of operations performed when processing a single input. Furthermore, this growth in model complexity is supplemented by an increase in the training dataset size to achieve improved classification performance, with complex datasets often containing millions of training samples or more. Another challenge hindering the adoption of DNNs is their susceptibility to adversarial attacks. Recent research has demonstrated that DNNs are vulnerable to imperceptible, carefully-crafted input perturbations that can lead to severe consequences in safety-critical applications such as autonomous navigation and healthcare.</p> <p><br></p> <p>This thesis proposes techniques to improve the execution efficiency of DNNs during both inference and training. In the context of DNN training, we first consider the widely-used stochastic gradient descent (SGD) algorithm. We propose a method to use localized learning, which is computationally cheaper and incurs a lower memory footprint, to accelerate an SGD-based training framework with minimal impact on accuracy. This is achieved by employing localized learning in a spatio-temporally selective manner, i.e., in selected network layers and epochs.
Next, we address training dataset complexity by leveraging input mixing operators that combine multiple training inputs into a single composite input. To ensure that training on the mixed inputs is effective, we propose techniques to reduce the interference between the constituent samples in a mixed input. Furthermore, we also design metrics to identify training inputs that are amenable to mixing, and apply mixing only to these inputs. Moving on to inference, we explore DNN ensembles, where the outputs of multiple DNN models are combined to form the prediction for a particular input. While ensembles achieve improved classification performance compared to single (i.e., non-ensemble) models, their compute and storage costs scale with the number of models in the ensemble. To address this, we propose a novel ensemble strategy wherein the ensemble members share the same weights for the convolutional and fully-connected layers, but differ in the additive biases applied after every layer. This allows for ensemble inference to be treated like batch inference, with the associated computational efficiency benefits. We also propose techniques to train these ensembles with limited overheads. Finally, we consider spiking neural networks (SNNs), a class of biologically-inspired neural networks that represent and process information as discrete spikes. Motivated by the observation that the dominant fraction of energy consumption in SNN hardware is within the memory and interconnect network, we propose a novel spike-bundling strategy that reduces energy consumption by communicating temporally proximal spikes as a single event.</p> <p><br></p> <p>As a second direction, the thesis identifies a new challenge in the field of adversarial machine learning. In contrast to prior attacks which degrade accuracy, we propose attacks that degrade the execution efficiency (energy and time) of a DNN on a given hardware platform.
As one specific embodiment of such attacks, we propose sparsity attacks, which perturb the inputs to a DNN so as to result in reduced sparsity within the network, causing its latency and energy to increase on sparsity-optimized platforms. We also extend these attacks to SNNs, which are known to rely on the sparsity of spikes for efficiency, and demonstrate that it is possible to greatly degrade the latency and energy of these networks through adversarial input perturbations.</p> <p><br></p> <p>In summary, this dissertation demonstrates approaches for efficient deep learning for inference and training, while also opening up new classes of attacks that must be addressed.</p> <p><br></p>
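The bias-only ensemble idea above can be illustrated with a minimal single-layer sketch: because all members share one weight matrix and differ only in additive biases, evaluating an M-member ensemble reduces to one batched matrix multiply. This is an assumption-laden toy (one linear layer, no softmax), not the thesis's multi-layer implementation.

```python
import numpy as np

def ensemble_forward(x, weight, biases):
    """Evaluate a bias-only ensemble as one batched pass.

    All members share `weight`; member m applies its own additive bias
    `biases[m]`. Replicating the input across a batch axis turns
    M-member ensemble inference into a single batch of size M.
    Illustrative single-layer sketch only.
    """
    m = biases.shape[0]
    # Replicate the input once per ensemble member: shape (M, D_in).
    batch = np.tile(x, (m, 1))
    # One shared matmul plus member-specific biases: shape (M, D_out).
    logits = batch @ weight + biases
    # Average member predictions (softmax omitted for brevity).
    return logits.mean(axis=0)
```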

Statistical Theory for Adversarial Robustness in Machine Learning

Yue Xing (14142297) 21 November 2022 (has links)
<p>Deep learning plays an important role in various disciplines, such as auto-driving, information technology, manufacturing, medical studies, and financial studies. In the past decade, there have been fruitful studies on deep learning in which training and testing data are assumed to follow the same distribution. Recent studies reveal that these trained models are vulnerable to adversarial attack, i.e., the predicted label may change even if the testing input carries an imperceptible perturbation. However, most existing studies aim to develop computationally efficient adversarial learning algorithms without a thorough understanding of the statistical properties of these algorithms. This dissertation aims to provide theoretical understanding of adversarial training in order to identify potential improvements in this area of research. </p> <p><br></p> <p>The first part of this dissertation focuses on the algorithmic stability of adversarial training. We reveal that the algorithmic stability of the vanilla adversarial training method is sub-optimal, and we study the effectiveness of a simple noise injection method. Noise injection improves stability without deteriorating the consistency of adversarial training.</p> <p><br></p> <p>The second part of this dissertation reveals a phase transition phenomenon in adversarial training. When the attack strength increases, the training trajectory of adversarial training deviates from its natural counterpart. Consequently, various properties of adversarial training differ from those of clean training, and it is essential to adapt the training configuration and the neural network structure to improve adversarial training.</p> <p><br></p> <p>The last part of this dissertation focuses on how artificially generated data improve adversarial training.
It is observed that utilizing synthetic data improves adversarial robustness, even if the data are generated using the original training data, i.e., no extra information is introduced. We develop a theory to explain the reason behind this observation and propose further adaptations to utilize the generated data better.</p>
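The noise-injection idea studied in the first part can be sketched for a toy linear model: inject Gaussian noise into the input before the inner (attack) maximization, then descend on the adversarial loss. Every modeling choice here (linear predictor, squared loss, FGSM-style single attack step) is an illustrative assumption, not the thesis's setting.

```python
import numpy as np

def adv_train_step(w, x, y, eps=0.1, noise_std=0.05, lr=0.01, rng=None):
    """One adversarial-training step for a linear model w.x with
    squared loss, preceded by Gaussian noise injection on the input.

    Toy sketch of the noise-injection idea; the thesis analyzes the
    statistical properties of this kind of scheme in generality.
    """
    rng = np.random.default_rng(rng)
    # Noise injection: start the attack from a randomly perturbed input.
    x_noisy = x + rng.normal(0.0, noise_std, x.shape)
    # FGSM-style inner maximization: step along the sign of the
    # input gradient of the loss, then project onto the eps-ball.
    residual = w @ x_noisy - y
    x_adv = x_noisy + eps * np.sign(residual * w)
    x_adv = np.clip(x_adv, x - eps, x + eps)
    # Outer minimization: one gradient step on the adversarial loss.
    grad_w = (w @ x_adv - y) * x_adv
    return w - lr * grad_w
```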

CAMP-BDI : an approach for multiagent systems robustness through capability-aware agents maintaining plans

White, Alan Gordon January 2017 (has links)
Rational agent behaviour is frequently achieved through the use of plans, particularly within the widely used BDI (Belief-Desire-Intention) model for intelligent agents. As a consequence, preventing or handling failure of planned activity is a vital component in building robust multiagent systems; this is especially true in realistic environments, where unpredictable exogenous change during plan execution may threaten intended activities. Although reactive approaches can be employed to respond to activity failure through replanning or plan-repair, failure may have debilitative effects that act to stymie recovery and, potentially, hinder subsequent activity. A further factor is that BDI agents typically employ deterministic world and plan models, as probabilistic planning methods are typically intractable in realistically complex environments. However, deterministic operator preconditions may fail to represent world states which increase the risk of activity failure. The primary contribution of this thesis is the algorithmic design of the CAMP-BDI (Capability Aware, Maintaining Plans) approach; a modification of the BDI reasoning cycle which provides agents with beliefs and introspective reasoning to anticipate increased risk of failure and pro-actively modify intended plans in response. We define a capability meta-knowledge model, providing information to identify and address threats to activity success using precondition modelling and quantitative quality estimation. This also facilitates semantics-independent communication of capability information for general advertisement, and of dependency information; we define use of the latter, within a structured messaging approach, to extend local agent algorithms towards decentralized, distributed robustness.
Finally, we define a policy-based approach for dynamic modification of maintenance behaviour, allowing response to observations made during runtime, with the potential to improve re-usability of agents in alternative environments. An implementation of CAMP-BDI is compared against an equivalent reactive system through experimentation in multiple perturbation configurations, using a logistics domain. Our empirical evaluation indicates CAMP-BDI has significant benefit if activity failure carries a strong risk of debilitative consequences.
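The proactive-maintenance idea can be reduced to a small sketch: before execution, each planned activity's estimated success quality is checked against a threshold, and low-quality activities are repaired in advance rather than after failure. All names and the flat quality/fallback model here are hypothetical simplifications of the thesis's richer capability meta-knowledge model.

```python
def maintain_plan(plan, capabilities, quality_threshold=0.7):
    """Proactive plan maintenance in the spirit of CAMP-BDI.

    `capabilities` maps each activity name to a pair of an estimated
    success quality in [0, 1] and an optional fallback activity.
    Activities whose estimated quality drops below the threshold are
    replaced by their fallback before execution. Hypothetical toy
    model; the thesis uses precondition modelling and quantitative
    quality estimation over a capability meta-knowledge structure.
    """
    maintained = []
    for activity in plan:
        quality, fallback = capabilities[activity]
        if quality < quality_threshold and fallback is not None:
            maintained.append(fallback)   # proactive repair
        else:
            maintained.append(activity)   # keep the intended activity
    return maintained
```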

On Minmax Robustness for Multiobjective Optimization with Decision or Parameter Uncertainty

Krüger, Corinna 29 March 2018 (has links)
No description available.

Robustesse de la commande prédictive explicite / Robustness of Explicit MPC Solutions

Koduri, Rajesh 28 September 2017 (has links)
The control design techniques for linear or hybrid systems with constraints often lead to off-line state-space partitions with non-overlapping convex polyhedral regions. This corresponds to piecewise affine (PWA) state-feedback control laws associated with a polyhedral partition of the state space. Such control laws can be effectively implemented on hardware for real-time control applications. However, the robustness of the explicit solutions depends on the accuracy of the mathematical model of the dynamical system, and uncertainties in the system model pose serious challenges concerning the stability and implementation of the piecewise affine control laws. Motivated by these challenges, this thesis is concerned with the analysis and re-design of explicit solutions. The first part of the thesis aims to compute robustness margins for a given nominal PWA control law obtained for a linear discrete-time system. Classical robustness margins, i.e., gain margin and phase margin, consider the gain and phase variations of the model for which the stability of the closed loop is preserved. The second part of the thesis considers perturbations in the representation of the vertices of the polyhedral regions. Quantized state-space partitions lose some of the important properties of explicit controllers: non-overlapping, convexity and invariance. Two different sets, called the vertex sensitivity and the sensitivity margin, are defined and determined to characterize admissible perturbations preserving, respectively, the non-overlapping and the invariance property of the controller. The third part analyses the complexity of the explicit solutions in terms of computational time and memory storage. Sequential and parallel evaluations of the PWA functions for the Alternating Direction Method of Multipliers (ADMM) algorithm are compared. In addition, the computational complexity of parallel evaluations of the PWA functions for the Progressive Hedging Algorithm (PHA) on the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU) is compared.
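The on-line evaluation of an explicit PWA control law amounts to point location in the polyhedral partition followed by an affine computation. A minimal sequential sketch follows; the partition here is hand-built for illustration, whereas real explicit MPC partitions come from a multiparametric solver.

```python
import numpy as np

def evaluate_explicit_pwa(x, regions):
    """Evaluate an explicit PWA control law u = K x + c.

    `regions` is a list of tuples (A, b, K, c); region i is the
    polyhedron {x : A x <= b}. Sequential point location: return the
    affine law of the first region containing x. Illustrative only;
    practical implementations use search trees for speed.
    """
    for A, b, K, c in regions:
        if np.all(A @ x <= b + 1e-9):   # tolerance for boundary points
            return K @ x + c
    raise ValueError("x lies outside the explicit partition")
```

For a 1-D double region (x in [-10, 0] with u = -x, and x in [0, 10] with u = -2x), the evaluator picks the correct affine law per region.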

Towards Designing Robust Deep Learning Models for 3D Understanding

Hamdi, Abdullah 04 1900 (has links)
This dissertation presents novel methods for addressing important challenges related to the robustness of Deep Neural Networks (DNNs) for 3D understanding and in 3D setups. Our research focuses on two main areas: adversarial robustness on 3D data and setups, and the robustness of DNNs to realistic 3D scenarios. One paradigm for 3D understanding is to represent 3D as a set of 3D points and learn functions on this set directly. Our first work, AdvPC, addresses the issue of limited transferability and ease of defense against current 3D point cloud adversarial attacks. By using a point cloud Auto-Encoder to generate more transferable attacks, AdvPC surpasses state-of-the-art attacks by a large margin on 3D point cloud attack transferability. Additionally, AdvPC increases the ability to break defenses by up to 38% as compared to other baseline attacks on the ModelNet40 dataset. Another paradigm of 3D understanding is to perform 2D processing of multiple images of the 3D data. The second work, MVTN, addresses the problem of selecting viewpoints for 3D shape recognition using a Multi-View Transformation Network (MVTN) to learn optimal viewpoints. It combines MVTN with multi-view approaches, leading to state-of-the-art results on the standard benchmarks ModelNet40, ShapeNet Core55, and ScanObjectNN. MVTN also improves robustness to realistic scenarios like rotation and occlusion. Our third work analyzes the semantic robustness of 2D Deep Neural Networks, addressing the problem of high sensitivity toward semantic primitives in DNNs by visualizing the global behavior of DNNs as semantic maps and observing the interesting behavior of some DNNs. Additionally, we develop a bottom-up approach to detect robust regions of DNNs for scalable semantic robustness analysis and benchmarking of different DNNs.
The fourth work, SADA, showcases the problem of lack of robustness in DNNs specifically for the safety-critical applications of autonomous navigation, beyond the simple classification setup. We present a general framework (BBGAN) for black-box adversarial attacks on trained agents, which covers semantic perturbations to the environment of the agent performing the task. BBGAN is trained to generate failure cases that consistently fool a trained agent on tasks such as object detection, self-driving, and autonomous UAV racing.

Technique Comparisons for Estimating Fragility Analysis in the Central Mid-West

Walker, Kimberly Ann 01 April 2016 (has links)
Climate change studies and examinations of increasing sea levels and temperatures show that storm intensity and frequency are increasing. As storms intensify and become more frequent, their effects must be monitored to determine the probable damages or impacts to critical infrastructure [2, 35]. These storms suddenly create new demands and requirements upon already stressed critical infrastructure sectors [1]. A combined, interdisciplinary effort must be made to identify these stresses and to mitigate any failures, so that the 21st Century Smart Grid is robust and resilient enough to be secured against all hazards. This project focuses on anticipating loss of above-ground electrical power due to extreme wind speeds. This thesis selected a study region of Indiana, Illinois, Kentucky, and Tennessee to investigate the skill of fragility curve generation for this region during Hurricane Irene in 2011. Three published fragility techniques are compared within the Midwest study region to determine the most skilled technique for the low wind speeds experienced in this region in August 2011. The three techniques studied are: 1) the Powerline Technique [6], a correlation between “as published” state-based construction standards and surface wind speeds sustained for greater than one minute; 2) the ANL Headout Technique [37], a correlation of Hurricane Irene three-second wind gusts with DOE situation reports of outages; and 3) the Walker Technique [1], a correlation of utility-reported outages in the Eastern Seaboard counties with three-second surface gusts.
The deliverable outcomes for this project include: 1) metrics for determining the method best suited to the study region, from the archival data of the Hurricane Irene timeframe; 2) a fragility curve methodology description for each technique; and 3) a mathematical representation for each technique suitable for inclusion in automated forecast algorithms. Overall, this project combines situational awareness modeling to provide distinct fragility techniques that can be used by the public and private sectors to improve emergency management, restoration processes, and critical infrastructure all-hazards preparedness. This work was supported by Western Kentucky University (WKU) and the National Oceanic and Atmospheric Administration (NOAA).
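A fragility curve of the kind compared here maps a wind-speed measure to an outage probability. A common functional form in fragility modelling is the lognormal CDF, sketched below; the median and dispersion values are placeholders, not the fitted parameters of any of the three techniques.

```python
import math

def fragility(wind_speed, median=45.0, beta=0.4):
    """Probability of outage at a given 3-second gust speed (m/s),
    using the lognormal CDF form common in fragility modelling:

        P(outage | v) = Phi(ln(v / median) / beta)

    `median` (speed at 50% outage probability) and `beta` (dispersion)
    are illustrative placeholder values.
    """
    z = math.log(wind_speed / median) / beta
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

By construction the curve passes through probability 0.5 at the median speed and increases monotonically with gust speed.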

Algorithmic Analysis of Complex Semantics for Timed and Hybrid Automata.

Doyen, Laurent 13 June 2006 (has links)
In the field of formal verification of real-time systems, major developments have been recorded in the last fifteen years, concerning logics, automata, process algebras, programming languages, etc. From the beginning, one formalism has played an important role: timed automata and their natural extension, hybrid automata. Those models allow the definition of real-time constraints using real-valued clocks, or more generally analog variables whose evolution is governed by differential equations. They generalize finite automata in that their semantics defines timed words, where each symbol is associated with an occurrence timestamp. The decidability and algorithmic analysis of timed and hybrid automata have been intensively studied in the literature. The central result for timed automata is that they are decidable. This is not the case for hybrid automata, but semi-algorithmic methods are known when the dynamics is relatively simple, namely a linear relation between the derivatives of the variables. With the increasing complexity of today's systems, those models are however limited in their classical semantics when modelling realistic implementations or dynamical systems. In this thesis, we study the algorithmics of complex semantics for timed and hybrid automata. On the one hand, we propose implementable semantics for timed automata and study their computational properties: by contrast with other works, we identify a semantics that is implementable and has decidable properties. On the other hand, we give new algorithmic approaches to the analysis of hybrid automata whose dynamics is given by an affine function of their variables.
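The notion of a timed word with clock guards can be illustrated with a toy acceptor for a one-clock timed automaton. This is a deliberately simplified semantics (one clock, closed interval guards, deterministic transitions), assumed for illustration; real timed automata permit several clocks and richer guard and reset structure.

```python
def accepts(timed_word, transitions, start="q0", accepting=frozenset({"qf"})):
    """Check a timed word against a toy one-clock timed automaton.

    `timed_word` is a list of (symbol, timestamp) pairs with
    non-decreasing timestamps. `transitions` maps (state, symbol) to
    (guard_lo, guard_hi, reset, next_state); the guard requires
    guard_lo <= clock <= guard_hi at firing time, and `reset` restarts
    the clock. Hypothetical simplified semantics for illustration.
    """
    state, clock_start = start, 0.0
    for symbol, t in timed_word:
        key = (state, symbol)
        if key not in transitions:
            return False               # no transition on this symbol
        lo, hi, reset, nxt = transitions[key]
        clock = t - clock_start        # current clock valuation
        if not (lo <= clock <= hi):
            return False               # guard violated
        if reset:
            clock_start = t
        state = nxt
    return state in accepting
```

For instance, with a guard requiring `b` to occur between 1 and 3 time units after `a` (which resets the clock), the word (a,1.0)(b,2.5) is accepted while (a,1.0)(b,1.2) is rejected.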
