41

Defining and Measuring Robustness in Wireless Sensor Communication for Telemedicine

Bhattarai, Sudha 02 September 2008 (has links)
No description available.
42

HARDWARE-AWARE EFFICIENT AND ROBUST DEEP LEARNING

Sarada Krithivasan (14276069) 20 December 2022 (has links)
Deep Neural Networks (DNNs) have greatly advanced several domains of machine learning, including image, speech, and natural language processing, leading to their use in many real-world products and services. This success has been enabled by improvements in hardware platforms such as Graphics Processing Units (GPUs) and specialized accelerators. However, recent trends in state-of-the-art DNNs point to enormous increases in compute requirements during training and inference that far surpass the rate of advancement in deep learning hardware. For example, image-recognition DNNs require tens to hundreds of millions of parameters to reach competitive accuracy on complex datasets, resulting in billions of operations performed when processing a single input. Furthermore, this growth in model complexity is compounded by growth in training dataset size, with complex datasets often containing millions of training samples or more. Another challenge hindering the adoption of DNNs is their susceptibility to adversarial attacks: recent research has demonstrated that DNNs are vulnerable to imperceptible, carefully crafted input perturbations that can lead to severe consequences in safety-critical applications such as autonomous navigation and healthcare.

This thesis proposes techniques to improve the execution efficiency of DNNs during both inference and training. In the context of DNN training, we first consider the widely used stochastic gradient descent (SGD) algorithm. We propose a method that uses localized learning, which is computationally cheaper and incurs a lower memory footprint, to accelerate an SGD-based training framework with minimal impact on accuracy. This is achieved by employing localized learning in a spatio-temporally selective manner, i.e., in selected network layers and epochs. Next, we address training dataset complexity by leveraging input-mixing operators that combine multiple training inputs into a single composite input. To ensure that training on the mixed inputs is effective, we propose techniques to reduce the interference between the constituent samples in a mixed input. We also design metrics to identify training inputs that are amenable to mixing, and apply mixing only to those inputs. Moving on to inference, we explore DNN ensembles, where the outputs of multiple DNN models are combined to form the prediction for a particular input. While ensembles achieve improved classification performance compared to single (i.e., non-ensemble) models, their compute and storage costs scale with the number of models in the ensemble. To that end, we propose a novel ensemble strategy wherein the ensemble members share the same weights for the convolutional and fully-connected layers, but differ in the additive biases applied after every layer. This allows ensemble inference to be treated like batch inference, with the associated computational-efficiency benefits. We also propose techniques to train these ensembles with limited overheads. Finally, we consider spiking neural networks (SNNs), a class of biologically inspired neural networks that represent and process information as discrete spikes. Motivated by the observation that the dominant fraction of energy consumption in SNN hardware lies in the memory and interconnect network, we propose a novel spike-bundling strategy that reduces energy consumption by communicating temporally proximal spikes as a single event.

As a second direction, the thesis identifies a new challenge in the field of adversarial machine learning. In contrast to prior attacks, which degrade accuracy, we propose attacks that degrade the execution efficiency (energy and time) of a DNN on a given hardware platform. As one specific embodiment of such attacks, we propose sparsity attacks, which perturb the inputs to a DNN so as to reduce sparsity within the network, causing its latency and energy to increase on sparsity-optimized platforms. We also extend these attacks to SNNs, which are known to rely on the sparsity of spikes for efficiency, and demonstrate that adversarial input perturbations can greatly degrade the latency and energy of these networks.

In summary, this dissertation demonstrates approaches for efficient deep learning inference and training, while also opening up new classes of attacks that must be addressed.
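The bias-only ensemble described above lends itself to a compact illustration. The following sketch is a hypothetical PyTorch rendering, not the thesis implementation: members share one weight matrix per layer but carry distinct additive biases, so the whole ensemble can be evaluated as a single batched forward pass.

```python
# Illustrative bias-only ensemble (hypothetical; not the thesis code).
import torch
import torch.nn as nn

class BiasEnsembleLinear(nn.Module):
    """Linear layer with one shared weight matrix and per-member biases."""
    def __init__(self, in_dim, out_dim, n_members):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)  # shared
        self.bias = nn.Parameter(torch.zeros(n_members, out_dim))        # per member

    def forward(self, x):
        # x stacks each member's batch: rows [0, b) belong to member 0, etc.
        batch = x.shape[0] // self.bias.shape[0]
        return x @ self.weight.t() + self.bias.repeat_interleave(batch, dim=0)

n_members, batch, in_dim, hidden, n_classes = 4, 8, 32, 64, 10
net = nn.Sequential(
    BiasEnsembleLinear(in_dim, hidden, n_members), nn.ReLU(),
    BiasEnsembleLinear(hidden, n_classes, n_members),
)

x = torch.randn(batch, in_dim)
logits = net(x.repeat(n_members, 1)).view(n_members, batch, n_classes)
prediction = logits.mean(dim=0).argmax(dim=1)  # average the members' outputs
```

Because only the biases differ, the storage overhead over a single model is one bias vector per layer per member, and the shared-weight computation is amortized exactly as in batch inference.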
43

Statistical Theory for Adversarial Robustness in Machine Learning

Yue Xing (14142297) 21 November 2022 (has links)
Deep learning plays an important role in various disciplines, such as autonomous driving, information technology, manufacturing, medical studies, and financial studies. In the past decade, there have been fruitful studies on deep learning in which training and testing data are assumed to follow the same distribution. Recent studies reveal that these models are vulnerable to adversarial attack, i.e., the predicted label may change even if the testing input carries a perturbation imperceptible to humans. However, most existing studies aim to develop computationally efficient adversarial learning algorithms without a thorough understanding of the statistical properties of these algorithms. This dissertation aims to provide a theoretical understanding of adversarial training and to identify potential improvements in this area of research.

The first part of this dissertation focuses on the algorithmic stability of adversarial training. We reveal that the algorithmic stability of the vanilla adversarial training method is sub-optimal, and we study the effectiveness of a simple noise-injection method. While noise injection improves stability, it does not deteriorate the consistency of adversarial training.

The second part of this dissertation reveals a phase-transition phenomenon in adversarial training. As the attack strength increases, the training trajectory of adversarial training deviates from its natural counterpart, and consequently various properties of adversarial training differ from those of clean training. Adapting the training configuration and the neural network structure is essential to improving adversarial training.

The last part of this dissertation focuses on how artificially generated data improves adversarial training. It is observed that utilizing synthetic data improves adversarial robustness even when the data are generated from the original training data, i.e., no extra information is introduced. We develop a theory to explain this observation and propose further adaptations to utilize the generated data better.
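Noise injection into adversarial training is easy to sketch. The snippet below is an illustrative PyTorch rendering that assumes PGD-style adversarial example generation with Gaussian noise added to the inputs before the attack; the exact injection scheme analyzed in the dissertation may differ.

```python
# Minimal PGD adversarial-training step with input noise injection
# (illustrative sketch; the thesis's precise injection scheme may differ).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=5):
    """Generate an L-infinity PGD perturbation of x."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)              # stay inside the eps-ball
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y, noise_std=0.01):
    # Noise injection: perturb the clean inputs with Gaussian noise first,
    # then craft the adversarial example from the noisy input.
    x_noisy = x + noise_std * torch.randn_like(x)
    delta = pgd_attack(model, x_noisy, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_noisy + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```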
44

Evaluation under Real-world Distribution Shifts

Alhamoud, Kumail 07 1900 (has links)
Recent advancements in empirical and certified robustness have shown promising results in developing reliable and deployable Deep Neural Networks (DNNs). However, most evaluations of DNN robustness have focused on testing models on images from the same distribution they were trained on. In real-world scenarios, DNNs may encounter dynamic environments with significant distribution shifts. This thesis aims to investigate the interplay between empirical and certified adversarial robustness and domain generalization. We take the first step by training robust models on multiple domains and evaluating their accuracy and robustness on an unseen domain. Our findings reveal that: (1) both empirical and certified robustness exhibit generalization to unseen domains, and (2) the level of generalizability does not correlate strongly with the visual similarity of inputs, as measured by the Fréchet Inception Distance (FID) between source and target domains. Furthermore, we extend our study to a real-world medical application, where we demonstrate that adversarial augmentation significantly enhances robustness generalization while minimally affecting accuracy on clean data. This research sheds light on the importance of evaluating DNNs under real-world distribution shifts and highlights the potential of adversarial augmentation in improving robustness in practical applications.
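The Fréchet Inception Distance used above compares two domains through the mean and covariance of their feature embeddings. The sketch below is the standard formulation, not the thesis's own evaluation code, applied to arbitrary feature arrays.

```python
# Sketch of the Frechet Inception Distance (FID) between two domains,
# computed from feature embeddings (e.g., Inception activations).
# Standard formulation; not the thesis's evaluation code.
import numpy as np
from scipy.linalg import sqrtm

def fid(features_src: np.ndarray, features_tgt: np.ndarray) -> float:
    """features_*: (n_samples, feature_dim) arrays of network activations."""
    mu1, mu2 = features_src.mean(axis=0), features_tgt.mean(axis=0)
    c1 = np.cov(features_src, rowvar=False)
    c2 = np.cov(features_tgt, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))

# Example with random stand-in features:
rng = np.random.default_rng(0)
src = rng.normal(size=(256, 64))
tgt = rng.normal(loc=0.5, size=(256, 64))
print(f"FID = {fid(src, tgt):.3f}")
```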
45

Blind Deconvolution Based on Constrained Marginalized Particle Filters

Maryan, Krzysztof S. 09 1900 (has links)
This thesis presents a new approach to blind deconvolution algorithms. The proposed method is a combination of a classical blind deconvolution subspace method and a marginalized particle filter. It is shown that the new method provides better performance than just a marginalized particle filter, and better robustness than the classical subspace method. The properties of the new method make it a candidate for further exploration of its potential application in acoustic blind dereverberation. / Thesis / Master of Applied Science (MASc)
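For background, a generic bootstrap particle filter for a scalar linear-Gaussian model is sketched below. The thesis combines a constrained, marginalized variant of this idea with a subspace method; this illustration does not attempt to reproduce that combination, and all model parameters are made up.

```python
# Generic bootstrap particle filter for a scalar state-space model
# (background illustration only; the thesis develops a constrained,
# marginalized variant combined with a subspace method).
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(ys, n_particles=500, q=0.1, r=0.5):
    """Track x_t in x_t = 0.9 x_{t-1} + q*w_t, y_t = x_t + r*v_t."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in ys:
        particles = 0.9 * particles + q * rng.normal(size=n_particles)  # propagate
        weights = np.exp(-0.5 * ((y - particles) / r) ** 2)             # likelihood
        weights /= weights.sum()
        estimates.append(float(weights @ particles))                    # posterior mean
        idx = rng.choice(n_particles, size=n_particles, p=weights)      # resample
        particles = particles[idx]
    return estimates

# Simulate and filter a short sequence:
xs = [0.0]
for _ in range(50):
    xs.append(0.9 * xs[-1] + 0.1 * rng.normal())
ys = [x + 0.5 * rng.normal() for x in xs[1:]]
print(particle_filter(ys)[-5:])
```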
46

Essays in asset pricing with jump risks

Shang, Dapeng 22 May 2024 (has links)
This dissertation consists of two essays on asset pricing with jump risks. The first essay explores the effect of disaster risk on the beliefs and portfolio choices of ambiguity-averse agents. With the introduction of Cressie-Read discrepancies, a time-varying pessimism state variable arises endogenously, generating time-varying disaster risk. In the event of a disaster, agents heighten their pessimism, anticipating subsequent disasters to arrive sooner. Within this framework, we derive optimal consumption and portfolio choices that are robust to model misspecification. Additionally, our measure of pessimism helps explain the stylized facts derived from Vanguard's retail investor survey data, as reported in Giglio et al. (2021). In the second essay, I construct a novel measure to assess the impact of macro announcements on investors' risk expectations using S&P 500 index and Treasury futures options. This measure corrects the systematic downward jumps in the option-implied variance measure and isolates innovations in investors' risk expectations after macro announcements. Applied to key economic releases, including FOMC meetings, GDP, PPI, and employment data announcements, this measure reveals that macro announcements significantly increase investors' risk expectations compared to pre-announcement levels. Furthermore, I show that investor sentiment declines significantly following macro announcements with heightened risk expectations, and that tail risk correlates positively with risk expectations.
47

CAMP-BDI: an approach for multiagent systems robustness through capability-aware agents maintaining plans

White, Alan Gordon January 2017 (has links)
Rational agent behaviour is frequently achieved through the use of plans, particularly within the widely used BDI (Belief-Desire-Intention) model for intelligent agents. As a consequence, preventing or handling failure of planned activity is a vital component in building robust multiagent systems; this is especially true in realistic environments, where unpredictable exogenous change during plan execution may threaten intended activities. Although reactive approaches can be employed to respond to activity failure through replanning or plan repair, failure may have debilitative effects that act to stymie recovery and, potentially, hinder subsequent activity. A further factor is that BDI agents typically employ deterministic world and plan models, as probabilistic planning methods are typically intractable in realistically complex environments; deterministic operator preconditions may then fail to represent world states that increase the risk of activity failure. The primary contribution of this thesis is the algorithmic design of the CAMP-BDI (Capability Aware, Maintaining Plans) approach: a modification of the BDI reasoning cycle that provides agents with beliefs and introspective reasoning to anticipate increased risk of failure and proactively modify intended plans in response. We define a capability meta-knowledge model, providing the information needed to identify and address threats to activity success through precondition modelling and quantitative quality estimation. This also facilitates semantics-independent communication of capability information, both for general advertisement and for sharing dependency information; we use the latter, within a structured messaging approach, to extend the local agent algorithms towards decentralized, distributed robustness. Finally, we define a policy-based approach for dynamic modification of maintenance behaviour, allowing response to observations made during runtime, with the potential to improve re-usability of agents in alternate environments. An implementation of CAMP-BDI is compared against an equivalent reactive system through experimentation in multiple perturbation configurations, using a logistics domain. Our empirical evaluation indicates that CAMP-BDI provides significant benefit when activity failure carries a strong risk of debilitative consequences.
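The capability-aware maintenance idea can be caricatured in a few lines: scan the intended plan, compare each activity's preconditions and estimated quality against current beliefs and a policy threshold, and trigger repair where risk is detected. All names, structures, and the threshold below are illustrative, not taken from the thesis.

```python
# Schematic of capability-aware plan maintenance (illustrative only;
# names, structures and the 0.6 threshold are not taken from the thesis).
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    preconditions: set          # world-state facts required for success
    quality: float              # estimated success quality in [0, 1]

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    plan: list = field(default_factory=list)   # sequence of Capability
    threshold: float = 0.6                     # maintenance policy parameter

    def maintain(self):
        """Proactively scan the intended plan for activities at risk of failure."""
        for step, cap in enumerate(self.plan):
            missing = cap.preconditions - self.beliefs
            if missing or cap.quality < self.threshold:
                self.repair(step, cap, missing)

    def repair(self, step, cap, missing):
        # Placeholder for plan modification: re-plan, substitute a capability,
        # or delegate to another agent advertising a suitable capability.
        print(f"step {step}: '{cap.name}' at risk "
              f"(missing={missing or 'none'}, quality={cap.quality:.2f})")

agent = Agent(beliefs={"truck_fueled"},
              plan=[Capability("load_cargo", {"truck_fueled", "at_depot"}, 0.9),
                    Capability("drive_route", {"truck_fueled"}, 0.4)])
agent.maintain()
```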
48

On Minmax Robustness for Multiobjective Optimization with Decision or Parameter Uncertainty

Krüger, Corinna 29 March 2018 (has links)
No description available.
49

Robustesse de la commande prédictive explicite / Robustness of Explicit MPC Solutions

Koduri, Rajesh 28 September 2017 (has links)
The control design techniques for linear or hybrid systems with constraints often lead to off-line state-space partitions with non-overlapping convex polyhedral regions. This corresponds to piecewise affine (PWA) state-feedback control laws associated with a polyhedral partition of the state space. Such control laws can be effectively implemented on hardware for real-time control applications. However, the robustness of the explicit solutions depends on the accuracy of the mathematical model of the dynamical system, and uncertainties in the system model pose serious challenges concerning the stability and implementation of piecewise affine control laws. Motivated by these challenges, this thesis is chiefly concerned with the analysis and re-design of explicit solutions. The first part of this thesis aims to compute robustness margins for a given nominal PWA control law obtained for a linear discrete-time system. Classical robustness margins, i.e., the gain margin and phase margin, consider the gain and phase variations of the model for which the stability of the closed loop is preserved. The second part of the thesis considers perturbations in the representation of the vertices of the polyhedral regions. Quantized state-space partitions lose some of the important properties of explicit controllers: non-overlapping, convexity, and invariance. Two different sets, called the vertex sensitivity and the sensitivity margin, are defined and determined to characterize admissible perturbations preserving, respectively, the non-overlapping and invariance properties of the controller. The third part analyses the complexity of the explicit solutions in terms of computation time and memory storage. Sequential and parallel evaluations of the PWA functions for the Alternating Direction Method of Multipliers (ADMM) algorithm are compared. In addition, the computational complexity of parallel evaluations of the PWA functions for the Progressive Hedging Algorithm (PHA) is compared on the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU).
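The online step of an explicit MPC law is a point-location problem: find the polyhedral region containing the current state and apply that region's affine feedback u = K_i x + k_i. A minimal sequential evaluation, with made-up region data, might look like this:

```python
# Sequential evaluation of a piecewise-affine (PWA) explicit MPC law:
# locate the polyhedral region containing x, then apply u = K_i @ x + k_i.
# The region data below is made up for illustration.
import numpy as np

# Each region i is {x : A_i x <= b_i} with an affine control law (K_i, k_i).
regions = [
    {"A": np.array([[1.0, 0.0], [0.0, 1.0]]),   "b": np.array([0.0, 0.0]),
     "K": np.array([[-0.5, -0.2]]),             "k": np.array([0.1])},
    {"A": np.array([[-1.0, 0.0], [0.0, -1.0]]), "b": np.array([0.0, 0.0]),
     "K": np.array([[-0.8, -0.1]]),             "k": np.array([0.0])},
]

def pwa_control(x, regions, tol=1e-9):
    for i, r in enumerate(regions):
        if np.all(r["A"] @ x <= r["b"] + tol):   # point-location test
            return r["K"] @ x + r["k"], i
    raise ValueError("x lies outside the explicit partition")

u, region = pwa_control(np.array([-0.3, -0.7]), regions)
print(f"region {region}, u = {u}")
```

A parallel variant would evaluate all region membership tests simultaneously (e.g., as one matrix inequality check), which is what makes GPU evaluation attractive when the partition has many regions.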
50

Towards Designing Robust Deep Learning Models for 3D Understanding

Hamdi, Abdullah 04 1900 (has links)
This dissertation presents novel methods for addressing important challenges related to the robustness of Deep Neural Networks (DNNs) for 3D understanding and in 3D setups. Our research focuses on two main areas: adversarial robustness on 3D data and setups, and the robustness of DNNs to realistic 3D scenarios. One paradigm for 3D understanding is to represent 3D shapes as sets of 3D points and learn functions on these sets directly. Our first work, AdvPC, addresses the limited transferability of current 3D point-cloud adversarial attacks and the ease of defending against them. By using a point-cloud auto-encoder to generate more transferable attacks, AdvPC surpasses state-of-the-art attacks by a large margin on 3D point-cloud attack transferability. Additionally, AdvPC increases the ability to break defenses by up to 38% compared to other baseline attacks on the ModelNet40 dataset. Another paradigm of 3D understanding is to perform 2D processing of multiple images of the 3D data. The second work, MVTN, addresses the problem of selecting viewpoints for 3D shape recognition, using a Multi-View Transformation Network (MVTN) to learn optimal viewpoints. Combining MVTN with multi-view approaches leads to state-of-the-art results on the standard benchmarks ModelNet40, ShapeNet Core55, and ScanObjectNN; MVTN also improves robustness to realistic scenarios such as rotation and occlusion. Our third work analyzes the semantic robustness of 2D Deep Neural Networks, addressing their high sensitivity to semantic primitives by visualizing a DNN's global behavior as semantic maps and observing the interesting behavior of some DNNs. Additionally, we develop a bottom-up approach to detect robust regions of DNNs, enabling scalable semantic-robustness analysis and benchmarking of different DNNs. The fourth work, SADA, exposes the lack of robustness of DNNs in the safety-critical application of autonomous navigation, beyond the simple classification setup. We present a general framework (BBGAN) for black-box adversarial attacks on trained agents, which covers semantic perturbations to the environment of the agent performing the task. BBGAN is trained to generate failure cases that consistently fool a trained agent on tasks such as object detection, self-driving, and autonomous UAV racing.
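A one-dimensional version of the semantic-map idea from the third work is easy to sketch: sweep a semantic parameter (here, an in-plane rotation angle) and record the classifier's confidence in the true class. This is an illustrative reduction, not the thesis's tooling, which builds richer multi-dimensional maps; dips in the resulting profile expose semantic sensitivity.

```python
# Sketch of a 1-D "semantic map": sweep a semantic parameter (rotation angle)
# and record the classifier's confidence in the true class. Illustrative only.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def semantic_map(model, image, label, angles):
    """image: (C, H, W) tensor; returns the true-class probability per angle."""
    probs = []
    model.eval()
    with torch.no_grad():
        for angle in angles:
            rotated = TF.rotate(image.unsqueeze(0), float(angle))
            p = F.softmax(model(rotated), dim=1)[0, label].item()
            probs.append(p)
    return probs

# Demo with a tiny stand-in classifier on a random image:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(3, 32, 32)
profile = semantic_map(model, image, label=3, angles=range(0, 360, 30))
print([f"{p:.3f}" for p in profile])
```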
