291

Robustness Bounds For Uncertain Sampled Data Systems With Presence of Time Delays

Mulay, Siddharth Pradeep 09 August 2013 (has links)
No description available.
292

Coalition Robustness of Multiagent Systems

Tran, Nghia Cong 26 May 2009 (has links) (PDF)
Many multiagent systems are environments where distinct decision-makers compete, explicitly or implicitly, for scarce resources. In these competitive environments, it can be advantageous for agents to cooperate and form teams, or coalitions; this cooperation gives agents a strategic advantage in competing for scarce resources. Multiagent systems can thus be characterized in terms of competition and cooperation. To evaluate the effectiveness of cooperation for particular coalitions, we derive measures based on comparing these different coalitions at their respective equilibria. However, relying on equilibrium results leads to the interesting question of stability. Control theory and cooperative game theory have limitations that make it hard to apply them to study our questions about stability and to evaluate cooperation in competitive environments. In this thesis we lay a foundation towards a theory of coalition stability and robustness for multiagent systems. We then apply this condition to form a methodology to evaluate cooperation for market structure analysis.
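Editorial aside: the abstract's central idea, evaluating cooperation by comparing coalitions at their respective equilibria, can be illustrated with a toy symmetric Cournot market. The sketch below is not the thesis's model; the demand and cost parameters and the profit-splitting rule are illustrative assumptions.

```python
def cournot_profit(n_players, a=100.0, b=1.0, c=10.0):
    """Per-player equilibrium profit in a symmetric Cournot market
    with inverse demand p = a - b*Q and constant unit cost c."""
    q = (a - c) / (b * (n_players + 1))   # per-player Nash quantity
    return b * q ** 2                     # per-player Nash profit

n_firms = 5
solo = cournot_profit(n_firms)               # all firms independent
merged = cournot_profit(n_firms - 1) / 2.0   # two firms act as one player, split the profit

print(f"stand-alone profit per firm : {solo:.2f}")
print(f"profit per coalition member : {merged:.2f}")
print("coalition advantageous?      :", merged > solo)
```

With five firms, each merging firm earns less at the post-merger equilibrium than it did standing alone, which is exactly why coalitions must be compared at their respective equilibria rather than judged by intuition.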
293

TOWARD ROBUST AND INTERPRETABLE GRAPH AND IMAGE REPRESENTATION LEARNING

Juan Shu (14816524) 27 April 2023 (has links)
Although deep learning models continue to gain momentum, their robustness and interpretability have always been a big concern because of the complexity of such models. In this dissertation, we studied several topics on the robustness and interpretability of convolutional neural networks (CNNs) and graph neural networks (GNNs). We first identified the structural problem of deep convolutional neural networks that leads to adversarial examples, and defined DNN uncertainty regions. We also argued that the generalization error, the large-sample theoretical guarantee established for DNNs, cannot adequately capture the phenomenon of adversarial examples. Secondly, we studied dropout in GNNs, an effective regularization approach to prevent overfitting. In contrast to CNNs, GNNs usually have a shallow structure, because a deep GNN normally sees performance degradation. We studied different dropout schemes and established a connection between dropout and over-smoothing in GNNs. We therefore developed layer-wise compensation dropout, which allows a GNN to go deeper without suffering performance degradation. We also developed a heteroscedastic dropout which effectively deals with a large number of missing node features due to heavy experimental noise or privacy issues. Lastly, we studied the interpretability of graph neural networks. We developed a self-interpretable GNN structure that denoises useless edges or features, leading to a more efficient message-passing process. The GNN prediction and explanation accuracy were boosted compared with baseline models.
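A minimal sketch of what a compensated, depth-dependent dropout scheme for GNNs might look like, in the spirit of the layer-wise compensation dropout described above. The per-layer dropout schedule, the toy graph, and the untrained weights are assumptions for illustration; the dissertation's actual scheme is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A_hat, H, W, drop_p):
    """One GCN-style propagation step with compensated (inverted) dropout
    on node features: surviving entries are rescaled by 1/(1-p)."""
    mask = rng.random(H.shape) >= drop_p
    H_drop = np.where(mask, H, 0.0) / (1.0 - drop_p)  # compensation keeps E[H] unchanged
    return np.maximum(A_hat @ H_drop @ W, 0.0)        # ReLU

# toy 4-node graph: symmetrically normalized adjacency with self-loops
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float) + np.eye(4)
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))

H = rng.normal(size=(4, 8))
n_layers = 6
for layer in range(n_layers):
    # hypothetical layer-wise schedule: shallower layers drop more, deeper layers less
    p = 0.5 * (1.0 - layer / n_layers)
    W = rng.normal(scale=0.3, size=(8, 8))
    H = gcn_layer(A_hat, H, W, p)

print("feature variance after 6 layers:", H.var())
```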
294

Bridging the gap between human and computer vision in machine learning, adversarial and manifold learning for high-dimensional data

Jungeum Kim (12957389) 01 July 2022 (has links)
In this dissertation, we study three important problems in modern deep learning: adversarial robustness, visualization, and partially monotonic function modeling. In the first part, we study the trade-off between robustness and standard accuracy in deep neural network (DNN) classifiers. We introduce sensible adversarial learning and demonstrate the synergistic effect between pursuits of standard natural accuracy and robustness. Specifically, we define a sensible adversary which is useful for learning a robust model while keeping high natural accuracy. We theoretically establish that the Bayes classifier is the most robust multi-class classifier with the 0-1 loss under sensible adversarial learning. We propose a novel and efficient algorithm that trains a robust model using implicit loss truncation. Our experiments demonstrate that our method is effective in promoting robustness against various attacks and keeping high natural accuracy.

In the second part, we study nonlinear dimensional reduction with the manifold assumption, often called manifold learning. Despite the recent advances in manifold learning, current state-of-the-art techniques focus on preserving only local or global structure information of the data. Moreover, they are transductive; the dimensional reduction results cannot be generalized to unseen data. We propose iGLoMAP, a novel inductive manifold learning method for dimensional reduction and high-dimensional data visualization. iGLoMAP preserves both local and global structure information in the same algorithm by preserving geodesic distance between data points. We establish the consistency property of our geodesic distance estimators. iGLoMAP can provide the lower-dimensional embedding for an unseen, novel point without any additional optimization. We successfully apply iGLoMAP to the simulated and real-data settings with competitive experiments against state-of-the-art methods.

In the third part, we study partially monotonic DNNs. We model such a function by using the fundamental theorem for line integrals, where the gradient is parametrized by DNNs. For the validity of the model formulation, we develop a symmetric penalty for gradient modeling. Unlike existing methods, our method allows partially monotonic modeling for general DNN architectures and monotonic constraints on multiple variables. We empirically show the necessity of the symmetric penalty on a simulated dataset.
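The third part's construction, a function defined through the fundamental theorem for line integrals with a DNN-parametrized gradient, can be sketched numerically. Below, a tiny untrained network plays the role of the gradient field, a softplus forces the partial derivative to be nonnegative along the monotone coordinate, and a midpoint rule approximates the integral. The architecture and quadrature are illustrative assumptions; as the abstract notes, a symmetric penalty is needed in practice so the field is a valid gradient.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)
MONOTONE = np.array([True, False])  # require monotonicity in x[0] only

def softplus(z):
    return np.logaddexp(0.0, z)

def grad_net(x):
    """DNN-parametrized gradient field g(x); softplus keeps the component
    along each monotone coordinate nonnegative."""
    h = np.tanh(x @ W1 + b1)
    g = h @ W2 + b2
    return np.where(MONOTONE, softplus(g), g)

def f(x, f0=0.0, n_steps=64):
    """f(x) = f(0) + integral over t in [0,1] of g(t*x) . x dt,
    evaluated with a midpoint rule along the line from 0 to x."""
    t = (np.arange(n_steps) + 0.5) / n_steps
    vals = np.array([grad_net(ti * x) @ x for ti in t])
    return f0 + vals.mean()  # mean = sum * (1/n_steps)

x = np.array([0.3, -1.2])
# monotonicity in x[0] is guaranteed once g is a valid (curl-free) gradient
# field, which the thesis enforces via its symmetric penalty during training
print("f(x)        =", f(x))
print("f(x + e0/2) =", f(x + np.array([0.5, 0.0])))
```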
295

Design of Novel Devices and Circuits for Electrostatic Discharge Protection Applications in Advanced Semiconductor Technologies

Wang, Zhixin 01 January 2015 (has links)
Electrostatic discharge (ESD), a subset of electrical overstress (EOS), has been reported to be responsible for more than 35% of failures in integrated circuits (ICs). During manufacturing in particular, a silicon wafer becomes a functional IC only after numerous physical, chemical, and mechanical processes, each of which exposes the sensitive and fragile ICs to an ESD environment. In normal end-user applications, ESD from human and machine handling, surge and spike signals in the power supply, and incorrect supply signals can cause severe damage to the ICs and even whole systems. ESD protections are generally evaluated after wafer, and even system, fabrication, which increases development time and cost if the protections fail to meet customer requirements. It is therefore important to design and customize robust, area-efficient ESD protections for ICs at an early development stage. However, even as technologies scale down, ESD protection clamps have maintained comparable area consumption in recent years, because they provide the discharging path for ESD energy, which rarely scales down. The diode is the simplest and most effective device for ESD protection in ICs, but its use is significantly limited by its low turn-on voltage. MOS devices can be triggered by a dynamic RC trigger circuit for I/Os operating at low voltage, while devices triggered by a static network, e.g., a Zener-resistor circuit or a grounded-gate configuration, provide a high trigger voltage for high-voltage applications. However, their relatively low current-discharging capability makes MOS devices a secondary choice. The silicon-controlled rectifier (SCR) has become popular due to its high robustness and area efficiency compared to diodes and MOS devices. In this dissertation, a comprehensive design methodology for SCRs, based on simulation and measurement, is presented for several advanced commercial technologies. Furthermore, an ESD clamp is designed and verified for the first time for the emerging GaN technology. For the SCR, whatever modification is to be made, the first concern when drawing the layout is to determine the layout geometry, finger width, and finger number. This problem has been studied in detail for diodes and MOS devices, so the same methods were usually applied to the SCR. The research in this dissertation takes a closer look at the effect of the metal layout on the SCR, finding that optimized robustness with minimized side effects can be obtained by using a specific layout geometry. Another concern with the SCR is its relatively low turn-on speed when the I/Os under protection are stressed by ESD pulses with very fast rise times, e.g., CDM and IEC 61000-4-2 pulses. On such occasions a large overshoot voltage is generated that damages internal circuit components such as the gate oxides of MOS devices. The key determinant of the SCR's turn-on speed is physically investigated, followed by a novel SCR design that directly connects the anode gate and cathode gate to form an internal trigger (DCSCR), with improved performance verified experimentally in this dissertation. The overshoot voltage and trigger voltage of the DCSCR are significantly reduced, and in return better protection of internal circuit components is offered without sacrificing either area or robustness.
Even though two single-direction SCRs can be constructed in reverse parallel to form bidirectional protection for pins, a stand-alone bidirectional SCR (BSCR) is always desirable for the sake of smaller area. The inherently high trigger voltage of the BSCR, which suits only high-voltage technologies, is overcome by embedding a PMOS transistor as a trigger element, making the device highly suitable for low-voltage ESD protection applications. Moreover, this modification simultaneously brings benefits including high robustness and low overshoot voltage. High-voltage pins, however, present a different story for ESD design. High operating voltages require a high trigger voltage and a high holding voltage, so as to reduce the risks of false triggering and latch-up. For some capacitive pins, the displacement current induced by a large snapback can severely damage internal circuits. A novel SCR design is proposed to minimize the snapback, with adjustable trigger and holding voltages. Thanks to an additional PIN diode, high robustness and stable thermal-leakage performance similar to the SCR's are maintained. For academic ESD design, it is always difficult to obtain the complete process deck for TCAD simulation, because that information is highly confidential to the companies. Another challenge in using TCAD is the difficulty of maintaining the accuracy of the physics models and predicting the performance of other structures. In this dissertation a TCAD-aided ESD design methodology is used to evaluate ESD performance before the silicon shuttle. GaN is a promising material for high-voltage, high-power RF applications compared to GaAs. However, unlike GaAs, the leakage problem of the Schottky junction and the limited choice of passive/active components in GaN technology constrain ESD protection design, as discussed in this dissertation. Nevertheless, a promising ESD protection clamp is developed based on a depletion-mode pHEMT, with adjustable trigger voltage, reasonable leakage current, and high robustness.
296

Transferability and Robustness of Predictive Models to Proactively Assess Real-Time Freeway Crash Risk

Shew, Cameron Hunter 01 October 2012 (has links) (PDF)
This thesis describes the development and evaluation of real-time crash risk assessment models for four freeway corridors: US-101 NB (northbound) and SB (southbound), as well as I-880 NB and SB. Crash data for these freeway segments for the 16-month period from January 2010 through April 2011 are used to link historical crash occurrences with real-time traffic patterns observed through loop detector data. The analysis techniques adopted for this study are logistic regression and classification trees, the latter being one of the most common data mining tools. The crash risk assessment models are developed based on a binary classification approach (crash and non-crash outcomes), with traffic parameters measured at surrounding vehicle detection station (VDS) locations as the independent variables. The classification performance assessment methodology accounts for the rarity of crashes compared to non-crash cases in the sample, instead of the more common pre-specified threshold-based classification. Prior to development of the models, data-related issues such as data cleaning and aggregation were addressed. Based on the modeling efforts, it was found that turbulence in terms of speed variation is significantly associated with crash risk on the US-101 NB corridor. The models estimated with data from US-101 NB were evaluated based on their classification performance, not only on US-101 NB but also on the other three freeways, for transferability assessment. It was found that a predictive model derived from one freeway can be readily applied to other freeways, although the classification performance decreases. The models which transfer best to other roadways were found to be those that use the fewest VDSs; that is, one upstream and one downstream station rather than two or three. The classification accuracy of the models is discussed in terms of how they can be used for real-time crash risk assessment, which may be helpful to authorities for freeway segments with newly installed traffic surveillance apparatuses, since real-time crash risk assessment models from nearby freeways with existing infrastructure would be able to provide a reasonable estimate of crash risk. These models can also be applied to developing and testing variable speed limits (VSLs) and ramp metering strategies that proactively attempt to reduce crash risk. The robustness of the model output is assessed by location, time of day, and day of week. The analysis shows that at some locations the models may require further learning due to higher-than-expected false positive (e.g., the I-680/I-280 interchange on US-101 NB) or false negative rates. The approach for post-processing the results from the model provides ideas to refine the model prior to or during implementation.
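The binary crash/non-crash classification with rare positive cases described above can be sketched as follows, with synthetic stand-ins for the loop-detector aggregates. The feature set, class weighting, and data-generating process are illustrative assumptions, not the thesis's specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# synthetic stand-in for 5-minute loop-detector aggregates: mean speed,
# speed variation, and occupancy at one upstream and one downstream VDS
n = 5000
X = rng.normal(size=(n, 6))
logit = -4.0 + 1.2 * X[:, 1] + 0.8 * X[:, 4]   # speed variation drives risk
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # rare "crash" outcomes (~2%)

# class_weight='balanced' compensates for the rarity of crash cases,
# mirroring the thesis's concern with imbalanced crash/non-crash samples
model = LogisticRegression(class_weight="balanced").fit(X, y)
risk = model.predict_proba(X)[:, 1]            # real-time crash risk score
print("AUC:", round(roc_auc_score(y, risk), 3))
```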
297

A hazard-based risk analysis approach to understanding climate change impacts to water resource systems: application to the Upper Great Lakes

Moody, Paul Markert 01 May 2013 (has links)
Water resources systems are designed to operate under a wide range of potential climate conditions. Traditionally, systems have been designed using stationarity-based methods. Stationarity is the assumption that the climate varies within an envelope of variability, implying that future variability will be similar to past variability. Due to anthropogenic climate change, the credibility of stationarity-based assumptions has been reduced. In response, climate change assessments have been developed to quantify the potential impacts of climatic change. While these methods quantify potential changes, they lack the probabilistic information that is needed for a risk-based approach to decision analysis. This dissertation seeks to answer two crucial questions. First, what is the best way to evaluate water resource systems given uncertainty due to climate change? Second, what role should climate projections or scenarios play in water resources evaluation? A decision-analytic approach is applied that begins by considering system decisions and proceeds to determine the information relevant to decision making. Climate-based predictor variables are used to predict system hazards using a climate response function. The function is used with climate probability distributions to determine metrics of system robustness and risk. Climate projections and additional sources of climate information are used to develop conditional probability distributions for future climate conditions. The robustness and risk metrics are used to determine decision sensitivity to assumptions about future climate conditions. The methodology is applied within the context of the International Upper Great Lakes Study, which sought to determine a new regulation plan for releases from Lake Superior that would perform better than the current regulation plan and be more robust to potential future climate change. The methodology clarifies the value of climate-related assumptions and the value of GCM projections to the regulation plan decision. The approach presented in this dissertation represents a significant advancement in accounting for potential climate change in water resources decision making. The approach evaluates risk and robustness in a probabilistic context that is familiar to decision makers and evaluates the relevance of additional climate information to decisions.
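A stylized sketch of the hazard/risk computation described above: a climate response function maps climate changes to system failure, robustness is measured over the climate space, and risk is the failure probability under an assumed (GCM-informed) climate distribution. The response surface, thresholds, and distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def climate_response(dT, dP):
    """Stylized climate response function: True when the water system
    fails (e.g., lake level below a regulation threshold) for a given
    temperature change dT (C) and precipitation change dP (%)."""
    return (0.8 * dT - 0.5 * dP) > 2.0

# grid over plausible climate changes
dT, dP = np.meshgrid(np.linspace(0, 5, 101), np.linspace(-20, 20, 101))
fail = climate_response(dT, dP)

# robustness: share of the climate space where the system still performs
print("robust over", round(100 * (1 - fail.mean()), 1), "% of climate states")

# risk: failure probability under an assumed distribution on (dT, dP)
samples_T = rng.normal(2.0, 1.0, 100_000).clip(0, 5)
samples_P = rng.normal(0.0, 8.0, 100_000).clip(-20, 20)
print("risk:", round(climate_response(samples_T, samples_P).mean(), 3))
```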
298

Deep Neural Network Structural Vulnerabilities And Remedial Measures

Yitao Li (9148706) 02 December 2023 (has links)
In the realm of deep learning and neural networks there has been substantial advancement, but the persistent vulnerability of DNNs to adversarial attacks has prompted the search for more efficient defense strategies. Unfortunately, this becomes an arms race: stronger attacks are being developed, while more sophisticated defense strategies are being proposed, which either require modifying the model's structure or incur significant computational costs during training. The first part of this work makes significant progress towards breaking this arms race. Consider natural images, where all the feature values are discrete. Our proposed metrics are able to discover all the vulnerabilities surrounding a given natural image. Given sufficient computational resources, we are able to discover all the adversarial examples for one clean natural image, eliminating the need to develop new attacks. For remedial measures, our approach is to introduce a random factor into the DNN classification process. Furthermore, our approach can be combined with existing defense strategies, such as adversarial training, to further improve performance.
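One common way to "introduce a random factor into the DNN classification process" is to vote over randomly perturbed copies of the input, sketched below. This is an illustrative stand-in; the dissertation's actual remedial mechanism may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

def randomized_predict(logits_fn, x, n_votes=50, sigma=0.1):
    """Classify by majority vote over randomly perturbed copies of x.
    The injected randomness makes the decision boundary a moving target
    for gradient-based adversaries."""
    votes = [int(np.argmax(logits_fn(x + rng.normal(0, sigma, x.shape))))
             for _ in range(n_votes)]
    return np.bincount(votes).argmax()

# stand-in linear "network" over 8-pixel inputs with 3 classes
W = rng.normal(size=(8, 3))
logits_fn = lambda x: x @ W
x = rng.normal(size=8)
print("class:", randomized_predict(logits_fn, x))
```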
299

Contributions to the Interface between Experimental Design and Machine Learning

Lian, Jiayi 31 July 2023 (has links)
In data science, machine learning methods, such as deep learning and other AI algorithms, have been widely used in many applications. These machine learning methods often have complicated model structures with a large number of model parameters and a set of hyperparameters. Moreover, these machine learning methods are data-driven in nature. Thus, it is not easy to provide a comprehensive evaluation of the performance of these machine learning methods with respect to data quality and the hyperparameters of the algorithms. In the statistical literature, design of experiments (DoE) is a set of systematic methods for effectively investigating the effects of input factors on complex systems. There are few works focusing on the use of DoE methodology for evaluating the quality assurance of AI algorithms, while an AI algorithm is naturally a complex system. An understanding of the quality of Artificial Intelligence (AI) algorithms is important for confidently deploying them in real applications such as cybersecurity, healthcare, and autonomous driving. In this dissertation, I aim to develop a set of novel methods on the interface between experimental design and machine learning, providing a systematic framework for using DoE methodology for AI algorithms. This dissertation contains six chapters. Chapter 1 provides a general introduction to design of experiments, machine learning, and surrogate modeling. Chapter 2 focuses on investigating the robustness of AI classification algorithms by conducting a comprehensive set of mixture experiments. Chapter 3 proposes a so-called Do-AIQ framework for using DoE to evaluate an AI algorithm's quality assurance. I establish a design-of-experiments framework to construct an efficient space-filling design in a high-dimensional constrained space and develop an effective surrogate model using an additive Gaussian process to enable the quality assessment of AI algorithms. Chapter 4 introduces a framework to generate continual learning (CL) datasets for cybersecurity applications. Chapter 5 presents a variable selection method under a cumulative exposure model for time-to-event data with time-varying covariates. Chapter 6 provides a summary of the entire dissertation. / Doctor of Philosophy / Artificial intelligence (AI) techniques, including machine learning and deep learning algorithms, are widely used in various applications in the era of big data. While these algorithms have impressed the public with their remarkable performance, their underlying mechanisms are often highly complex and difficult to interpret. As a result, it becomes challenging to comprehensively evaluate the overall performance and quality of these algorithms. Design of experiments (DoE) offers a valuable set of tools for studying and understanding the underlying mechanisms of complex systems, thereby facilitating improvements. DoE has been successfully applied in diverse areas such as manufacturing, agriculture, and healthcare, where it has played a crucial role in enhancing processes and ensuring high quality. However, few works focus on the use of DoE methodology for evaluating the quality assurance of AI algorithms, even though an AI algorithm can naturally be considered a complex system. This dissertation aims to develop innovative methodologies on the interface between experimental design and machine learning. The research conducted in this dissertation can serve as a practical toolkit for using DoE methodology in the context of AI algorithms.
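Chapter 3's pipeline, a space-filling design over an AI algorithm's hyperparameter space followed by a Gaussian-process surrogate, can be sketched as below. The hyperparameter bounds and the synthetic quality surface are assumptions, and a plain anisotropic RBF kernel stands in for the additive Gaussian process used in the Do-AIQ framework.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# space-filling (Latin hypercube) design over three hyperparameters,
# e.g., learning rate, dropout rate, weight decay
sampler = qmc.LatinHypercube(d=3, seed=5)
design = qmc.scale(sampler.random(n=40),
                   l_bounds=[1e-4, 0.0, 0.0], u_bounds=[1e-1, 0.8, 0.1])

def run_experiment(z):
    """Stand-in for an expensive run: train the AI system at one design
    point and record a quality metric (synthetic surface here)."""
    lr, drop, wd = z
    return 0.9 - 2.0 * lr - 0.1 * drop + 0.2 * wd

y = np.array([run_experiment(z) for z in design])

# the surrogate lets us assess quality anywhere in the hyperparameter space
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.03, 0.3, 0.05])).fit(design, y)
pred, sd = gp.predict([[3e-3, 0.2, 0.01]], return_std=True)
print(f"predicted quality: {pred[0]:.3f} +/- {sd[0]:.3f}")
```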
300

Towards Robust and Adaptive Machine Learning : A Fresh Perspective on Evaluation and Adaptation Methodologies in Non-Stationary Environments

Bayram, Firas January 2023 (has links)
Machine learning (ML) has become ubiquitous in various disciplines and applications, serving as a powerful tool for developing predictive models to analyze diverse variables of interest. With the advent of the digital era, the proliferation of data has presented numerous opportunities for growth and expansion across various domains. However, along with these opportunities, there is a unique set of challenges that arises due to the dynamic and ever-changing nature of data. These challenges include concept drift, which refers to shifting data distributions over time, and other data-related issues that can be framed as learning problems. Traditional static models are inadequate in handling these issues, underscoring the need for novel approaches to enhance the performance robustness and reliability of ML models to effectively navigate the inherent non-stationarity in the online world. The field of concept drift is characterized by several intricate aspects that challenge learning algorithms, including the analysis of model performance, which requires evaluating and understanding how the ML model's predictive capability is affected by different problem settings. Additionally, determining the magnitude of drift necessary for change detection is an indispensable task, as it involves identifying substantial shifts in data distributions. Moreover, the integration of adaptive methodologies is essential for updating ML models in response to data dynamics, enabling them to maintain their effectiveness and reliability in evolving environments. In light of the significance and complexity of the topic, this dissertation offers a fresh perspective on the performance robustness and adaptivity of ML models in non-stationary environments. The main contributions of this research include exploring and organizing the literature, analyzing the performance of ML models in the presence of different types of drift, and proposing innovative methodologies for drift detection and adaptation that solve real-world problems. By addressing these challenges, this research paves the way for the development of more robust and adaptive ML solutions capable of thriving in dynamic and evolving data landscapes. / Machine learning (ML) is widely used in various disciplines as a powerful tool for developing predictive models to analyze diverse variables. In the digital era, the abundance of data has created growth opportunities, but it also brings challenges due to the dynamic nature of data. One of these challenges is concept drift, the shifting data distributions over time. Consequently, traditional static models are inadequate for handling these challenges in the online world. Concept drift, with its intricate aspects, presents a challenge for learning algorithms. Analyzing model performance and detecting substantial shifts in data distributions are crucial for integrating adaptive methodologies to update ML models in response to data dynamics, maintaining effectiveness and reliability in evolving environments. In this dissertation, a fresh perspective is offered on the robustness and adaptivity of ML models in non-stationary environments. This research explores and organizes existing literature, analyzes ML model performance in the presence of drift, and proposes innovative methodologies for detecting and adapting to drift in real-world problems. The aim is to develop more robust and adaptive ML solutions capable of thriving in dynamic and evolving data landscapes.
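For a concrete sense of what drift detection involves, here is a minimal distribution-based detector: a sliding window of recent data is compared against a reference window with a two-sample Kolmogorov-Smirnov test. This is a textbook baseline for illustration, not the dissertation's proposed methodology.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)

def detect_drift(stream, ref_size=200, win_size=200, alpha=0.01):
    """Flag concept drift when a sliding window of recent values differs
    from a reference window under a two-sample KS test."""
    reference = stream[:ref_size]
    for end in range(ref_size + win_size, len(stream), win_size):
        window = stream[end - win_size:end]
        if ks_2samp(reference, window).pvalue < alpha:
            return end  # index at which drift was flagged
    return None

# stationary data followed by a shifted distribution (abrupt drift at t=1000)
stream = np.concatenate([rng.normal(0, 1, 1000), rng.normal(1.5, 1, 1000)])
print("drift flagged at index:", detect_drift(stream))
```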
