811

Optimal Risk-based Pooled Testing in Public Health Screening, with Equity and Robustness Considerations

Aprahamian, Hrayer Yaznek Berg 03 May 2018 (has links)
Group (pooled) testing, i.e., testing multiple subjects simultaneously with a single test, is essential for classifying a large population of subjects as positive or negative for a binary characteristic (e.g., presence of a disease, genetic disorder, or a product defect). While group testing is used in various contexts (e.g., screening donated blood or for sexually transmitted diseases), a lack of understanding of how an optimal grouping scheme should be designed to maximize classification accuracy under a budget constraint hampers screening efforts. We study Dorfman and array group testing designs under subject-specific risk characteristics, operational constraints, and imperfect tests, considering classification accuracy-, efficiency-, robustness-, and equity-based objectives, and characterize important structural properties of optimal testing designs. These properties provide us with key insights and allow us to model the testing design problems as network flow problems, develop efficient algorithms, and derive insights on the equity- and robustness-versus-accuracy trade-offs. One of our models reduces to a constrained shortest path problem, for a special case of which we develop a polynomial-time algorithm. We also show that determining an optimal risk-based Dorfman testing scheme that minimizes the expected number of tests is tractable, resolving an open conjecture. Our case studies, on chlamydia screening and screening of donated blood, demonstrate the value of optimal risk-based testing designs, which are shown to be less expensive, more accurate, more equitable, and more robust than current screening practices. / Ph. D.
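As a rough, self-contained illustration of the two-stage Dorfman scheme described above (a textbook calculation with a single common prevalence, not the risk-based designs developed in the thesis), the following Python sketch computes the expected number of tests per subject and searches for the best pool size:

```python
def dorfman_tests_per_subject(p, n, sensitivity=1.0, specificity=1.0):
    """Expected tests per subject under two-stage Dorfman pooling.

    A pool of n subjects is tested once; if the pool flags positive, every
    member is retested individually.  With prevalence p, the pool flags
    positive with probability Se*(1 - (1-p)**n) + (1 - Sp)*(1-p)**n.
    """
    p_pool_negative = (1.0 - p) ** n
    p_pool_flags = (sensitivity * (1.0 - p_pool_negative)
                    + (1.0 - specificity) * p_pool_negative)
    return 1.0 / n + p_pool_flags

# Example: low-prevalence screening (p = 1%) with a perfect assay.
p = 0.01
costs = {n: dorfman_tests_per_subject(p, n) for n in range(2, 26)}
best_n = min(costs, key=costs.get)
print(f"best pool size: {best_n}, expected tests per subject: {costs[best_n]:.3f}")
# Roughly 0.2 tests per subject, versus 1.0 for individual testing.
```

Risk-based designs replace the single prevalence p with subject-specific risks and optimize how subjects are grouped, which is where the network-flow formulations mentioned in the abstract come in.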
812

Robust Turnaround Management: Ground Operations under Uncertainty

Asadi, Ehsan 15 April 2024 (has links)
Efficient ground handling at airports contributes greatly to the performance of the entire air transportation network. In this network, airports are connected via aircraft that rely on passenger and crew connections, successful local airport operations, and efficient ground handling resource management. In addition, airport stakeholders' decision-making processes must take into account various time scales (look-ahead times), process estimates, and both limited and mutually dependent solution spaces. Most airlines have created integrated hub and operations control centers to monitor and adapt tactical operations. Even so, decisions in such control centers must be made quickly in the event of disruption, and they must reflect the interests of various airline departments and local stakeholders. Building on the Airport Collaborative Decision Making (A-CDM) concept, a joint initiative of Airports Council International Europe (ACI EUROPE), the European Organization for the Safety of Air Navigation (EUROCONTROL), the International Air Transport Association (IATA), and the Civil Air Navigation Services Organization (CANSO), this study creates different tools to manage the turnaround in normal and disrupted contexts, thereby facilitating decision-making in an Airport Operations Control Center (AOCC) and a Hub Control Center (HCC). This research focuses on the airline's role in the collaborative decision-making process. With respect to the A-CDM milestones, turnaround time is estimated with four modeling methodologies, namely the Critical Path Method (CPM), the Project Evaluation and Review Technique (PERT), the Fuzzy Critical Path Method (FCPM), and analytical convolution, in both deterministic and nondeterministic settings. In addition, the study develops mathematical models to return the airline schedule to its original plan in the event of delays. Chance-constrained and robust optimization models are also developed for optimal decision-making when airlines confront uncertainty during real-world operations. The study further develops a novel hybrid Shuffled Frog-Leaping Algorithm (SFLA)-Grasshopper Optimization Algorithm (GOA) to expedite the search for recovery solutions, allowing the AOCC and HCC to pass this information to the relevant departments in real time. In comparison to common linear solvers, the solution process is sped up by 18 percent and the quality of the solutions is improved by 24 percent on average.
Initial results are generated in less than 2 minutes, and globally optimal results are achieved in about 15 minutes, allowing the system to be used in real-time applications.
Table of contents:
Abstract
1 Introduction: 1.1 Problem Description (1.1.1 Decision Scope; 1.1.2 Airport Collaborative Decision Making (A-CDM); 1.1.3 Total Airport Management; 1.1.4 Ground Handlers; 1.1.5 Turnaround Management); 1.2 Aims and Objectives; 1.3 Thesis Contribution; 1.4 Structure
2 Literature Review: 2.1 Turnaround; 2.2 Ground Handling; 2.3 Flights and Networks; 2.4 Apron and Gate Assignment; 2.5 Scopes Combination (2.5.1 Gate Assignment and Turnaround; 2.5.2 Gate Assignment and Flights; 2.5.3 Gate Assignment and Ground Handling; 2.5.4 Turnaround and Flights; 2.5.5 Turnaround and Ground Handling; 2.5.6 Flights and Ground Handling); 2.6 Turnaround Operations; 2.7 Conclusion
3 Turnaround Definition: 3.1 Turnaround in the A-CDM System; 3.2 Turnaround and Ground Handling; 3.3 Turnaround Operations (3.3.1 In-Block (INB) and Acceptance (ACC); 3.3.2 Deboarding (DEB) and Boarding (BOA); 3.3.3 Fueling (FUE); 3.3.4 Catering (CAT); 3.3.5 Cleaning (CLE); 3.3.6 Unloading (UNL) and Loading (LOA); 3.3.7 Water Service (WAT) and Toilet (TOI); 3.3.8 Finalization (FIN))
4 Total Turnaround Time (TTT) Calculation: 4.1 Critical Path Method (CPM); 4.2 Project Evaluation and Review Technique (PERT); 4.3 Fuzzy Critical Path Method (FCPM) (4.3.1 Fuzzy Numbers and Fuzzy Sets; 4.3.2 Fuzzy Membership Functions of Turnaround Tasks; 4.3.3 Probability-Possibility Transformation of Turnaround Tasks; 4.3.4 FCPM in TTT Calculation; 4.3.5 Discussion); 4.4 Analytical Convolution (4.4.1 Convolution Method; 4.4.2 Monte Carlo (MC) Simulation Evaluation; 4.4.3 Application of Convolution in Turnaround Control)
5 Disruption Management: 5.1 Airline Disruption Management (5.1.1 Airport Operations Control Center (AOCC); 5.1.2 Delay in the Airline Networks; 5.1.3 Recovery Options); 5.2 Deterministic Model (5.2.1 Mathematical Model; 5.2.2 Solution Approaches; 5.2.3 Problem Setting); 5.3 Non-Deterministic Model (5.3.1 Stochastic Arrivals; 5.3.2 Stochastic Duration)
6 Conclusion: 6.1 Discussion around Research Questions (6.1.1 Integration of All Actors; 6.1.2 Turnaround Time Prediction; 6.1.3 Quick and Robust Reaction); 6.2 Future Research (6.2.1 Scope Development; 6.2.2 Algorithm Development; 6.2.3 Parameter Development)
List of Acronyms; List of Figures; List of Tables; Bibliography; Acknowledgement
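To make the CPM step concrete, here is a minimal forward-pass sketch over an illustrative turnaround task graph; the durations and precedence relations are invented placeholders, not the calibrated process model used in the dissertation:

```python
# Minimal CPM forward pass over a hypothetical turnaround task graph.
# durations are in minutes; precedences are illustrative assumptions.
tasks = {
    "ACC": (2,  []),                      # acceptance after in-block
    "DEB": (8,  ["ACC"]),                 # deboarding
    "FUE": (12, ["DEB"]),                 # fueling (assumed to wait for deboarding)
    "CAT": (10, ["DEB"]),                 # catering
    "CLE": (9,  ["DEB"]),                 # cleaning
    "LOA": (15, ["ACC"]),                 # unloading/loading
    "BOA": (14, ["FUE", "CAT", "CLE"]),   # boarding
    "FIN": (3,  ["BOA", "LOA"]),          # finalization / off-block
}

earliest_finish = {}

def finish(task):
    """Earliest finish time of a task = its duration plus the latest predecessor finish."""
    if task not in earliest_finish:
        duration, preds = tasks[task]
        earliest_finish[task] = duration + max((finish(p) for p in preds), default=0)
    return earliest_finish[task]

ttt = finish("FIN")  # total turnaround time = length of the critical path
print(f"Total turnaround time (CPM): {ttt} minutes")
```

PERT, the fuzzy CPM, and the analytical convolution approaches keep the same precedence structure but replace the fixed durations with probability distributions or fuzzy numbers.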
813

Adaptation and Installation of a Robust State Estimation Package in the EEF Utility

Chapman, Michael Addison 20 April 1999 (has links)
Robust estimation methods have been successfully applied to the problem of power system state estimation in a real-time environment. The Schweppe-type GM-estimator with the Huber psi-function (SHGM) has been fully installed in conjunction with a topology processor in the EEF utility, headquartered in Fribourg, Switzerland. Some basic concepts of maximum likelihood estimation and robust analysis are reviewed, and applied to the development of the SHGM-estimator. The algorithms used by the topology processor and state estimator are presented, and the superior performance of the SHGM-estimator over the classic weighted least squares estimator is demonstrated on the EEF network. The measurement configuration of the EEF network has been evaluated, and suggestions for its reinforcement have been proposed. / Master of Science
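The Huber psi-function mentioned above is simple to state. The sketch below is a generic illustration of the psi-function and the residual down-weighting it implies in iteratively reweighted least squares; it is not the EEF installation, and the Schweppe-type GM-estimator additionally scales the threshold with leverage-based weights, which is omitted here.

```python
import numpy as np

def huber_psi(r, k=1.5):
    """Huber psi-function: identity for small residuals, clipped at +/- k beyond that."""
    return np.clip(r, -k, k)

def huber_weight(r, k=1.5):
    """IRLS weight psi(r)/r: 1 inside the band, k/|r| outside, so outliers are down-weighted."""
    r = np.asarray(r, dtype=float)
    w = np.ones_like(r)
    big = np.abs(r) > k
    w[big] = k / np.abs(r[big])
    return w

residuals = np.array([-4.0, -0.5, 0.2, 1.0, 6.0])   # standardized measurement residuals
print(huber_psi(residuals))     # [-1.5 -0.5  0.2  1.0  1.5]
print(huber_weight(residuals))  # gross residuals get small weights
```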
814

Robust Blind Spectral Estimation in the Presence of Impulsive Noise

Kees, Joel Thomas 07 March 2019 (has links)
Robust nonparametric spectral estimation involves generating an accurate estimate of the Power Spectral Density (PSD) for a given set of data while trying to minimize the bias due to data outliers. It is applied in electrical communications and digital signal processing when a PSD estimate of the electromagnetic spectrum is desired (often with the goal of signal detection) and the spectrum is contaminated by Impulsive Noise (IN). Power Line Communication (PLC) is an example of a communication environment where IN is a concern, because power lines were not designed with the intent to transmit communication signals. Many different noise models are used to statistically model different types of IN, but one popular model that has been used for PLC and various other applications is the Middleton Class A model, and this model is used extensively in this thesis. The performances of two nonparametric spectral estimation methods are analyzed in IN: the Welch method and the multitaper method. These estimators work well under the common assumption that the receiver noise is characterized by Additive White Gaussian Noise (AWGN). However, the performance of both estimators degrades when they are used for signal detection in IN environments. In this thesis, basic robust estimation theory is used to modify the Welch and multitaper methods in order to increase their robustness, and it is shown that the modified robust estimators improve signal detection capabilities in IN. / Master of Science / One application of blind spectral estimation is blind signal detection. Unlike a car radio, which is specifically designed to receive AM and FM radio waves, it is sometimes useful for a radio to be able to detect the presence of transmitted signals whose characteristics are not known ahead of time. Cognitive radio is one application where this capability is useful. Often signal detection is inhibited by Additive White Gaussian Noise (AWGN). This is analogous to trying to hear a friend speak (signal detection) in a room full of people talking (background AWGN). However, some noise environments are more impulsive in nature. Using the previous analogy, the background noise could be loud banging caused by machinery; the noise will not be as constant as the chatter of the crowd, but it will be much louder. When power lines are used as a medium for electromagnetic communication (instead of just carrying power), it is called Power Line Communication (PLC), and PLC is a good example of a system where the noise environment is impulsive. In this thesis, methods used for blind spectral estimation are modified to work reliably (robustly) in impulsive noise environments.
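As a rough, self-contained illustration of the underlying idea (not the estimators developed in the thesis), the sketch below computes per-segment periodograms as in the Welch method and compares the usual mean across segments with a median, one simple way to blunt the influence of impulse-contaminated segments:

```python
import numpy as np

def segment_periodograms(x, nperseg=256, fs=1.0):
    """Periodograms of non-overlapping, Hann-windowed segments (scaling is approximate)."""
    win = np.hanning(nperseg)
    scale = fs * np.sum(win ** 2)
    nseg = len(x) // nperseg
    segs = x[: nseg * nperseg].reshape(nseg, nperseg) * win
    psd = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / scale
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd

rng = np.random.default_rng(0)
n, fs = 8192, 1000.0
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100 * t) + rng.normal(0, 1, n)   # tone in Gaussian noise
impulses = rng.random(n) < 0.002                         # crude impulsive contamination
x[impulses] += rng.normal(0, 30, impulses.sum())

freqs, psd = segment_periodograms(x, nperseg=256, fs=fs)
welch_psd = psd.mean(axis=0)          # classic Welch: average over segments
robust_psd = np.median(psd, axis=0)   # median is less sensitive to impulse-hit segments
```

Note that the raw median of periodogram ordinates is biased low relative to the mean, so a practical robust estimator would rescale it; the sketch only illustrates why averaging is fragile under impulsive noise.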
815

New design comparison criteria in Taguchi's robust parameter design

Savarese, Paul Tenzing 06 June 2008 (has links)
Choice of an experimental design is an important concern for most researchers. Judicious selection of an experimental design is also a weighty matter in Robust Parameter Design (RPD). RPD seeks to choose the levels of fixed controllable variables that provide insensitivity (robustness) to the variability of a process induced by uncontrollable noise variables. We use the fact that in the RPD scenario interest lies primarily in the ability of a design to estimate the noise and control-by-noise interaction effects in the fitted model. These effects allow for effective estimation of the process variance, an understanding of which is necessary to achieve the goals of RPD. Possible designs for use in RPD are quite numerous. Standard designs such as crossed array designs, Plackett-Burman designs, combined array factorial designs, and many second-order designs all vie for a place in the experimenter's toolkit. New criteria are developed, based on classical optimality criteria, for judging various designs with respect to their performance in RPD. Many different designs are studied and compared. Several first-order and many second-order designs, such as central composite designs, Box-Behnken designs, and hybrid designs, are studied and compared via our criteria. Numerous scenarios involving different models and designs are considered; results and conclusions are presented regarding which designs are preferable for use in RPD. Also, a new design rotatability entity is introduced. Optimality conditions with respect to our criteria are studied. For designs which are rotatable by our new rotatability entity, conditions are given which lead to optimality for a number of the new design comparison criteria. Finally, a sequential design-augmentation algorithm was developed and programmed on a computer. By cultivating a unique mechanism, the algorithm implements a Ds-optimal strategy in selecting candidate points. Ds-optimality is likened to D-optimality on a subset of model parameters and is naturally suited to the RPD scenario. The algorithm can be used in either a sequential design-augmentation scenario or in a design-building scenario. Especially useful when a standard design does not exist to match the number of runs available to the researcher, the algorithm can be used to generate a design of the requisite size that should perform well in RPD. / Ph. D.
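The Ds-idea mentioned in the abstract can be illustrated numerically: for an information matrix M = X'X partitioned so that M22 corresponds to the nuisance parameters, the Ds-criterion is det(M)/det(M22), i.e., D-optimality restricted to the parameters of interest. The toy model and candidate designs below are invented for illustration and are not those studied in the dissertation.

```python
import numpy as np

def ds_criterion(X, subset):
    """Ds-criterion for the columns in `subset`: det(X'X) / det(M22),
    where M22 is the information block for the remaining (nuisance) columns."""
    M = X.T @ X
    other = [j for j in range(X.shape[1]) if j not in subset]
    M22 = M[np.ix_(other, other)]
    return np.linalg.det(M) / np.linalg.det(M22)

def model_matrix(points):
    """Columns: intercept, control factor x, noise factor z, control-by-noise term x*z."""
    pts = np.asarray(points, float)
    x, z = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), x, z, x * z])

# Design A: 2^2 factorial plus two center runs.
design_a = [(-1, -1), (-1, 1), (1, -1), (1, 1), (0, 0), (0, 0)]
# Design B: 2^2 factorial plus two runs on the noise-factor axis.
design_b = [(-1, -1), (-1, 1), (1, -1), (1, 1), (0, -1), (0, 1)]

for name, d in [("A", design_a), ("B", design_b)]:
    X = model_matrix(d)
    # Columns 2 and 3 (z and x*z) are the RPD-relevant subset.
    print(name, ds_criterion(X, subset=[2, 3]))
```

Here design B scores higher because its extra runs add information about the noise-factor effects, which is exactly what RPD-oriented criteria reward.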
816

Domain Adaptation with a Classifier Trained by Robust Pseudo-Labels

Zhou, Yunke 07 January 2022 (has links)
With the rapid growth of computing power, approaches based on deep learning algorithms have achieved remarkable results in solving computer vision classification problems. These performance improvements are achieved by assuming the source and target data are collected from the same probability distribution. However, this assumption is usually too strict to be satisfied in many real-world applications, such as big data analysis, natural language processing, and computer vision classification problems. Because of distribution discrepancies between these domains, directly training the model on the source domain cannot be expected to generate satisfactory results on the target domain. Therefore, the problem of minimizing these data distribution discrepancies is the main challenge with which modern machine learning is now faced. To address this problem, domain adaptation (DA) aims to identify domain-invariant features between two different but related domains. This thesis proposes a state-of-the-art DA approach that overcomes the limitations of traditional DA methods. To capture fine-grained information for each category, I deploy centroid-to-centroid alignment to perform domain adaptation. An Exponential Moving Average (EMA) strategy is used to ensure we can form robust source and target centroids. A Gaussian-uniform mixture model is trained using an Expectation-Maximization (EM) algorithm to infer the robustness of the target pseudo-labels. With the help of target pseudo-labels, I propose two novel types of classifiers: (1) a target-oriented classifier (TO); and (2) a centroid-oriented classifier (CO). Extensive experiments show that these two classifiers exhibit superior performance on a variety of DA benchmarks when compared to standard baseline methods. / Master of Science / Approaches based on deep learning algorithms have achieved remarkable results in solving computer vision classification problems. These performance improvements are achieved by assuming the source and target data are collected from the same probability distribution; however, in many real-world applications, such as big data analysis, natural language processing, and computer vision classification problems, this assumption is usually too strict to be satisfied. For example, these two domains may have the same types of classes, but the objects in each category of these different domains can vary in shape, color, background, or even illumination. Because the probability distributions are slightly mismatched, directly training the model on one domain cannot achieve a satisfactory result on the other domain. To address this problem, domain adaptation (DA) aims to extract common features on both domains to transfer knowledge from one domain to another. In this thesis, I propose a state-of-the-art DA approach that overcomes the limitations of traditional DA methods. To capture the low-level information of each category, I deploy centroid-to-centroid alignment to perform domain adaptation. An Exponential Moving Average (EMA) strategy is used to ensure the generation of robust centroids. A Gaussian-Uniform Mixture model is trained by using the Expectation-Maximization (EM) algorithm to infer the robustness of the target sample pseudo-labels. With the help of robust target pseudo-labels, I propose two novel types of classifiers: (1) a target-oriented classifier (TO); and (2) a centroid-oriented classifier (CO).
Extensive experiments show that the proposed method outperforms traditional baseline methods on various DA benchmarks.
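A minimal sketch of the EMA centroid idea (an assumption about the general mechanism, not the author's implementation): per-class centroids are updated from batch means of pseudo-labeled features, and centroid-to-centroid alignment would then penalize the distance between source and target centroids of the same class.

```python
import numpy as np

def update_centroids(centroids, features, pseudo_labels, momentum=0.9):
    """Exponential-moving-average update of per-class centroids.

    centroids: (C, D) array of current centroids;
    features: (B, D) batch of feature vectors;
    pseudo_labels: (B,) integer class assignments for the batch.
    """
    for c in range(centroids.shape[0]):
        mask = pseudo_labels == c
        if mask.any():
            batch_mean = features[mask].mean(axis=0)
            centroids[c] = momentum * centroids[c] + (1 - momentum) * batch_mean
    return centroids

# Toy usage: 3 classes, 16-dimensional features.
rng = np.random.default_rng(0)
centroids = rng.normal(size=(3, 16))
feats = rng.normal(size=(32, 16))
labels = rng.integers(0, 3, size=32)
centroids = update_centroids(centroids, feats, labels)
```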
817

Confirmatory factor analysis with ordinal variables: A comparison of different estimation methods

Jing, Jiazhen January 2024 (has links)
In social science research, data is often collected using questionnaires with Likert scales, resulting in ordinal data. Confirmatory factor analysis (CFA) is the most common type of analysis; it assumes continuous data and multivariate normality, assumptions that are violated for ordinal data. Simulation studies have shown that Robust Maximum Likelihood (RML) works well when the normality assumption is violated, while Diagonally Weighted Least Squares (DWLS) estimation is especially recommended for categorical data. Bayesian estimation (BE) methods are also potentially effective for ordinal data. The current study employs a CFA model and Monte Carlo simulation to evaluate the performance of the three estimation methods with ordinal data under various conditions in terms of level of asymmetry, sample size, and number of categories. The results indicate that, for ordinal data, DWLS outperforms RML and BE. RML is effective for ordinal data when the number of categories is sufficiently large. Bayesian methods do not demonstrate a significant advantage across different values of factor loadings, and category distributions have minimal impact on the estimation results.
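The kind of ordinal data such simulation studies use can be generated by thresholding a continuous one-factor model; the loadings and thresholds below are illustrative placeholders, not the study's actual simulation conditions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ordinal_cfa(n=500, loadings=(0.7, 0.7, 0.7, 0.7),
                         thresholds=(-0.5, 0.5, 1.5)):
    """One-factor model y*_j = lambda_j * eta + eps_j, cut into ordered categories.

    Shifting the thresholds away from symmetry produces the skewed
    (asymmetric) category distributions studied in such simulations.
    """
    loadings = np.asarray(loadings)
    eta = rng.normal(size=n)                                   # latent factor
    eps = rng.normal(size=(n, len(loadings))) * np.sqrt(1 - loadings ** 2)
    y_star = eta[:, None] * loadings + eps                     # continuous responses
    return np.digitize(y_star, bins=thresholds)                # categories 0..len(thresholds)

data = simulate_ordinal_cfa()
print(np.unique(data, return_counts=True))                     # category frequencies
```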
818

Statistically and Computationally Efficient Resampling and Distributionally Robust Optimization with Applications

Liu, Zhenyuan January 2024 (has links)
Uncertainty quantification via construction of confidence regions has been long studied in statistics. While these existing methods are powerful and commonly used, some modern problems that require expensive model fitting, or those that elicit convoluted interactions between statistical and computational noises, could challenge the effectiveness of these methods. To remedy some of these challenges, this thesis proposes novel approaches that not only guarantee statistical validity but also are computationally efficient. We study two main methodological directions: resampling-based methods in the first half (Chapters 2 and 3) and optimization-based methods, in particular so-called distributionally robust optimization, in the second half (Chapters 4 to 6) of this thesis. The first half focuses on the bootstrap, a common approach for statistical inference. This approach resamples data and hinges on the principle of using the resampling distribution as an approximation to the sampling distribution. However, implementing the bootstrap often demands extensive resampling and model refitting effort to wash away the Monte Carlo error, which can be computationally expensive for modern problems. Chapters 2 and 3 study bootstrap approaches using fewer resamples while maintaining coverage validity, and also the quantification of uncertainty for models with both statistical and Monte Carlo computation errors. In Chapter 2, we investigate bootstrap-based construction of confidence intervals using minimal resampling. We use a “cheap” bootstrap perspective based on sample-resample independence that yields valid coverage with as small as one resample, even when the problem dimension grows closely with the data size. We validate our theoretical findings and assess our approach against other benchmarks through various large-scale or high-dimensional problems. In Chapter 3, we focus on the so-called input uncertainty problem in stochastic simulation, which refers to the propagation of the statistical noise in calibrating input models to impact output accuracy. Unlike most existing literature that focuses on real-valued output quantities, we aim at constructing confidence bands for the entire output distribution function that can contain more holistic information. We develop a new test statistic that generalizes the Kolmogorov-Smirnov statistic to construct confidence bands that account for input uncertainty on top of Monte Carlo errors via an additional asymptotic component formed by a mean-zero Gaussian process. We also demonstrate how subsampling can be used to efficiently estimate the covariance function of this Gaussian process in a computationally cheap fashion. The second part of the thesis is devoted to optimization-based methods, in particular distributionally robust optimization (DRO). Originally built to tackle the uncertainty of the underlying distribution in a stochastic optimization, DRO adopts a worst-case perspective and seeks decisions that optimize under the worst-case scenario, over the so-called ambiguity set that represents the distributional uncertainty. In this thesis, we turn DRO broadly into a statistical tool (still referred to as DRO) by optimizing targets of interest over the ambiguity set and transforming the coverage guarantee of the ambiguity set into confidence bounds for targets. The flexibility of ambiguity sets advantageously allows the injection of prior distribution knowledge that operates with less data requirement than existing methods. 
In Chapter 4, motivated by the bias-variance tradeoff and other technical complications in conventional multivariate extreme value theory, we propose a shape-constrained DRO, called orthounimodality DRO (OU-DRO), as a vehicle to incorporate natural and verifiable information into the tail. We study its statistical guarantees and tractability, especially in the bivariate setting, via a new Choquet representation from convex analysis. Chapter 5 further studies a general approach that applies to higher dimensions via sample average approximation (SAA) and importance sampling. We establish a convergence guarantee for the SAA optimal value of OU-DRO in any dimension under regularity conditions. We also argue that the resulting SAA problem is a linear program that can be solved by off-the-shelf algorithms. In Chapter 6, we study the connection between the out-of-sample errors of data-driven stochastic optimization and DRO via large deviations theory. We propose a special type of DRO formulation that uses an ambiguity set based on a Kullback-Leibler divergence smoothed by the Wasserstein or Lévy-Prokhorov distance. We relate large deviations theory to the performance of the proposed DRO and show that it achieves nearly optimal out-of-sample performance in terms of the exponential decay rate of the generalization error. Furthermore, the computation of the proposed DRO is no harder than that of DRO problems based on f-divergences or Wasserstein distances, which leads to a statistically optimal and computationally tractable DRO formulation.
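A hedged sketch of the "cheap" bootstrap interval as described in the abstract, based on my reading of the idea rather than the thesis's code: with only B resamples, approximate independence between the estimator and its resampled versions justifies a t-interval with B degrees of freedom built from the resample deviations, so even B = 1 or 2 can give valid coverage.

```python
import numpy as np
from scipy import stats

def cheap_bootstrap_ci(data, estimator, B=2, alpha=0.05, seed=0):
    """t-interval from only B bootstrap resamples (sketch of the sample-resample
    independence idea; the exact construction in the thesis may differ)."""
    rng = np.random.default_rng(seed)
    psi_hat = estimator(data)
    resampled = np.array([
        estimator(rng.choice(data, size=len(data), replace=True))
        for _ in range(B)
    ])
    s = np.sqrt(np.mean((resampled - psi_hat) ** 2))
    half_width = stats.t.ppf(1 - alpha / 2, df=B) * s
    return psi_hat - half_width, psi_hat + half_width

x = np.random.default_rng(1).exponential(scale=2.0, size=5000)
print(cheap_bootstrap_ci(x, np.mean, B=2))   # interval around the sample mean
```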
819

Advanced Control Strategies For Heavy Duty Diesel Powertrains

Shubham Ashta (18857710) 21 June 2024 (has links)
<p dir="ltr">The automotive industry has incorporated controls since the 1970s, starting with the pioneering application of an air-to-fuel ratio feedback control carburetor. Over time, significant advancements have been made in control strategies to meet industry standards for reduced fuel consumption, exhaust emissions, and enhanced safety. This thesis focuses on the implementation of advanced control strategies in heavy-duty diesel powertrains and their advantages over traditional control methods commonly employed in the automotive industry.</p><p dir="ltr">The initial part of the thesis demonstrates the utilization of model predictive control (MPC) to generate an optimized velocity profile for class 8 trucks. These velocity profiles are designed to minimize fuel consumption along a given route with known grade conditions, while adhering to the time constraints comparable to those of standard commercial cruise controllers. This methodology is further expanded to include the platooning of two trucks, with the rear truck following a desired gap (variable or fixed), resulting in additional fuel savings throughout the designated route. Through collaborative efforts involving Cummins, Peloton Technology, and Purdue University, these control strategies were implemented and validated through simulation, hardware-in-the-loop testing, and ultimately, in demonstration vehicles.</p><p dir="ltr">MPC is highly effective for vehicle-level controls due to the accurate plant model used for optimization. However, when it comes to engine controls, the plant model becomes highly nonlinear and loses accuracy when linearized [20]. To address this issue, robust control techniques are introduced to account for the inherent inaccuracies in the plant model, which can be represented as uncertainties.</p><p dir="ltr">The second study showcases the application of robust controllers in diesel engine operations, focusing on a 4.5L John Deere diesel engine equipped with an electrified intake boosting system. The intake boosting system is selectively activated during transient operations to mitigate drops in the air-to-fuel ratio (AFR), which can result in smoke emissions. Initially, a two-degree-of-freedom robustsingle-input single-output (SISO) eBooster controller is synthesized to control the eBooster during load transients. Although the robust SISO controller yields improvements, the eBooster alone does not encompass all factors affecting the gas exchange process. Other actuators, such as the exhaust throttle and EGR valve, need to be considered to enhance the air handling system. To achieve this, a robust model-basedmultiple-input multiple-output (MIMO) controller is developed to regulate the desired AFR, engine speed, and diluent air ratio (DAR) using various air handling actuators and fueling strategies. The robust MIMO controller is synthesized based on a physics-based mean value engine model, which has been calibrated to accurately reflect high-fidelity engine simulation software. The robust SISO and MIMO controllers are implemented in simulation using the high-fidelity engine simulation software. Following the simulation, the controllers are validated through experimental testing conducted in an engine dynamometer at University of Wisconsin. Results from these controllers are compared against a non-eBoosted engine, which serves as the baseline. 
While both the SISO and MIMO controllers show improvements in AFR (Air-Fuel Ratio), DAR (Diluent Air Ratio), and engine speed recovery during the load transients, the robust MIMO controller outperforms them by demonstrating the best overall engine performance. This superiority is attributed to its comprehensive understanding of the coupling between each actuator input and the model output. When the MIMO controller operates alongside the electrified intake boosting system, the engine exhibits remarkable enhancements. Not only does it recover back to a steady state 70% faster than the baseline, but it also reduces engine speed droop by 45%. Consequently, the engine's ability to accept load torque increases significantly.</p><p dir="ltr">As a result, a single robust MIMO controller can efficiently perform the same task instead of employing multiple PIDs or look-up tables for each actuator.</p>
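As a toy illustration of the receding-horizon idea behind the platooning gap control described above (not the Cummins/Peloton/Purdue implementation; the dynamics, limits, and weights are made up), the following sketch solves one MPC step for the trailing truck with cvxpy:

```python
import cvxpy as cp

# Hypothetical parameters for one MPC step of the trailing truck in a two-truck platoon.
dt, N = 0.5, 40                 # step [s], horizon length
lead_speed = 24.0               # lead truck speed [m/s], assumed constant over the horizon
desired_gap = 20.0              # target inter-vehicle gap [m]
gap0, speed0 = 35.0, 22.0       # current gap and ego speed

gap = cp.Variable(N + 1)        # predicted gap to the lead truck
v = cp.Variable(N + 1)          # predicted ego speed
a = cp.Variable(N)              # commanded acceleration (control input)

constraints = [gap[0] == gap0, v[0] == speed0]
for k in range(N):
    constraints += [
        v[k + 1] == v[k] + dt * a[k],                 # ego speed dynamics
        gap[k + 1] == gap[k] + dt * (lead_speed - v[k]),  # gap dynamics
        cp.abs(a[k]) <= 1.5,                          # comfort/actuator limit
    ]
constraints += [gap >= 5.0, v >= 0.0, v <= 27.0]      # safety gap and speed limits

cost = cp.sum_squares(gap - desired_gap) + 10 * cp.sum_squares(a)
cp.Problem(cp.Minimize(cost), constraints).solve()
print("first acceleration command:", a.value[0])       # only this command is applied, then re-solve
```

In a receding-horizon loop, only the first command is applied before the problem is re-solved with updated measurements, which is what gives MPC its feedback character.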
820

Robust Portfolio Optimization: Construction and analysis of a robust mixed-integer linear program for use in portfolio optimization

Bjurström, Tobias, Gabrielsson Baas, Sebastian January 2024 (has links)
When making an investment, it is desirable to maximize the profits while minimizing the risk. The theory of portfolio optimization is the mathematical approach to choosing what assets to invest in, and distributing the capital accordingly. Usually, the objective of the optimization is to maximize the return or minimize the risk. This report aims to construct and analyze a robust optimization model with MILP in order to determine if that model is more suitable for portfolio optimization than earlier models. This is done by creating a robust MILP model, altering its parameters, and comparing the resulting portfolios with portfolios from older models. Our conclusion is that the constructed model is appropriate to use for portfolio optimization. In particular, a robust approach is well suited for portfolio optimization, and the added MILP-part allows users of the model to specialize the portfolio to their own preferences.
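A minimal sketch of how a robust objective and a cardinality constraint combine into a MILP. This is a generic Bertsimas-Sim style budget-of-uncertainty formulation with invented data, not necessarily the authors' model: at most `gamma` asset returns take their worst-case value, and binary selection variables cap the number of assets held.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

# Invented nominal returns and worst-case deviations for five hypothetical assets.
assets = ["A", "B", "C", "D", "E"]
mu  = {"A": 0.08, "B": 0.12, "C": 0.10, "D": 0.07, "E": 0.15}
dev = {"A": 0.02, "B": 0.06, "C": 0.04, "D": 0.01, "E": 0.09}
gamma = 2        # budget of uncertainty: at most this many returns hit their worst case
max_assets = 3   # cardinality limit -> binary variables -> MILP

w = {i: LpVariable(f"w_{i}", lowBound=0, upBound=1) for i in assets}   # portfolio weights
y = {i: LpVariable(f"y_{i}", cat=LpBinary) for i in assets}            # asset selection
z = LpVariable("z", lowBound=0)                                        # robust dual variables
p = {i: LpVariable(f"p_{i}", lowBound=0) for i in assets}

prob = LpProblem("robust_portfolio", LpMaximize)
prob += lpSum(mu[i] * w[i] for i in assets) - gamma * z - lpSum(p.values())  # worst-case return
prob += lpSum(w.values()) == 1                       # fully invested
prob += lpSum(y.values()) <= max_assets              # hold at most max_assets assets
for i in assets:
    prob += w[i] <= y[i]                             # weight only selected assets
    prob += z + p[i] >= dev[i] * w[i]                # robust protection constraints

prob.solve()
print({i: w[i].value() for i in assets}, "worst-case return:", value(prob.objective))
```

Raising `gamma` makes the portfolio more conservative, while the binary selection constraints are what turn the otherwise linear robust counterpart into a mixed-integer program.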
