81

Distributionally Robust Learning under the Wasserstein Metric

Chen, Ruidi 29 September 2019 (has links)
This dissertation develops a comprehensive statistical learning framework that is robust to (distributional) perturbations in the data, using Distributionally Robust Optimization (DRO) under the Wasserstein metric. The learning problems studied include: (i) Distributionally Robust Linear Regression (DRLR), which estimates a robustified linear regression plane by minimizing the worst-case expected absolute loss over a probabilistic ambiguity set characterized by the Wasserstein metric; (ii) Groupwise Wasserstein Grouped LASSO (GWGL), which aims at inducing sparsity at a group level when there exists a predefined grouping structure for the predictors, through defining a specially structured Wasserstein metric for DRO; (iii) Optimal decision making using DRLR-informed K-Nearest Neighbors (K-NN) estimation, which selects the optimal action among a set of candidates by predicting the outcome under each action using K-NN with a distance metric weighted by the DRLR solution; and (iv) Distributionally Robust Multivariate Learning, which solves a DRO problem with a multi-dimensional response/label vector, as in Multivariate Linear Regression (MLR) and Multiclass Logistic Regression (MLG), generalizing the univariate response model addressed in DRLR. For each problem, a tractable DRO relaxation is derived, establishing a connection between robustness and regularization and yielding upper bounds on the prediction and estimation errors of the solution. The accuracy and robustness of the estimators are verified through a series of synthetic and real data experiments. The experiments with real data are all drawn from health informatics applications, the application area that motivated the work in this dissertation. In addition to estimation (regression and classification), this dissertation also considers outlier detection applications.
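The robustness-regularization connection mentioned above can be illustrated for the DRLR problem as follows. This is a schematic in our own notation, assuming a Wasserstein ball of radius ε defined by a norm on the joint predictor-response space; the dissertation's exact assumptions and norm choices may differ:

```latex
% DRLR (schematic): worst-case expected absolute loss over a Wasserstein ball
% around the empirical distribution \hat{P}_N of N samples (x_i, y_i).
\min_{\beta}\;
  \sup_{Q:\, W_1(Q, \hat{P}_N) \le \varepsilon}
  \mathbb{E}_{Q}\bigl[\,\lvert y - \beta^\top x \rvert\,\bigr]
\;=\;
\min_{\beta}\;
  \frac{1}{N}\sum_{i=1}^{N} \lvert y_i - \beta^\top x_i \rvert
  \;+\; \varepsilon\,\bigl\lVert(-\beta,\,1)\bigr\rVert_{*}
```

Here the dual norm penalty comes from the norm defining the Wasserstein metric, so the worst-case formulation reduces to an empirical absolute-loss fit plus a norm regularizer on the regression coefficients, which is the sense in which robustness induces regularization.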
82

Design Optimization and Plan Optimization for Particle Beam Therapy Systems / 粒子線治療システムを対象とした設計・計画最適化

Sakamoto, Yusuke 23 January 2024 (has links)
Kyoto University / New degree system, doctoral program / Doctor of Engineering / 甲第25013号 / 工博第5190号 / 新制||工||1991 (University Library) / Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor Kazuhiro Izui, Professor Masaharu Komori, Professor Yasuhiro Inoue / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
83

A robust optimization approach for active and reactive power management in smart distribution networks using electric vehicles

Pirouzi, S., Agahaei, J., Latify, M.A., Yousefi, G.R., Mokryani, Geev 07 July 2017 (has links)
This paper presents a robust framework for active and reactive power management in distribution networks using electric vehicles (EVs). The method simultaneously minimizes the energy cost and the voltage deviation subject to network and EV constraints. The uncertainties related to active and reactive loads, the energy required to charge EV batteries, the battery charge rate, and the EV charger capacity are modeled using deterministic uncertainty sets. First, based on duality theory, the max-min form of the model is converted to a max form. Second, Benders decomposition is employed to solve the resulting problem. The effectiveness of the proposed method is demonstrated on a 33-bus distribution network.
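The duality step mentioned above can be sketched on a generic robust linear programming subproblem; this is an illustrative form, not the paper's exact model, and it assumes the inner problem is feasible and bounded.

```latex
% Inner max-min over an uncertainty set U (schematic); strong LP duality
% replaces the inner minimization by its dual maximization.
\max_{u \in \mathcal{U}}\;
  \min_{y:\, A y \ge b + C u} d^\top y
\;=\;
\max_{u \in \mathcal{U},\; \lambda \ge 0}\;
  \lambda^\top (b + C u)
\quad \text{s.t.} \quad A^\top \lambda = d
```

Strong duality turns the max-min into a single-level (bilinear) maximization, which a Benders-style scheme can then handle by iterating between a master problem and this worst-case subproblem.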
84

Optimal Risk-based Pooled Testing in Public Health Screening, with Equity and Robustness Considerations

Aprahamian, Hrayer Yaznek Berg 03 May 2018 (has links)
Group (pooled) testing, i.e., testing multiple subjects simultaneously with a single test, is essential for classifying a large population of subjects as positive or negative for a binary characteristic (e.g., presence of a disease, genetic disorder, or a product defect). While group testing is used in various contexts (e.g., screening donated blood or for sexually transmitted diseases), a lack of understanding of how an optimal grouping scheme should be designed to maximize classification accuracy under a budget constraint hampers screening efforts. We study Dorfman and Array group testing designs under subject-specific risk characteristics, operational constraints, and imperfect tests, considering classification accuracy-, efficiency-, robustness-, and equity-based objectives, and characterize important structural properties of optimal testing designs. These properties provide us with key insights, allow us to model the testing design problems as network flow problems and develop efficient algorithms, and yield insights on the equity- and robustness-versus-accuracy trade-offs. One of our models reduces to a constrained shortest path problem, for a special case of which we develop a polynomial-time algorithm. We also show that determining an optimal risk-based Dorfman testing scheme that minimizes the expected number of tests is tractable, resolving an open conjecture. Our case studies, on chlamydia screening and screening of donated blood, demonstrate the value of optimal risk-based testing designs, which are shown to be less expensive, more accurate, more equitable, and more robust than current screening practices. / Ph.D.
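To make the efficiency objective concrete, the snippet below evaluates the classical Dorfman design in its simplest textbook form: a homogeneous prevalence, equal pool sizes, and a perfect test. The dissertation's risk-based, imperfect-test models generalize this considerably; the formula and parameter values here are only an illustration.

```python
def dorfman_tests_per_subject(p: float, n: int) -> float:
    """Expected number of tests per subject under classical Dorfman testing.

    Assumes homogeneous prevalence p, pools of size n, and a perfect test:
    each pool is tested once, and only positive pools are retested
    subject by subject.
    """
    if n == 1:
        return 1.0
    prob_pool_positive = 1.0 - (1.0 - p) ** n
    return 1.0 / n + prob_pool_positive


def optimal_pool_size(p: float, max_n: int = 100) -> int:
    """Pool size minimizing the expected number of tests per subject."""
    return min(range(1, max_n + 1), key=lambda n: dorfman_tests_per_subject(p, n))


if __name__ == "__main__":
    for p in (0.001, 0.01, 0.05):
        n = optimal_pool_size(p)
        print(f"prevalence {p:.3f}: pool size {n}, "
              f"{dorfman_tests_per_subject(p, n):.3f} expected tests per subject")
```

Even this simple case exhibits the trade-off the dissertation optimizes: larger pools save tests when prevalence is low but trigger more retesting (and, with imperfect tests, more misclassification) as prevalence rises.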
85

Efficient Prevalence Estimation for Emerging and Seasonal Diseases Under Limited Resources

Nguyen, Ngoc Thu 30 May 2019 (has links)
Estimating the prevalence rate of a disease is crucial for controlling its spread, and for planning of healthcare services. Due to limited testing budgets and resources, prevalence estimation typically entails pooled, or group, testing where specimens (e.g., blood, urine, tissue swabs) from a number of subjects are combined into a testing pool, which is then tested via a single test. Testing outcomes from multiple pools are analyzed so as to assess the prevalence of the disease. The accuracy of prevalence estimation relies on the testing pool design, i.e., the number of pools to test and the pool sizes (the number of specimens to combine in a pool). Determining an optimal pool design for prevalence estimation can be challenging, as it requires prior information on the current status of the disease, which can be highly unreliable, or simply unavailable, especially for emerging and/or seasonal diseases. We develop and study frameworks for prevalence estimation, under highly unreliable prior information on the disease and limited testing budgets. Embedded into each estimation framework is an optimization model that determines the optimal testing pool design, considering the trade-off between testing cost and estimation accuracy. We establish important structural properties of optimal testing pool designs in various settings, and develop efficient and exact algorithms. Our numerous case studies, ranging from prevalence estimation of the human immunodeficiency virus (HIV) in various parts of Africa, to prevalence estimation of diseases in plants and insects, including the Tomato Spotted Wilt virus in thrips and West Nile virus in mosquitoes, indicate that the proposed estimation methods substantially outperform current approaches developed in the literature, and produce robust testing pool designs that can hedge against the uncertainty in model inputs. Our research findings indicate that the proposed prevalence estimation frameworks are capable of producing accurate prevalence estimates, and are highly desirable, especially for emerging and/or seasonal diseases under limited testing budgets. / Doctor of Philosophy / Accurately estimating the proportion of a population that has a disease, i.e., the disease prevalence rate, is crucial for controlling its spread, and for planning of healthcare services, such as disease prevention, screening, and treatment. Due to limited testing budgets and resources, prevalence estimation typically entails pooled, or group, testing where biological specimens (e.g., blood, urine, tissue swabs) from a number of subjects are combined into a testing pool, which is then tested via a single test. Testing results from the testing pools are analyzed so as to assess the prevalence of the disease. The accuracy of prevalence estimation relies on the testing pool design, i.e., the number of pools to test and the pool sizes (the number of specimens to combine in a pool). Determining an optimal pool design for prevalence estimation, e.g., the pool design that minimizes the estimation error, can be challenging, as it requires information on the current status of the disease prior to testing, which can be highly unreliable, or simply unavailable, especially for emerging and/or seasonal diseases. Examples of such diseases include, but are not limited to, Zika virus, West Nile virus, and Lyme disease. We develop and study frameworks for prevalence estimation, under highly unreliable prior information on the disease and limited testing budgets.
Embedded into each estimation framework is an optimization model that determines the optimal testing pool design, considering the trade-off between testing cost and estimation accuracy. We establish important structural properties of optimal testing pool designs in various settings, and develop efficient and exact optimization algorithms. Our numerous case studies, ranging from prevalence estimation of the human immunodeficiency virus (HIV) in various parts of Africa, to prevalence estimation of diseases in plants and insects, including the Tomato Spotted Wilt virus in thrips and West Nile virus in mosquitoes, indicate that the proposed estimation methods substantially outperform current approaches developed in the literature, and produce robust testing pool designs that can hedge against the uncertainty in model input parameters. Our research findings indicate that the proposed prevalence estimation frameworks are capable of producing accurate prevalence estimates, and are highly desirable, especially for emerging and/or seasonal diseases under limited testing budgets.
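As a baseline for how pooled outcomes translate into a prevalence estimate, the following is the standard maximum-likelihood estimator for equal-sized pools with a perfect test. It is a building block only; the frameworks above additionally optimize the pool design and account for unreliable priors and testing error.

```python
def pooled_prevalence_mle(num_pools: int, num_positive_pools: int, pool_size: int) -> float:
    """Maximum-likelihood prevalence estimate from equal-sized pooled tests.

    Assumes independent specimens and a perfect test, so a pool is positive
    iff it contains at least one positive specimen, and the fraction of
    positive pools estimates 1 - (1 - p)^pool_size.
    """
    fraction_positive = num_positive_pools / num_pools
    if fraction_positive >= 1.0:
        raise ValueError("All pools positive: prevalence is not identifiable "
                         "from this pool design.")
    return 1.0 - (1.0 - fraction_positive) ** (1.0 / pool_size)


# Example: 40 pools of 10 specimens each, of which 7 test positive.
print(round(pooled_prevalence_mle(40, 7, 10), 4))  # approximately 0.019
```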
86

Optimal Data-driven Methods for Subject Classification in Public Health Screening

Sadeghzadeh, Seyedehsaloumeh 01 July 2019 (has links)
Biomarker testing, wherein the concentration of a biochemical marker is measured to predict the presence or absence of a certain binary characteristic (e.g., a disease) in a subject, is an essential component of public health screening. For many diseases, the concentration of disease-related biomarkers may exhibit a wide range, particularly among the disease positive subjects, in part due to variations caused by external and/or subject-specific factors. Further, a subject's actual biomarker concentration is not directly observable by the decision maker (e.g., the tester), who has access only to the test's measurement of the biomarker concentration, which can be noisy. In this setting, the decision maker needs to determine a classification scheme in order to classify each subject as test negative or test positive. However, the inherent variability in biomarker concentrations and the noisy test measurements can increase the likelihood of subject misclassification. We develop an optimal data-driven framework, which integrates optimization and data analytics methodologies, for subject classification in disease screening, with the aim of minimizing classification errors. In particular, our framework utilizes data analytics methodologies to estimate the posterior disease risk of each subject, based on both subject-specific and external factors, coupled with robust optimization methodologies to derive an optimal robust subject classification scheme, under uncertainty on actual biomarker concentrations. We establish various key structural properties of optimal classification schemes, show that they are easily implementable, and develop key insights and principles for classification schemes in disease screening. As one application of our framework, we study newborn screening for cystic fibrosis in the United States. Cystic fibrosis is one of the most common genetic diseases in the United States. Early diagnosis of cystic fibrosis can substantially improve health outcomes, while a delayed diagnosis can result in severe symptoms of the disease, including fatality. We demonstrate our framework on a five-year newborn screening data set from the North Carolina State Laboratory of Public Health. Our study underscores the value of optimization-based approaches to subject classification, and shows that substantial reductions in classification error can be achieved through the use of the proposed framework over current practices. / Doctor of Philosophy / A biomarker is a measurable characteristic that is used as an indicator of a biological state or condition, such as a disease or disorder. Biomarker testing, where a biochemical marker is used to predict the presence or absence of a disease in a subject, is an essential tool in public health screening. For many diseases, related biomarkers may have a wide range of concentration among subjects, particularly among the disease positive subjects. Furthermore, biomarker levels may fluctuate based on external factors (e.g., temperature, humidity) or subject-specific characteristics (e.g., weight, race, gender). These sources of variability can increase the likelihood of subject misclassification based on a biomarker test. We develop an optimal data-driven framework, which integrates optimization and data analytics methodologies, for subject classification in disease screening, with the aim of minimizing classification errors. 
We establish various key structural properties of optimal classification schemes, show that they are easily implementable, and develop key insights and principles for classification schemes in disease screening. As one application of our framework, we study newborn screening for cystic fibrosis in the United States. Cystic fibrosis is one of the most common genetic diseases in the United States. Early diagnosis of cystic fibrosis can substantially improve health outcomes, while a delayed diagnosis can result in severe symptoms of the disease, including fatality. As a result, newborn screening for cystic fibrosis is conducted throughout the United States. We demonstrate our framework on a five-year newborn screening data set from the North Carolina State Laboratory of Public Health. Our study underscores the value of optimization-based approaches to subject classification, and shows that substantial reductions in classification error can be achieved through the use of the proposed framework over current practices.
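To ground the classification-scheme idea, the toy sketch below combines a subject-specific prior risk with a noisy biomarker measurement via Bayes' rule and thresholds the posterior. All distributions, parameter values, and cost weights are illustrative placeholders, not the dissertation's model or the North Carolina screening rules.

```python
import math


def normal_pdf(x: float, mu: float, sigma: float) -> float:
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))


def posterior_disease_risk(measurement: float, prior_risk: float,
                           mu_neg: float = 1.0, mu_pos: float = 3.0,
                           sigma: float = 0.5) -> float:
    """Posterior probability of disease given a noisy biomarker measurement.

    Illustrative assumptions: measurements are Gaussian around mu_neg for
    disease-negative subjects and mu_pos for disease-positive subjects, with
    common noise level sigma; prior_risk is the subject-specific risk
    estimated from covariates by some fitted model.
    """
    weighted_pos = prior_risk * normal_pdf(measurement, mu_pos, sigma)
    weighted_neg = (1.0 - prior_risk) * normal_pdf(measurement, mu_neg, sigma)
    return weighted_pos / (weighted_pos + weighted_neg)


def classify(measurement: float, prior_risk: float,
             cost_false_negative: float = 10.0,
             cost_false_positive: float = 1.0) -> str:
    """Label the subject test-positive when the expected cost of missing a
    true positive exceeds the expected cost of a false alarm."""
    p = posterior_disease_risk(measurement, prior_risk)
    return "positive" if p * cost_false_negative >= (1.0 - p) * cost_false_positive else "negative"


print(classify(measurement=2.4, prior_risk=0.02))  # "positive" under these toy costs
```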
87

Applications and algorithms for two-stage robust linear optimization / Applications et algorithmes pour l'optimisation linéaire robuste en deux étapes

Costa da Silva, Marco Aurelio 13 November 2018 (has links)
The research scope of this thesis is two-stage robust linear optimization. We are interested in investigating algorithms that can explore its structure, and in adding alternatives to mitigate the conservatism inherent in a robust solution. We develop algorithms that incorporate these alternatives and are customized to work with medium- and large-scale problem instances. In doing so, we take a holistic approach to conservatism in robust linear optimization and bring together the most recent advances in areas such as data-driven robust optimization, distributionally robust optimization, and adaptive robust optimization. We apply these algorithms to applications drawn from the network design/loading problem, the scheduling problem, a min-max-min combinatorial problem, and the airline fleet assignment problem. We show how the algorithms developed improve performance when compared to previous implementations.
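For context, the generic problem class named in the title can be written as follows; the notation is ours and schematic rather than any specific model from the thesis:

```latex
% Two-stage robust linear optimization (schematic).
\min_{x \in X}\; c^\top x
  \;+\; \max_{u \in \mathcal{U}}\;
        \min_{y \ge 0}\;
        \bigl\{\, d^\top y \;:\; T x + W y \ge h(u) \,\bigr\}
```

Here x is the first-stage ("here-and-now") decision, u ranges over an uncertainty set U, and y is the recourse ("wait-and-see") decision chosen after u is revealed. The data-driven, distributionally robust, and adaptive variants cited above differ mainly in how the uncertainty set (or a family of distributions over it) is constructed and in how the recourse is allowed to depend on u.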
88

Risk neutral and risk averse approaches to multistage stochastic programming with applications to hydrothermal operation planning problems

Tekaya, Wajdi 14 March 2013 (has links)
The main objective of this thesis is to investigate risk neutral and risk averse approaches to multistage stochastic programming with applications to hydrothermal operation planning problems. The purpose of hydrothermal system operation planning is to define an operation strategy which, for each stage of the planning period, given the system state at the beginning of the stage, produces generation targets for each plant. This problem can be formulated as a large scale multistage stochastic linear programming problem. The energy rationing that took place in Brazil in the period 2001/2002 raised the question of whether a policy based on the criterion of minimizing the expected cost (i.e., the risk neutral approach) is a valid one when it comes to meeting day-to-day supply requirements and taking into account severe weather conditions that may occur. The risk averse methodology provides a suitable framework to remedy these deficiencies. This thesis attempts to provide a better understanding of the risk averse methodology from a practical perspective and suggests further possible alternatives using robust optimization techniques. The questions investigated and the contributions of this thesis are as follows. First, we suggest a multiplicative autoregressive time series model for the energy inflows that can be embedded into the optimization problem that we investigate. Then, computational aspects related to the stochastic dual dynamic programming (SDDP) algorithm are discussed. We investigate the stopping criteria of the algorithm and provide a framework for assessing the quality of the policy. The SDDP method works reasonably well when the number of state variables is relatively small while the number of stages can be large. However, as the number of state variables increases, the convergence of the SDDP algorithm can become very slow. Afterwards, performance improvement techniques for the algorithm are discussed. We suggest a subroutine to eliminate redundant cutting planes in the description of the future cost functions, which yields a considerable speed-up. Also, a design using high performance computing techniques is discussed. Moreover, an analysis of the obtained policy is outlined with a focus on specific aspects of the long term operation planning problem. In the risk neutral framework, extreme events can occur and might cause considerable social costs. These costs can translate into blackouts or forced rationing, similar to what happened in the 2001/2002 crisis. Finally, issues related to the variability of the sample average approximation (SAA) problems and sensitivity to initial conditions are studied. No significant variability of the SAA problems is observed. Second, we analyze the risk averse approach and its application to the hydrothermal operation planning problem. A review of the methodology is given and a generic description of the SDDP method for coherent risk measures is presented. A detailed study of the risk averse policy is outlined for the hydrothermal operation planning problem using different risk measures. The adaptive risk averse approach is discussed under two different perspectives: one through the mean-AV@R and the other through the mean-upper-semideviation risk measure. Computational aspects for the hydrothermal system operation planning problem of the Brazilian interconnected power system are discussed, and the contributions of the risk averse methodology when compared to the risk neutral approach are presented. 
We have seen that the risk averse approach ensures a reduction in the high quantile values of the individual stage costs. This protection comes with an increase in the average policy value, the price of risk aversion. Furthermore, both risk averse approaches come with practically no extra computational effort and, similarly to the risk neutral method, there was no significant variability of the SAA problems. Finally, a methodology that combines robust and stochastic programming approaches is investigated. In many situations, such as the operation planning problem, the uncertain parameters involved can be naturally divided into two groups: for one group the robust approach makes sense, while for the other the stochastic programming approach is more appropriate. The basic ideas are discussed in the multistage setting and a formulation with the corresponding dynamic programming equations is presented. A variant of the SDDP algorithm for solving this class of problems is suggested. The contributions of this methodology are illustrated with computational experiments on the hydrothermal operation planning problem, and a comparison with the risk neutral and risk averse approaches is presented. The worst-case-expectation approach constructs a policy that is less sensitive to unexpected demand increases, with a reasonable loss on average when compared to the risk neutral method. Also, we compare the suggested method with a risk averse approach based on coherent risk measures. On the one hand, the idea behind the risk averse method is to allow a trade-off between loss on average and immunity against unexpected extreme scenarios. On the other hand, the worst-case-expectation approach consists in a trade-off between a loss on average and immunity against unanticipated demand increases. In some sense, there is a certain equivalence between the policies constructed using each of these methods.
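As a compact reference for the nested risk-averse formulation discussed above, the dynamic programming equations take roughly the following form; the notation is ours and assumes stagewise-independent uncertainty, which may differ from the exact setting of the thesis:

```latex
% Risk-averse multistage recursion with a coherent one-step risk measure rho (schematic).
Q_t(x_{t-1}, \xi_t) \;=\;
  \min_{x_t \in X_t(x_{t-1}, \xi_t)} \; c_t^\top x_t + \mathcal{Q}_{t+1}(x_t),
\qquad
\mathcal{Q}_{t+1}(x_t) \;=\; \rho\bigl[\, Q_{t+1}(x_t, \xi_{t+1}) \,\bigr]
% Mean-AV@R choice, with lambda in [0,1] trading expectation against tail risk:
\rho[Z] \;=\; (1-\lambda)\,\mathbb{E}[Z] \;+\; \lambda\,\mathrm{AV@R}_{\alpha}[Z]
```

SDDP approximates each cost-to-go function from below by cutting planes built on forward and backward passes; the redundant-cut elimination mentioned earlier prunes planes that are dominated by the others and hence never active.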
89

Water Supply System Management Design and Optimization under Uncertainty

Chung, Gunhui January 2007 (has links)
Increasing population, diminishing supplies, and variable climatic conditions can cause difficulties in meeting water demands. When a long-range water supply plan is developed to cope with future changes in water demand, accuracy and reliability are the two most important factors. To develop an accurate model, water supply systems have been represented with increasingly complicated and comprehensive structures. Future uncertainty has also been considered, to improve system reliability as well as economic feasibility. In this study, a general large-scale water supply system composed of modular components was developed in a dynamic simulation environment. Several possible scenarios were simulated in a realistic hypothetical system. In addition to water balance and quality analyses, construction and operation costs of the system components were estimated for each scenario. One set of results demonstrates that construction of small-cluster decentralized wastewater treatment systems could be more economical than a centralized plant when communities are spatially scattered or located in steep areas. The Shuffled Frog Leaping Algorithm (SFLA) is then used to minimize the total system cost of the general water supply system. Decisions comprise sizing decisions (pipe diameter, pump design capacity and head, canal capacity, and water/wastewater treatment capabilities) and flow allocations over the water supply network. An explicit representation of the energy consumption cost of operation is incorporated into the optimization of the overall system cost. Although the study water supply systems included highly nonlinear terms in the objective function and constraints, a stochastic search algorithm was applied successfully to find optimal solutions that satisfied all the constraints for the study networks. Finally, a robust optimization approach was introduced into the design process of a water supply system, as a framework for considering uncertainties in correlated future data. The approach allows control of the degree of conservatism, which is a crucial factor for system reliability and economic feasibility. System stability is guaranteed under the most uncertain conditions, and the water supply model with uncertainty is found to be a useful tool for assisting decision makers in developing future water supply schemes.
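For readers unfamiliar with the metaheuristic named above, the following is a minimal, generic sketch of the Shuffled Frog Leaping Algorithm for a box-constrained minimization; it is a textbook-style illustration under assumed parameter values and a placeholder cost function, not the dissertation's implementation.

```python
import numpy as np


def sfla_minimize(cost, dim, bounds=(-5.0, 5.0), n_frogs=30, n_memeplexes=5,
                  n_shuffles=50, n_local_steps=10, seed=0):
    """Generic Shuffled Frog Leaping Algorithm sketch (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    frogs = rng.uniform(lo, hi, size=(n_frogs, dim))
    fitness = np.array([cost(x) for x in frogs])

    for _ in range(n_shuffles):
        # Rank all frogs (best first) and deal them round-robin into memeplexes.
        order = np.argsort(fitness)
        frogs, fitness = frogs[order], fitness[order]
        global_best = frogs[0].copy()
        memeplexes = [list(range(m, n_frogs, n_memeplexes)) for m in range(n_memeplexes)]

        for idx in memeplexes:
            for _ in range(n_local_steps):
                idx.sort(key=lambda i: fitness[i])  # best ... worst within the memeplex
                best, worst = idx[0], idx[-1]
                # 1) Leap the worst frog toward the memeplex best.
                cand = np.clip(frogs[worst] + rng.uniform(0, 1, dim) * (frogs[best] - frogs[worst]), lo, hi)
                if cost(cand) >= fitness[worst]:
                    # 2) No improvement: leap toward the global best instead.
                    cand = np.clip(frogs[worst] + rng.uniform(0, 1, dim) * (global_best - frogs[worst]), lo, hi)
                if cost(cand) >= fitness[worst]:
                    # 3) Still no improvement: replace the frog with a random one.
                    cand = rng.uniform(lo, hi, dim)
                frogs[worst], fitness[worst] = cand, cost(cand)

    i_best = int(np.argmin(fitness))
    return frogs[i_best], fitness[i_best]


if __name__ == "__main__":
    # Toy stand-in for a "total system cost" to be minimized.
    x_best, f_best = sfla_minimize(lambda x: float(np.sum((x - 1.0) ** 2)), dim=4)
    print(x_best, f_best)
```

In the dissertation's setting, the decision vector would encode the sizing and flow-allocation variables described above, and the cost function would evaluate the nonlinear construction and operation costs subject to the network constraints.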
90

Investigating Robustness, Public Transport Optimization, and their Interface / Mathematical Models and Solution Algorithms

Pätzold, Julius 28 June 2019 (has links)
No description available.
