91 |
Models, algorithms, and distributional robustness in Nash games and related problems / ナッシュゲームと関連する問題におけるモデル・アルゴリズム・分布的ロバスト性
Hori, Atsushi 23 March 2023 (has links)
Kyoto University / New-system course doctorate / Doctor of Informatics / 甲第24741号 / 情博第829号 / 新制||情||139(附属図書館) / Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Nobuo Yamashita, Professor Yoshito Ohta, Professor Hiroshi Nagamochi / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
92 |
An Optimization-Based Framework for Designing Robust Cam-Based Constant-Force Compliant Mechanisms
Meaders, John Christian 11 June 2008 (has links) (PDF)
Constant-force mechanisms are mechanical devices that provide a near-constant output force over a prescribed deflection range. This thesis develops various optimization-based methods for designing robust constant-force mechanisms. The configuration of the mechanisms that are the focus of this research comprises a cam and a compliant spring fixed at one end while making contact with the cam at the other end. This configuration has proven to be an innovative solution in several applications because of its simplicity in manufacturing and operation. In this work, several methods are introduced to design these mechanisms and reduce their sensitivity to manufacturing uncertainties and frictional effects. The mechanism's sensitivity to these factors is critical in small-scale applications where manufacturing variations can be large relative to overall dimensions, and frictional forces can be large relative to the output force. The methods in this work are demonstrated on a small-scale electrical contact on the order of millimeters in size. The method identifies a design whose output force is 98.20% constant over its operational deflection range. When this design is analyzed using a Monte Carlo simulation, the standard deviation in constant-force performance is 0.76%. When compared to a benchmark design from earlier research, this represents a 34% increase in constant-force performance and a reduction in the standard deviation of performance from 1.68% to 0.76%. When this new optimal design is further optimized to reduce frictional effects, a design is identified that shows a 36% reduction in frictional energy loss while giving up 18.63% in constant-force performance.
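As a rough illustration of the robustness analysis described above, the sketch below evaluates a percent-constant force metric under Monte Carlo perturbation of design parameters. The force model, nominal parameter values, and tolerance distributions are invented stand-ins for illustration, not the thesis's cam/compliant-spring model or its data.

import numpy as np

rng = np.random.default_rng(0)

def force_profile(x, k1, k2):
    # Hypothetical stand-in for a cam/compliant-spring force model:
    # nearly flat over the stroke, perturbed by two design parameters.
    return 1.0 + k1 * (x - 0.5) + k2 * (x - 0.5) ** 2

def constancy(k1, k2, n=200):
    # "Percent constant" metric: 100 * (min force / max force) over the stroke.
    x = np.linspace(0.0, 1.0, n)
    f = force_profile(x, k1, k2)
    return 100.0 * f.min() / f.max()

# Nominal design parameters (illustrative values only).
k1_nom, k2_nom = 0.005, -0.02
print("nominal constancy: %.2f%%" % constancy(k1_nom, k2_nom))

# Monte Carlo over manufacturing tolerances, assumed Gaussian here.
samples = [
    constancy(rng.normal(k1_nom, 0.002), rng.normal(k2_nom, 0.005))
    for _ in range(5000)
]
print("mean %.2f%%, std dev %.2f%%" % (np.mean(samples), np.std(samples)))

The reported mean and standard deviation play the same role as the 98.20% and 0.76% figures quoted in the abstract, but are computed from the made-up model above.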
|
93 |
Uncertainties in Proton Therapy and Their Impact on Treatment Precision : Looking at Mechanical and Beam Alignment Uncertainties / Osäkerheter i protonterapi och dess påverkan på behandlingsprecisionen : Undersökning av mekaniska och strålstyrningsosäkerheter
Karlsson, Albin January 2022 (has links)
With the growing use and complexity of proton therapy, the safety and accuracy of the machines become increasingly important in order to deliver the prescribed dose to the target while minimizing the dose to healthy tissue. In this project, machine quality assurance data are analyzed to quantify the existing positional machine uncertainties, in the form of deviations from expected values, and their effect on dose accuracy, with the aim of improving precision. The method consisted of two main parts. In the first part, two systems were implemented to monitor the deviations measured in the machine quality assurance tests. In the second part, two ways to measure the impact of the positional machine uncertainties were developed. The monitoring systems showed that the uncertainties had shrunk over time or remained stable, and that the tolerance limits currently used for the machine quality assurance can be lowered. The measured impact of the positional machine uncertainties showed that margins of 0.61 mm for treatment room 1 and 1.02 mm for treatment room 2 were required to compensate for the machine uncertainties. When the uncertainties were incorporated into a clinically approved, robustly optimized plan, the results showed no significant change in dose to the different treatment volumes. The results give the Skandion Clinic insight and tools to minimize the impact of machine uncertainties and to improve the precision of future treatments.
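The abstract does not spell out how the margins were computed; the following sketch shows one plausible way to turn measured QA deviations into a setup margin by combining a systematic component (mean deviation) and a random component (spread). The example deviations, the weighting factors, and the recipe itself are assumptions for illustration only, not the thesis's procedure or its data.

import numpy as np

# Illustrative daily QA positional deviations (mm) for two treatment rooms; values are made up.
qa_room1 = np.array([0.12, -0.08, 0.20, 0.05, -0.15, 0.10, 0.02, -0.05])
qa_room2 = np.array([0.35, -0.25, 0.40, 0.10, -0.30, 0.22, 0.15, -0.20])

def setup_margin(deviations, k_sys=2.0, k_rand=0.7):
    # Treat the mean deviation as a systematic component and the spread as a
    # random component, then combine them with weighting factors. Both the
    # weights and the linear recipe are hypothetical choices for this sketch.
    systematic = abs(deviations.mean())
    random = deviations.std(ddof=1)
    return k_sys * systematic + k_rand * random

print("room 1 margin: %.2f mm" % setup_margin(qa_room1))
print("room 2 margin: %.2f mm" % setup_margin(qa_room2))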
|
94 |
The Rational Investor is a Bayesian
Qu, Jiajun January 2022 (has links)
The concept of portfolio optimization has been widely studied in academia and implemented in the financial markets since its introduction by Markowitz 70 years ago. The problem of input uncertainty in the mean-variance optimization framework has been one of the foci of previous research. In this study, several models (linear shrinkage and Black-Litterman) based on Bayesian approaches are studied to improve the estimation of inputs. Moreover, a new framework based on robust optimization is presented to mitigate the input uncertainty further. An out-of-sample test is specially designed, and the results show that the Bayesian models in this study can improve the optimization results in terms of higher Sharpe ratios (the quotient between portfolio returns and their risks). Both covariance matrix estimators based on the linear shrinkage method contain less error and provide better optimization results, i.e., higher Sharpe ratios. The Black-Litterman model with a proper choice of inputs can significantly improve the portfolio return. The new framework based on the combination of shrinkage estimators, Black-Litterman, and robust optimization presents a better way for portfolio optimization than the classical framework of mean-variance optimization.
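A minimal sketch of one piece of the pipeline described above: a linear-shrinkage covariance estimate feeding a mean-variance weight rule, evaluated by the Sharpe ratio. The simulated returns, the fixed shrinkage intensity, and the gross-exposure normalization are illustrative assumptions; the Ledoit-Wolf estimator, Black-Litterman views, and robust-optimization layer studied in the thesis are not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
# Simulated daily returns for 5 assets (placeholder for real market data).
returns = rng.normal(0.0004, 0.01, size=(500, 5))

def shrink_covariance(sample_cov, delta=0.3):
    # Linear shrinkage toward a scaled identity target; delta is the shrinkage
    # intensity (fixed here, whereas Ledoit-Wolf estimates it from the data).
    target = np.eye(sample_cov.shape[0]) * np.trace(sample_cov) / sample_cov.shape[0]
    return (1 - delta) * sample_cov + delta * target

mu = returns.mean(axis=0)
cov = shrink_covariance(np.cov(returns, rowvar=False))

# Unconstrained mean-variance weights (proportional to inv(cov) @ mu),
# normalized to unit gross exposure to keep the example numerically stable.
w = np.linalg.solve(cov, mu)
w = w / np.abs(w).sum()

port = returns @ w
sharpe = port.mean() / port.std(ddof=1) * np.sqrt(252)  # annualized, zero risk-free rate
print("weights:", np.round(w, 3), "annualized Sharpe: %.2f" % sharpe)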
|
95 |
Distributionally Robust Learning under the Wasserstein Metric
Chen, Ruidi 29 September 2019 (links omitted) (has links)
This dissertation develops a comprehensive statistical learning framework that is robust to (distributional) perturbations in the data using Distributionally Robust Optimization (DRO) under the Wasserstein metric. The learning problems that are studied include: (i) Distributionally Robust Linear Regression (DRLR), which estimates a robustified linear regression plane by minimizing the worst-case expected absolute loss over a probabilistic ambiguity set characterized by the Wasserstein metric; (ii) Groupwise Wasserstein Grouped LASSO (GWGL), which aims at inducing sparsity at a group level when there exists a predefined grouping structure for the predictors, through defining a specially structured Wasserstein metric for DRO; (iii) Optimal decision making using DRLR-informed K-Nearest Neighbors (K-NN) estimation, which selects among a set of actions the optimal one through predicting the outcome under each action using K-NN with a distance metric weighted by the DRLR solution; and (iv) Distributionally Robust Multivariate Learning, which solves a DRO problem with a multi-dimensional response/label vector, as in Multivariate Linear Regression (MLR) and Multiclass Logistic Regression (MLG), generalizing the univariate response model addressed in DRLR. A tractable DRO relaxation is derived for each problem, establishing a connection between robustness and regularization and yielding upper bounds on the prediction and estimation errors of the solution. The accuracy and robustness of the estimators are verified through a series of synthetic and real data experiments. The experiments with real data are all associated with various health informatics applications, an application area which motivated the work in this dissertation. In addition to estimation (regression and classification), this dissertation also considers outlier detection applications.
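The robustness-regularization connection mentioned above can be sketched for the DRLR case: under a suitable Wasserstein metric, the worst-case expected absolute loss admits a relaxation of the form "empirical L1 loss plus a norm penalty on the regression coefficients". The code below fits such a regularized surrogate on synthetic data; the data, the Wasserstein radius, and the particular penalty norm are illustrative assumptions rather than the dissertation's exact formulation.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, d = 200, 5
X = rng.normal(size=(n, d))
beta_true = np.array([1.5, -2.0, 0.0, 0.5, 0.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)  # heavy-tailed noise

eps = 0.1  # Wasserstein radius (illustrative)

def dro_objective(beta):
    # Worst-case expected absolute loss relaxed to: empirical L1 loss + eps * ||beta||.
    # The exact dual norm in the penalty depends on the metric chosen in the DRO model.
    return np.mean(np.abs(y - X @ beta)) + eps * np.linalg.norm(beta, ord=2)

res = minimize(dro_objective, x0=np.zeros(d), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-8})
print("robustified coefficients:", np.round(res.x, 2))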
|
96 |
Robust optimization for portfolio risk : a revisit of worst-case risk management procedures after Basel III award.
Özün, Alper January 2012 (has links)
The main purpose of this thesis is to develop methodological and practical improvements to robust portfolio optimization procedures. Firstly, the thesis discusses the drawbacks of classical mean-variance optimization models, and examines robust portfolio optimization procedures with CVaR and worst-case CVaR risk models by providing a clear presentation of the derivation of robust optimization models from a basic VaR model. For practical purposes, the thesis introduces an open source software interface called "RobustRisk", which is developed for producing empirical evidence for the robust portfolio optimization models. The software, which performs Monte Carlo simulation and out-of-sample performance evaluation for the portfolio optimization, is illustrated using hypothetical portfolio data from selected emerging markets. In addition, the performance of robust portfolio optimization procedures is discussed by providing empirical evidence from advanced markets in the crisis period. Empirical results show that robust optimization with the worst-case CVaR model outperforms the nominal CVaR model in the crisis period. The empirical results encourage us to construct a forward-looking stress test procedure based on robust portfolio optimization under regime switches. For this purpose, a Markov chain process is embedded into the robust optimization procedure in order to stress the regime transition matrix. In addition, asset returns, volatilities, the correlation matrix and the covariance matrix can be stressed under pre-defined scenario expectations. An application is provided with a hypothetical portfolio representing an internationally diversified portfolio. The CVaR efficient frontier and corresponding optimized portfolio weights are obtained under regime-switch scenarios. The research suggests that stressed-CVaR optimization provides a robust and forward-looking stress test procedure to comply with the regulatory requirements stated in Basel II and CRD regulations.
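For context, the nominal CVaR model that the worst-case CVaR procedure is benchmarked against can be written as a linear program in the Rockafellar-Uryasev form. The sketch below solves it on simulated scenarios; the scenario data and parameters are placeholders, and neither the worst-case CVaR extension nor the "RobustRisk" interface is reproduced here.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, d = 1000, 4                       # scenarios x assets
r = rng.normal(0.001, 0.02, (N, d))  # simulated return scenarios (placeholder data)
beta = 0.95                          # CVaR confidence level

# Decision vector: [w (d portfolio weights), alpha, u (N auxiliary excess losses)]
c = np.concatenate([np.zeros(d), [1.0], np.full(N, 1.0 / ((1 - beta) * N))])

# u_i >= -r_i @ w - alpha   <=>   -r_i @ w - alpha - u_i <= 0
A_ub = np.hstack([-r, -np.ones((N, 1)), -np.eye(N)])
b_ub = np.zeros(N)

# Budget constraint: weights sum to one (long-only via the bounds below).
A_eq = np.concatenate([np.ones(d), [0.0], np.zeros(N)]).reshape(1, -1)
b_eq = [1.0]
bounds = [(0, None)] * d + [(None, None)] + [(0, None)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
w, cvar = res.x[:d], res.fun
print("weights:", np.round(w, 3), "minimized CVaR of loss: %.4f" % cvar)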
|
97 |
Design Optimization and Plan Optimization for Particle Beam Therapy Systems / 粒子線治療システムを対象とした設計・計画最適化
Sakamoto, Yusuke 23 January 2024 (has links)
Kyoto University / New-system course doctorate / Doctor of Engineering / 甲第25013号 / 工博第5190号 / 新制||工||1991(附属図書館) / Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor Kazuhiro Izui, Professor Masaharu Komori, Professor Yasuhiro Inoue / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
|
98 |
A robust optimization approach for active and reactive power management in smart distribution networks using electric vehicles
Pirouzi, S., Agahaei, J., Latify, M.A., Yousefi, G.R., Mokryani, Geev 07 July 2017 (has links)
Yes / This paper presents a robust framework for active and reactive power management in distribution networks using electric vehicles (EVs). The method simultaneously minimizes the energy cost and the voltage deviation subject to network and EV constraints. The uncertainties related to active and reactive loads, the required energy to charge EV batteries, the charge rate of batteries, and the charger capacity of EVs are modeled using deterministic uncertainty sets. Firstly, based on duality theory, the max-min form of the model is converted to a max form. Secondly, Benders decomposition is employed to solve the problem. The effectiveness of the proposed method is demonstrated on a 33-bus distribution network.
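The duality step mentioned above can be illustrated schematically. For a generic inner linear program (not the paper's exact formulation), strong duality gives

\[ \max_{u \in \mathcal{U}} \; \min_{x \ge 0} \left\{ c^\top x \;:\; A x \ge b - B u \right\} \;=\; \max_{u \in \mathcal{U},\, \lambda \ge 0} \left\{ (b - B u)^\top \lambda \;:\; A^\top \lambda \le c \right\}, \]

so the max-min collapses to a single maximization over \( (u, \lambda) \), which Benders decomposition can then attack by iterating between a master problem in the first-stage decisions and this worst-case subproblem.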
|
99 |
Efficient Prevalence Estimation for Emerging and Seasonal Diseases Under Limited Resources
Nguyen, Ngoc Thu 30 May 2019 (has links)
Estimating the prevalence rate of a disease is crucial for controlling its spread, and for planning of healthcare services. Due to limited testing budgets and resources, prevalence estimation typically entails pooled, or group, testing where specimens (e.g., blood, urine, tissue swabs) from a number of subjects are combined into a testing pool, which is then tested via a single test. Testing outcomes from multiple pools are analyzed so as to assess the prevalence of the disease. The accuracy of prevalence estimation relies on the testing pool design, i.e., the number of pools to test and the pool sizes (the number of specimens to combine in a pool). Determining an optimal pool design for prevalence estimation can be challenging, as it requires prior information on the current status of the disease, which can be highly unreliable, or simply unavailable, especially for emerging and/or seasonal diseases.
We develop and study frameworks for prevalence estimation, under highly unreliable prior information on the disease and limited testing budgets. Embedded into each estimation framework is an optimization model that determines the optimal testing pool design, considering the trade-off between testing cost and estimation accuracy. We establish important structural properties of optimal testing pool designs in various settings, and develop efficient and exact algorithms. Our numerous case studies, ranging from prevalence estimation of the human immunodeficiency virus (HIV) in various parts of Africa, to prevalence estimation of diseases in plants and insects, including the Tomato Spotted Wilt virus in thrips and West Nile virus in mosquitoes, indicate that the proposed estimation methods substantially outperform current approaches developed in the literature, and produce robust testing pool designs that can hedge against the uncertainty in model inputs. Our research findings indicate that the proposed prevalence estimation frameworks are capable of producing accurate prevalence estimates, and are highly desirable, especially for emerging and/or seasonal diseases under limited testing budgets. / Doctor of Philosophy / Accurately estimating the proportion of a population that has a disease, i.e., the disease prevalence rate, is crucial for controlling its spread, and for planning of healthcare services, such as disease prevention, screening, and treatment. Due to limited testing budgets and resources, prevalence estimation typically entails pooled, or group, testing where biological specimens (e.g., blood, urine, tissue swabs) from a number of subjects are combined into a testing pool, which is then tested via a single test. Testing results from the testing pools are analyzed so as to assess the prevalence of the disease. The accuracy of prevalence estimation relies on the testing pool design, i.e., the number of pools to test and the pool sizes (the number of specimens to combine in a pool). Determining an optimal pool design for prevalence estimation, e.g., the pool design that minimizes the estimation error, can be challenging, as it requires information on the current status of the disease prior to testing, which can be highly unreliable, or simply unavailable, especially for emerging and/or seasonal diseases. Examples of such diseases include, but are not limited to, Zika virus, West Nile virus, and Lyme disease. We develop and study frameworks for prevalence estimation, under highly unreliable prior information on the disease and limited testing budgets. Embedded into each estimation framework is an optimization model that determines the optimal testing pool design, considering the trade-off between testing cost and estimation accuracy. We establish important structural properties of optimal testing pool designs in various settings, and develop efficient and exact optimization algorithms. Our numerous case studies, ranging from prevalence estimation of the human immunodeficiency virus (HIV) in various parts of Africa, to prevalence estimation of diseases in plants and insects, including the Tomato Spotted Wilt virus in thrips and West Nile virus in mosquitoes, indicate that the proposed estimation methods substantially outperform current approaches developed in the literature, and produce robust testing pool designs that can hedge against the uncertainty in model input parameters.
Our research findings indicate that the proposed prevalence estimation frameworks are capable of producing accurate prevalence estimates, and are highly desirable, especially for emerging and/or seasonal diseases under limited testing budgets.
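As a simple illustration of how pooled outcomes translate into a prevalence estimate, the sketch below computes the classical maximum-likelihood estimator for equal-sized pools and a perfect assay. The pool size, number of pools, and simulated prevalence are arbitrary, and the dissertation's setting (unreliable priors, budget constraints, optimized pool designs) is far more general than this toy case.

import numpy as np

def pooled_mle(num_positive_pools, num_pools, pool_size):
    # MLE of prevalence p from pooled tests with a perfect assay:
    # P(pool positive) = 1 - (1 - p)^s, so p_hat = 1 - (1 - k/n)^(1/s).
    phat_pool = num_positive_pools / num_pools
    return 1.0 - (1.0 - phat_pool) ** (1.0 / pool_size)

# Simulated check: true prevalence 2%, pools of size 10, 500 pools.
rng = np.random.default_rng(4)
p_true, s, n = 0.02, 10, 500
specimens = rng.random((n, s)) < p_true        # True marks a positive specimen
k = int(np.sum(specimens.any(axis=1)))         # a pool is positive if any specimen is
print("positive pools:", k, "estimated prevalence: %.4f" % pooled_mle(k, n, s))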
|
100 |
Optimal Data-driven Methods for Subject Classification in Public Health Screening
Sadeghzadeh, Seyedehsaloumeh 01 July 2019 (has links)
Biomarker testing, wherein the concentration of a biochemical marker is measured to predict the presence or absence of a certain binary characteristic (e.g., a disease) in a subject, is an essential component of public health screening. For many diseases, the concentration of disease-related biomarkers may exhibit a wide range, particularly among the disease positive subjects, in part due to variations caused by external and/or subject-specific factors. Further, a subject's actual biomarker concentration is not directly observable by the decision maker (e.g., the tester), who has access only to the test's measurement of the biomarker concentration, which can be noisy. In this setting, the decision maker needs to determine a classification scheme in order to classify each subject as test negative or test positive. However, the inherent variability in biomarker concentrations and the noisy test measurements can increase the likelihood of subject misclassification.
We develop an optimal data-driven framework, which integrates optimization and data analytics methodologies, for subject classification in disease screening, with the aim of minimizing classification errors. In particular, our framework utilizes data analytics methodologies to estimate the posterior disease risk of each subject, based on both subject-specific and external factors, coupled with robust optimization methodologies to derive an optimal robust subject classification scheme, under uncertainty on actual biomarker concentrations. We establish various key structural properties of optimal classification schemes, show that they are easily implementable, and develop key insights and principles for classification schemes in disease screening.
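A hedged sketch of the two ingredients named above, a posterior disease-risk estimate and a cost-based classification rule, is given below. The Gaussian biomarker distributions, their parameters, and the misclassification costs are hypothetical stand-ins, not the dissertation's estimated quantities or its robust optimization scheme.

import numpy as np
from scipy.stats import norm

def posterior_risk(measurement, prior, mu_neg=1.0, mu_pos=3.0, sigma=0.5):
    # Posterior probability of disease given a noisy biomarker measurement,
    # assuming (purely for illustration) Gaussian biomarker distributions for
    # disease-negative and disease-positive subjects.
    like_pos = norm.pdf(measurement, mu_pos, sigma)
    like_neg = norm.pdf(measurement, mu_neg, sigma)
    return prior * like_pos / (prior * like_pos + (1 - prior) * like_neg)

def classify(measurement, prior, c_fn=10.0, c_fp=1.0):
    # Classify test-positive when the expected cost of a false negative exceeds
    # that of a false positive; the costs here are hypothetical.
    risk = posterior_risk(measurement, prior)
    return "positive" if risk * c_fn >= (1 - risk) * c_fp else "negative"

for m in (1.2, 2.0, 2.8):
    print(m, classify(m, prior=0.01))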
As one application of our framework, we study newborn screening for cystic fibrosis in the United States. Cystic fibrosis is one of the most common genetic diseases in the United States. Early diagnosis of cystic fibrosis can substantially improve health outcomes, while a delayed diagnosis can result in severe symptoms of the disease, including fatality. We demonstrate our framework on a five-year newborn screening data set from the North Carolina State Laboratory of Public Health. Our study underscores the value of optimization-based approaches to subject classification, and shows that substantial reductions in classification error can be achieved through the use of the proposed framework over current practices. / Doctor of Philosophy / A biomarker is a measurable characteristic that is used as an indicator of a biological state or condition, such as a disease or disorder. Biomarker testing, where a biochemical marker is used to predict the presence or absence of a disease in a subject, is an essential tool in public health screening. For many diseases, related biomarkers may have a wide range of concentration among subjects, particularly among the disease positive subjects. Furthermore, biomarker levels may fluctuate based on external factors (e.g., temperature, humidity) or subject-specific characteristics (e.g., weight, race, gender). These sources of variability can increase the likelihood of subject misclassification based on a biomarker test.
We develop an optimal data-driven framework, which integrates optimization and data analytics methodologies, for subject classification in disease screening, with the aim of minimizing classification errors. We establish various key structural properties of optimal classification schemes, show that they are easily implementable, and develop key insights and principles for classification schemes in disease screening.
As one application of our framework, we study newborn screening for cystic fibrosis in the United States. Cystic fibrosis is one of the most common genetic diseases in the United States. Early diagnosis of cystic fibrosis can substantially improve health outcomes, while a delayed diagnosis can result in severe symptoms of the disease, including fatality. As a result, newborn screening for cystic fibrosis is conducted throughout the United States. We demonstrate our framework on a five-year newborn screening data set from the North Carolina State Laboratory of Public Health. Our study underscores the value of optimization-based approaches to subject classification, and shows that substantial reductions in classification error can be achieved through the use of the proposed framework over current practices.
|