31

Robust optimization and machine learning tools for adaptive transmission in wireless networks

Yun, Sung-Ho 01 February 2012 (has links)
Current and emerging wireless systems require adaptive transmission to improve throughput, meet QoS requirements, and maintain robust performance. However, finding the optimal transmit parameters is becoming more difficult due to the growing number of wireless devices that share the wireless medium and the increasing dimensionality of the transmit parameters, e.g., the frequency, time, and spatial domains. The performance of adaptive transmission policies derived from a given set of measurements degrades when the environment changes; the policies need to either build in protection against those changes or tune themselves accordingly. Moreover, an adaptation scheme for systems that exploit transmit diversity with fine-grained resource allocation is hard to devise because of the prohibitively large number of explicit and implicit environmental variables to take into account, and solutions to simplified versions of the problem often fail due to incorrect assumptions and approximations. In this dissertation, we propose two tools for adaptive transmission in changing, complex environments. We show that adjustable robust optimization builds protection into adaptive resource allocation in interference-limited cellular broadband systems while retaining the flexibility to tune it according to temporally changing demand. The second tool is based on a data-driven approach using support vectors. We develop adaptive transmission policies that select the right set of transmit parameters in MIMO-OFDM wireless systems. While we do not explicitly consider all of the relevant parameters, the learning-based algorithms implicitly take them into account and produce adaptation policies that fit the given environment. We extend the results to multicast traffic and show that a distributed algorithm combined with the data-driven approach increases system performance while keeping the overhead required for information exchange bounded.
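As an illustration of the kind of data-driven policy this abstract describes, the following minimal sketch (not the dissertation's code; the feature names, labels, and use of scikit-learn are assumptions) trains a support vector classifier to map channel measurements to a transmit configuration:

```python
# Illustrative sketch only: learn a mapping from channel measurements to a
# transmit configuration with a support vector classifier, in the spirit of
# the data-driven approach described above.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical training data: each row holds per-link features such as
# average SNR and delay spread; each label is the index of the transmit
# configuration (e.g., MCS / MIMO mode) that performed best offline.
X = rng.normal(size=(500, 2))                  # [avg_snr_db, delay_spread]
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in labels

policy = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
policy.fit(X, y)

# At run time, the learned policy picks a configuration from fresh measurements.
new_measurement = np.array([[1.2, -0.3]])
print("selected transmit configuration:", policy.predict(new_measurement)[0])
```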
32

A Quick-and-Dirty Approach to Robustness in Linear Optimization

Karimi, Mehdi January 2012 (has links)
We introduce methods for dealing with linear programming (LP) problems with uncertain data, using the notion of weighted analytic centers. Our methods are based on close interaction with the decision maker (DM) and seek solutions that satisfy most of the DM's important criteria and goals. Starting from the drawbacks of existing methods for dealing with uncertainty in LP, we explain how our methods improve on most of them. We prove that, besides many practical advantages, our approach is theoretically as strong as robust optimization. Interactive cutting-plane algorithms are developed for concave and quasi-concave utility functions. We present probabilistic bounds on feasibility and evaluate our approach by means of computational experiments.
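For reference, the weighted analytic center on which these methods build is the standard object defined, for a polytope {x : a_i^T x <= b_i, i = 1, ..., m} and positive weights w_i, as the maximizer of a weighted log-barrier (standard definition, not the thesis's own notation):

```latex
x_{\mathrm{AC}}(w) \;=\; \arg\max_{x} \; \sum_{i=1}^{m} w_i \,\ln\!\big(b_i - a_i^{\top} x\big).
```

Varying the weights w moves the center around the interior of the feasible region, which is what allows an interactive procedure to steer solutions toward the DM's preferences.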
33

A comparative simulation study of robust estimators of standard errors

Johnson, Natalie, January 2007 (has links) (PDF)
Project (M.S.)--Brigham Young University. Dept. of Statistics, 2007. / Includes bibliographical references (p. 57-59).
34

An optimization approach to plant-controller co-design

Russell, Jared S. January 2009 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2009. / Typescript. Includes bibliographical references (leaves 74-76).
35

Mathematical optimization techniques for cognitive radar networks

Rossetti, Gaia January 2018 (has links)
This thesis discusses mathematical optimization techniques for waveform design in cognitive radars. These techniques have been designed with an increasing level of sophistication, starting from a bistatic model (i.e. two transmitters and a single receiver) and ending with a cognitive network (i.e. multiple transmitting and multiple receiving radars). The environment under investigation always features strong signal-dependent clutter and noise. All algorithms are based on an iterative waveform-filter optimization. The waveform optimization is based on convex optimization techniques and the exploitation of initial radar waveforms characterized by desired auto- and cross-correlation properties. Finally, robust optimization techniques are introduced to account for the assumptions made by cognitive radars on certain second-order statistics, such as the covariance matrix of the clutter. More specifically, initial optimization techniques were proposed for the case of bistatic radars. By maximizing the signal to interference and noise ratio (SINR) under certain constraints on the transmitted signals, it was possible to iteratively optimize both the orthogonal transmission waveforms and the receiver filter. Subsequently, the above work was extended to a convex optimization framework for a waveform design technique for bistatic radars where both radars transmit and receive to detect targets. The method exploited prior knowledge of the environment to maximize the accumulated target return signal power while keeping the disturbance power to unity at both radar receivers. The thesis further proposes convex optimization based waveform designs for multiple input multiple output (MIMO) based cognitive radars. All radars within the system are able to both transmit and receive signals for detecting targets. The proposed model investigated two complementary optimization techniques. The first aims at optimizing the signal to interference and noise ratio (SINR) of a specific radar while keeping the SINR of the remaining radars at desired levels. The second optimizes the SINR of all radars using a max-min optimization criterion. To account for possible mismatches between actual parameters and estimated ones, this thesis includes robust optimization techniques. Initially, the multistatic, signal-dependent model was tested against existing worst-case and probabilistic methods. These methods proved overly conservative and generic for the considered signal-dependent clutter scenario. Therefore, a new approach was derived in which uncertainty is assumed directly on the radar cross-section and Doppler parameters of the clutter. Approximations based on Taylor series were invoked to make the optimization problem convex and subsequently determine robust waveforms with specific SINR outage constraints. Finally, this thesis introduces robust optimization techniques for through-the-wall radars. These are also cognitive but rely on different optimization techniques from the ones previously discussed. By noting the similarities between the minimum variance distortionless response (MVDR) problem and the matched-illumination one, this thesis introduces robust optimization techniques that consider uncertainty in environment-related parameters. Various performance analyses demonstrate the effectiveness of all the above algorithms in providing a significant increase in SINR in an environment affected by very strong clutter and noise.
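The max-min design mentioned above can be summarized schematically as follows (the notation is assumed for illustration and may differ from the thesis): with transmit waveforms s_k, receive filters w_k, and output SINR_k at radar k,

```latex
\max_{\{s_k\},\,\{w_k\}} \;\; \min_{k = 1,\dots,K} \;\; \mathrm{SINR}_k\big(\{s_j\}_{j=1}^{K},\, w_k\big)
\qquad \text{subject to} \qquad \|s_k\|_2^2 \le P_k, \quad k = 1,\dots,K,
```

where P_k is the power budget of radar k. Per the abstract, such problems are tackled by an iterative waveform-filter optimization, alternating between the filters (with waveforms fixed) and the waveforms (with filters fixed).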
36

Robust optimization for discrete structures and non-linear impact of uncertainty

Espinoza García, Juan Carlos 28 September 2017 (has links)
The objective of this thesis is to propose effective solutions to decision problems that affect the lives of citizens and that rely on uncertain data. On the application side, we study two location problems that have an impact on public space, namely the location of new housing and the location of mobile vendors in urban space. Location problems are not a recent topic in the literature; however, for these two problems, which rely on choice models of consumer purchasing behavior, the uncertainty in the model gives rise to a special case that extends the Robust Optimization literature. The contributions of this thesis can be applied to a variety of generic optimization problems. / We address decision problems under uncertain information with non-linear structures of parameter variation, and devise solution methods in the spirit of Bertsimas and Sim's Γ-Robustness approach. Furthermore, although the non-linear impact of uncertainty often introduces discrete structures into the problem, we provide, for tractability, the conditions under which the complexity class of the nominal model is preserved for the robust counterpart. We extend the Γ-Robustness approach in three directions. First, we propose a generic case of non-linear impact of parameter variation and model it with a piecewise-linear approximation of the impact function. We show that the subproblem of determining the worst-case variation can be dualized despite the discrete structure of the piecewise function. Next, we build a robust model for the location of new housing, where the non-linearity is introduced by a choice model, and propose a solution combining Γ-Robustness with a scenario-based approach. We show that the subproblem is tractable and leads to a linear formulation of the robust problem. Finally, we model the demand in a location problem through a Poisson process, which, when demands are uncertain, induces non-linear structures of parameter variation. We propose the concept of Nested Uncertainty Budgets to manage uncertainty tractably through a hierarchical structure and, under this framework, obtain a subproblem that includes both continuous and discrete deviation variables.
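For context, the classical Γ-robust counterpart of Bertsimas and Sim, which this work extends to non-linear impact functions, protects a single linear constraint Σ_j a_j x_j ≤ b, with each coefficient varying in [ā_j − â_j, ā_j + â_j] and at most Γ coefficients deviating, via the equivalent linear system (standard formulation, not specific to this thesis):

```latex
\sum_{j} \bar{a}_j x_j \;+\; \Gamma z \;+\; \sum_{j} p_j \;\le\; b, \qquad
z + p_j \;\ge\; \hat{a}_j\, y_j, \qquad
-\,y_j \;\le\; x_j \;\le\; y_j, \qquad
z,\; p_j,\; y_j \;\ge\; 0 \quad \forall j,
```

obtained by dualizing the inner worst-case maximization over the deviating coefficients; the thesis studies how far this dualization survives when the impact of a deviation on the constraint is non-linear.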
37

Context Informed Statistics in Two Cases: Age Standardization and Risk Minimization

Lin, Zihan 24 October 2018 (has links)
When faced with death counts stratified by age, analysts often calculate a crude mortality rate (CMR) as a single summary measure, obtained by simply dividing total death counts by total population counts. However, the crude mortality rate is not appropriate for comparing different populations, because of the significant impact of age on mortality and the possibility that different populations have different age structures. While a set of age-adjustment methods seeks to collapse age-specific mortality rates into a single measure that is free from the confounding effect of age structure, we focus on one of these methods, called direct age standardization, which summarizes and compares age-specific mortality rates by adopting a reference population. While qualitative insights related to age standardization are often discussed, we seek to approximate the age-standardized mortality rate of a population from the corresponding CMR and the 90th quantile of its population distribution. This approximation is most useful when age-specific mortality data are unavailable. In addition, we provide quantitative insights related to age standardization. We derive our model from mathematical insights drawn from the explication of exact calculations and validate it using empirical data for a large number of countries under a large number of circumstances. We also extend our approximation model to other age-standardized mortality indicators, such as the cause-specific mortality rate and potential years of life lost. In the second part of the thesis, we consider the formulation of a general risk management procedure in which risk needs to be measured and further mitigated. The formulation admits an optimization representation and requires as input distributional information about the underlying risk factors. Unfortunately, for most risk factors it is difficult to identify their distribution in full detail, and, more problematically, the risk management procedure can be prone to errors in the input distribution. In particular, one of the most important pieces of distributional information is the covariance, which captures the spread of and correlation among risk factors. We study the issue of covariance uncertainty in the problem of mitigating tail risk and, by admitting an uncertainty set for the covariance of the risk factors, propose a robust optimization model that minimizes risk for the worst-case scenario, especially when data are insufficient and the number of risk factors is large. We then transform our model into a computationally solvable one and test it using real-world data.
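The two rates contrasted in the first part can be illustrated with a small, made-up example (the numbers below are hypothetical and only show the arithmetic of crude versus directly age-standardized rates):

```python
# Illustrative calculation (not from the thesis): crude vs. directly
# age-standardized mortality rates for a small, made-up population.
age_groups  = ["0-14", "15-64", "65+"]
deaths      = [    40,     300,  2600]   # deaths in the study population
population  = [200000,  600000, 200000]  # person-years in the study population
ref_weights = [  0.25,    0.60,   0.15]  # reference population age shares (sum to 1)

# Crude mortality rate: total deaths divided by total population.
cmr = sum(deaths) / sum(population)

# Direct age standardization: weight each age-specific rate by the
# reference population's age structure.
age_specific_rates = [d / p for d, p in zip(deaths, population)]
asmr = sum(w * r for w, r in zip(ref_weights, age_specific_rates))

print(f"crude rate: {cmr * 1000:.2f} per 1,000")             # 2.94 per 1,000
print(f"age-standardized rate: {asmr * 1000:.2f} per 1,000")  # 2.30 per 1,000
```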
38

Optimizing Surgical Scheduling Through Integer Programming and Robust Optimization

Geranmayeh, Shirin January 2015 (has links)
This thesis proposes and verifies a number of optimization models for re-designing a master surgery schedule with minimized peak inpatient load at the ward. All models include limitations on operating room and surgeon availability. Surgeons' preference for a consistent weekly schedule over a cycle is included. The uncertainty in patients' length of stay was incorporated using discrete probability distributions unique to each surgeon. Furthermore, robust optimization was used to protect against uncertainty in the number of inpatients a surgeon may send to the ward per block. Different scenarios were developed to explore the impact of varying the availability of operating rooms on each day of the week. The models were solved using CPLEX and were verified with an Arena simulation model.
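One schematic way to write the peak-ward-load objective described above (an assumed formulation for illustration; the thesis's models may differ): with x_{s,d} = 1 if surgeon s is assigned a block on day d, n_s the number of inpatients surgeon s sends to the ward per block, and p_s(k) the probability, from the surgeon-specific length-of-stay distribution, that such a patient is still in the ward k days after surgery,

```latex
\min \; W_{\max}
\qquad \text{subject to} \qquad
\sum_{s} \sum_{k \ge 0} n_s \, p_s(k)\, x_{s,\, d-k} \;\le\; W_{\max} \quad \text{for every day } d,
```

so that minimizing W_max flattens the expected ward census over the scheduling cycle.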
39

Optimization-based approaches to non-parametric extreme event estimation

Mottet, Clementine Delphine Sophie 09 October 2018 (has links)
Modeling extreme events is one of the central tasks in risk management and planning, as catastrophes and crises put human lives and financial assets at stake. A common approach to estimating the likelihood of extreme events, based on extreme value theory (EVT), studies the asymptotic behavior of the "tail" portion of the data and suggests suitable parametric distributions to fit the data, backed up by their limiting behavior as the data size or the excess threshold grows. We explore an alternative approach to estimating extreme events that is inspired by recent advances in robust optimization. Our approach represents information about tail behavior as constraints and estimates a target extremal quantity of interest (e.g., the tail probability above a given high level) by posing an optimization problem that finds a conservative estimate subject to constraints encoding the tail information, which capture beliefs about the tail's distributional shape. We first study programs in which the feasible region is restricted to distribution functions with convex tail densities, a feature shared by all common parametric tail distributions. We then extend our work by generalizing the feasible region to distribution functions with monotone derivatives and bounded or infinite moments. In both cases, we study the statistical implications of the resulting optimization problems. By investigating their optimality structure, we also show how the worst-case tail generally behaves as a linear combination of polynomially decaying tails. Numerically, we develop results that reduce these optimization problems to tractable forms amenable to linear-programming-based solution schemes.
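Schematically, the conservative estimates described above arise as optimal values of problems of the following form (an illustrative formulation; the thesis's moment and shape constraints are richer): with a the threshold beyond which the tail is modeled, u > a the level of interest, and ν_0, ν_1 the known mass and first moment of the tail,

```latex
\max_{f \,\ge\, 0} \;\; \int_{u}^{\infty} f(x)\, dx
\qquad \text{subject to} \qquad
\int_{a}^{\infty} f(x)\, dx = \nu_0, \quad
\int_{a}^{\infty} x\, f(x)\, dx = \nu_1, \quad
f \ \text{convex on } [a, \infty),
```

i.e. the worst-case tail probability over all densities consistent with the encoded tail information.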
40

Robust mixed integer linear programming: application to the design of a hybrid system for electricity production

Poirion, Pierre-Louis 17 December 2013 (has links)
In this thesis, we are interested in robust optimization. More precisely, we focus on two-stage mixed-integer linear problems, that is, problems in which the decision process is divided into two parts: first, the optimal values of the so-called "decision" variables are computed; then, once the uncertainty on the data is revealed, the values of the so-called "recourse" variables are computed. In this thesis, we restrict ourselves to the case where the second-stage ("recourse") variables are continuous. In the first part of the thesis, we concentrate on the theoretical study of such problems. We begin by solving a simplified linear problem in which the uncertainty affects only the right-hand side of the constraints and is modeled by a particular polytope. We further assume that the problem satisfies a "full recourse" property, which guarantees that, whatever values the decision variables take, if they are feasible then the problem always admits a feasible solution, whatever values the uncertain parameters take. We then present a method that transforms any robust program into an equivalent robust program whose associated deterministic problem satisfies the full recourse property. Before treating the general case, we first restrict ourselves to the case where the decision variables are integer, and we test our approach on a production problem. Then, after observing that the approach developed in the preceding chapters does not generalize naturally to polytopes whose extreme points are not 0-1, we show how, by using convexity properties of the problem, to solve the robust problem in the general case. We derive complexity results for the second-stage problem and for the robust problem. In the remainder of this part we try to make the best use of the probabilistic information available on the random data to assess the relevance of our uncertainty set. In the second part of the thesis, we study the design of a hybrid electricity production park. More precisely, we seek to optimize a production park composed of wind turbines, solar panels, batteries and a diesel generator, intended to meet a local demand for electrical energy. The goal is to determine the number of wind turbines, solar panels and batteries to install in order to meet the demand at minimum cost. However, the problem data are highly uncertain: the energy produced by a wind turbine depends on the strength and direction of the wind; the energy produced by a solar panel depends on sunlight; and the electricity demand may be linked to temperature or other external parameters. To solve this problem, we first model the deterministic problem as a mixed-integer linear program. We then directly apply the approach of the first part to solve the associated robust problem. We further show that the associated second-stage problem can be solved in polynomial time using a dynamic programming algorithm. Finally, we give some generalizations and improvements for our problem.
/ Robust optimization is a recent approach to studying problems with uncertain data that does not rely on a precise probability model but on mild assumptions about the uncertainties involved in the problem. We studied a linear two-stage robust problem with mixed-integer first-stage variables and continuous second-stage variables. We considered column-wise uncertainty and focused on the case where the problem does not satisfy a "full recourse property," which cannot always be satisfied for real problems. We also studied the complexity of the robust problem, which is NP-hard, and proved that it is in fact polynomially solvable when a parameter of the problem is fixed. We then applied this approach to study a stand-alone hybrid system composed of wind turbines, solar photovoltaic panels and batteries. The aim was to determine the optimal number of photovoltaic panels, wind turbines and batteries in order to serve a given demand while minimizing the total cost of investment and use. We also studied some properties of the second-stage problem; in particular, we showed that it can be solved in polynomial time using dynamic programming.
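A minimal deterministic sketch of the sizing problem described in both abstracts (illustrative only: the data are made up, the model omits the diesel generator and all uncertainty, and PuLP merely stands in for the solver actually used) could look as follows:

```python
# Illustrative deterministic sizing sketch (not the thesis's robust two-stage
# model): choose numbers of wind turbines, solar panels and batteries so that
# a known hourly demand is met at minimum investment cost.
import pulp

hours = range(4)
demand    = [30.0, 50.0, 45.0, 60.0]   # hypothetical demand per hour (kWh)
wind_out  = [10.0,  4.0,  8.0, 12.0]   # output of one turbine per hour (kWh)
solar_out = [ 0.0,  6.0,  5.0,  1.0]   # output of one panel per hour (kWh)

cost_wind, cost_solar, cost_batt = 900.0, 250.0, 400.0  # unit investment costs
batt_capacity = 20.0                                    # kWh per battery

m = pulp.LpProblem("hybrid_sizing", pulp.LpMinimize)
n_wind  = pulp.LpVariable("n_wind",  lowBound=0, cat="Integer")
n_solar = pulp.LpVariable("n_solar", lowBound=0, cat="Integer")
n_batt  = pulp.LpVariable("n_batt",  lowBound=0, cat="Integer")
# Battery state of charge and net charging per hour (>0 charge, <0 discharge).
soc  = [pulp.LpVariable(f"soc_{t}", lowBound=0) for t in hours]
flow = [pulp.LpVariable(f"flow_{t}") for t in hours]

# Objective: total investment cost.
m += cost_wind * n_wind + cost_solar * n_solar + cost_batt * n_batt

for t in hours:
    # Energy balance: renewables minus what goes into the battery cover demand.
    m += n_wind * wind_out[t] + n_solar * solar_out[t] - flow[t] >= demand[t]
    # Battery dynamics (starts empty) and capacity limit.
    prev = soc[t - 1] if t > 0 else 0.0
    m += soc[t] == prev + flow[t]
    m += soc[t] <= batt_capacity * n_batt

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("turbines:", n_wind.value(), "panels:", n_solar.value(),
      "batteries:", n_batt.value())
```

The robust two-stage model studied in the thesis replaces the known hourly data with uncertain parameters and treats the battery operation as continuous recourse decided after the uncertainty is revealed.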
