71

Building Networks in the Face of Uncertainty

Gupta, Shubham January 2011 (has links)
The subject of this thesis is the study of approximation algorithms for network design problems in the face of uncertainty. We consider two widely studied models for handling uncertainty - Robust Optimization and Stochastic Optimization. We study a robust version of the well-studied Uncapacitated Facility Location Problem (UFLP). In this version, once the set of facilities to be opened is decided, an adversary may close at most β facilities. The clients must then be assigned to the remaining open facilities. The performance of a solution is measured by the worst possible set of facilities that the adversary may close. We introduce a novel LP for the problem, and provide an LP rounding algorithm for the case where all facilities have the same opening cost. We also study the 2-stage stochastic version of the Steiner Tree Problem. In this version, the set of terminals to be covered is not known in advance; instead, a probability distribution over the possible sets of terminals is known. One is allowed to build a partial solution in the first stage at low cost, and when the exact scenario to be covered becomes known in the second stage, one may extend the solution by building a recourse network, albeit at higher cost. The aim is to construct a solution of low expected cost. We provide an LP rounding algorithm for this problem that beats the current best LP-rounding-based approximation algorithm.
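To make the adversarial flavour of the robust UFLP concrete, here is a minimal brute-force sketch (not the LP-rounding algorithm of the thesis) of how a candidate set of open facilities would be scored: the adversary closes up to β of them so as to maximize the cost of assigning every client to its nearest surviving facility. The toy distances and all names are illustrative.

```python
from itertools import combinations

def worst_case_assignment_cost(dist, open_facs, beta):
    """dist[c][f]: cost of assigning client c to facility f.
    Closing facilities only removes options, so an optimal adversary
    closes exactly min(beta, |open| - 1) of them."""
    k = min(beta, len(open_facs) - 1)
    worst = 0.0
    for closed in combinations(open_facs, k):
        surviving = [f for f in open_facs if f not in closed]
        cost = sum(min(row[f] for f in surviving) for row in dist)
        worst = max(worst, cost)
    return worst

# 3 clients, 3 open facilities, adversary may close at most 1 facility.
dist = [[1, 4, 6], [5, 1, 3], [2, 7, 1]]
print(worst_case_assignment_cost(dist, [0, 1, 2], beta=1))
```

Evaluating a candidate this way is exponential in β; the point of an LP formulation is precisely to reason about all adversarial closures without enumerating them.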
72

Optimal Portfolio Execution Strategies: Uncertainty and Robustness

Moazeni, Somayeh 25 October 2011 (has links)
Optimal investment decisions often rely on assumptions about models and their associated parameter values. It is therefore essential to assess the suitability of these assumptions and to understand the sensitivity of outcomes when they are altered. More importantly, appropriate approaches should be developed to reach robust decisions. In this thesis, we carry out a sensitivity analysis on parameter values as well as model specification of an important problem in portfolio management, namely the optimal portfolio execution problem. We then propose more robust solution techniques and models to achieve greater reliability in the performance of an optimal execution strategy. The optimal portfolio execution problem yields an execution strategy for liquidating large blocks of assets over a given execution horizon so as to minimize the mean of the execution cost and the risk in execution. For large-volume trades, a major component of the execution cost comes from price impact. The optimal execution strategy then depends on the market price dynamics, the execution price model, the price impact model, and the choice of risk measure. In this study, first, the sensitivity of the optimal execution strategy to estimation errors in the price impact parameters is analyzed, when a deterministic strategy is sought to minimize the mean and variance of the execution cost. An upper bound on the size of the change in the solution is provided, which indicates the factors contributing to the sensitivity of an optimal execution strategy. Our results show that the optimal execution strategy and the efficient frontier may be quite sensitive to perturbations in the price impact parameters. Motivated by our sensitivity results, a regularized robust optimization approach is devised for the case when the price impact parameters belong to some uncertainty set. We first illustrate that classical robust optimization can be unstable with respect to variation in the uncertainty set. To achieve greater stability, the proposed approach imposes a regularization constraint on the uncertainty set before it is used in the minimax optimization formulation. The improvement in the stability of the robust solution is discussed and some implications of the regularization for the robust solution are studied. The sensitivity of the optimal execution strategy to market price dynamics is then investigated. We argue that jump diffusion models using compound Poisson processes naturally capture the uncertain price impact of other large trades. Using stochastic dynamic programming, we derive analytical solutions for minimizing the expected execution cost under jump diffusion models and compare them with the optimal execution strategies obtained from a diffusion process. A jump diffusion model for the market price dynamics suggests the use of Conditional Value-at-Risk (CVaR) as the risk measure. Using Monte Carlo simulations, a smoothing technique, and a parametric representation of a stochastic strategy, we investigate an approach to minimize the mean and CVaR of the execution cost. The devised approach can further handle constraints using a smoothed exact penalty function.
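As a concrete illustration of the kind of sensitivity experiment described above, the sketch below perturbs the temporary impact parameter in the classical Almgren-Chriss closed-form trajectory for mean-variance optimal liquidation, a standard model of the same flavour as this setting. The thesis's exact models may differ; all parameter names and values here are illustrative.

```python
import numpy as np

def ac_holdings(X, N, T, sigma, eta, lam):
    """Shares still held at each decision time under the Almgren-Chriss
    mean-variance solution with linear temporary impact eta and risk
    aversion lam."""
    tau = T / N
    kappa = np.arccosh(1 + lam * sigma**2 * tau**2 / (2 * eta)) / tau
    t = np.linspace(0.0, T, N + 1)
    return X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

base = ac_holdings(X=1e6, N=10, T=1.0, sigma=0.3, eta=1e-6, lam=1e-5)
bumped = ac_holdings(X=1e6, N=10, T=1.0, sigma=0.3, eta=1.3e-6, lam=1e-5)
# A 30% estimation error in the impact parameter shifts the schedule;
# the printed number is the largest change as a fraction of the position.
print(np.max(np.abs(base - bumped)) / 1e6)
```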
73

Optimization Models and Algorithms for Workforce Scheduling with Uncertain Demand

Dhaliwal, Gurjot January 2012 (has links)
A workforce plan states the number of workers required at any point in time. Efficient workforce plans can help companies achieve their organizational goals while keeping costs low. In an increasingly globalized labour market, companies need a competitive edge over their competitors, and one way to gain it is by lowering costs. Labour costs can be among the most significant costs a company faces, so efficient workforce plans can provide a competitive edge by finding low-cost options to meet customer demand. This thesis studies the problem of determining the required number of workers when there are two categories of workers: workers in the first category are trained to work on one type of task (called specialized workers), whereas workers in the second category are trained to work on all tasks (called flexible workers). The thesis makes three main contributions. First, it addresses the problem for both deterministic and stochastic demand. Two different models for the deterministic demand case are proposed, and techniques from Robust Optimization and Robust Mathematical Programming are used to study the effects of uncertain demand. Second, the thesis investigates methods to solve large instances of this problem; some of the instances we considered have more than 600,000 variables and constraints. As most of the variables are integer and the objective function is nonlinear, a commercial solver was not able to solve the problem within one day. We initially tried to solve the problem using Lagrangian relaxation and outer approximation techniques, but these approaches were not successful: although effective on small problems, they were not able to generate a bound within the run-time limit for the large data set. A number of heuristics based on projection techniques are therefore proposed. Finally, the thesis develops a genetic algorithm to solve large instances of this problem. For the tested instances, the genetic algorithm delivered results within 2-3% of the optimal solution.
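To give a flavour of that final contribution, here is a stripped-down, mutation-only genetic algorithm for a toy version of the two-category staffing decision. The encoding (one gene per task for specialized workers plus one gene for a shared pool of flexible workers), the costs, and the demand figures are illustrative assumptions, not the thesis's actual model.

```python
import random

random.seed(0)
demand = [30, 45, 25]           # workers needed per task
C_SPEC, C_FLEX = 1.0, 1.4       # flexible workers cost more per head

def cost(ind):
    """ind = [spec_task1, spec_task2, spec_task3, flexible_pool]."""
    spec, flex = ind[:-1], ind[-1]
    shortfall = sum(max(0, d - s) for d, s in zip(demand, spec))
    penalty = 1e3 * max(0, shortfall - flex)   # flexible pool covers gaps
    return C_SPEC * sum(spec) + C_FLEX * flex + penalty

def mutate(ind):
    child = ind[:]
    j = random.randrange(len(child))
    child[j] = max(0, child[j] + random.choice([-2, -1, 1, 2]))
    return child

pop = [[random.randint(0, 50) for _ in range(4)] for _ in range(40)]
for _ in range(300):
    pop.sort(key=cost)                         # elitist selection
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]

best = min(pop, key=cost)
print(best, cost(best))
```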
74

The design exploration method for adaptive design systems

Wang, Chenjie 08 April 2009 (has links)
The design exploration method for adaptive design systems is developed to facilitate the pursuit of a balance between efficiency and accuracy in systems engineering design. The proposed method is adapted from an existing multiscale materials robust design method, the Inductive Design Exploration Method (IDEM). The IDEM is effective in managing uncertainty propagation along the model chain; however, its high computational cost makes it inappropriate for systems engineering design outside its original design domain. In this thesis, the IDEM is augmented with more efficient solution search methods to improve its capability for efficiently exploring robust design solutions in systems engineering design. The accuracy of the meta-model is one source of uncertainty in engineering design. Response surface models are widely used in current engineering design practice, but they are known to fit nonlinear models poorly. In this thesis, local regression is introduced as an alternative meta-modeling technique for reducing the computational cost of simulation models, and is proposed as an appropriate method for systems design with nonlinear simulation models. The proposed methods are tested and verified by application to a Multifunctional Energetic Materials design and a Photonic Crystal Coupler and Waveguide design. The methods are demonstrated through the better accuracy of the local regression model in comparison to the response surface model, and the better efficiency of the design exploration method for adaptive design systems in comparison to the IDEM. The proposed methods are validated theoretically and empirically through application of the validation square.
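For readers unfamiliar with the alternative meta-model advocated here, the sketch below shows locally weighted linear regression in its simplest one-dimensional form: samples near the query point are weighted by a tricube kernel, and a weighted least-squares line is fitted. The data, kernel, and bandwidth are illustrative choices, not those of the thesis.

```python
import numpy as np

def local_regression(xq, xs, ys, bandwidth):
    """Predict at query point xq from samples (xs, ys)."""
    d = np.abs(xs - xq) / bandwidth
    w = np.where(d < 1, (1 - d**3)**3, 0.0)          # tricube weights
    A = np.vstack([np.ones_like(xs), xs]).T * np.sqrt(w)[:, None]
    b = ys * np.sqrt(w)                              # weighted least squares
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[0] + coef[1] * xq

xs = np.linspace(0, 4, 40)
ys = np.sin(xs) + 0.05 * np.random.default_rng(1).standard_normal(40)
print(local_regression(2.0, xs, ys, bandwidth=1.0))  # close to sin(2.0)
```

Unlike a single global response surface, the fit adapts to local curvature, which is what makes it attractive for nonlinear simulation models.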
75

Valuation of design adaptability in aerospace systems

Fernandez Martin, Ismael. January 2008 (has links)
Thesis (Ph. D.)--Aerospace Engineering, Georgia Institute of Technology, 2008. / Committee Chair: Dr. Mavris, Dimitri; Committee Member: Dr. Hollingsworth, Peter; Committee Member: Dr. McMichael, Jim; Committee Member: Dr. Saleh, Joseph; Committee Member: Dr. Schrage, Daniel.
76

Including severe uncertainty into environmentally benign life cycle design using information-gap decision theory

Duncan, Scott Joseph. January 2008 (has links)
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2008. / Committee Chair: Bras, Bert; Committee Member: Allen, Janet; Committee Member: Chameau, Jean-Lou; Committee Member: McGinnis, Leon; Committee Member: Paredis, Chris.
77

A methodology for the robustness-based evaluation of systems-of-systems alternatives using regret analysis

Poole, Benjamin Hancock January 2008 (has links)
Thesis (Ph.D.)--Aerospace Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Mavris, Dimitri; Committee Member: Bishop, Carlee; Committee Member: McMichael, James; Committee Member: Nixon, Janel; Committee Member: Schrage, Daniel.
78

Robust Large Margin Approaches for Machine Learning in Adversarial Settings

Torkamani, MohamadAli 21 November 2016 (has links)
Machine learning algorithms are designed to learn from data and to use data to perform predictions and analyses. Many agencies now use machine learning algorithms to provide services and to perform tasks that used to be done by humans, including making high-stakes decisions. Making the right decision strongly relies on the correctness of the input data, which provides a tempting incentive for criminals to try to deceive machine learning algorithms by manipulating the data fed to them. Yet traditional machine learning algorithms are not designed to be safe when confronting unexpected inputs. In this dissertation, we address the problem of adversarial machine learning; i.e., our goal is to build safe machine learning algorithms that are robust in the presence of noisy or adversarially manipulated data. Many complex questions to which a machine learning system must respond have complex answers. Such outputs can have internal structure, with exponentially many possible values, and adversarial machine learning is more challenging when the output to be predicted has a complex structure itself. A significant focus of this dissertation is therefore adversarial machine learning for predicting structured outputs. First, we develop a new algorithm that reliably performs collective classification: it jointly assigns labels to the nodes of graph data, and it is robust to malicious changes that an adversary can make to the properties of the different nodes of the graph. The learning method is highly efficient and is formulated as a convex quadratic program. Empirical evaluations confirm that this technique not only secures the prediction algorithm in the presence of an adversary, but also generalizes better to future inputs, even when there is no adversary. While our robust collective classification method is efficient, it is not applicable to generic structured prediction problems. Next, we investigate parameter learning for robust structured prediction models. This method constructs regularization functions based on the limitations of the adversary in altering the feature space of the structured prediction algorithm. The proposed regularization techniques secure the algorithm against adversarial data changes, with little additional computational cost. We prove that robustness to adversarial manipulation of data is equivalent to some regularization for large-margin structured prediction, and vice versa; this confirms some of the previous results for simpler problems. In practice, an ordinary adversary typically either lacks the computational power to design the optimal attack or lacks sufficient information about the learner's model to do so, and therefore often applies many random changes to the input in the hope of making a breakthrough. This implies that minimizing the expected loss function under adversarial noise yields robustness against such mediocre adversaries. Dropout training resembles exactly this kind of noise injection. Dropout training was initially proposed as a regularization technique for neural networks; the procedure is simple: at each iteration of training, randomly selected features are set to zero. We derive a regularization method for large-margin parameter learning based on dropout, which calculates the expected loss function under all possible dropout patterns and yields a simple objective function that is efficient to optimize. We extend dropout regularization to non-linear kernels in several directions: we define the concept of dropout for the input space, the feature space, and input dimensions, and we introduce methods for approximate marginalization over the feature space, even when it is infinite-dimensional. Empirical evaluations show that our techniques consistently outperform the baselines on different datasets.
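The marginalization at the heart of such a dropout-based regularizer can be checked numerically in the simplest case, a linear scorer under independent feature dropout: with each mask entry m_i ~ Bernoulli(1 - p), the score w·(m ⊙ x) has mean (1 - p)·w·x and variance p(1 - p)·Σ_i w_i² x_i², so the expectation needs no sampling at all. The sketch below (the vectors and dropout rate are illustrative, not the dissertation's experiments) compares the closed form against Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(20)
x = rng.standard_normal(20)
p = 0.3                                          # dropout probability

# Closed-form variance of the dropped-out score w . (m * x):
closed_form_var = p * (1 - p) * np.sum(w**2 * x**2)

# Monte Carlo estimate over 200k random dropout masks (kept w.p. 1 - p):
masks = rng.random((200_000, 20)) > p
mc_var = np.var((masks * x) @ w)

print(closed_form_var, mc_var)                   # the two should agree
```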
79

A Hybrid Optimization Method for Robust Query Processing

Moumen, Chiraz 29 May 2017 (has links)
The quality of an execution plan generated by a query optimizer is highly dependent on the quality of the estimates produced by the cost model. Unfortunately, these estimates are often imprecise. A body of work has been done to improve estimate accuracy; however, obtaining accurate estimates remains very challenging, since it requires prior and detailed knowledge of the data properties and run-time characteristics. Motivated by this issue, two main optimization approaches have been proposed. The first relies on single-point estimates to choose an optimal execution plan. At run-time, statistics are collected and compared with the estimates; if an estimation error is detected, a re-optimization is triggered for the rest of the plan. At each invocation, the optimizer uses specific values for the parameters required for cost calculations, so this approach can induce several plan re-optimizations, resulting in poor performance. To avoid this, a second approach considers the possibility of estimation errors at optimization time. This is modelled by the use of multi-point estimates for each error-prone parameter, the aim being to anticipate the reaction to a possible plan sub-optimality. Methods in this approach seek to generate robust plans, i.e. plans able to provide good and stable performance under several run-time conditions. These methods often assume that it is possible to find a robust plan for the whole set of conditions considered; this assumption remains unjustified, especially when that set is large. Moreover, the majority of these methods keep an execution plan unmodified until termination, which can lead to poor performance if robustness is violated at run-time. Based on these findings, we propose in this thesis a hybrid optimization method with two objectives: producing robust execution plans, particularly when the uncertainty in the estimates used is high, and correcting a robustness violation during execution. The method uses intervals of estimates around error-prone parameters to produce execution plans that are likely to perform reasonably well over different run-time conditions, so-called robust plans. Robust plans are then augmented with what we call check-decide operators. These operators collect statistics at run-time and check the robustness of the current plan; if robustness is violated, they are able to decide on modifications to the rest of the plan to correct the violation, without needing to recall the optimizer. The results of performance studies of our method indicate that it provides significant improvements in the robustness of query processing.
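A minimal sketch of the check-decide idea follows, under the assumption (ours, not the thesis's actual interfaces) that a plan fragment is a callable and that robustness is summarized by a cardinality interval computed at optimization time: the operator counts rows as they stream past and reroutes the remainder of the plan the moment the count leaves the interval, without recalling the optimizer.

```python
def check_decide(rows, interval, robust_plan, fallback_plan):
    """rows: iterator of tuples produced by the operator below this one.
    interval: (lo, hi) cardinality range the current plan is robust for.
    robust_plan / fallback_plan: callables consuming the buffered rows
    (the fallback also receives whatever remains of the stream)."""
    lo, hi = interval
    buffered = []
    for row in rows:
        buffered.append(row)
        if len(buffered) > hi:             # robustness violated: too many rows
            return fallback_plan(buffered, rows)
    if len(buffered) < lo:                 # violated on the low side
        return fallback_plan(buffered, iter(()))
    return robust_plan(buffered)           # estimates held: keep the plan

# E.g. keep a hash join while the build side stays within [100, 5000] rows,
# switching to an index nested-loop variant the moment the bound is crossed.
```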
80

Robustness and visualization of blend production

Aguilera Cabanas, Jorge Antonio 28 October 2011 (has links)
The oil blending process (BP) consists in determining the optimal proportions to blend from a set of available components such that the final product fulfills a set of specifications on its properties. Two important characteristics of the blending problem are the hard bounds on the blend's properties and the uncertainty pervading the process. In this work, a real-time optimization method is proposed for producing robust blends while minimizing the blend quality giveaway and the recipe's cost. The method is based on Robust Optimization techniques and on the assumption that the components' properties blend linearly. The polytopes intrinsic to blending are exploited in order to measure, visualize and characterize the infeasibility of the BP, and a fine analysis of modifications to the component bounds is conducted to guide the process towards the "best" robust blend. A set of indices and visualizations provides helpful support for the decision maker.
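Under the linear-blending assumption stated above, a robust recipe can be sketched as a small linear program: with box uncertainty on each component's property value (q_i known only to within ±δ_i), replacing q_i by q_i + δ_i in an upper-bound constraint gives its robust counterpart. Component costs, property values, and the bound below are illustrative, not data from the thesis.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([3.0, 4.5, 5.2])       # $/unit of each component
q = np.array([0.8, 0.5, 0.3])          # nominal property (e.g. sulphur)
delta = np.array([0.05, 0.03, 0.02])   # estimation uncertainty per component
U = 0.55                               # hard upper bound on the blend property

res = linprog(
    c=cost,                            # minimize recipe cost
    A_ub=[q + delta], b_ub=[U],        # worst-case blend property <= U
    A_eq=[np.ones(3)], b_eq=[1.0],     # proportions sum to one
    bounds=[(0, 1)] * 3,
    method="highs",
)
print(res.x, res.fun)                  # robust recipe and its cost
```

The resulting recipe stays feasible for every property realization in the box, at the price of some quality giveaway relative to the nominal optimum, which is exactly the trade-off the indices and visualizations above are meant to expose.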
