501

Eradicating Malaria: Improving a Multiple-Timestep Optimization Model of Malarial Intervention Policy

Ohashi, Taryn M 18 May 2013 (has links)
Malaria is a preventable and treatable blood-borne disease whose complications can be fatal. Although many interventions exist to reduce the impact of malaria, the optimal way to distribute these interventions across a geographical area with limited resources must still be determined. This thesis refines a model that couples an integer linear program with a compartmental epidemiological model, an SIR system of ordinary differential equations. The objective of the model is to find an intervention strategy, over multiple time steps and multiple geographic regions, that minimizes the number of days people spend infected with malaria. In this thesis, we refine the resolution of the model and conduct sensitivity analysis on its parameter values.
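A minimal sketch of the SIR dynamics at the core of such a model appears below; the parameter values and population sizes are illustrative placeholders, not the thesis's calibrated malaria data.

```python
# Illustrative SIR compartmental model; beta (transmission rate) and gamma
# (recovery rate) are placeholder values, not calibrated malaria parameters.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def sir(t, y, beta, gamma):
    S, I, R = y
    N = S + I + R
    dS = -beta * S * I / N              # new infections leave S
    dI = beta * S * I / N - gamma * I   # infections minus recoveries
    dR = gamma * I                      # recoveries enter R
    return [dS, dI, dR]

sol = solve_ivp(sir, (0, 365), [990.0, 10.0, 0.0], args=(0.3, 0.1),
                dense_output=True)
t = np.linspace(0, 365, 366)
S, I, R = sol.sol(t)
person_days_infected = trapezoid(I, t)  # integral of the infected compartment
print(f"Person-days infected over one year: {person_days_infected:.0f}")
```

The integral of the infected compartment over the horizon is exactly the "days people spend infected" quantity that the intervention optimization seeks to minimize.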
502

Approximation Algorithms for Network Connectivity Problems

Cameron, Amy 18 April 2012 (has links)
In this dissertation, we examine specific network connectivity problems and achieve improved approximation algorithm and integrality gap results for them. We introduce a new and widely applicable network connectivity problem, the Vital Core Connectivity Problem (VCC). Despite its many practical uses, this problem has not previously been studied. We present the first constant-factor approximation algorithm for VCC, and provide an upper bound on the integrality gap of its linear programming relaxation. We also introduce a useful extension of the minimum spanning tree problem, called the Extended Minimum Spanning Tree Problem (EMST), which is based on a special case of VCC; we provide both a polynomial-time algorithm and a complete linear description for it. Furthermore, we show how to generalize this new problem to handle numerous disjoint vital cores, providing the first complete linear description of, and polynomial-time algorithm for, the generalized problem. We examine the Survivable Network Design Problem (SNDP) with multiple copies of edges allowed in the solution (multi-SNDP), and present a new approximation algorithm whose approximation guarantee improves on the best currently known for certain cases of multi-SNDP. With our method, we also obtain improved bounds on the integrality gap of the linear programming relaxation of the problem. Furthermore, we show how these results apply to variations of SNDP. We investigate cases where the optimal values of multi-SNDP and SNDP are equal; and we present an improvement on the previously best known integrality gap bound and approximation guarantee for the special case of SNDP with metric costs and low vertex connectivity requirements, as well as for the similar special case of the Vertex Connected Survivable Network Design Problem (VC-SNDP). The quality of the results that one can obtain for a given network design problem often depends on its integer linear programming formulation and, in particular, on its linear programming relaxation. In this connection, we investigate formulations for the Steiner Tree Problem (ST). We propose two new formulations for ST and investigate their strength in terms of their associated integrality gaps.
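Since integrality gaps are central to these results, it may help to recall the definition for a minimization problem: the worst-case ratio, over all instances, of the optimal integer value to the optimal value of the linear programming relaxation.

```latex
% Integrality gap of an LP relaxation (minimization problem):
\mathrm{gap} \;=\; \sup_{I} \, \frac{\mathrm{OPT}_{\mathrm{IP}}(I)}{\mathrm{OPT}_{\mathrm{LP}}(I)} \;\ge\; 1
```

An approximation algorithm whose analysis bounds the returned solution against the LP optimum simultaneously bounds this gap, which is why the two kinds of results travel together throughout the dissertation.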
503

Scheduling and Advanced Process Control in Semiconductor Manufacturing

Obeid, Ali 29 March 2012 (has links) (PDF)
In this thesis, we discuss various possibilities for integrating scheduling decisions with information and constraints from Advanced Process Control (APC) systems in semiconductor manufacturing. In this context, important questions arise regarding the benefits of integrating scheduling and APC. We give an overview of processes, scheduling, and Advanced Process Control in semiconductor manufacturing, including a description of semiconductor manufacturing processes. Two of the problems that result from integrating both systems are studied and analyzed: the Problem of Scheduling with Time Constraints (PTC) and the Problem of Scheduling with Equipment Health Factor (PEHF). Both PTC and PEHF have multicriteria objective functions. PTC aims at scheduling jobs in families on non-identical parallel machines with setup times and time constraints. Non-identical machines means that not all machines can (are qualified to) process all types of job families. The time constraints are inspired by APC needs: APC control loops must be regularly fed with information from metrology (inspection) operations within a given time interval (threshold). The objective is to schedule job families on machines while minimizing the sum of completion times and the losses in machine qualifications. PEHF is an extension of PTC in which scheduling also takes into account the Equipment Health Factor (EHF), an indicator of the state of a machine. Scheduling is then done by considering the yield resulting from assigning a job to a machine, where this yield is defined as a function of machine state and job state.
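A toy sketch of the PTC ingredients follows: machines qualified only for certain job families, and a qualification that lapses when a machine runs too long without metrology feedback. The greedy rule and all names are illustrative stand-ins, not the thesis's actual formulation.

```python
# Toy model of PTC ingredients: machine qualifications and metrology time
# thresholds. The greedy least-loaded rule is purely illustrative.
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    qualified: set           # job families this machine may process
    threshold: float         # max time allowed since last metrology feedback
    last_metrology: float = 0.0
    load: float = 0.0

def assign(jobs, machines):
    """Greedily send each (family, duration) job to the least-loaded machine
    that is qualified for the family and whose metrology time threshold
    would not be exceeded."""
    schedule = []
    for family, duration in jobs:
        ok = [m for m in machines
              if family in m.qualified
              and m.load + duration - m.last_metrology <= m.threshold]
        if not ok:
            schedule.append((family, None))  # qualification would be lost
            continue
        m = min(ok, key=lambda mach: mach.load)
        m.load += duration
        schedule.append((family, m.name))
    return schedule

machines = [Machine("M1", {"A", "B"}, threshold=8.0),
            Machine("M2", {"B"}, threshold=5.0)]
print(assign([("A", 3.0), ("B", 4.0), ("B", 4.0)], machines))
```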
504

Probabilistic covering problems

Qiu, Feng 25 February 2013 (has links)
This dissertation studies optimization problems that involve probabilistic covering constraints. A probabilistic constraint requires that a set of constraints involving random coefficients with known distributions hold with at least a specified probability. A covering constraint is a greater-than-or-equal-to linear inequality on non-negative variables with non-negative coefficients. A variety of applications in uncertain settings, such as set cover problems, node/edge cover problems, crew scheduling, production planning, facility location, and machine learning, involve probabilistic covering constraints. In the first part of this dissertation we consider probabilistic covering linear programs. Using the sample average approximation (SAA) framework, a probabilistic covering linear program can be approximated by a covering k-violation linear program (CKVLP), a deterministic covering linear program in which at most k constraints are allowed to be violated. We show that CKVLP is strongly NP-hard. Then, to improve the performance of standard mixed-integer programming (MIP) based schemes for CKVLP, we (i) introduce and analyze a coefficient strengthening scheme, (ii) adapt and analyze an existing cutting plane technique, and (iii) present a branching technique. Through computational experiments, we empirically verify that these techniques significantly improve solution times over the CPLEX MIP solver. In particular, we observe that the proposed schemes can cut solution times from as much as six days to under four hours in some instances. We also develop valid inequalities arising from two subsets of the constraints in the original formulation. Incorporating them with a modified coefficient strengthening procedure, we are able to solve a difficult probabilistic portfolio optimization instance listed in MIPLIB 2010 that cannot be solved by existing approaches. In the second part of this dissertation we study a class of probabilistic 0-1 covering problems, namely probabilistic k-cover problems. A probabilistic k-cover problem is a stochastic version of the set k-cover problem, which seeks a minimum-cost collection of subsets whose union covers each element of the set at least k times. In the stochastic setting, the coefficients of the covering constraints are modeled as Bernoulli random variables, and the probabilistic constraint imposes a minimal requirement on the probability of k-coverage. To account for the absence of full distributional information, we define a general ambiguous k-cover set, which is "distributionally robust." Using a classical linear program (the Boolean LP) to compute the probability of events, we develop an exact deterministic reformulation of this ambiguous k-cover problem. However, since the Boolean model requires an exponential number of auxiliary variables and is hence impractical, we use two linear-programming-based bounds on the probability that at least k events occur, obtained by aggregating the variables and constraints of the Boolean model, to develop tractable deterministic approximations to the ambiguous k-cover set. We derive new valid inequalities that can be used to strengthen the linear programming based lower bounds, and numerical results show that these new inequalities significantly improve the probability bounds. To use standard MIP solvers, we linearize the multilinear terms in the approximations and develop mixed-integer linear programming formulations.
We conduct computational experiments to demonstrate the quality of the deterministic reformulations in terms of cost effectiveness and solution robustness. To demonstrate the usefulness of the modeling technique developed for probabilistic k-cover problems, we formulate a number of problems that have until now been studied only under a data-independence assumption, and we introduce a new application that can be modeled using the probabilistic k-cover model.
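For reference, the SAA-derived CKVLP can be written as a big-M mixed-integer program of the following illustrative form, where A^i x >= b^i is the i-th of N sampled covering constraints and the binary z_i flags one of the at most k permitted violations:

```latex
% Illustrative big-M form of the covering k-violation LP (CKVLP):
\begin{aligned}
\min \;& c^\top x \\
\text{s.t. } \;& A^{i} x \ge b^{i} - M z_i, && i = 1,\dots,N, \\
& \textstyle\sum_{i=1}^{N} z_i \le k, \\
& x \ge 0, \quad z \in \{0,1\}^{N}.
\end{aligned}
```

The coefficient strengthening, cutting planes, and branching rules described above attack the weakness of exactly this kind of big-M relaxation.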
505

Coherent Distortion Risk Measures in Portfolio Selection

Feng, Ming Bin January 2011 (has links)
The theme of this thesis is solving optimal portfolio selection problems using linear programming. There are two key contributions. The first is to generalize the well-known linear optimization framework for Conditional Value-at-Risk (CVaR)-based portfolio selection problems (see Rockafellar and Uryasev (2000, 2002)) to portfolio selection problems with more general risk measures. In particular, the class of risk measures under consideration is the Coherent Distortion Risk Measure (CDRM), the intersection of two well-known classes of risk measures in the literature: the Coherent Risk Measure (CRM) and the Distortion Risk Measure (DRM). In addition to CVaR, risk measures belonging to CDRM include the Wang Transform (WT) measure, the Proportional Hazard (PH) transform measure, and the lookback (LB) distortion measure. Our generalization implies that portfolio selection problems can be solved very efficiently using the linear programming approach over a much wider class of risk measures. The second contribution is to establish the equivalence among four formulations of CDRM optimization problems: return maximization subject to a CDRM constraint, CDRM minimization subject to a return constraint, return-CDRM utility maximization, and CDRM-based Sharpe ratio maximization. Equivalence among these four formulations is established in the sense that they produce the same efficient frontier as the parameters of the corresponding problems vary. We point out that the first three formulations have already been investigated in Krokhmal et al. (2002) under milder assumptions on risk measures (convex functionals of portfolio weights). Here we apply their results to CDRM and establish the fourth equivalence. For each formulation, the relationship between its given parameter and the implied parameters of the other three formulations is explored. Such equivalences and relationships can help verify consistencies (or inconsistencies) in risk management with different objectives and constraints. They are also helpful for uncovering the information implied by a decision-making process or by a given investment market. We conclude the thesis with two case studies that illustrate the methodology and implementation of our linear optimization approach, verify the equivalences among the four problem formulations, and investigate the properties of different members of CDRM. In addition, we examine the efficiency (or inefficiency) of the so-called 1/n portfolio strategy in terms of the trade-off between portfolio return and portfolio CDRM. The properties of optimal portfolios and their returns under different CDRM minimization problems are compared through numerical results.
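A minimal sketch of the Rockafellar-Uryasev scenario LP that this framework generalizes is shown below. The scenario returns are random placeholders, and the formulation is the standard CVaR-minimization LP rather than anything specific to this thesis.

```python
# Standard Rockafellar-Uryasev LP: minimize CVaR_beta of portfolio losses
# over N sampled return scenarios. Scenario data are synthetic placeholders.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, beta = 500, 4, 0.95
r = rng.normal(0.001, 0.02, size=(N, n))      # scenario returns, one row each

# Decision vector: [x_1..x_n (weights), alpha (VaR level), u_1..u_N (excesses)]
c = np.concatenate([np.zeros(n), [1.0], np.full(N, 1.0 / ((1 - beta) * N))])

# u_j >= loss_j - alpha, with loss_j = -r_j.x  <=>  -r_j.x - alpha - u_j <= 0
A_ub = np.hstack([-r, -np.ones((N, 1)), -np.eye(N)])
b_ub = np.zeros(N)
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(N)]).reshape(1, -1)
b_eq = [1.0]                                  # fully invested portfolio
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, alpha = res.x[:n], res.x[n]
print("weights:", x.round(3), " VaR:", round(alpha, 4),
      " CVaR:", round(res.fun, 4))
```

The thesis's generalization replaces the uniform tail weights 1/((1-beta)N) with weights induced by a CDRM distortion function, keeping the problem a linear program.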
506

Robust Optimization of Nanometer SRAM Designs

Dayal, Akshit December 2009 (has links)
Technology scaling has been the most obvious choice of designers and chip manufacturing companies for improving the performance of analog and digital circuits. With ever-shrinking technology nodes, process variations can no longer be ignored and play a significant role in determining the performance of nanoscale devices. By following a worst-case design methodology, circuit designers have been very generous with the design parameters chosen, often resulting in pessimistic designs with significant area overheads. Significant work has been done on estimating the impact of intra-die process variations on circuit performance, notably noise margin and standby leakage power, for fixed transistor channel dimensions. However, for an optimal, high-yield SRAM cell design, it is imperative to analyze the impact of process variations at every design point, especially since the distribution of process variations is a statistically varying parameter that is inversely correlated with the area of the MOS transistor. Furthermore, the first-order analytical models used for optimizing SRAM memories are not very accurate, and the impact of voltage, and its inclusion as an input along with the other design parameters, is often ignored. In this thesis, the performance parameters of a nanoscale 6-T SRAM cell are modeled by an accurate, yield-aware, empirical polynomial predictor in the presence of intra-die process variations. The estimated empirical models are used in a constrained, nonlinear, robust optimization framework to design an SRAM cell for a 45 nm CMOS technology with optimal performance, according to bounds specified for the circuit performance parameters, with the objective of minimizing on-chip area. This statistically aware technique provides a more realistic design methodology for studying the trade-off between the performance parameters of the SRAM. Furthermore, a dual optimization approach is followed, with the SRAM power supply and wordline voltages as additional input parameters, to tune the design parameters simultaneously, ensuring high yield and a considerable area reduction. In addition, the cell-level optimization framework is extended to system-level optimization of caches, under both cell-level and system-level performance constraints.
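An illustrative version of this response-surface flow, with synthetic data and made-up bounds standing in for the thesis's 45 nm characterization, might look as follows: fit an empirical polynomial predictor by least squares, then minimize area subject to a predicted performance floor.

```python
# Illustrative response-surface flow: fit a quadratic predictor of static
# noise margin to sampled (W, L) data, then minimize cell area under a
# noise-margin floor. All numbers are synthetic stand-ins.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
WL = rng.uniform([90, 45], [270, 135], size=(200, 2))    # sampled W, L in nm
snm = 0.08 + 0.0008 * WL[:, 0] + 0.0004 * WL[:, 1] \
      + rng.normal(0, 0.005, 200)                        # "measured" SNM (V)

def features(WL):
    """Quadratic polynomial basis in (W, L)."""
    W, L = WL[:, 0], WL[:, 1]
    return np.column_stack([np.ones_like(W), W, L, W*W, W*L, L*L])

coef, *_ = np.linalg.lstsq(features(WL), snm, rcond=None)
predict = lambda w, l: features(np.array([[w, l]])) @ coef

# Minimize area ~ W * L subject to a predicted SNM floor (0.2 V, illustrative)
res = minimize(lambda p: p[0] * p[1], x0=[180, 90],
               bounds=[(90, 270), (45, 135)],
               constraints=[{"type": "ineq",
                             "fun": lambda p: predict(p[0], p[1])[0] - 0.2}])
print("optimal W, L:", res.x.round(1), " area:", round(res.fun, 1))
```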
507

Data Envelopment Analysis and Malmquist Total Factor Productivity (TFP) Index: An Application to the Turkish Automotive Industry

Karaduman, Alper 01 September 2006 (has links) (PDF)
This thesis shows how the relative efficiency of automotive companies can be evaluated and how changes in the productivity of these companies over time can be observed. The analysis covers 17 companies, the main manufacturers in the Turkish automotive industry. A method called the stepwise approach is used to determine the input and output factors. The two input variables are the company's payments for raw materials and components and its payments for wages and insurance of employees; the three output variables are domestic sales, exports, and capacity usage. The panel data, covering the period between 2001 and 2005, are obtained from OSD (the Automotive Manufacturers Association). The efficiency analysis is performed with the basic Data Envelopment Analysis (DEA) models: the Charnes, Cooper and Rhodes (CCR) models and the Banker, Charnes and Cooper (BCC) models. The software LINGO 10 is used to solve the linear programming models. After finding the overall efficiency, technical efficiency, and scale efficiency of each company for each year, changes in the efficiencies are analyzed using the Malmquist Total Factor Productivity (TFP) Index. The results are illustrated with numerous tables and graphs for better understanding. When the results are analyzed, the negative effect of the 2001 economic crisis on the automotive industry can be observed. It is also seen that efficiency changes over time vary from company to company, because the companies produce seven types of vehicles and there are important differences among them in production technology, market, demand, etc.
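For concreteness, the input-oriented CCR multiplier model is a small LP solved once per decision-making unit (DMU); the sketch below uses made-up data rather than the thesis's 17-company panel.

```python
# Input-oriented CCR multiplier model, one LP per DMU: maximize the weighted
# outputs of DMU o, with its weighted inputs normalized to 1 and no DMU
# allowed a ratio above 1. Data are illustrative.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0], [6.0, 3.0], [5.0, 4.0]])               # inputs per DMU
Y = np.array([[3.0, 1.0, 2.0], [5.0, 2.0, 2.0], [4.0, 1.0, 3.0]])  # outputs per DMU

def ccr_efficiency(o):
    m, s = X.shape[1], Y.shape[1]
    # variables: [v_1..v_m (input weights), u_1..u_s (output weights)]
    c = np.concatenate([np.zeros(m), -Y[o]])       # maximize u.y_o
    A_ub = np.hstack([-X, Y])                      # u.y_j - v.x_j <= 0 for all j
    b_ub = np.zeros(X.shape[0])
    A_eq = np.concatenate([X[o], np.zeros(s)]).reshape(1, -1)  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s))
    return -res.fun

for o in range(X.shape[0]):
    print(f"DMU {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
```

The BCC model differs only by a free convexity variable in the weights, and the Malmquist TFP index is then assembled from such efficiency scores computed across pairs of years.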
508

Interval Priority Weight Generation from Interval Comparison Matrices in the Analytic Hierarchy Process

Ozturk, Ufuk 01 September 2009 (has links) (PDF)
In this study, a new approach to generating interval priority weights from an interval comparison matrix is proposed for the well-known Analytic Hierarchy Process (AHP). The method can be used for both inconsistent and consistent matrices. For problems with more than two hierarchical levels, a synthesizing heuristic is also presented. The performance of both methods, interval generation and synthesizing, is compared on randomly generated matrices with methods already available in the literature.
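As background, the crisp (point-valued) AHP weights that such interval methods generalize are usually extracted from the principal eigenvector of the pairwise comparison matrix; the matrix below is illustrative.

```python
# Classic crisp AHP: priority weights from the principal eigenvector of a
# reciprocal pairwise comparison matrix, plus Saaty's consistency ratio.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # illustrative comparison matrix

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalized priority weights

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)     # consistency index
CR = CI / 0.58                           # random index RI = 0.58 for n = 3
print("weights:", w.round(3), " CR:", round(CR, 3))
```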
509

Multi Resource Agent Bottleneck Generalized Assignment Problem

Karabulut, Ozlem 01 May 2010 (has links) (PDF)
In this thesis, we consider the Multi Resource Agent Bottleneck Generalized Assignment Problem. We aim to minimize the maximum load over all agents. We study the Linear Programming (LP) relaxation of the problem and use the optimal LP relaxation solutions in our Branch and Bound algorithm when defining lower and upper bounds and branching schemes. We find that our Branch and Bound algorithm returns optimal solutions, in less than 20 minutes, for problems with up to 60 jobs when the number of agents is 5, and up to 30 jobs when the number of agents is 10. To find approximate solutions, we define a tabu search algorithm and an α-approximation algorithm. Our computational results reveal that these procedures can find high-quality solutions to large instances very quickly.
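One illustrative way to write such a bottleneck model (not necessarily the thesis's exact formulation) is as a min-max integer program, with x_{ja} = 1 when job j is assigned to agent a:

```latex
% Illustrative multi-resource agent bottleneck GAP: minimize the maximum
% agent load T subject to each agent's capacity for every resource k.
\begin{aligned}
\min \;& T \\
\text{s.t. } \;& \textstyle\sum_{a} x_{ja} = 1, && \forall j, \\
& \textstyle\sum_{j} p_{ja}\, x_{ja} \le T, && \forall a, \\
& \textstyle\sum_{j} r^{k}_{ja}\, x_{ja} \le b^{k}_{a}, && \forall a, k, \\
& x_{ja} \in \{0,1\}.
\end{aligned}
```

Relaxing x_{ja} in {0,1} to 0 <= x_{ja} <= 1 gives the LP relaxation whose optimal solutions drive the bounds and branching schemes described above.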
510

On the nonnegative least squares

Santiago, Claudio Prata 19 August 2009 (has links)
In this document, we study the nonnegative least squares primal-dual method for solving linear programming problems. In particular, we investigate connections between this primal-dual method and the classical Hungarian method for the assignment problem. Firstly, we devise a fast procedure for computing the unrestricted least squares solution of a bipartite matching problem by exploiting the special structure of the incidence matrix of a bipartite graph. Moreover, we explain how to extract a solution for the cardinality matching problem from the nonnegative least squares solution. We also give an efficient procedure for solving the cardinality matching problem on general graphs using the nonnegative least squares approach. Next we look into some theoretical results concerning the minimization of p-norms, and separable differentiable convex functions, subject to linear constraints described by node-arc incidence matrices for graphs. Our main result is the reduction of the assignment problem to a single nonnegative least squares problem. This means that the primal-dual approach can be made to converge in one step for the assignment problem. This method does not reduce the primal-dual approach to one step for general linear programming problems, but it appears to give a good starting dual feasible point for the general problem.
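A minimal sketch of the basic building block, a single nonnegative least squares solve on the node-arc incidence matrix of a tiny bipartite graph, is given below; the graph and right-hand side are illustrative.

```python
# Nonnegative least squares on the incidence matrix of a tiny bipartite
# graph with edges u1-v1, u1-v2, u2-v2; data are illustrative.
import numpy as np
from scipy.optimize import nnls

# Rows: nodes u1, u2, v1, v2; columns: the three edges.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.ones(4)                   # try to "cover" every node exactly once

x, residual = nnls(A, b)         # min ||Ax - b||_2 subject to x >= 0
print("edge weights:", x.round(3), " residual:", round(residual, 3))
```

The structure exploited in the dissertation is visible even here: each column of a bipartite incidence matrix has exactly two ones, which makes both the unrestricted and the nonnegative least squares solves unusually cheap.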
