Capacity Pricing in Electric Generation Expansion

Pirnia, Mehrdad January 2009 (has links)
The focus of this thesis is to explore a new mechanism to give added incentive for investment in new capacity in deregulated electricity markets. There is considerable concern in energy markets regarding the lack of sufficient private-sector investment in new electricity generation capacity. Although some markets use mechanisms to reward these investments directly, e.g., through governmental subsidies for renewable sources such as wind or solar, there is not much theory to guide the process of setting the reward levels. The proposed mechanism involves a long-term planning model that maximizes social welfare, measured as consumers' plus producers' surplus, by choosing new generation capacities which, along with still-existing capacities, can meet demand. Much previous research in electricity capacity planning has also solved optimization models, usually with continuous variables only, in linear or nonlinear programs. However, these approaches can be misleading when capacity additions must either be zero or a large size, e.g., the building of a nuclear reactor or a large wind farm. Therefore, this research includes binary variables for the building of large new facilities in the optimization problem, i.e., the model becomes a mixed-integer linear or nonlinear program. It is well known that, when binary variables are included in such a model, the resulting commodity prices may give insufficient incentive for private investment in the optimal new capacities. The new mechanism is intended to overcome this difficulty with a capacity price in addition to the commodity price: an auxiliary mathematical program calculates the minimum capacity price necessary to ensure that all firms investing in new capacities are satisfied with their profit levels. To test the applicability of this approach, the results of the suggested model are compared with the Ontario Integrated Power System Plan (IPSP), which recommends new generation capacities for the next 20 years based on historical data and the costs of different sources of electricity generation, given a fixed demand forecast.
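To make the structure concrete, here is a minimal sketch of this kind of mixed-integer capacity-expansion model in Python with the PuLP library. All plant data, the demand figure, and the single-period simplification are hypothetical illustrations, not the thesis's actual model; the thesis's auxiliary capacity-pricing program would be solved on top of an optimal plan like this one.

```python
import pulp

# Hypothetical candidates: (capacity in MW, annualized build cost in $, operating cost in $/MWh).
candidates = {
    "nuclear":    (1000, 400_000_000, 10),
    "wind_farm":  (300,   60_000_000,  5),
    "gas_peaker": (200,   20_000_000, 70),
}
existing_capacity = 500   # MW already built (capital cost is sunk)
existing_op_cost = 40     # $/MWh for the existing fleet
demand = 1200             # MW, fixed demand forecast (single period for simplicity)
hours = 8760              # hours per year

prob = pulp.LpProblem("capacity_expansion", pulp.LpMaximize)

# Binary build decisions: each candidate is built in its entirety or not at all.
build = {p: pulp.LpVariable(f"build_{p}", cat="Binary") for p in candidates}
# Continuous dispatch (MW) from each candidate and from the existing fleet.
gen = {p: pulp.LpVariable(f"gen_{p}", lowBound=0) for p in candidates}
gen_old = pulp.LpVariable("gen_existing", lowBound=0, upBound=existing_capacity)

# A candidate can only be dispatched if it is built.
for p, (cap, _, _) in candidates.items():
    prob += gen[p] <= cap * build[p]

# Generation must meet the fixed demand forecast exactly; with demand fixed,
# welfare maximization reduces to minimizing total generation cost.
prob += gen_old + pulp.lpSum(gen.values()) == demand

# Objective: negative of (annual operating cost + annualized capital cost).
prob += -(
    hours * (existing_op_cost * gen_old
             + pulp.lpSum(oc * gen[p] for p, (_, _, oc) in candidates.items()))
    + pulp.lpSum(bc * build[p] for p, (_, bc, _) in candidates.items())
)

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for p in candidates:
    print(p, "built" if build[p].value() > 0.5 else "skipped",
          f"({gen[p].value():.0f} MW dispatched)")
```

Note that the binary build variables are exactly what breaks the usual price interpretation of the model's dual values, which is the difficulty motivating the separate capacity price described above.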

Cardinality Constrained Robust Optimization Applied to a Class of Interval Observers

McCarthy, Philip James January 2013 (has links)
Observers are used in the monitoring and control of dynamical systems to deduce the values of unmeasured states. Designing an observer requires an accurate model of the plant; if the model parameters are characterized imprecisely, the observer may not provide reliable estimates. An interval observer, which comprises an upper and a lower observer, bounds the plant's states from above and below, given the range of values of the imprecisely characterized parameters, i.e., it defines an interval in which the plant's states must lie at any given instant. We propose a linear programming based method of interval observer design for two cases: 1) only the initial conditions of the plant are uncertain; 2) the dynamical parameters are also uncertain. In the former, we optimize the transient performance of the interval observers, in the sense that the volume enclosed by the interval is minimized. In the latter, we optimize the steady-state performance of the interval observers, in the sense that the norm of the width of the interval is minimized at steady state. Interval observers are typically designed to characterize the widest interval that bounds the states. This thesis proposes an interval observer design method that utilizes additional, but still incomplete, information enabling the designer to identify tighter bounds on the uncertain parameters under certain operating conditions. The number of bounds that can be refined defines a class of systems; the definition of this class is independent of the specific parameters whose bounds are refined. Applying robust optimization techniques under a cardinality constrained model of uncertainty, we design a single observer for an entire class of systems. These observers guarantee a minimum level of performance with respect to the aforementioned metrics, as we optimize the worst-case performance over a given class of systems. The robust formulation allows the designer to tune the level of uncertainty in the model. If many of the uncertain parameter bounds can be refined, the nominal performance of the observer can be improved; however, if few or none of the parameter bounds can be refined, the nominal performance of the observer can be designed to be more conservative.
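As a concrete illustration of the interval property, here is a minimal Python sketch for the first case (uncertain initial conditions only), under the simplifying assumptions that the system is discrete-time and its dynamics matrix is elementwise nonnegative, and omitting the output-injection gains that the thesis designs by linear programming. The matrix and bounds are hypothetical.

```python
import numpy as np

A = np.array([[0.90, 0.05],
              [0.10, 0.85]])   # hypothetical nonnegative (cooperative) dynamics

x = np.array([1.0, -0.5])      # true (unmeasured) initial state
x_lo = np.array([0.5, -1.0])   # known lower bound on x(0)
x_hi = np.array([1.5,  0.0])   # known upper bound on x(0)

for k in range(20):
    assert np.all(x_lo <= x) and np.all(x <= x_hi)  # interval property holds
    # Because A >= 0 elementwise, A @ x is monotone in x, so the bounds map
    # forward componentwise; with a sign-indefinite A one splits A = A+ - A-.
    x, x_lo, x_hi = A @ x, A @ x_lo, A @ x_hi

print("final interval width:", x_hi - x_lo)
```

The design problem the thesis addresses is then to choose observer gains, via linear programming, so that an interval like this one stays as tight as possible.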

Time-efficient Computation with Near-optimal Solutions for Maximum Link Activation in Wireless Communication Systems

Geng, Qifeng January 2012 (has links)
In a generic wireless network where the activation of a transmission link is subject to a signal-to-interference-and-noise ratio (SINR) constraint, one of the most fundamental and yet challenging problems is to find the maximum number of simultaneous transmissions. In this thesis, we consider and study in detail the problem of maximum link activation in wireless networks under the SINR model. Integer linear programming is the main tool used in this thesis for the design of algorithms, and fast algorithms are proposed that deliver near-optimal results time-efficiently. Using the state-of-the-art Gurobi optimization solver, both the conventional approach, in which all SINR constraints are stated explicitly, and a recently developed exact algorithm based on cutting planes have been implemented. Building on these implementations, new solution algorithms are proposed for the fast delivery of solutions. Instead of considering interference from all other links, an interference range is proposed, and two scenarios are considered: an optimistic case and a pessimistic case. The optimistic case assumes no interference from outside the interference range, while the pessimistic case treats the interference from outside the range as a common large value. Together with the algorithms, further data-analysis enhancements are proposed to speed up the computation in the solver.
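The conventional formulation that the thesis starts from is easy to state as an ILP. Below is a minimal sketch with hypothetical gains, powers, and threshold, written with the open-source PuLP/CBC toolchain rather than Gurobi; one big-M constraint per link encodes its SINR requirement. The interference-range idea described above would then drop or aggregate the interference terms from links outside a neighborhood of each receiver.

```python
import pulp

P = 1.0          # uniform transmit power (hypothetical)
noise = 0.1      # receiver noise power
gamma = 2.0      # SINR threshold for successful decoding
# G[i][j]: channel gain from the transmitter of link j to the receiver of link i.
G = [[1.00, 0.30, 0.05],
     [0.40, 1.00, 0.20],
     [0.10, 0.25, 1.00]]
n = len(G)
# Big-M values: large enough to void the SINR constraint of an inactive link.
M = [gamma * (noise + sum(P * G[i][j] for j in range(n) if j != i)) for i in range(n)]

prob = pulp.LpProblem("max_link_activation", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
prob += pulp.lpSum(x)   # maximize the number of simultaneously active links

for i in range(n):
    # If x[i] = 1, link i's SINR must reach gamma; interference enters only
    # from links that are themselves active. If x[i] = 0, M[i] voids the row.
    prob += (gamma * (noise + pulp.lpSum(P * G[i][j] * x[j]
                                         for j in range(n) if j != i))
             - M[i] * (1 - x[i]) <= P * G[i][i])

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("active links:", [i for i in range(n) if x[i].value() > 0.5])
```

On this toy instance any single link is feasible, some pairs are feasible, but all three together violate link 1's SINR constraint, so the optimum activates two links.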

Numerical Stability in Linear Programming and Semidefinite Programming

Wei, Hua January 2006 (has links)
We study numerical stability for interior-point methods applied to Linear Programming (LP) and Semidefinite Programming (SDP). We analyze the difficulties inherent in current methods and present robust algorithms.

We start with the error bound analysis of the search directions for the normal equation approach for LP. Our error analysis explains the surprising fact that the ill-conditioning is not a significant problem for the normal equation system. We also explain why most of the popular LP solvers have a default stop tolerance of only 10^-8 when the machine precision on a 32-bit computer is approximately 10^-16.

We then propose a simple alternative approach for the normal equation based interior-point method. This approach has better numerical stability than the normal equation based method. Although our approach is not competitive in terms of CPU time on the NETLIB problem set, we do obtain higher accuracy. In addition, we obtain significantly smaller CPU times than the normal equation based direct solver when we solve well-conditioned, huge, and sparse problems using our iterative linear solver. Additional techniques discussed are: crossover, a purification step, and no backtracking.

Finally, we present an algorithm to construct SDP problem instances with prescribed strict complementarity gaps. We then introduce two measures of strict complementarity gaps. We empirically show that: (i) these measures can be evaluated accurately; (ii) the size of the strict complementarity gaps correlates well with the number of iterations for the SDPT3 solver, as well as with the local asymptotic convergence rate; and (iii) large strict complementarity gaps, coupled with the failure of Slater's condition, correlate well with loss of accuracy in the solutions. In addition, the numerical tests show that there is no correlation between the strict complementarity gaps and the geometrical measure used in [31], or with Renegar's condition number.
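As a numerical illustration of the conditioning issue at the heart of the first part, the following sketch forms the normal-equation matrix A D^2 A^T for a random constraint matrix and a diagonal D whose entries spread apart, as happens when interior-point iterates approach an optimal solution. The data is hypothetical and the demonstration deliberately simple; the thesis's analysis explains why this apparent ill-conditioning need not destroy the accuracy of the computed search direction.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 60))   # constraint matrix of a random dense LP

for spread in [1e2, 1e4, 1e6, 1e8]:
    # Diagonal of D: the entries drift apart as interior-point iterates
    # approach optimality, which drives the conditioning blow-up shown here.
    d = np.logspace(0, np.log10(spread), 60)
    AD2AT = (A * d**2) @ A.T        # the normal-equation matrix A D^2 A^T
    print(f"spread of D = {spread:.0e}   cond(A D^2 A^T) = {np.linalg.cond(AD2AT):.2e}")
```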

A Multiple-objective ILP based Global Routing Approach for VLSI ASIC Design

Yang, Zhen January 2008 (has links)
A VLSI chip can today contain hundreds of millions of transistors and is expected to contain more than 1 billion transistors in the next decade. To handle this rapid growth in integration technology, the design procedure is divided into a sequence of design steps. Circuit layout is the design step in which a physical realization of a circuit is obtained from its functional description. Global routing is one of the key subproblems of circuit layout; it involves finding an approximate path for the wires connecting the elements of the circuit without violating resource constraints. The global routing problem is NP-hard; therefore, heuristics capable of producing high-quality routes with little computational effort are required as we move into the Deep Sub-Micron (DSM) regime. In this thesis, different approaches to the global routing problem are first reviewed, and their advantages and disadvantages are summarized. Following this literature review, several mathematical programming based global routing models are fully investigated. The quality of the solutions obtained by these models is then compared with that of the traditional maze routing technique. The experimental results show that the proposed model can optimize several global routing objectives simultaneously and effectively, and that it is easy to incorporate new objectives into the proposed global routing model. To speed up the computation of the proposed ILP based global router, several hierarchical methods are combined with the flat ILP based global routing approach. The experimental results indicate that the bottom-up global routing method can reduce the computation time effectively with a slight increase in maximum routing density. In addition to wire area, routability, and vias, performance and low power are also important goals in global routing, especially in deep submicron designs. Previous efforts that focused on power optimization for global routing are hindered by excessively long run times or by routing only a subset of the nets. Accordingly, a power-efficient multi-pin global routing technique (PIRT) is proposed in this thesis. This integer linear programming based technique strives to find a power-efficient global routing solution. The results indicate that average power savings as high as 32% for the 130-nm technology can be achieved with no impact on the maximum chip frequency.
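The core of an ILP based global router can be sketched with a path-based formulation: each net selects exactly one of its enumerated candidate paths, subject to edge capacity (congestion) constraints. The toy grid, nets, and candidate paths below are hypothetical, and minimizing total wirelength stands in for the multi-objective function the thesis investigates.

```python
import pulp

capacity = 1  # maximum number of wires per routing edge
# Candidate paths per net, each path given as a list of grid-graph edges.
paths = {
    "net_a": [[("00", "01"), ("01", "11")],    # route around the top
              [("00", "10"), ("10", "11")]],   # route around the bottom
    "net_b": [[("00", "01")],
              [("00", "10"), ("10", "11"), ("01", "11")]],
}
edges = sorted({e for ps in paths.values() for p in ps for e in p})

prob = pulp.LpProblem("global_routing", pulp.LpMinimize)
# x[n][k] = 1 iff net n is routed on its k-th candidate path.
x = {n: [pulp.LpVariable(f"x_{n}_{k}", cat="Binary") for k in range(len(ps))]
     for n, ps in paths.items()}

for n in paths:                       # every net must be routed exactly once
    prob += pulp.lpSum(x[n]) == 1
for e in edges:                       # edge capacity (congestion) constraints
    prob += pulp.lpSum(x[n][k] for n, ps in paths.items()
                       for k, p in enumerate(ps) if e in p) <= capacity

# Objective: total wirelength (each chosen path weighted by its edge count).
prob += pulp.lpSum(len(ps[k]) * x[n][k]
                   for n, ps in paths.items() for k in range(len(ps)))

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for n, ps in paths.items():
    chosen = next(k for k in range(len(ps)) if x[n][k].value() > 0.5)
    print(n, "->", ps[chosen])
```

Here the two nets compete for the edge ("00", "01"), so the solver routes net_a around the bottom and gives net_b the short top edge. Additional objectives (vias, power, timing) enter as extra weighted terms in the same objective.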

Properties of Stable Matchings

Szestopalow, Michael Jay January 2010 (has links)
Stable matchings were introduced in 1962 by David Gale and Lloyd Shapley to study the college admissions problem. The seminal work of Gale and Shapley has motivated hundreds of research papers and found applications in many areas of mathematics, computer science, economics, and even medicine. This thesis studies stable matchings in graphs and hypergraphs. We begin by introducing the work of Gale and Shapley. Their main contribution was the proof that every bipartite graph has a stable matching. Our discussion revolves around the Gale-Shapley algorithm and highlights some of the interesting properties of stable matchings in bipartite graphs. We then progress to non-bipartite graphs. Contrary to bipartite graphs, we may not be able to find a stable matching in a non-bipartite graph. Some of the work of Irving will be surveyed, including his extension of the Gale-Shapley algorithm. Irving's algorithm shows that many of the properties of bipartite stable matchings remain when the general case is examined. In 1991, Tan showed how to extend the fundamental theorem of Gale and Shapley to non-bipartite graphs. He proved that every graph contains a set of edges that is very similar to a stable matching. In the process, he found a characterization of graphs with stable matchings based on a modification of Irving's algorithm. Aharoni and Fleiner gave a non-constructive proof of Tan's theorem in 2003. Their proof relies on a powerful topological result, due to Scarf in 1965. In fact, their result extends beyond graphs and shows that every hypergraph has a fractional stable matching. We show how their work provides new and simpler proofs of several of Tan's results. We then consider fractional stable matchings from a linear programming perspective. Vande Vate obtained the first formulation for complete bipartite graphs in 1989 and showed that the extreme points of the solution set correspond exactly to stable matchings. Roth, Rothblum, and Vande Vate extended this work to arbitrary bipartite graphs, and in 1994 Abeledo and Rothblum noticed that the new formulation can also model fractional stable matchings in non-bipartite graphs. Remarkably, these formulations yield results analogous to those obtained from the Gale-Shapley and Irving algorithms. In the absence of an algorithm, the properties are obtained through clever applications of duality and complementary slackness. We also discuss stable matchings in hypergraphs, where the desirable properties that are present in graphs no longer hold. To rectify this problem, we introduce a new notion of "majority" stable matching for 3-uniform hypergraphs and show that, under this stronger definition, many properties extend beyond graphs. Once again, the linear programming tools of duality and complementary slackness are invaluable to our analysis. We conclude with a discussion of two open problems relating to stable matchings in 3-uniform hypergraphs.
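Since the Gale-Shapley deferred-acceptance algorithm anchors the whole discussion, a compact reference implementation may help. This Python sketch uses hypothetical preference lists and computes the proposer-optimal stable matching for a complete bipartite instance.

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """proposer_prefs / acceptor_prefs: dict mapping each agent to a list of
    agents on the other side, most preferred first."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)          # proposers with no current partner
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                           # acceptor -> current proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]   # best not-yet-proposed-to
        next_choice[p] += 1
        if a not in match:
            match[a] = p                 # acceptor provisionally accepts
        elif rank[a][p] < rank[a][match[a]]:
            free.append(match[a])        # acceptor trades up; old proposer freed
            match[a] = p
        else:
            free.append(p)               # proposal rejected; p stays free
    return {p: a for a, p in match.items()}

men = {"m1": ["w1", "w2", "w3"], "m2": ["w2", "w1", "w3"], "m3": ["w1", "w3", "w2"]}
women = {"w1": ["m2", "m1", "m3"], "w2": ["m1", "m2", "m3"], "w3": ["m1", "m3", "m2"]}
print(gale_shapley(men, women))   # proposer-optimal stable matching
```

On these toy lists the output is {'m1': 'w1', 'm2': 'w2', 'm3': 'w3'}: no man and woman both prefer each other to their assigned partners, which is exactly the stability condition.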

Optimization Models and Algorithms for Workforce Scheduling with Uncertain Demand

Dhaliwal, Gurjot January 2012 (has links)
A workforce plan states the number of workers required at any point in time. Efficient workforce plans can help companies achieve their organizational goals while keeping costs low. In an increasingly globalized labour market, companies need a competitive edge over their competitors, and a competitive edge can be achieved by lowering costs. Labour costs can be among the most significant costs faced by companies, and efficient workforce plans can provide a competitive edge by finding low-cost options to meet customer demand. This thesis studies the problem of determining the required number of workers when there are two categories of workers: workers in the first category are trained to work on one type of task (called specialized workers), whereas workers in the second category are trained to work on all tasks (called flexible workers). This thesis makes the following three main contributions. First, it addresses the problem under both deterministic and stochastic demand. Two different models are proposed for the deterministic demand case, and to study the effects of uncertain demand, techniques of Robust Optimization and Robust Mathematical Programming were used. The thesis also investigates methods to solve large instances of this problem; some of the instances we considered have more than 600,000 variables and constraints. As most of the variables are integer and the objective function is nonlinear, a commercial solver was not able to solve the problem in one day. Initially, we tried to solve the problem using Lagrangian relaxation and outer approximation techniques, but these approaches were not successful: although effective on small problems, they were not able to generate a bound within the run-time limit for the large data set. A number of heuristics based on projection techniques are therefore proposed. Finally, this thesis develops a genetic algorithm to solve large instances of this problem. On the tested instances, the genetic algorithm delivered results within 2-3% of the optimal solution.
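A deterministic core of the two-category model can be sketched in a few lines. The instance below is hypothetical: two periods with shifting demand, so hiring a smaller pool of (more expensive) flexible workers beats covering each task's peak with specialized workers alone.

```python
import pulp

periods = [0, 1]
tasks = ["assembly", "packing"]
demand = {("assembly", 0): 10, ("packing", 0): 4,
          ("assembly", 1): 4,  ("packing", 1): 10}
wage_spec, wage_flex = 100, 130   # flexible workers cost a premium

prob = pulp.LpProblem("staffing", pulp.LpMinimize)
# Specialized headcount per task (fixed across periods) and flexible headcount.
s = {t: pulp.LpVariable(f"spec_{t}", lowBound=0, cat="Integer") for t in tasks}
n_flex = pulp.LpVariable("n_flex", lowBound=0, cat="Integer")
# Per-period assignment of the flexible pool to tasks.
f = {(t, p): pulp.LpVariable(f"flex_{t}_{p}", lowBound=0, cat="Integer")
     for t in tasks for p in periods}

for t in tasks:
    for p in periods:
        prob += s[t] + f[t, p] >= demand[t, p]            # cover each demand
for p in periods:
    prob += pulp.lpSum(f[t, p] for t in tasks) <= n_flex  # pool size limit

prob += wage_spec * pulp.lpSum(s.values()) + wage_flex * n_flex

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({t: s[t].value() for t in tasks}, "flexible:", n_flex.value())
# Expected: 4 specialized workers per task plus 6 flexible workers
# (cost 1580), versus 20 specialized workers without flexibility (cost 2000).
```

The thesis's large instances extend this structure with many more tasks and periods, plus uncertain demand, which is where the robust formulations and the genetic algorithm come in.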

On Linear Programming, Integer Programming and Cutting Planes

Espinoza, Daniel G. 30 March 2006 (has links)
In this thesis we address three related topics in the field of Operations Research. Firstly, we discuss a central limitation of the most common solvers for linear programming: precision. We present a solver that generates rational optimal solutions to linear programming problems by solving a succession of increasingly precise floating-point approximations of the original rational problem until the rational optimality conditions are achieved. This method is shown to be, on average, only 20% slower than the common pure floating-point approach, while returning true optimal solutions to the problems. Secondly, we present an extension of the local cut procedure introduced by Applegate et al. (2001) for the Symmetric Traveling Salesman Problem (STSP) to the general setting of MIP problems. This extension also proves finiteness of the separation, facet, and tilting procedures in the general MIP setting, and provides conditions under which the separation procedure is guaranteed to generate cuts that separate the current fractional solution from the convex hull of the mixed-integer polyhedron. We then explore some configurations for local cuts, performing extensive testing on instances from MIPLIB. Those results show that this technique may be useful in general MIP problems, while the experience of Applegate et al. shows that the ideas can be successfully applied to structured problems as well. Thirdly, we present an extensive computational experiment on the TSP and the domino parity inequalities introduced by Letchford (2000). This work also includes a safe-shrinking theorem for domino parity inequalities, heuristics for applying the planar separation algorithm introduced by Letchford to instances where the planarity requirement does not hold, and several practical speed-ups. Our computational experience shows that this class of inequalities effectively improves the lower bounds over the best relaxations obtained with Concorde, one of the state-of-the-art solvers for the STSP. As part of this work, we solved to optimality the two largest STSP instances solved up to now; both belong to the TSPLIB set of instances and have 18,520 and 33,810 cities, respectively.
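The first topic can be illustrated with a small experiment: solve an LP in floating point, then rationalize the answer and verify the constraints in exact rational arithmetic. This hypothetical Python sketch shows only the verification step; the solver described above wraps such checks in a loop, re-solving increasingly precise floating-point approximations until the rational optimality conditions hold exactly.

```python
from fractions import Fraction
from scipy.optimize import linprog

# A toy LP with rational data:  min c^T x  s.t.  A x <= b,  x >= 0.
c = [-3, -5]
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# Rationalize the floating-point solution, then check feasibility exactly.
xq = [Fraction(float(v)).limit_denominator(10**6) for v in res.x]
print("rationalized solution:", xq)
for row, rhs in zip(A, b):
    lhs = sum(Fraction(a) * v for a, v in zip(row, xq))
    print("constraint slack (exact):", rhs - lhs)
print("objective (exact):", sum(Fraction(ci) * v for ci, v in zip(c, xq)))
```

On this instance the floating-point optimum (2, 6) rationalizes exactly, and every slack and the objective value of -36 are certified in exact arithmetic rather than trusted to machine precision.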

The Cardinality Constrained Multiple Knapsack Problem

Aslan, Murat 01 November 2008 (has links) (PDF)
The classical multiple knapsack problem selects a set of items and assigns each to one of the knapsacks so as to maximize the total profit, where the knapsacks have limited capacities. The cardinality constrained multiple knapsack problem additionally imposes limits on the number of items that may be put in each knapsack. Despite many efforts on the classical multiple knapsack problem, research on the cardinality constrained multiple knapsack problem is scarce. In this study we consider the cardinality constrained multiple knapsack problem and propose heuristic and optimization procedures that rely on the optimal solutions of the linear programming relaxation. Our computational results on large problem instances show the satisfactory performance of our algorithms.
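The model and the role of its LP relaxation can be sketched compactly. The toy instance below is hypothetical, and the comparison simply contrasts the LP relaxation bound, of the kind that drives the proposed heuristic and optimization procedures, with the integer optimum.

```python
import pulp

profit = [10, 8, 7, 6, 4]
weight = [5, 4, 3, 3, 2]
capacity = [7, 6]      # knapsack capacities
max_items = [2, 2]     # cardinality limit per knapsack

def solve(relaxed):
    cat = "Continuous" if relaxed else "Binary"
    prob = pulp.LpProblem("ccmkp", pulp.LpMaximize)
    # x[i][k] = 1 iff item i goes into knapsack k.
    x = [[pulp.LpVariable(f"x_{i}_{k}", lowBound=0, upBound=1, cat=cat)
          for k in range(len(capacity))] for i in range(len(profit))]
    for i in range(len(profit)):                 # each item used at most once
        prob += pulp.lpSum(x[i]) <= 1
    for k in range(len(capacity)):
        prob += pulp.lpSum(weight[i] * x[i][k]
                           for i in range(len(profit))) <= capacity[k]
        prob += pulp.lpSum(x[i][k]
                           for i in range(len(profit))) <= max_items[k]
    prob += pulp.lpSum(profit[i] * x[i][k]
                       for i in range(len(profit)) for k in range(len(capacity)))
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return pulp.value(prob.objective)

lp_bound = solve(relaxed=True)
opt = solve(relaxed=False)
print(f"LP relaxation bound = {lp_bound:.2f}, integer optimum = {opt:.2f}")
```

The gap between the two values is what an LP-relaxation-based heuristic must close, typically by fixing the variables that come out integral and repairing the fractional remainder.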
