11

The Use of Preconditioned Iterative Linear Solvers in Interior-Point Methods and Related Topics

O'Neal, Jerome W. 24 June 2005
Over the last 25 years, interior-point methods (IPMs) have emerged as a viable class of algorithms for solving various forms of conic optimization problems. Most IPMs use a modified Newton method to determine the search direction at each iteration. The system of equations corresponding to the modified Newton system can often be reduced to the so-called normal equation, a system of equations whose matrix ADA' is positive definite, yet often ill-conditioned. In this thesis, we first investigate the theoretical properties of the maximum weight basis (MWB) preconditioner, and show that when applied to a matrix of the form ADA', where D is positive definite and diagonal, the MWB preconditioner yields a preconditioned matrix whose condition number is uniformly bounded by a constant depending only on A. Next, we incorporate the results regarding the MWB preconditioner into infeasible, long-step, primal-dual, path-following algorithms for linear programming (LP) and convex quadratic programming (CQP). In both LP and CQP, we show that the number of inner (iterative solver) iterations of the algorithms can be uniformly bounded in terms of n and a condition number of A, while the algorithmic iterations of the IPMs can be polynomially bounded in n and the logarithm of the desired accuracy. We also expand the scope of the LP and CQP algorithms to incorporate a family of preconditioners, of which MWB is a member, to determine an approximate solution to the normal equation. In the remainder of the thesis, we develop a new preconditioning strategy for solving systems of equations whose matrix is positive definite but ill-conditioned. Our so-called adaptive preconditioning strategy allows one to change the preconditioner during the course of the conjugate gradient (CG) algorithm by post-multiplying the current preconditioner by a simple matrix, consisting of the identity plus a rank-one update. The resulting algorithm, the Adaptive Preconditioned CG (APCG) algorithm, is shown to have polynomial convergence properties. Numerical tests comparing a variant of the APCG algorithm with the CG algorithm on various matrices are reported.
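
To make the normal-equation setting concrete, below is a minimal sketch (ours, not the thesis's code) of preconditioned conjugate gradients applied to a matrix of the form ADA'. A simple Jacobi (diagonal) preconditioner stands in for the maximum weight basis preconditioner; building the actual MWB preconditioner requires extracting a maximum-weight basis from A, which is omitted here, and all names are ours.

```python
import numpy as np

def pcg(M, b, apply_prec, tol=1e-10, max_iter=200):
    """Preconditioned CG for M x = b, with M symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - M @ x                     # residual
    z = apply_prec(r)                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Mp = M @ p
        alpha = rz / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # new search direction
        rz = rz_new
    return x

# Normal-equation matrix A D A' from a made-up, badly scaled iterate.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 12))
d = rng.uniform(1e-4, 1e4, size=12)   # D positive definite and diagonal
M = A @ np.diag(d) @ A.T
b = rng.standard_normal(5)

jacobi = 1.0 / np.diag(M)             # diagonal stand-in for the MWB preconditioner
x = pcg(M, b, lambda r: jacobi * r)
print(np.linalg.norm(M @ x - b))      # should be near zero
```
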
12

Active-set prediction for interior point methods

Yan, Yiming January 2015
This research studies how to efficiently predict the optimal active constraints of an inequality constrained optimization problem in the context of Interior Point Methods (IPMs). We propose a framework based on shifting/perturbing the inequality constraints of the problem. Although IPMs are a class of powerful tools for solving Linear Programming (LP) problems, they are well known to encounter difficulties with active-set prediction, essentially by construction. When applied to an inequality constrained optimization problem, IPMs generate iterates that belong to the interior of the set determined by the constraints, thus avoiding/ignoring the combinatorial aspect of the solution. This comes at the cost of difficulty in predicting the optimal active constraints that would enable termination, as well as increasing ill-conditioning of the solution process. Existing techniques for active-set prediction, however, suffer from difficulties in making an accurate prediction at the early stage of the iterative process of IPMs; by the time these techniques are ready to yield an accurate prediction towards the end of a run, as the iterates approach the solution set, the IPMs have to solve increasingly ill-conditioned, and hence difficult, subproblems. To address this challenge, we propose the use of controlled perturbations. Namely, in the context of LP problems, we consider perturbing the inequality constraints (by a small amount) so as to enlarge the feasible set. We show that if the perturbations are chosen judiciously, the solution of the original problem lies on or close to the central path of the perturbed problem. We solve the resulting perturbed problem(s) using a path-following IPM while predicting on the way the active set of the original LP problem, and we find that our approach is able to accurately predict the optimal active set of the original problem before the duality gap for the perturbed problem gets too small. Furthermore, depending on problem conditioning, this prediction can happen sooner than the active set can be predicted for the perturbed problem itself, or for the original one when no perturbations are used. Proof-of-concept algorithms are presented, and encouraging preliminary numerical experience is reported when comparing activity prediction for the perturbed and unperturbed problem formulations. We also extend the idea of using controlled perturbations to enhance optimal active-set prediction for IPMs applied to convex Quadratic Programming (QP) problems. QP problems share many properties with LP problems, but carrying the corresponding results over requires more care; encouraging preliminary numerical experience is also reported for the QP case.
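
In outline, and in our own notation rather than the thesis's exact formulation, the controlled-perturbation idea for a standard-form LP is the following; the cut-off test at the end is only illustrative.

```latex
% Original LP and its controlled perturbation (a sketch; notation is ours):
%
%   original:   min c^T x   s.t.  Ax = b,  x >= 0
%   perturbed:  min c^T x   s.t.  Ax = b,  x >= -lambda,
%
% for a small perturbation vector lambda > 0 that enlarges the feasible set.
% Chosen judiciously, the perturbations place the original solution on or
% close to the central path of the perturbed problem, whose points satisfy
\[
  (x_i + \lambda_i)\, s_i = \mu, \qquad i = 1, \dots, n,
\]
% for the barrier parameter mu > 0. The optimal active set of the original
% problem is then predicted along the way, e.g. by a cut-off test such as
% {i : x_i <= epsilon} for a small tolerance epsilon (the thesis's actual
% prediction rules may differ).
```
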
13

Structure-exploiting interior point methods for security constrained optimal power flow problems

Chiang, Naiyuan January 2013
The aim of this research is to demonstrate more efficient approaches to solving n-1 security constrained optimal power flow (SCOPF) problems using structure-exploiting primal-dual interior point methods (IPMs). Firstly, we consider a DC-SCOPF model, which is a linearized version of AC-SCOPF. We suggest a new reformulation of the DC-SCOPF model in which most of the matrices that need to be factorized are constant. Consequently, most numerical factorizations and a large number of back-solve operations only need to be performed once throughout the entire IPM process. In the framework of the structure-exploiting IPM implementation, one of the major computational efforts is forming the Schur complement matrix, which is very expensive unless further measures are taken. One remedy is to apply a preconditioned iterative method to solve the corresponding linear systems that appear in assembling the Schur complement matrix. We suggest two main schemes to pick a good and robust preconditioner for SCOPF problems based on combining different "active" contingency scenarios. The numerical results show that our new approaches are much faster than the default structure-exploiting method in OOPS, and that they require less memory. The second part of this thesis addresses the standard AC-SCOPF problem, which is a nonlinear and nonconvex optimization problem. We present a new contingency generation algorithm: it starts by solving the basic OPF problem, which is a much smaller problem of the same structure, and then generates contingency scenarios dynamically when needed. Theoretical analysis of this algorithm is given for the linear case, and the numerical results are encouraging: the algorithm works in both the AC and DC cases. It can find all the active scenarios and significantly reduce the number of scenarios one needs to include in the model. As a result, it speeds up the solution process and may require fewer IPM iterations. Finally, some heuristic algorithms are designed and presented to predict the active contingencies for the standard AC-SCOPF, based on the use of AC-OPF or DC-SCOPF. We test these heuristics on the modified IEEE 24-bus system and report the corresponding numerical results.
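
The contingency generation algorithm can be summarised by the following loop, a sketch under our own naming: solve_opf and violated_contingencies are hypothetical placeholders for the actual OPF solver and the post-contingency feasibility check.

```python
def contingency_generation(solve_opf, violated_contingencies, contingencies):
    """Sketch of dynamic contingency generation for SCOPF.

    solve_opf(active)                      -> solution of the OPF model that
                                              includes only the scenarios in
                                              `active` (plus the base case);
    violated_contingencies(sol, remaining) -> the scenarios in `remaining`
                                              (a set) whose limits `sol`
                                              violates.
    Both callables are hypothetical placeholders.
    """
    active = set()
    while True:
        sol = solve_opf(active)  # much smaller problem of the same structure
        new = violated_contingencies(sol, contingencies - active)
        if not new:              # nothing violated: sol solves the full SCOPF
            return sol, active
        active |= set(new)       # add only the scenarios that actually bind
```
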
14

"Planejamento do tratamento por radioterapia através de métodos de pontos interiores" / Specialized Interior Point Methods for Radiotherapy Treatment Design

Cid, Cecilia Bollini Barboza 07 April 2003
In this work, specialized interior point methods are developed, studied, and implemented for the problem of planning cancer treatment by radiotherapy. This is a large-scale problem with a particular matrix structure. Exploiting that structure efficiently yields good computational performance by reducing the dimension of the linear systems that must be solved at each iteration, which speeds up treatment design, since several simulations are typically run before a definitive plan is chosen. Moreover, the resulting system is sparse, symmetric, and positive definite. Numerical results in Matlab illustrate the efficiency of this approach on real problems and show the superiority of the predictor-corrector method over the primal-dual method.
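
The per-iteration dimension reduction referred to above is the standard normal-equations step of a primal-dual IPM, sketched here in generic LP notation rather than the thesis's radiotherapy-specific one.

```latex
% Newton system at an IPM iteration and its reduction (generic LP notation):
\[
  \begin{bmatrix} A & 0 & 0 \\ 0 & A^{\top} & I \\ S & 0 & X \end{bmatrix}
  \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta s \end{bmatrix}
  =
  \begin{bmatrix} r_p \\ r_d \\ r_c \end{bmatrix}
  \quad\Longrightarrow\quad
  (A D A^{\top})\, \Delta y = r, \qquad D = X S^{-1},
\]
% where X and S are diagonal matrices built from the current primal variables
% and dual slacks. Eliminating \Delta x and \Delta s leaves a system whose
% dimension is the (much smaller) number of rows of A, and whose matrix
% inherits any structure present in A -- the structure the thesis exploits.
```
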
15

On low order controller synthesis using rational constraints

Ankelhed, Daniel January 2009
<p>In order to design robust controllers, H-infinity synthesis is a common tool to use. The controllers that result from these algorithms are typically of very high order, which complicates implementation. However, if a constraint on the maximum order of the controller is set, that is lower than the order of the plant, the problem is no longer convex and it is then relatively hard to solve. These problems become very complex, even when the order of the system to be controlled is low.</p><p>The approach used in the thesis is based on formulating the constraint on the maximum order of the plant as a polynomial equation. By using the fact that the polynomial is non-negative on the feasible set, the problem is reformulated as an optimization problem where the nonconvex polynomial function is to be minimized over a convex set defined by linear matrix inequalities.</p><p>To solve this optimization problem, two methods have been proposed. The first method is a barrier method and the second one is a method based on a primal-dual framework. These methods have been evaluated on several problems and compared with a well-known method found in the literature. To motivate this choice of method, we have made a brief survey of available methods available for solving the same or related problems.</p><p>The proposed methods emerged as the best methods among the three for finding lower order controllers with the same or similar performance as the full order controller. When the aim is to find the lowest order controller with no worse than +50% increase in the closed loop H-infinity norm, then the three compared methods perform equally well.</p>
16

Generalized Stationary Points and an Interior Point Method for MPEC

Liu, Xinwei, Sun, Jie 01 1900
Mathematical program with equilibrium constraints (MPEC) has extensive applications in practical areas such as traffic control, engineering design, and economic modeling. Some generalized stationary points of MPEC are studied to better describe the limiting points produced by interior point methods for MPEC. A primal-dual interior point method is then proposed, which solves a sequence of relaxed barrier problems derived from MPEC. Global convergence results are deduced without assuming strict complementarity or the linear independence constraint qualification. Under very general assumptions, the algorithm can always find some point with strong or weak stationarity. In particular, it is shown that every limiting point of the generated sequence is a piecewise stationary point of MPEC if the penalty parameter of the merit function is bounded; otherwise, a certain point with weak stationarity can be obtained. Preliminary numerical results are satisfactory and include a case analyzed by Leyffer for which the penalty interior point algorithm failed to find a stationary solution. / Singapore-MIT Alliance (SMA)
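
For orientation, a generic MPEC and the relaxation that underlies the sequence of barrier problems look as follows (standard notation; the paper's exact formulation may differ).

```latex
% Generic MPEC (standard notation, possibly different from the paper's):
\[
  \min_{x} \; f(x)
  \quad \text{s.t.} \quad
  g(x) \ge 0, \qquad 0 \le G(x) \perp H(x) \ge 0,
\]
% where the complementarity condition demands G_i(x) H_i(x) = 0 for all i.
% An interior point method relaxes the complementarity products to
\[
  G_i(x)\, H_i(x) \le \theta, \qquad \theta > 0,
\]
% and solves the resulting barrier problems while driving theta down to
% zero, giving the sequence of relaxed problems referred to above.
```
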
17

Condition-Measure Bounds on the Behavior of the Central Trajectory of a Semi-Definite Program

Nunez, Manuel A., Freund, Robert M. 08 1900
We present bounds on various quantities of interest regarding the central trajectory of a semi-definite program (SDP), where the bounds are functions of Renegar's condition number C(d) and other naturally occurring quantities such as the dimensions n and m. The condition number C(d) is defined in terms of the data instance d = (A, b, C) for the SDP; it is the inverse of a relative measure of the distance from the data instance to the set of ill-posed data instances, that is, data instances for which arbitrarily small perturbations can make the corresponding SDP either feasible or infeasible. We provide upper and lower bounds on the solutions along the central trajectory, and upper bounds on changes in solutions and objective function values along the central trajectory when the data instance is perturbed and/or when the path parameter defining the central trajectory is changed. Based on these bounds, we prove that the solutions along the central trajectory grow at most linearly, at a rate proportional to the inverse of the distance to ill-posedness, and at least linearly, at a rate proportional to the inverse of C(d)^2, as the trajectory approaches an optimal solution of the SDP. Furthermore, the change in solutions and in objective function values along the central trajectory is at most linear in the size of the changes in the data. All such bounds involve polynomial functions of C(d), the size of the data, the distance to ill-posedness of the data, and the dimensions n and m of the SDP.
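
Renegar's condition number, on which all of the bounds above depend, has a compact standard definition.

```latex
% Renegar's condition measure for a data instance d = (A, b, C):
\[
  C(d) = \frac{\lVert d \rVert}{\rho(d)},
\]
% where \rho(d) is the distance to ill-posedness: the norm of the smallest
% perturbation \Delta d for which the feasibility status of the instance
% d + \Delta d can change. Well-posed instances have moderate C(d), and
% C(d) \to \infty as d approaches the set of ill-posed instances.
```
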
19

Convergence Analysis of Generalized Primal-Dual Interior-Point Algorithms for Linear Optimization

Wei, Hua January 2002
We study the zeroth-, first-, and second-order algorithms proposed by Tuncel. The zeroth-order algorithms generalize the classic primal-dual affine-scaling methods and have a strong connection with the quasi-Newton method. Although the zeroth-order algorithms have the property of strictly monotone decrease in both primal and dual objective values, they may not converge. We give an illustrative example as well as an algebraic proof to show that the zeroth-order algorithms do not converge to an optimal solution in some cases. The second-order algorithms use the gradients and Hessians of the barrier functions. Tuncel has shown that all second-order algorithms have a polynomial iteration bound. The second-order algorithms admit a range of primal-dual scaling matrices, and we give a method for constructing such a matrix. We then analyze a new centrality measure, which appears in both the first- and second-order algorithms. We compare the neighbourhood defined by this centrality measure with other known neighbourhoods, and analyze how the measure changes from one iteration to the next in terms of the step length and other information from the current iteration.
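
For context, the classic primal-dual affine-scaling direction that the zeroth-order algorithms generalize solves, at a feasible iterate (x, y, s), the system below (standard notation, not Tuncel's).

```latex
% Classic primal-dual affine-scaling direction (standard notation):
\[
  \begin{bmatrix} A & 0 & 0 \\ 0 & A^{\top} & I \\ S & 0 & X \end{bmatrix}
  \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta s \end{bmatrix}
  =
  \begin{bmatrix} 0 \\ 0 \\ -XSe \end{bmatrix},
\]
% which aims directly at the optimality condition XSe = 0, with no centering
% term. The generalized algorithms replace the scaling D = X S^{-1} implicit
% in this system with a broader family of primal-dual scaling matrices.
```
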
