31 |
Condition-Measure Bounds on the Behavior of the Central Trajectory of a Semi-Definite Program. Nunez, Manuel A., Freund, Robert M. 08 1900 (has links)
We present bounds on various quantities of interest regarding the central trajectory of a semi-definite program (SDP), where the bounds are functions of Renegar's condition number C(d) and other naturally occurring quantities such as the dimensions n and m. The condition number C(d) is defined in terms of the data instance d = (A, b, C) for SDP; it is the inverse of a relative measure of the distance of the data instance to the set of ill-posed data instances, that is, data instances for which arbitrarily small perturbations can make the corresponding SDP either feasible or infeasible. We provide upper and lower bounds on the solutions along the central trajectory, and upper bounds on changes in solutions and objective function values along the central trajectory when the data instance is perturbed and/or when the path parameter defining the central trajectory is changed. Based on these bounds, we prove that the solutions along the central trajectory grow at most linearly and at a rate proportional to the inverse of the distance to ill-posedness, and grow at least linearly and at a rate proportional to the inverse of C(d)², as the trajectory approaches an optimal solution to the SDP. Furthermore, the change in solutions and in objective function values along the central trajectory is at most linear in the size of the changes in the data. All such bounds involve polynomial functions of C(d), the size of the data, the distance to ill-posedness of the data, and the dimensions n and m of the SDP.
|
32 |
Preconditioning and iterative solution of symmetric indefinite linear systems arising from interior point methods for linear programming. Chai, Joo-Siong, Toh, Kim Chuan 01 1900 (has links)
We study the preconditioning of symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method that we study exploits the block structure of the augmented matrix to design a similarly block-structured preconditioner that improves the spectral properties of the preconditioned matrix and hence the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments have been performed on LP test problems from the NETLIB suite to demonstrate the potential of the preconditioning method. / Singapore-MIT Alliance (SMA)
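The block-preconditioning idea described above can be sketched on a generic IPM augmented (KKT) system. The matrix sizes, the diagonal scaling, and the block-diagonal preconditioner below are illustrative assumptions, not the thesis's actual construction:

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(0)
m, n = 5, 12                          # constraints, variables
A = rng.standard_normal((m, n))
d = rng.uniform(0.1, 10.0, n)         # diagonal scaling (e.g. X^{-1}S at an IPM iterate)

# Symmetric indefinite augmented system
#   [ -D   A^T ] [dx]   [r1]
#   [  A    0  ] [dy] = [r2]
K = np.block([[-np.diag(d), A.T],
              [A, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)

# Block-diagonal SPD preconditioner P = diag(D, A D^{-1} A^T);
# MINRES expects an operator applying P^{-1}.
S = A @ np.diag(1.0 / d) @ A.T
def apply_Pinv(v):
    return np.concatenate([v[:n] / d, np.linalg.solve(S, v[n:])])
P = LinearOperator((n + m, n + m), matvec=apply_Pinv)

x, info = minres(K, rhs, M=P)
print(info, np.linalg.norm(K @ x - rhs))   # 0 and a small residual on success
```

MINRES is a natural choice here because the augmented matrix is symmetric but indefinite, which rules out plain conjugate gradients; the preconditioner, however, must be symmetric positive definite, which the block-diagonal choice satisfies.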
|
33 |
Behavioral Measures and their Correlation with IPM Iteration Counts on Semi-Definite Programming Problems. Freund, Robert M., Ordóñez, Fernando, Toh, Kim Chuan 04 March 2005 (has links)
We study four measures of problem instance behavior that might account for the observed differences in interior-point method (IPM) iterations when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasible regions (aspect ratios) and norms of the optimal solutions, (ii) the (Renegar-) condition measure C(d) of the data instance, (iii) a measure of the near-absence of strict complementarity of the optimal solution, and (iv) the level of degeneracy of the optimal solution. We compute these measures for the SDPLIB suite problem instances and measure the correlation between these measures and IPM iteration counts (solved using the software SDPT3) when the measures have finite values. Our conclusions are roughly as follows: the aggregate geometry measure is highly correlated with IPM iterations (CORR = 0.896), and is a very good predictor of IPM iterations, particularly for problem instances with solutions of small norm and aspect ratio. The condition measure C(d) is also correlated with IPM iterations, but less so than the aggregate geometry measure (CORR = 0.630). The near-absence of strict complementarity is weakly correlated with IPM iterations (CORR = 0.423). The level of degeneracy of the optimal solution is essentially uncorrelated with IPM iterations.
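The correlation figures quoted above are plain Pearson correlations between a measure's value and the IPM iteration count across problem instances. A minimal sketch with made-up numbers (not SDPLIB data):

```python
import numpy as np

# Hypothetical measure values and IPM iteration counts for six instances
geometry_measure = np.array([1.2, 3.4, 0.8, 5.1, 2.2, 4.0])
ipm_iterations   = np.array([14, 23, 12, 31, 18, 26])

# Pearson correlation coefficient between the two samples
corr = np.corrcoef(geometry_measure, ipm_iterations)[0, 1]
print(round(corr, 3))
```

A value near 1 (as in the paper's CORR = 0.896 for the aggregate geometry measure) indicates the measure moves almost linearly with the iteration count.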
|
34 |
On low order controller synthesis using rational constraints. Ankelhed, Daniel January 2009 (has links)
In order to design robust controllers, H-infinity synthesis is a commonly used tool. The controllers that result from these algorithms are typically of very high order, which complicates implementation. However, if a constraint is imposed on the maximum order of the controller that is lower than the order of the plant, the problem is no longer convex and is then relatively hard to solve. Such problems become very complex even when the order of the system to be controlled is low. The approach used in the thesis is based on formulating the constraint on the maximum order of the controller as a polynomial equation. Using the fact that the polynomial is non-negative on the feasible set, the problem is reformulated as an optimization problem in which a nonconvex polynomial function is minimized over a convex set defined by linear matrix inequalities. To solve this optimization problem, two methods are proposed: the first is a barrier method and the second is based on a primal-dual framework. These methods have been evaluated on several problems and compared with a well-known method from the literature. To motivate this choice of methods, we have made a brief survey of methods available for solving the same or related problems. The proposed methods emerged as the best of the three for finding lower-order controllers with the same or similar performance as the full-order controller. When the aim is to find the lowest-order controller with no more than a 50% increase in the closed-loop H-infinity norm, the three compared methods perform equally well.
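The barrier method mentioned above follows the generic log-barrier scheme: minimize the objective plus a weight mu times a logarithmic barrier of the constraints, then repeatedly shrink mu. The following is a toy one-dimensional sketch of that general scheme, not the thesis's algorithm for the LMI-constrained polynomial problem:

```python
# Generic log-barrier scheme: for g(x) <= 0, minimize
#   f(x) - mu * log(-g(x))
# for a decreasing sequence of mu. Gradient of the barrier term is
#   -mu * g'(x) / g(x).
def barrier_minimize(f_grad, g, g_grad, x, mu=1.0, shrink=0.5, iters=5):
    for _ in range(iters):
        for _ in range(200):                       # inner gradient descent
            grad = f_grad(x) - mu * g_grad(x) / g(x)
            x = x - 0.01 * grad
        mu *= shrink                               # tighten the barrier
    return x

# Toy problem: minimize (x - 2)^2 subject to x <= 1, i.e. g(x) = x - 1 <= 0
x_star = barrier_minimize(lambda x: 2 * (x - 2),   # f'(x)
                          lambda x: x - 1,          # g(x)
                          lambda x: 1.0,            # g'(x)
                          x=0.0)
print(x_star)   # close to the constrained minimizer x = 1 as mu shrinks
```

In the thesis's setting the barrier would instead be a log-det barrier over the LMI-defined convex set, with the nonconvex polynomial playing the role of f.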
|
35 |
Convergence Analysis of Generalized Primal-Dual Interior-Point Algorithms for Linear Optimization. Wei, Hua January 2002 (has links)
We study the zeroth-, first-, and second-order algorithms proposed by Tunçel. The zeroth-order algorithms are a generalization of the classic primal-dual affine-scaling methods and have a strong connection with the quasi-Newton method. Although the zeroth-order algorithms have the property of strictly monotone decrease in both primal and dual objective values, they may not converge. We give an illustrative example as well as an algebraic proof to show that the zeroth-order algorithms do not converge to an optimal solution in some cases. The second-order algorithms use the gradients and Hessians of the barrier functions. Tunçel has shown that all second-order algorithms have a polynomial iteration bound. The second-order algorithms allow a range of primal-dual scaling matrices to be chosen, and we give a method to construct such a primal-dual scaling matrix. We then analyze a new centrality measure, which appears in both the first- and second-order algorithms. We compare the neighbourhood defined by this centrality measure with other known neighbourhoods, and analyze how the measure changes in the next iteration in terms of the step length and other information from the current iteration.
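For reference, the classic primal-dual affine-scaling step that the zeroth-order algorithms generalize can be sketched on a toy LP; the problem data here are randomly generated and purely illustrative:

```python
import numpy as np

# Toy standard-form LP: min c^T x  s.t.  A x = b, x >= 0
rng = np.random.default_rng(1)
m, n = 2, 4
A = rng.standard_normal((m, n))
x = np.ones(n); s = np.ones(n); y = np.zeros(m)   # strictly feasible start
b = A @ x                                          # b, c chosen so (x, y, s)
c = A.T @ y + s                                    # is primal-dual feasible

def affine_scaling_step(A, b, c, x, y, s):
    """One classic primal-dual affine-scaling (sigma = 0) Newton step."""
    m, n = A.shape
    # Newton system for the KKT conditions with centering parameter 0
    KKT = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    rhs = np.concatenate([b - A @ x, c - A.T @ y - s, -x * s])
    d = np.linalg.solve(KKT, rhs)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    # fraction-to-boundary step length keeps x and s strictly positive
    alpha = 0.9 * min(1.0, *(-x[dx < 0] / dx[dx < 0]),
                           *(-s[ds < 0] / ds[ds < 0]))
    return x + alpha * dx, y + alpha * dy, s + alpha * ds

x1, y1, s1 = affine_scaling_step(A, b, c, x, y, s)
print(x @ s, x1 @ s1)   # for feasible iterates the gap shrinks by (1 - alpha)
```

The strictly monotone decrease property mentioned in the abstract shows up here directly: on feasible iterates the duality gap contracts by exactly the factor (1 - alpha) per step.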
|
37 |
Computing optimal designs for regression models via convex programming. Zhou, Wenjie 25 August 2015 (has links)
Optimal design problems aim at selecting design points optimally with respect to certain statistical criteria. The research of this thesis focuses on optimal design problems with respect to the A-, D- and E-optimality criteria, which minimize the trace, determinant, and largest eigenvalue of the inverse of the information matrix, respectively.
Semidefinite programming (SDP) is concerned with optimizing a linear objective function subject to a linear matrix being positive semidefinite. Two powerful MATLAB add-ons, SeDuMi and CVX, have been developed to solve SDP problems efficiently. In this thesis, we show in detail how to formulate A- and E-optimal design problems as SDP problems and solve them with SeDuMi and CVX. This technique can be used to construct approximate A-optimal and E-optimal designs for all linear and non-linear models with discrete design spaces. The results can also provide guidance for finding optimal designs on continuous design spaces. For one-variable polynomial regression models, we solve the A- and E-optimal designs on the continuous design space by a two-stage procedure: in the first stage we find the optimal moments by casting the problem as an SDP, and in the second stage we extract the optimal designs from the optimal moments obtained in the first stage.
Unlike the E- and A-optimal design problems, the objective function of the D-optimal design problem is nonlinear, so D-optimal design problems cannot be reformulated as an SDP. However, the problem can be cast as a convex problem and solved by an interior point method. In this thesis we give details on how to use the interior point method to solve D-optimal design problems.
Finally, several numerical examples for A-, D-, and E-optimal designs, along with the MATLAB codes, are presented. / Graduate
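The three criteria can be evaluated directly once an approximate design (support points and weights) is fixed. A small sketch for a hypothetical quadratic regression model, written in Python for illustration (the thesis itself uses MATLAB) and unrelated to its numerical examples:

```python
import numpy as np

# Hypothetical quadratic model f(x) = (1, x, x^2) on design points in [-1, 1],
# with candidate weights w forming an approximate design.
xs = np.array([-1.0, 0.0, 1.0])
w = np.array([0.25, 0.5, 0.25])
F = np.vander(xs, 3, increasing=True)     # rows f(x_i) = (1, x_i, x_i^2)
M = F.T @ (w[:, None] * F)                # information matrix: sum_i w_i f f^T

Minv = np.linalg.inv(M)
A_crit = np.trace(Minv)                   # A-optimality: trace of M^{-1}
D_crit = np.linalg.det(Minv)              # D-optimality: det of M^{-1}
E_crit = np.linalg.eigvalsh(Minv)[-1]     # E-optimality: largest eigenvalue of M^{-1}
print(A_crit, D_crit, E_crit)
```

An optimal design minimizes the chosen criterion over the weights w; the SDP formulations in the thesis optimize exactly these quantities rather than merely evaluating them as above.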
|
38 |
State Estimation in Electrical Networks. Mosbah, Hossam 08 January 2013 (has links)
The continuous growth of the electric power grid through the addition of new substations leads to the construction of many new transmission lines, transformers, control devices, and circuit breakers connecting the generating capacity to the demand (loads). These components have a heavy influence on the performance of the electric grid. Reliable technical solutions to these issues can be provided by robust algorithms that give a full picture of the current state of the electrical network by monitoring the behavior of phase angles and voltage magnitudes.
In this thesis, the main idea is to implement several algorithms, including weighted least squares, the extended Kalman filter, and an interior point method, on three different electrical networks (the IEEE 14-, 30-, and 118-bus systems) in order to compare the performance of these algorithms, as reflected in the behavior of phase angles and voltage magnitudes, and to minimize the residual of the balanced load-flow real-time measurements so as to distinguish which algorithm is more robust. We also seek a precise understanding of the comparison between unconstrained and constrained algorithms.
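For a linear(ized) measurement model z = H x + e, the weighted-least-squares estimator at the heart of the first algorithm reduces to solving the normal equations of a weighted regression. A minimal sketch with made-up measurements, not data from an IEEE test system:

```python
import numpy as np

# Illustrative linear measurement model z = H x + e with 4 measurements
# of a 2-dimensional state (e.g. two bus angles in a DC model).
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0],
              [1.0,  1.0]])
z = np.array([1.02, 0.98, 0.05, 1.99])
W = np.diag([100.0, 100.0, 50.0, 50.0])   # weights = inverse measurement variances

# Normal equations: (H^T W H) x = H^T W z
G = H.T @ W @ H                            # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)
residual = z - H @ x_hat
print(x_hat, residual @ W @ residual)      # estimate and weighted residual norm
```

In a full AC state estimator H would be the Jacobian of the nonlinear measurement functions, and this linear solve would be one Gauss-Newton iteration; the constrained variants mentioned above add equality/inequality constraints to this least-squares problem.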
|
39 |
Constructing an Index Fund Using an Interior Point Primal-Dual Method. Celestin, Kamta, Galabe, Sampid Marius January 2011 (has links)
Optimization methods nowadays play a very important role in financial decisions such as portfolio management and the construction of index funds and pension funds. This Master's thesis is devoted to the problem of constructing an index fund. The problem is represented as a linear optimization problem of re-balancing the portfolio at minimum cost and is solved using a primal-dual interior point method. The index fund is constructed using ten companies from the Dow Jones Industrial Average (DJIA) index. The primal-dual interior point method was first implemented in Matlab and later in Java.
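A rebalancing LP of the kind described can be sketched with hypothetical data; the cost vector, sector constraints, and weights below are invented for illustration, and the solver shown is SciPy's HiGHS front end (which includes an interior-point variant) rather than the thesis's Matlab/Java implementation:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical index-fund LP: choose nonnegative weights w for 4 stocks to
# minimize total trading cost c^T w, subject to full investment (sum w = 1)
# and minimum exposure to two sectors. All numbers are made up.
c = np.array([0.002, 0.001, 0.003, 0.0015])     # per-unit trading costs
A_ub = -np.array([[1.0, 1.0, 0.0, 0.0],         # sector 1 exposure >= 0.4
                  [0.0, 0.0, 1.0, 1.0]])        # sector 2 exposure >= 0.3
b_ub = -np.array([0.4, 0.3])
A_eq = np.ones((1, 4)); b_eq = [1.0]            # fully invested

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
print(res.status, res.x, res.fun)
```

The ">=" constraints are expressed as "<=" by negating both sides, which is the form `linprog` expects; a real index-tracking model would add tracking-error or turnover terms on top of this skeleton.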
|
40 |
The use of preconditioned iterative linear solvers in interior-point methods and related topics. O'Neal, Jerome W. January 2005 (has links)
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2006. / Parker, R. Gary, Committee Member ; Shapiro, Alexander, Committee Member ; Nemirovski, Arkadi, Committee Member ; Green, William, Committee Member ; Monteiro, Renato, Committee Chair.
|