  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

A Branch-and-Cut Algorithm based on Semidefinite Programming for the Minimum k-Partition Problem

Ghaddar, Bissan January 2007 (has links)
The minimum k-partition (MkP) problem is a well-known optimization problem encountered in various applications, most notably in telecommunications and physics. Formulated in the early 1990s by Chopra and Rao, the MkP problem is the problem of partitioning the set of vertices of a graph into k disjoint subsets so as to minimize the total weight of the edges joining vertices in different partitions. In this thesis, we design and implement a branch-and-cut algorithm based on semidefinite programming (SBC) for the MkP problem. We describe and study the properties of two relaxations of the MkP problem, the linear programming and the semidefinite programming relaxations. We then derive a new strengthened relaxation based on semidefinite programming. This new relaxation provides tighter bounds than the other two relaxations, but suffers in terms of computational time. We further devise an iterative clustering heuristic (ICH), a novel heuristic that finds feasible solutions to the MkP problem, and we compare it to the hyperplane rounding techniques of Goemans and Williamson (for k=2) and of Frieze and Jerrum (for k=3). Our computational results support the conclusion that ICH provides a better feasible solution for the MkP. Furthermore, unlike hyperplane rounding, ICH remains very effective in the presence of negative edge weights. Next, we describe in detail the design and implementation of the SBC algorithm to find optimal solutions for the MkP problem. The ICH heuristic is used in our SBC algorithm to provide feasible solutions at each node of the branch-and-cut tree. Finally, we present computational results for the SBC algorithm on several classes of test instances with k=3, 5, and 7. Complete graphs with up to 60 vertices and sparse graphs with up to 100 vertices arising from a physics application were considered.
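The hyperplane rounding baseline mentioned in this abstract can be sketched in pure Python for k=2. The vertex vectors would normally come from solving the semidefinite relaxation; here they are illustrative placeholders, and the function name is an assumption, not the thesis's code:

```python
import math
import random

def hyperplane_rounding(vectors, edges, trials=100, seed=0):
    """Goemans-Williamson-style rounding for k=2: split vertices by the sign
    of their dot product with a random hyperplane normal, keeping the best
    cut found over several random hyperplanes."""
    rng = random.Random(seed)
    dim = len(next(iter(vectors.values())))
    best_cut, best_parts = -math.inf, None
    for _ in range(trials):
        r = [rng.gauss(0.0, 1.0) for _ in range(dim)]  # random normal vector
        side = {v: sum(a * b for a, b in zip(vec, r)) >= 0.0
                for v, vec in vectors.items()}
        cut = sum(w for (u, v, w) in edges if side[u] != side[v])
        if cut > best_cut:
            best_cut = cut
            best_parts = ({v for v in side if side[v]},
                          {v for v in side if not side[v]})
    return best_cut, best_parts

# Toy instance: a 4-cycle with unit weights; the "solved" SDP vectors
# are hand-picked placeholders rather than solver output.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
vectors = {0: (1.0, 0.0), 1: (-1.0, 0.0), 2: (1.0, 0.0), 3: (-1.0, 0.0)}
cut, parts = hyperplane_rounding(vectors, edges)
```

Note that this sign-based scheme is exactly why rounding degrades with negative edge weights: the random hyperplane maximizes separated weight in expectation only when crossing an edge is always beneficial.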
332

Convex duality in constrained mean-variance portfolio optimization under a regime-switching model

Donnelly, Catherine January 2008 (has links)
In this thesis, we solve a mean-variance portfolio optimization problem with portfolio constraints under a regime-switching model. Specifically, we seek a portfolio process which minimizes the variance of the terminal wealth, subject to a terminal wealth constraint and convex portfolio constraints. The regime-switching is modeled using a finite state space, continuous-time Markov chain, and the market parameters are allowed to be random processes. The solution to this problem is of interest to investors in financial markets, such as pension funds, insurance companies and individuals. We establish the existence and characterization of the solution to the given problem using a convex duality method. We encode the constraints on the given problem as static penalty functions in order to derive the primal problem. Next, we synthesize the dual problem from the primal problem using convex conjugate functions. We show that the solution to the dual problem exists. From the construction of the dual problem, we find a set of necessary and sufficient conditions for the primal and dual problems to each have a solution. Using these conditions, we can show the existence of the solution to the given problem and characterize it in terms of the market parameters and the solution to the dual problem. The results of the thesis lay the foundation for finding an actual solution to the given problem by looking at specific examples. If we can find the solution to the dual problem for a specific example, then, using the characterization of the solution to the given problem, we may be able to find the actual solution to that example. In order to use the convex duality method, we have to prove a martingale representation theorem for processes which are locally square-integrable martingales with respect to the filtration generated by a Brownian motion and a finite state space, continuous-time Markov chain.
This result may be of interest in problems involving regime-switching models which require a martingale representation theorem.
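Schematically, the constrained problem this abstract describes has the following form; the notation below is assumed for illustration, not quoted from the thesis:

```latex
\begin{aligned}
\min_{\pi \in \mathcal{A}(K)} \quad & \operatorname{Var}\!\left(X_T^{\pi}\right) \\
\text{subject to} \quad & \mathbb{E}\!\left[X_T^{\pi}\right] = d, \\
& \pi_t \in K \quad \text{for all } t \in [0,T],
\end{aligned}
```

where $X_T^{\pi}$ is the terminal wealth under portfolio process $\pi$, $K$ is the convex constraint set, and the coefficients of the wealth dynamics are modulated by the continuous-time Markov chain. Encoding the constraints as penalty functions and passing to convex conjugates, as the abstract outlines, yields the dual problem.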
333

New Convex Relaxations for the Maximum Cut and VLSI Layout Problems

Ferreira Fialho dos Anjos, Miguel Nuno January 2001 (has links)
It is well known that many of the optimization problems which arise in applications are "hard", which usually means that they are NP-hard. Hence much research has been devoted to finding "good" relaxations for these hard problems. Usually a "good" relaxation is one which can be solved (either exactly or within a prescribed numerical tolerance) in polynomial time. Nesterov and Nemirovskii showed that by this criterion, many convex optimization problems are good relaxations. This thesis presents new convex relaxations for two such hard problems, namely the Maximum-Cut (Max-Cut) problem and the VLSI (Very Large Scale Integration of electronic circuits) layout problem. We derive and study the properties of two new strengthened semidefinite programming relaxations for the Max-Cut problem. Our theoretical results hold for every instance of Max-Cut; in particular, we make no assumptions about the edge weights. The first relaxation provides a strengthening of the well-known Goemans-Williamson relaxation, and the second relaxation is a further tightening of the first. We prove that the tighter relaxation automatically enforces the well-known triangle inequalities, and in fact is stronger than the simple addition of these inequalities to the Goemans-Williamson relaxation. We further prove that the tighter relaxation fully characterizes some low-dimensional faces of the cut polytope via the rank of its feasible matrices. We also address some practical issues arising in the solution of these relaxations and present numerical results showing the remarkably good bounds computed by the tighter relaxation. For the VLSI layout problem, we derive a new relaxation by extending the "target distance" concept recently introduced by Etawil and Vannelli. The resulting AR (Attractor-Repeller) model improves on the NLT (Non-linear optimization Layout Technique) method of van Camp et al. by efficiently finding a good initial point for the NLT solver.
Because the AR model is not a convex optimization problem, we isolate the source of the non-convexity and thereby derive a convex version of the model. This derivation leads to the definition of certain generalized target distances with an interesting practical interpretation. Finally, we present numerical results that demonstrate the potential of the AR model.
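For reference, the Goemans-Williamson relaxation that the first strengthening builds on can be stated in standard notation (assumed here, not quoted from the thesis) as:

```latex
\begin{aligned}
\max_{X} \quad & \tfrac{1}{4} \sum_{i<j} w_{ij}\,\bigl(1 - X_{ij}\bigr) \\
\text{subject to} \quad & X_{ii} = 1, \quad i = 1,\dots,n, \\
& X \succeq 0,
\end{aligned}
```

and the triangle inequalities referred to above take the form $X_{ij} + X_{ik} + X_{jk} \ge -1$ and $X_{ij} - X_{ik} - X_{jk} \ge -1$ for all triples $\{i,j,k\}$, which are valid for any matrix $X = xx^{\mathsf T}$ with $x \in \{\pm 1\}^n$.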
336

Developing Parsimonious and Efficient Algorithms for Water Resources Optimization Problems

Asadzadeh Esfahani, Masoud 13 November 2012 (has links)
In the current water resources scientific literature, a wide variety of engineering design problems are solved in a simulation-optimization framework. These problems can have single or multiple objective functions, and their decision variables can have discrete or continuous values. The majority of the current literature in the field of water resources systems optimization reports using heuristic global optimization algorithms, including evolutionary algorithms, with great success. These algorithms have multiple parameters that control their behavior, both in terms of computational efficiency and the ability to find near globally optimal solutions. Values of these parameters are generally obtained by trial and error and are case study dependent. On the other hand, water resources simulation-optimization problems often involve computationally intensive simulation models that can require seconds to hours for a single simulation. Furthermore, analysts may have a limited computational budget for solving these problems, and may therefore be unable to spend part of that budget fine-tuning algorithm settings and parameter values. In general, then, parsimony in the number of parameters is an important factor in the applicability and performance of optimization algorithms for solving computationally intensive problems. A major contribution of this thesis is the development of a highly efficient, single objective, parsimonious optimization algorithm for solving problems with discrete decision variables. The algorithm is called Hybrid Discrete Dynamically Dimensioned Search (HD-DDS), and is designed based on Dynamically Dimensioned Search (DDS), which was developed by Tolson and Shoemaker (2007) for solving single objective hydrologic model calibration problems with continuous decision variables. The motivation for developing HD-DDS comes from the parsimony and high performance of the original version of DDS.
Similar to DDS, HD-DDS has a single parameter with a robust default value. HD-DDS is successfully applied to several benchmark water distribution system design problems, where the decision variables are pipe sizes chosen from the available pipe size options. Results show that HD-DDS exhibits superior performance in specific comparisons to state-of-the-art optimization algorithms. The parsimony and efficiency of the original and discrete versions of DDS, and their successful application to single objective water resources optimization problems with discrete and continuous decision variables, motivated the development of a multi-objective optimization algorithm based on DDS. This algorithm is called Pareto Archived Dynamically Dimensioned Search (PA-DDS). Algorithm parsimony is a major factor in the design of PA-DDS, which has a single parameter inherited from its search engine, DDS. In each iteration, PA-DDS selects one archived non-dominated solution and perturbs it to search for new solutions. The solution perturbation scheme of PA-DDS follows the original or discrete version of DDS depending on whether the decision variable is continuous or discrete, so PA-DDS can handle both types of decision variables. PA-DDS is applied to several benchmark mathematical problems, water distribution system design problems, and water resources model calibration problems with great success. It is shown that the hypervolume contribution, HVC1, as defined in Knowles et al. (2003), is the superior selection metric for PA-DDS when solving multi-objective optimization problems with Pareto fronts of general (unknown) shape. However, one of the main contributions of this thesis is the development of a selection metric specifically designed for solving multi-objective optimization problems with a known or expected convex Pareto front, such as water resources model calibration problems.
The selection metric is called the convex hull contribution (CHC) and makes the optimization algorithm sample solely from the subset of archived solutions that form the convex approximation of the Pareto front. Although CHC is generally applicable to any stochastic search optimization algorithm, it is applied here to PA-DDS for solving six water resources calibration case studies with two or three objective functions. These case studies are solved by PA-DDS with the CHC and HVC1 selection metrics using 1,000 solution evaluations, and by PA-DDS with CHC selection and two popular multi-objective optimization algorithms, AMALGAM and ε-NSGAII, using 10,000 solution evaluations. Results are compared based on the best-case and worst-case performances (out of multiple optimization trials) of each algorithm, to measure each algorithm's expected performance range. Comparing the best-case performance of these algorithms shows that PA-DDS with CHC selection using 1,000 solution evaluations performs very well in five out of six case studies. Comparing the worst-case performance shows that, with 1,000 solution evaluations, PA-DDS with CHC selection performs well in four out of six case studies. Furthermore, PA-DDS with CHC selection using 10,000 solution evaluations performs comparably to AMALGAM and ε-NSGAII. It is therefore concluded that PA-DDS with CHC selection is a powerful optimization algorithm for finding high-quality solutions to multi-objective water resources model calibration problems with a convex Pareto front, especially when the computational budget is limited.
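Since DDS is the search engine at the heart of this work, a minimal sketch of its perturbation logic may help. This follows the published description of DDS with its single parameter r (default 0.2); the function and variable names and the toy objective are mine, not the thesis's code:

```python
import math
import random

def dds(objective, lo, hi, evals=200, r=0.2, seed=1, x0=None):
    """Minimal Dynamically Dimensioned Search sketch: the probability of
    perturbing each dimension shrinks as the evaluation budget is used up,
    moving the search from global to increasingly local."""
    rng = random.Random(seed)
    n = len(lo)
    best = list(x0) if x0 is not None else [rng.uniform(lo[j], hi[j]) for j in range(n)]
    best_f = objective(best)
    for i in range(1, evals):
        p = 1.0 - math.log(i) / math.log(evals)    # dimension inclusion probability
        dims = [j for j in range(n) if rng.random() < p]
        if not dims:                               # always perturb at least one dim
            dims = [rng.randrange(n)]
        cand = best[:]
        for j in dims:
            x = cand[j] + r * (hi[j] - lo[j]) * rng.gauss(0.0, 1.0)
            if x < lo[j]:                          # reflect excursions at the bounds
                x = lo[j] + (lo[j] - x)
                if x > hi[j]:
                    x = lo[j]
            elif x > hi[j]:
                x = hi[j] - (x - hi[j])
                if x < lo[j]:
                    x = hi[j]
            cand[j] = x
        f = objective(cand)
        if f <= best_f:                            # greedy acceptance
            best, best_f = cand, f
    return best, best_f

# Toy use: minimize the sphere function over [-5, 5]^4 from a fixed start.
sol, val = dds(lambda x: sum(v * v for v in x),
               [-5.0] * 4, [5.0] * 4, x0=[4.0] * 4)
```

The single tunable parameter r (the perturbation size as a fraction of each variable's range) is what the abstract means by algorithm parsimony.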
337

Optimization in multi-relay wireless networks

Nguyen, Huu Ngoc Duy 08 June 2009 (has links)
The concept of cooperation in communications has drawn a lot of research attention in recent years due to its potential to improve the efficiency of wireless networks. This new form of communications allows some users to act as relays and assist the transmission of other users' information signals. The aim of this thesis is to apply optimization techniques to the design of multi-relay wireless networks employing cooperative communications. The thesis is organized into two parts, "Distributed space-time coding" (DSTC) and "Distributed beamforming", which cover the two main approaches to cooperative communications over multi-relay networks.

In Part I of the thesis, various aspects of the distributed implementation of space-time coding in a wireless relay network are treated. First, the thesis proposes a new fully-diverse distributed code which allows noncoherent reception at the destination. Second, the problem of coordinating the power allocation (PA) between source and relays to achieve the optimal performance of DSTC is studied, and a novel PA scheme is developed. It is shown that the proposed PA scheme can attain the maximum diversity order of DSTC and significantly outperform other suboptimal PA schemes. Third, the thesis presents the optimal PA scheme to minimize the mean-square error (MSE) of channel estimation during the training phase of DSTC. The effect of imperfect channel estimation on the performance of DSTC is also thoroughly studied.

In Part II of the thesis, optimal distributed beamforming designs are developed for a wireless multiuser multi-relay network. Two design criteria for the optimal distributed beamforming at the relays are considered: (i) minimizing the total relay power subject to a guaranteed quality of service (QoS), measured in terms of the signal-to-noise ratio (SNR) at the destinations, and (ii) jointly maximizing the SNR margin at the destinations subject to power constraints at the relays.
Based on convex optimization techniques, it is shown that these problems can be formulated and solved via second-order conic programming (SOCP). In addition, this part also proposes simple and fast iterative algorithms to directly solve these optimization problems.
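In generic form, the first design criterion is a QoS-constrained power minimization; the notation below is illustrative only, not the thesis's:

```latex
\begin{aligned}
\min_{\mathbf{w}} \quad & \sum_{r=1}^{R} p_r(\mathbf{w}) \\
\text{subject to} \quad & \mathrm{SNR}_m(\mathbf{w}) \ge \gamma_m, \quad m = 1,\dots,M,
\end{aligned}
```

where $\mathbf{w}$ collects the relay beamforming weights, $p_r$ is the transmit power of relay $r$, and $\gamma_m$ is the SNR target of destination $m$. The key step in such designs is showing that each SNR constraint can be rearranged into a second-order cone constraint of the form $\lVert \mathbf{A}_m \mathbf{w} + \mathbf{b}_m \rVert \le \mathbf{c}_m^{\mathsf H} \mathbf{w}$, which is what makes the problem solvable by SOCP.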
338

A New Active Constellation Extension Scheme for PAPR Reduction in OFDM Systems

Huang, Bo-Rong 23 August 2011 (has links)
High peak-to-average power ratio (PAPR) is a serious drawback of orthogonal frequency division multiplexing (OFDM) systems. Among the various methods proposed to reduce PAPR, the active constellation extension (ACE) scheme has excellent performance. Two schemes were proposed for traditional ACE, one of which, ACE-Smart Gradient-Projection (SGP), can significantly reduce PAPR in its first iteration. However, ACE-SGP does not obtain the optimal solution. We observe that the scheme can be formulated as a convex optimization problem, so an optimal solution minimizing PAPR can be found by a convex optimization algorithm. The two proposed schemes are based on two low-complexity schemes, respectively, and both are shown to be convex optimization problems. Although the transmission power and the complexity of the optimization algorithm in the proposed schemes are higher than those of the traditional ACE-SGP scheme, the proposed schemes achieve a noticeable improvement in PAPR reduction.
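For context, the PAPR of one OFDM symbol is straightforward to compute; a minimal pure-Python sketch (the function name and parameters are mine, not from the thesis):

```python
import cmath
import math

def papr_db(symbols, oversample=4):
    """PAPR (in dB) of one OFDM symbol: evaluate the oversampled inverse DFT
    of the frequency-domain constellation points, then compare the peak
    instantaneous power to the average power of the time-domain signal."""
    n = len(symbols)
    m = n * oversample
    # direct O(m*n) inverse DFT of the zero-padded spectrum, for clarity
    time = [sum(symbols[k] * cmath.exp(2j * cmath.pi * k * t / m)
                for k in range(n)) / n
            for t in range(m)]
    powers = [abs(x) ** 2 for x in time]
    return 10.0 * math.log10(max(powers) * len(powers) / sum(powers))

# Worst case: all subcarriers carry the same value, so they add coherently
# and PAPR equals the number of subcarriers (here 16, i.e. about 12.04 dB).
worst = papr_db([1.0] * 16)
```

Schemes such as ACE reduce this metric by moving outer constellation points outward, within regions that do not degrade detection, to flatten the time-domain peaks.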
339

Spectrum Sharing in Cognitive Radio Systems Under Outage Probability Constraint

Cai, Pei Li 2009 December 1900 (has links)
For traditional wireless communication systems, static spectrum allocation is the major spectrum allocation methodology. However, according to recent investigations by the FCC, this has led to more than 70 percent of the allocated spectrum in the United States being under-utilized. Cognitive radio (CR) technology, which supports opportunistic spectrum sharing, is one idea proposed to improve the overall utilization efficiency of the radio spectrum. In this thesis we consider a CR communication system based on spectrum sharing schemes, where a secondary user (SU) link with multiple transmitting antennas and a single receiving antenna coexists with a primary user (PU) link with a single receiving antenna. At the SU transmitter (SU-Tx), the channel state information (CSI) of the SU link is assumed to be perfectly known, while the interference channel from the SU-Tx to the PU receiver (PU-Rx) is not perfectly known due to the limited cooperation between the SU and the PU. As such, the SU-Tx is only assumed to know that the interference channel gain can take values from a finite set with certain probabilities. Considering an SU transmit power constraint, our design objective is to determine the transmit covariance matrix that maximizes the SU rate, while we protect the PU by enforcing both a PU average interference constraint and a PU outage probability constraint. This problem is first formulated as a non-convex optimization problem with a non-explicit probabilistic constraint, which is then approximated as a mixed binary integer programming (MBIP) problem and solved with the Branch and Bound (BB) algorithm. The complexity of the BB algorithm is analyzed and numerical results are presented to validate the effectiveness of the proposed algorithm. A key result proved in this thesis is that the rank of the optimal transmit covariance matrix is one, i.e., CR beamforming is optimal under PU outage constraints.
Finally, a heuristic algorithm is proposed to provide a suboptimal solution to our MBIP problem by efficiently (in polynomial time) solving a particularly-constructed convex problem.
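The branch-and-bound idea used above can be illustrated generically. The sketch below is a textbook B&B on a toy 0/1 knapsack with an LP-style fractional bound; it shows the bound/prune/branch structure only and is not the thesis's MBIP formulation:

```python
def branch_and_bound_knapsack(values, weights, capacity):
    """Generic branch-and-bound skeleton for binary programs, illustrated on
    a 0/1 knapsack: bound each node with the fractional (LP) relaxation and
    prune nodes whose bound cannot beat the incumbent."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)

    def bound(idx, cur_value, cur_weight):
        # optimistic value: fill remaining capacity greedily, allowing a fraction
        b, w = cur_value, cur_weight
        for i in order[idx:]:
            if w + weights[i] <= capacity:
                w += weights[i]
                b += values[i]
            else:
                b += values[i] * (capacity - w) / weights[i]
                break
        return b

    best = [0.0]  # incumbent value, boxed so the nested function can update it

    def search(idx, cur_value, cur_weight):
        if cur_weight > capacity:
            return                                  # infeasible node
        if cur_value > best[0]:
            best[0] = cur_value                     # new incumbent
        if idx == len(order) or bound(idx, cur_value, cur_weight) <= best[0]:
            return                                  # leaf, or pruned by bound
        i = order[idx]
        search(idx + 1, cur_value + values[i], cur_weight + weights[i])  # take item
        search(idx + 1, cur_value, cur_weight)                           # skip item

    search(0, 0.0, 0.0)
    return best[0]

# Classic toy instance: optimal choice is the last two items, value 220.
opt = branch_and_bound_knapsack([60, 100, 120], [10, 20, 30], 50)
```

In the thesis's setting the binary variables instead encode the probabilistic outage constraint, and each node's bound comes from a convex relaxation rather than a greedy fill, but the enumeration skeleton is the same.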
340

A novel design of polishing tool for axially symmetrical surface

Yang, Jian-jhe 11 August 2006 (has links)
This thesis develops a novel polishing tool system suited to the convex and concave axially symmetric surfaces of combined surfaces. There are two design goals for this system. First, the system can be used to polish concave or convex cone surfaces with various dimensions and cone angles by adjusting the geometric features of its structure. Second, the polishing tool is expected to have a long service life in real applications. With these advantages, an efficient and controllable polishing system can be developed.

An inference process, based on a top-down planning strategy, was used to obtain the concept design of the polishing tool. There are two major parts in the structure of the polishing tool system. The first is its elastic structure; both its geometric configuration and the load applied during work are adjustable. The second is the polishing tool of cylindrical shape; with this specific geometric feature, the effect of tool wear on the polishing rate is minimized. The finite element method was adopted to analyze the deformation characteristics of the elastic structure, and accordingly an optimal design for the shape and dimensions of the elastic structure was determined. The experimental study showed that the developed polishing system had high repeatability in machining rate. It was also demonstrated that the machining rate of the system was insensitive to tool wear during the polishing process. This advantage may allow the system to gain significant benefit in reducing the need for tool replacement. Finally, it was shown that the experimental trends of machining rate with changes in applied load or polishing speed followed those of a cylindrical polishing system, and both can be properly predicted based on lubricating theories.
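The proportionality of machining rate to applied load and polishing speed noted at the end is commonly summarized in the polishing literature by Preston's equation. A minimal sketch, with illustrative numbers that are assumptions rather than values from the thesis:

```python
def preston_removal_rate(k_p, pressure, velocity):
    """Preston's equation: material removal rate proportional to the product
    of contact pressure and relative velocity, MRR = k_p * P * V."""
    return k_p * pressure * velocity

# Illustrative numbers only (k_p in m^2/N, P in Pa, V in m/s):
# doubling the applied load doubles the predicted removal rate.
base = preston_removal_rate(1e-13, 2.0e4, 0.5)
doubled = preston_removal_rate(1e-13, 4.0e4, 0.5)
```

The Preston coefficient k_p absorbs the slurry, pad, and workpiece properties, which is where the lubricating-theory analysis mentioned above would enter.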
