51

Constrained Motion Particle Swarm Optimization for Non-Linear Time Series Prediction

Sapankevych, Nicholas 13 March 2015 (has links)
Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting, weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications, and the models for these systems are usually not known a priori. Accurate and unbiased estimation of time series data produced by these systems cannot always be achieved using well-known linear techniques, and thus the estimation process requires more advanced time series prediction algorithms. One type of time series interpolation and prediction algorithm that has been proven effective for these various types of applications is Support Vector Regression (SVR) [1], which is based on the Support Vector Machine (SVM) developed by Vapnik et al. [2, 3]. The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary and not defined a priori. SVMs have also been proven to outperform other non-linear techniques, including neural-network-based non-linear prediction techniques such as multi-layer perceptrons. As with most time series prediction algorithms, there are typically challenges associated with applying a given heuristic to any general problem. One difficult challenge in using SVR to solve these types of problems is the selection of the free parameters associated with the SVR algorithm. There is no given heuristic to select SVR free parameters, and the user is left to adjust these parameters in an ad hoc manner. The focus of this dissertation is to present an alternative to the typical ad hoc approach of tuning SVR for time series prediction problems by using Particle Swarm Optimization (PSO) to assist in the SVR free parameter selection process. Developed by Kennedy and Eberhart [4-8], PSO is a technique that emulates the process living creatures (such as birds or insects) use to discover food resources at a given geographic location. PSO has been proven to be an effective technique for many different kinds of optimization problems [9-11].
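A minimal sketch of PSO-assisted SVR parameter selection of the kind described above, assuming scikit-learn's SVR and a hand-rolled global-best PSO over (C, epsilon, gamma); the bounds, swarm settings, and validation-MSE fitness are illustrative choices, not those of the dissertation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

def fitness(params, X_train, y_train, X_val, y_val):
    """Validation MSE of an SVR trained with candidate (C, epsilon, gamma)."""
    C, eps, gamma = params
    model = SVR(C=C, epsilon=eps, gamma=gamma)
    model.fit(X_train, y_train)
    return mean_squared_error(y_val, model.predict(X_val))

def pso_tune_svr(X_train, y_train, X_val, y_val,
                 n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([1e-2, 1e-4, 1e-4])   # assumed lower bounds on (C, epsilon, gamma)
    hi = np.array([1e3, 1.0, 10.0])     # assumed upper bounds
    pos = rng.uniform(lo, hi, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X_train, y_train, X_val, y_val) for p in pos])
    g = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 3))
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p, X_train, y_train, X_val, y_val) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g  # best (C, epsilon, gamma) found by the swarm
```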
52

On an Extension of Condition Number Theory to Non-Conic Convex Optimization

Freund, Robert M.; Ordóñez, Fernando 02 1900 (has links)
The purpose of this paper is to extend, as much as possible, the modern theory of condition numbers for conic convex optimization:

z* := min_x c^T x  s.t.  Ax − b ∈ C_Y, x ∈ C_X,

to the more general non-conic format:

(GP_d)  z* := min_x c^T x  s.t.  Ax − b ∈ C_Y, x ∈ P,

where P is any closed convex set, not necessarily a cone, which we call the ground-set. Although any convex problem can be transformed to conic form, such transformations are neither unique nor natural given the natural description of many problems, thereby diminishing the relevance of data-based condition number theory. Herein we extend the modern theory of condition numbers to the problem format (GP_d). As a byproduct, we are able to state and prove natural extensions of many theorems from the conic-based theory of condition numbers to this broader problem format.
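As a reference for the formats involved, here is a sketch in standard notation of the conic program, the ground-set format, and the Renegar-style condition number being extended; the condition-number definition is supplied from the standard theory, not quoted from the paper.

```latex
\[
\begin{aligned}
&\text{conic format:} && z_* := \min_x \; c^{\mathsf T}x
    \quad \text{s.t.}\quad Ax - b \in C_Y,\; x \in C_X,\\[2pt]
&\text{ground-set format } (GP_d): && z_* := \min_x \; c^{\mathsf T}x
    \quad \text{s.t.}\quad Ax - b \in C_Y,\; x \in P,\\[2pt]
&\text{condition number:} && \mathcal{C}(d) := \frac{\|d\|}{\rho(d)},
    \quad \rho(d) := \text{distance of the data } d=(A,b,c)
    \text{ to infeasibility,}
\end{aligned}
\]
% so C(d) is large precisely when small perturbations of the data
% can destroy primal or dual feasibility.
```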
53

On Distributed Optimization in Networked Systems

Johansson, Björn January 2008 (has links)
Numerous control and decision problems in networked systems can be posed as optimization problems. Examples include the framework of network utility maximization for resource allocation in communication networks, multi-agent coordination in robotics, and collaborative estimation in wireless sensor networks (WSNs). In contrast to classical distributed optimization, which focuses on improving computational efficiency and scalability, these new applications require simple mechanisms that can operate under limited communication. In this thesis, we develop several novel mechanisms for distributed optimization under communication constraints, and apply these to several challenging engineering problems. In particular, we devise three tailored optimization algorithms relying only on nearest neighbor, also known as peer-to-peer, communication. Two of the algorithms are designed to minimize a non-smooth convex additive objective function, in which each term corresponds to a node in a network. The first method is an extension of the randomized incremental subgradient method where the update order is given by a random walk on the underlying communication graph, resulting in a randomized peer-to-peer algorithm with guaranteed convergence properties. The second method combines local subgradient iterations with consensus steps to average local update directions. The resulting optimization method can be executed in a peer-to-peer fashion and analyzed using epsilon-subgradient methods. The third algorithm is a center-free algorithm, which solves a non-smooth resource allocation problem with a separable additive convex objective function subject to a constant sum constraint. Then we consider cross-layer optimization of communication networks, and demonstrate how optimization techniques allow us to engineer protocols that mimic the operation of distributed optimization algorithms to obtain an optimal resource allocation. We describe a novel use of decomposition methods for cross-layer optimization, and present a flowchart that can be used to categorize and visualize a large part of the current literature on this topic. In addition, we devise protocols that optimize the resource allocation in frequency-division multiple access (FDMA) networks and spatial reuse time-division multiple access (TDMA) networks, respectively. Next we investigate some variants of the consensus problem for multi-robot coordination, for which it is usually standard to assume that agents should meet at the barycenter of the initial states. We propose a negotiation strategy to find an optimal meeting point in the sense that the agents' trajectories to the meeting point minimize a quadratic cost criterion. Furthermore, we also demonstrate how an augmented state vector can be used to boost the convergence rate of the standard linear distributed averaging iterations, and we present necessary and sufficient convergence conditions for a general version of these iterations. Finally, we devise a generic optimization software component for WSNs. To this end, we implement some of the most promising optimization algorithms developed by ourselves and others in our WSN testbed, and present experimental results, which show that the proposed algorithms work surprisingly well.
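A minimal sketch of the second method described above (consensus averaging combined with local subgradient steps), with an assumed doubly stochastic weight matrix and a toy non-smooth objective; the step size and weights are illustrative, not the thesis's.

```python
import numpy as np

def consensus_subgradient(subgrads, W, x0, steps=200, alpha=0.01):
    """Distributed minimization of sum_i f_i(x): each node mixes its
    neighbors' iterates with weights W[i, j] (consensus step), then takes
    a local subgradient step on its own term f_i.

    subgrads : list of callables, subgrads[i](x) -> subgradient of f_i at x
    W        : (n, n) doubly stochastic matrix matching the communication graph
    x0       : (n, d) initial iterates, one row per node
    """
    x = x0.copy()
    for _ in range(steps):
        x = W @ x                                  # consensus (averaging) step
        for i in range(len(subgrads)):
            x[i] -= alpha * subgrads[i](x[i])      # local subgradient step
    return x.mean(axis=0)                          # network-wide estimate

# Toy run: f_i(x) = |x - t_i|, so the network optimum is the median of t.
n = 5
t = np.array([0.0, 1.0, 2.0, 3.0, 10.0])
subgrads = [lambda x, ti=ti: np.sign(x - ti) for ti in t]
W = np.zeros((n, n))
for i in range(n):                                 # ring graph, simple mixing weights
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25
x_star = consensus_subgradient(subgrads, W, x0=t.reshape(n, 1))
```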
54

Optimization in multi-relay wireless networks

Nguyen, Huu Ngoc Duy 08 June 2009
The concept of cooperation in communications has drawn a lot of research attention in recent years due to its potential to improve the efficiency of wireless networks. This new form of communications allows some users to act as relays and assist the transmission of other users' information signals. The aim of this thesis is to apply optimization techniques in the design of multi-relay wireless networks employing cooperative communications. In general, the thesis is organized into two parts: "Distributed space-time coding" (DSTC) and "Distributed beamforming", which cover the two main approaches to cooperative communications over multi-relay networks.

In Part I of the thesis, various aspects of the distributed implementation of space-time coding in a wireless relay network are treated. First, the thesis proposes a new fully-diverse distributed code which allows noncoherent reception at the destination. Second, the problem of coordinating the power allocation (PA) between source and relays to achieve the optimal performance of DSTC is studied and a novel PA scheme is developed. It is shown that the proposed PA scheme can obtain the maximum diversity order of DSTC and significantly outperform other suboptimal PA schemes. Third, the thesis presents the optimal PA scheme to minimize the mean-square error (MSE) in channel estimation during the training phase of DSTC. The effect of imperfect channel estimation on the performance of DSTC is also thoroughly studied.

In Part II of the thesis, optimal distributed beamforming designs are developed for a wireless multiuser multi-relay network. Two design criteria for the optimal distributed beamforming at the relays are considered: (i) minimizing the total relay power subject to a guaranteed Quality of Service (QoS) measured in terms of signal-to-noise ratio (SNR) at the destinations, and (ii) jointly maximizing the SNR margin at the destinations subject to power constraints at the relays. Based on convex optimization techniques, it is shown that these problems can be formulated and solved via second-order cone programming (SOCP). In addition, this part also proposes simple and fast iterative algorithms to directly solve these optimization problems.
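A hedged sketch of design criterion (i) in a deliberately simplified single-destination setting (relay noise ignored), assuming CVXPY; it only illustrates how the power-minimization QoS problem becomes a conic program, not the thesis's multiuser formulation.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n_relays, gamma, sigma2 = 6, 2.0, 0.1          # relays, SNR target, noise power (toy values)

# Effective source->relay->destination channel per relay (assumed known).
h = (rng.normal(size=n_relays) + 1j * rng.normal(size=n_relays)) / np.sqrt(2)

w = cp.Variable(n_relays, complex=True)        # relay beamforming weights

# Minimize total relay power subject to an SNR-style QoS constraint.
# Fixing the phase so that h^H w is real costs no generality here and
# makes the constraint second-order-cone representable.
constraints = [cp.real(h.conj() @ w) >= np.sqrt(gamma * sigma2),
               cp.imag(h.conj() @ w) == 0]
prob = cp.Problem(cp.Minimize(cp.sum_squares(w)), constraints)
prob.solve()
print("min total relay power:", prob.value)
```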
55

Distance Measurement-Based Cooperative Source Localization: A Convex Range-Free Approach

Kiraz, Fatma January 2013 (has links)
One of the most essential objectives in wireless sensor networks (WSNs) is to determine the spatial coordinates of a source or a sensor node holding information. In this study, the problem of range measurement-based localization of a signal source or a sensor is revisited. The main challenge of the problem results from the non-convexity associated with range measurements, calculated as the distances from the set of nodes with known positions to a fixed sensor node. Such measurements corresponding to certain distances are non-convex in two and three dimensions. Recent attempts in the literature to eliminate the non-convexity approach the problem as a non-convex geometric minimization problem, using techniques to handle the non-convexity. This study proposes a new fuzzy range-free sensor localization method. The method uses notions of Euclidean geometry to convert the problem into a convex geometric problem. The convex equivalent problem is built using convex fuzzy sets, thus avoiding multiple stable local minima, and a gradient-based localization algorithm is then chosen to solve the problem. Next, the proposed algorithm is simulated under various scenarios, including varying the number of available source nodes, the fuzzification level, and the area coverage. The results are compared with an algorithm having similar fuzzy logic settings. The behaviour of both algorithms with noisy measurements is also discussed. Finally, future extensions of the algorithm are suggested, along with some guidelines.
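The following is a generic convex surrogate for range-based localization solved by gradient descent, included only to illustrate how convexification removes spurious local minima; it is not the author's fuzzy range-free method, and all values are illustrative.

```python
import numpy as np

def localize(anchors, ranges, steps=500, lr=0.1):
    """Convex localization sketch: penalize only the amount by which the
    estimate lies OUTSIDE each measured range ball. The hinge-squared
    objective sum_i max(0, ||x - a_i|| - r_i)^2 is convex, so plain
    gradient descent cannot get trapped in a spurious local minimum."""
    x = anchors.mean(axis=0)                   # start at the anchor centroid
    for _ in range(steps):
        grad = np.zeros_like(x)
        for a, r in zip(anchors, ranges):
            d = np.linalg.norm(x - a)
            if d > r:                          # outside the ball: hinge is active
                grad += 2.0 * (d - r) * (x - a) / d
        x -= lr * grad
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.1   # slightly padded ranges
print(localize(anchors, ranges))
```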
56

A Convex Decomposition Perspective on Dynamic Bandwidth Allocation and Applications

Morell Pérez, Antoni 23 September 2008 (has links)
Traditionally, multiple access schemes in multi-user communication systems have been designed either connection-oriented or traffic-oriented. In the first case, the goal is to provide as many orthogonal channels as possible, each one serving a different connection; this is the motivation behind the well-known FDMA, TDMA and CDMA solutions. On the other hand, random access techniques, which started with the ALOHA protocol, aim to statistically multiplex a shared communication medium by exploiting the random and bursty nature of transmission needs in data networks. Most multiple access solutions can be interpreted according to this classification or as a combination of the two approaches. Notwithstanding, modern systems, such as the digital satellite communications standard DVB-RCS or the broadband wireless access standard WiMAX, implement a multiple access technique where users request transmission opportunities and receive grants from the network, therefore requiring dynamic bandwidth allocation techniques. The concept of dynamic bandwidth allocation is broad and involves a number of physical and link layer variables, configurations and protocols. In this Ph.D. dissertation we first explore the mathematical foundation that is required to coordinate the distinct layers of the OSI protocol stack and the distinct nodes within the network. We discuss decomposition techniques focused on the resolution of convex programs, which have elegantly solved many problems in the signal processing and communications fields during the last years. Known schemes are reviewed and a novel decomposition methodology is proposed. Thereafter, we compare the four resulting strategies, each one having its own particular signalling needs, which results in distinct cross-layer interactions or signalling protocols at the implementation level. The results in terms of iterations required to converge are favourable to the proposed method, thus opening a new line of research.

Finally, we contribute two practical application examples in the DVB-RCS and WiMAX systems. First, we formulate the dynamic bandwidth allocation problem that is derived from the multiple access schemes of both systems. Thereafter, the resulting Network Utility Maximization (NUM) based problem is solved by means of the previous decomposition mechanisms. The goal is to guarantee fairness among the users while preserving their Quality of Service (QoS). To achieve that, we choose adequate utility functions that allow us to balance the allocation towards the highest-priority traffic flows under a common fairness framework. We show that in the scenarios considered, the novel coupled-decomposition method reports significant gains, since it substantially reduces the iterations required (fewer iterations imply less signalling) or it reduces the time needed to obtain the optimal allocation when it is computed centrally (so more users can be managed). We further show the advantages of cross-layer interactions with the physical and upper layers, which allow the transmission parameters to be adjusted in favour of the highest-priority traffic and the QoS requirements to be drawn from values typically available at upper layers. In general, an efficient implementation of dynamic bandwidth allocation techniques in Demand Assignment Multiple Access (DAMA) schemes may report significant performance gains, but it requires proper coordination among system layers and network nodes, which is attained thanks to decomposition techniques. Each new scenario and system adds another optimization challenge and, to the extent that we are able to coordinate all the variables in the system towards that optimal point, the greater the overall performance will be.
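As background for the decomposition comparison above, a classic dual-decomposition NUM sketch with logarithmic utilities and subgradient price updates; this is the textbook baseline, not the coupled-decomposition method proposed in the thesis, and the routing matrix and step size are illustrative.

```python
import numpy as np

def num_dual_decomposition(R, c, steps=300, alpha=0.05):
    """Maximize sum_i log(x_i) s.t. R @ x <= c by dual decomposition:
    each flow reacts only to the sum of link prices along its route, and
    each link updates its price from its own load -- the per-iteration
    signalling pattern that the decomposition schemes above trade off.

    R : (links, flows) 0/1 routing matrix
    c : (links,) link capacities
    """
    lam = np.ones(R.shape[0])                        # link prices (dual variables)
    for _ in range(steps):
        route_price = R.T @ lam                      # price seen by each flow
        x = 1.0 / np.maximum(route_price, 1e-9)      # flow best response for log utility
        lam = np.maximum(lam + alpha * (R @ x - c), 0.0)  # projected subgradient step
    return x, lam

# Two links, three flows; flow 2 uses both links.
R = np.array([[1, 0, 1],
              [0, 1, 1]], dtype=float)
c = np.array([1.0, 2.0])
x, lam = num_dual_decomposition(R, c)
```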
57

New Convex Relaxations for the Maximum Cut and VLSI Layout Problems

Ferreira Fialho dos Anjos, Miguel Nuno January 2001 (has links)
It is well known that many of the optimization problems which arise in applications are "hard", which usually means that they are NP-hard. Hence much research has been devoted to finding "good" relaxations for these hard problems. Usually a "good" relaxation is one which can be solved (either exactly or within a prescribed numerical tolerance) in polynomial time. Nesterov and Nemirovskii showed that by this criterion, many convex optimization problems are good relaxations. This thesis presents new convex relaxations for two such hard problems, namely the Maximum-Cut (Max-Cut) problem and the VLSI (Very Large Scale Integration of electronic circuits) layout problem. We derive and study the properties of two new strengthened semidefinite programming relaxations for the Max-Cut problem. Our theoretical results hold for every instance of Max-Cut; in particular, we make no assumptions about the edge weights. The first relaxation provides a strengthening of the well-known Goemans-Williamson relaxation, and the second relaxation is a further tightening of the first. We prove that the tighter relaxation automatically enforces the well-known triangle inequalities, and in fact is stronger than the simple addition of these inequalities to the Goemans-Williamson relaxation. We further prove that the tighter relaxation fully characterizes some low-dimensional faces of the cut polytope via the rank of its feasible matrices. We also address some practical issues arising in the solution of these relaxations and present numerical results showing the remarkably good bounds computed by the tighter relaxation. For the VLSI layout problem, we derive a new relaxation by extending the "target distance" concept recently introduced by Etawil and Vannelli. The resulting AR (Attractor-Repeller) model improves on the NLT (Non-linear optimization Layout Technique) method of van Camp et al. by efficiently finding a good initial point for the NLT solver. Because the AR model is not a convex optimization problem, we isolate the source of the non-convexity and thereby derive a convex version of the model. This derivation leads to the definition of certain generalized target distances with an interesting practical interpretation. Finally, we present numerical results that demonstrate the potential of the AR model.
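For reference, the baseline Goemans-Williamson relaxation that the two strengthened relaxations above tighten, written in CVXPY and followed by random-hyperplane rounding; the graph and solver defaults are illustrative.

```python
import numpy as np
import cvxpy as cp

def gw_maxcut(W, n_rounds=100, seed=0):
    """Goemans-Williamson: solve the basic SDP relaxation of Max-Cut,
    max sum_{i<j} w_ij (1 - X_ij)/2  s.t.  diag(X) = 1, X PSD,
    then round with random hyperplanes."""
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)
    # Sum over the full symmetric matrix counts each edge twice, hence /4.
    objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
    cp.Problem(objective, [cp.diag(X) == 1]).solve()

    # Factor X = V V^T; the rows of V are the relaxation's unit vectors.
    vals, vecs = np.linalg.eigh(X.value)
    V = vecs @ np.diag(np.sqrt(np.maximum(vals, 0)))

    rng = np.random.default_rng(seed)
    best_cut, best_x = -np.inf, None
    for _ in range(n_rounds):
        # Random hyperplane through the origin splits the vectors into two sides.
        x = np.where(V @ rng.normal(size=n) >= 0, 1.0, -1.0)
        cut = np.sum(W * (1 - np.outer(x, x))) / 4
        if cut > best_cut:
            best_cut, best_x = cut, x
    return best_cut, best_x

W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(gw_maxcut(W))
```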
58

A Branch-and-Cut Algorithm based on Semidefinite Programming for the Minimum k-Partition Problem

Ghaddar, Bissan January 2007 (has links)
The minimum k-partition (MkP) problem is a well-known optimization problem encountered in various applications, most notably in telecommunications and physics. Formulated in the early 1990s by Chopra and Rao, the MkP problem is the problem of partitioning the set of vertices of a graph into k disjoint subsets so as to minimize the total weight of the edges joining vertices in different partitions. In this thesis, we design and implement a branch-and-cut algorithm based on semidefinite programming (SBC) for the MkP problem. We describe and study the properties of two relaxations of the MkP problem, the linear programming and the semidefinite programming relaxations. We then derive a new strengthened relaxation based on semidefinite programming. This new relaxation provides tighter bounds than the other two relaxations discussed, but suffers in terms of computational time. We further devise an iterative clustering heuristic (ICH), a novel heuristic that finds a feasible solution to the MkP problem, and we compare it to the hyperplane rounding techniques of Goemans and Williamson and of Frieze and Jerrum for k=2 and k=3, respectively. Our computational results support the conclusion that ICH provides a better feasible solution for the MkP. Furthermore, unlike hyperplane rounding, ICH remains very effective in the presence of negative edge weights. Next we describe in detail the design and implementation of the SBC algorithm to find an optimal solution for the MkP problem. The ICH heuristic is used in our SBC algorithm to provide feasible solutions at each node of the branch-and-cut tree. Finally, we present computational results for the SBC algorithm on several classes of test instances with k=3, 5, and 7. Complete graphs with up to 60 vertices and sparse graphs with up to 100 vertices arising from a physics application were considered.
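A sketch of the basic Frieze-Jerrum-style SDP relaxation underlying this line of work, in CVXPY; the formulation is supplied from the standard literature, and the thesis's strengthened relaxation, ICH heuristic, and branch-and-cut machinery are not reproduced here.

```python
import numpy as np
import cvxpy as cp

def mkp_sdp_bound(W, k):
    """Basic SDP lower bound for minimum k-partition: model "same part"
    as X_ij = 1 and "different parts" as X_ij = -1/(k-1), so that
    (k-1)(1 - X_ij)/k equals 0 or 1 on integral solutions; then relax to
    X PSD with diag(X) = 1 and X_ij >= -1/(k-1)."""
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)
    # /2 because the symmetric matrix sum counts every edge twice.
    cut_weight = cp.sum(cp.multiply(W, (k - 1) * (1 - X) / k)) / 2
    constraints = [cp.diag(X) == 1, X >= -1.0 / (k - 1)]
    cp.Problem(cp.Minimize(cut_weight), constraints).solve()
    return cut_weight.value

W = np.array([[0, 3, 1, 0],
              [3, 0, 2, 1],
              [1, 2, 0, 4],
              [0, 1, 4, 0]], dtype=float)
print(mkp_sdp_bound(W, k=3))
```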
