41

Optimization Techniques for Image Processing

Chapagain, Prerak 01 April 2019 (has links)
This research thesis begins with a basic introduction to optimization and image processing. Because there are several different tools for applying optimization to image processing applications, we first investigated one category of mathematical optimization techniques, namely Convex Optimization. This thesis provides a basic background consisting of the mathematical concepts, as well as some of the challenges of employing Convex Optimization in solving problems. One major issue is being able to identify the convexity of the problem in a potential application (Boyd). After spending a couple of months researching and learning Convex Optimization, my advisor and I decided to take a different route. We decided to use Heuristic Optimization techniques instead, in particular Genetic Algorithms (GA). We also conjectured that applying GA in image processing for the purpose of object matching could yield good results. As a first step, we used MATLAB as the programming language and wrote the GA code from scratch. Next, we applied the algorithm to object matching; more specifically, we constructed specific images to demonstrate its effectiveness in identifying objects of interest. The results presented in this thesis indicate that the technique is capable of identifying objects under noisy conditions.
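The thesis implements its GA in MATLAB; purely as an illustration of the general idea, the sketch below runs a small genetic algorithm in Python that searches for a bright square hidden in a noisy synthetic image. The image, the fitness function, and all parameter values are invented for this example and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: a noisy 64x64 image with a bright 5x5 square hidden at (22, 37).
scene = rng.normal(0.0, 0.3, size=(64, 64))
scene[22:27, 37:42] += 1.0
template = np.ones((5, 5))

def fitness(pos):
    """Correlation between the template and the scene patch at (row, col)."""
    r, c = int(pos[0]), int(pos[1])
    patch = scene[r:r + 5, c:c + 5]
    return float(np.sum(patch * template))

def genetic_search(pop_size=40, generations=60, mutation_scale=3.0):
    # Each individual is a candidate (row, col) for the object's top-left corner.
    pop = rng.uniform(0, 59, size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        # Selection: keep the better half of the population, ranked by fitness.
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Crossover: each child is the average of two randomly chosen parents.
        idx = rng.integers(0, len(parents), size=(pop_size - len(parents), 2))
        children = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2.0
        # Mutation: small Gaussian perturbation, clipped to the valid region.
        children += rng.normal(0.0, mutation_scale, size=children.shape)
        pop = np.clip(np.vstack([parents, children]), 0, 59)
    best = max(pop, key=fitness)
    return int(best[0]), int(best[1])

print(genetic_search())   # expected to land near (22, 37)
```

Selection, crossover, and mutation are the same three ingredients the abstract refers to; only the toy fitness (template correlation against a constructed image) stands in for the thesis's actual object-matching objective.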
42

Constrained Motion Particle Swarm Optimization for Non-Linear Time Series Prediction

Sapankevych, Nicholas 13 March 2015 (has links)
Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting, weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications, and the models for these systems are usually not known a priori. Accurate and unbiased estimation of time series data produced by these systems cannot always be achieved using well known linear techniques, and thus the estimation process requires more advanced time series prediction algorithms. One type of time series interpolation and prediction algorithm that has been proven to be effective for these various types of applications is Support Vector Regression (SVR) [1], which is based on the Support Vector Machine (SVM) developed by Vapnik et al. [2, 3]. The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary and not defined a priori. SVMs have also been proven to outperform other non-linear techniques, including neural-network based non-linear prediction techniques such as multi-layer perceptrons. As with most time series prediction algorithms, there are typically challenges associated with applying a given heuristic to any general problem. One difficult challenge in using SVR to solve these types of problems is the selection of the free parameters associated with the SVR algorithm. There is no given heuristic to select SVR free parameters, and the user is left to adjust these parameters in an ad hoc manner. The focus of this dissertation is to present an alternative to the typical ad hoc approach of tuning SVR for time series prediction problems by using Particle Swarm Optimization (PSO) to assist in the SVR free parameter selection process. Developed by Kennedy and Eberhart [4-8], PSO is a technique that emulates the process living creatures (such as birds or insects) use to discover food resources at a given geographic location. PSO has been proven to be an effective technique for many different kinds of optimization problems [9-11].
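The dissertation's constrained-motion PSO variant is not reproduced here. As a hedged sketch of the general approach it builds on, the code below uses a plain PSO loop to pick the SVR free parameters (C, epsilon, and the RBF width gamma) that minimize validation error on a synthetic time series. It assumes NumPy and scikit-learn are available; the data, parameter ranges, and swarm settings are all illustrative.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic nonlinear time series: predict x[t] from the previous 5 samples.
t = np.arange(400)
series = np.sin(0.07 * t) + 0.5 * np.sin(0.23 * t) + 0.1 * rng.standard_normal(400)
X = np.array([series[i:i + 5] for i in range(len(series) - 5)])
y = series[5:]
X_tr, X_val, y_tr, y_val = X[:300], X[300:], y[:300], y[300:]

def validation_error(params):
    """Fitness for PSO: validation MSE of an SVR trained with (log C, log eps, log gamma)."""
    C, eps, gamma = np.exp(params)
    model = SVR(C=C, epsilon=eps, gamma=gamma).fit(X_tr, y_tr)
    return float(np.mean((model.predict(X_val) - y_val) ** 2))

def pso(n_particles=15, iters=30, w=0.7, c1=1.5, c2=1.5):
    # Search in log-space over illustrative ranges for (C, epsilon, gamma).
    lo, hi = np.log([1e-2, 1e-4, 1e-3]), np.log([1e3, 1e0, 1e1])
    pos = rng.uniform(lo, hi, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([validation_error(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Standard velocity update: inertia + pull towards personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([validation_error(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return np.exp(gbest), float(pbest_val.min())

print(pso())  # (C, epsilon, gamma) found by the swarm and its validation MSE
```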
43

On an Extension of Condition Number Theory to Non-Conic Convex Optimization

Freund, Robert M., Ordóñez, Fernando, 1970- 02 1900 (has links)
The purpose of this paper is to extend, as much as possible, the modern theory of condition numbers for conic convex optimization, z* := min_x c^T x  s.t.  Ax - b ∈ C_Y, x ∈ C_X, to the more general non-conic format (GPd): z* := min_x c^T x  s.t.  Ax - b ∈ C_Y, x ∈ P, where P is any closed convex set, not necessarily a cone, which we call the ground-set. Although any convex problem can be transformed to conic form, such transformations are neither unique nor natural given the natural description of many problems, thereby diminishing the relevance of data-based condition number theory. Herein we extend the modern theory of condition numbers to the problem format (GPd). As a byproduct, we are able to state and prove natural extensions of many theorems from the conic-based theory of condition numbers to this broader problem format.
44

Distance Measurement-Based Cooperative Source Localization: A Convex Range-Free Approach

Kiraz, Fatma January 2013 (has links)
One of the most essential objectives in WSNs is to determine the spatial coordinates of a source or a sensor node holding information. In this study, the problem of range measurement-based localization of a signal source or a sensor is revisited. The main challenge of the problem results from the non-convexity associated with range measurements, calculated as the distances from the set of nodes with known positions to a fixed sensor node. Such measurements corresponding to certain distances are non-convex in two and three dimensions. Attempts recently proposed in the literature to eliminate the non-convexity approach the problem as a non-convex geometric minimization problem, using techniques to handle the non-convexity. This study proposes a new fuzzy range-free sensor localization method. The method uses notions of Euclidean geometry to convert the problem into a convex geometric problem. The convex equivalent problem is built using convex fuzzy sets, thus avoiding the issue of multiple stable local minima, and a gradient-based localization algorithm is then chosen to solve the problem. Next, the proposed algorithm is simulated under various scenarios, including different numbers of available source nodes, fuzzification levels, and area coverage. The results are compared with an algorithm having similar fuzzy logic settings. The behaviour of both algorithms with noisy measurements is also discussed. Finally, future extensions of the algorithm are suggested, along with some guidelines.
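The fuzzy-set construction itself is specific to the thesis and is not reproduced here. As a rough, simplified analogue of the convexification idea, the sketch below localizes a node from noisy anchor ranges by minimizing a convex squared-hinge penalty on range violations with plain gradient descent; anchors, ranges, and step sizes are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([6.0, 3.5])
# Noisy range measurements from each anchor to the unknown node.
ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.1 * rng.standard_normal(4)

def cost_and_grad(x):
    """Sum of squared hinge penalties max(0, ||x - a_i|| - r_i)^2, which is convex in x."""
    diffs = x - anchors
    dists = np.linalg.norm(diffs, axis=1)
    viol = np.maximum(0.0, dists - ranges)
    grad = (2.0 * viol / np.maximum(dists, 1e-9)) @ diffs
    return float(np.sum(viol ** 2)), grad

x = np.array([5.0, 5.0])           # initial guess: centre of the deployment area
for _ in range(500):                # plain gradient descent with a fixed step
    _, g = cost_and_grad(x)
    x -= 0.1 * g
print(x, true_pos)                  # estimate ends up close to the true position
```

Because each penalty term is zero inside the corresponding range ball, the cost is convex and the iterate settles inside the intersection of the measured balls, near the true position; there are no spurious local minima of the kind the abstract warns about for the raw non-convex formulation.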
45

A Convex Decomposition Perspective on Dynamic Bandwidth Allocation and Applications

Morell Pérez, Antoni 23 September 2008 (has links)
Traditionally, multiple access schemes in multi-user communications systems have been designed to be either connection-oriented or traffic-oriented. In the first case, the goal is to provide as many orthogonal channels as possible, each one serving a different connection; this is the motivation behind the well-known FDMA, TDMA and CDMA solutions. On the other hand, random access techniques, which started with the ALOHA protocol, aim to statistically multiplex a shared communication medium by exploiting the random and bursty nature of transmission needs in data networks. Most multiple access solutions can be interpreted according to this classification or as a combination of the two approaches. Nevertheless, modern systems such as the digital satellite communications standard DVB-RCS and the broadband wireless access standard WiMAX implement a multiple access technique in which users request transmission opportunities and receive grants from the network, therefore requiring dynamic bandwidth allocation techniques. The concept of dynamic bandwidth allocation is broad and involves a number of physical and link layer variables, configurations and protocols. In this Ph.D. dissertation we first explore the mathematical foundation required to coordinate the distinct layers of the OSI protocol stack and the distinct nodes within the network. We discuss decomposition techniques focused on the resolution of convex programs, which have elegantly solved many problems in the signal processing and communications fields during the last years. Known schemes are reviewed and a novel decomposition methodology is proposed. Thereafter, we compare the four resulting strategies, each one having its own particular signalling needs, which results in distinct cross-layer interactions or signalling protocols at the implementation level. The results in terms of iterations required to converge are favourable to the proposed method, thus opening a new line of research. Finally, we contribute two practical application examples in the DVB-RCS and WiMAX systems. First, we formulate the dynamic bandwidth allocation problem that is derived from the multiple access schemes of both systems. Thereafter, the resulting Network Utility Maximization (NUM) based problem is solved by means of the previous decomposition mechanisms. The goal is to guarantee fairness among the users while Quality of Service (QoS) is preserved. To achieve this, we choose adequate utility functions that allow the allocation to be balanced towards the highest-priority traffic flows under a common fairness framework.
We show that in the scenarios considered, the novel coupled-decomposition method yields significant gains, since it substantially reduces the number of iterations required (fewer iterations imply less signalling) or the time needed to obtain the optimal allocation when it is computed centrally (so that more users can be managed). We further show the advantages of cross-layer interactions with the physical and upper layers, which allow more favourable adjustments of the transmission parameters and allow the QoS requirements at upper layers to be taken into account. In general, an efficient implementation of dynamic bandwidth allocation techniques in Demand Assignment Multiple Access (DAMA) schemes can yield significant performance gains, but it requires proper coordination among system layers and network nodes, which is attained thanks to decomposition techniques. Each new scenario and system adds another optimization challenge, and the better we are able to coordinate all the variables in the system towards the optimal point, the greater the overall gain will be.
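The coupled-decomposition method proposed in the thesis is beyond a short snippet, but the basic machinery it builds on can be illustrated. The sketch below solves a toy network utility maximization problem (several flows sharing one capacity-constrained link, each with a weighted logarithmic utility) by classical dual decomposition: each user solves its own subproblem given a link price, and a subgradient step updates the price. All numbers are illustrative.

```python
import numpy as np

# Toy NUM: maximize sum_i w_i * log(x_i)  subject to  sum_i x_i <= C, x_i > 0.
weights = np.array([1.0, 2.0, 3.0])
capacity = 10.0

price = 1.0                      # dual variable: price of the shared link
step = 0.05
for _ in range(500):
    # Each "user" maximizes w_i*log(x_i) - price*x_i on its own: x_i = w_i / price.
    x = weights / price
    # Subgradient step on the dual: raise the price if total demand exceeds capacity.
    price = max(1e-6, price + step * (np.sum(x) - capacity))

print(x)   # converges towards the proportionally fair allocation w_i * C / sum(w)
```

The price update is the only coordination signal exchanged, which is exactly why the number of iterations to converge translates directly into signalling overhead in the distributed setting the abstract describes.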
46

New Convex Relaxations for the Maximum Cut and VLSI Layout Problems

Ferreira Fialho dos Anjos, Miguel Nuno January 2001 (has links)
It is well known that many of the optimization problems which arise in applications are hard, which usually means that they are NP-hard. Hence much research has been devoted to finding good relaxations for these hard problems. Usually a good relaxation is one which can be solved (either exactly or within a prescribed numerical tolerance) in polynomial time. Nesterov and Nemirovskii showed that by this criterion, many convex optimization problems are good relaxations. This thesis presents new convex relaxations for two such hard problems, namely the Maximum-Cut (Max-Cut) problem and the VLSI (Very Large Scale Integration of electronic circuits) layout problem. We derive and study the properties of two new strengthened semidefinite programming relaxations for the Max-Cut problem. Our theoretical results hold for every instance of Max-Cut; in particular, we make no assumptions about the edge weights. The first relaxation provides a strengthening of the well-known Goemans-Williamson relaxation, and the second relaxation is a further tightening of the first. We prove that the tighter relaxation automatically enforces the well-known triangle inequalities, and in fact is stronger than the simple addition of these inequalities to the Goemans-Williamson relaxation. We further prove that the tighter relaxation fully characterizes some low dimensional faces of the cut polytope via the rank of its feasible matrices. We also address some practical issues arising in the solution of these relaxations and present numerical results showing the remarkably good bounds computed by the tighter relaxation. For the VLSI layout problem, we derive a new relaxation by extending the target distance concept recently introduced by Etawil and Vannelli. The resulting AR (Attractor-Repeller) model improves on the NLT (Non-linear optimization Layout Technique) method of van Camp et al. by efficiently finding a good initial point for the NLT solver. Because the AR model is not a convex optimization problem, we isolate the source of the non-convexity and thereby derive a convex version of the model. This derivation leads to the definition of certain generalized target distances with an interesting practical interpretation. Finally, we present numerical results that demonstrate the potential of the AR model.
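The strengthened relaxations of the thesis are not shown here; for reference, the sketch below sets up the basic Goemans-Williamson semidefinite relaxation of Max-Cut with CVXPY on a small random graph and applies random-hyperplane rounding. It assumes the cvxpy and numpy packages; the graph and random seed are arbitrary.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)

# Small weighted graph on 6 vertices given by a symmetric weight matrix.
n = 6
W = np.triu(rng.uniform(0, 1, (n, n)), k=1)
W = W + W.T

# Goemans-Williamson relaxation: max (1/4) * sum_ij W_ij (1 - X_ij), X PSD, diag(X) = 1.
X = cp.Variable((n, n), symmetric=True)
objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X)))
prob = cp.Problem(objective, [X >> 0, cp.diag(X) == 1])
prob.solve()

# Factor X = V V^T and round the vertex vectors with a random hyperplane.
eigval, eigvec = np.linalg.eigh(X.value)
V = eigvec @ np.diag(np.sqrt(np.maximum(eigval, 0)))
cut = np.sign(V @ rng.standard_normal(n))
cut_weight = 0.25 * np.sum(W * (1 - np.outer(cut, cut)))
print(prob.value, cut_weight)   # SDP upper bound vs. weight of the rounded cut
```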
47

A Branch-and-Cut Algorithm based on Semidefinite Programming for the Minimum k-Partition Problem

Ghaddar, Bissan January 2007 (has links)
The minimum k-partition (MkP) problem is a well-known optimization problem encountered in various applications, most notably in telecommunications and physics. Formulated in the early 1990s by Chopra and Rao, the MkP problem is the problem of partitioning the set of vertices of a graph into k disjoint subsets so as to minimize the total weight of the edges joining vertices in different partitions. In this thesis, we design and implement a branch-and-cut algorithm based on semidefinite programming (SBC) for the MkP problem. We describe and study the properties of two relaxations of the MkP problem, the linear programming and the semidefinite programming relaxations. We then derive a new strengthened relaxation based on semidefinite programming. This new relaxation provides tighter bounds than the other two relaxations discussed but suffers in terms of computational time. We further devise an iterative clustering heuristic (ICH), a novel heuristic that finds feasible solutions to the MkP problem, and we compare it to the hyperplane rounding techniques of Goemans and Williamson and of Frieze and Jerrum for k=2 and k=3, respectively. Our computational results support the conclusion that ICH provides better feasible solutions for the MkP. Furthermore, unlike hyperplane rounding, ICH remains very effective in the presence of negative edge weights. Next we describe in detail the design and implementation of the SBC algorithm to find optimal solutions for the MkP problem. The ICH heuristic is used in our SBC algorithm to provide feasible solutions at each node of the branch-and-cut tree. Finally, we present computational results for the SBC algorithm on several classes of test instances with k=3, 5, and 7. Complete graphs with up to 60 vertices and sparse graphs with up to 100 vertices arising from a physics application were considered.
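The branch-and-cut framework and the ICH heuristic are too involved for a snippet, but the kind of semidefinite relaxation used for bounding can be sketched. The CVXPY model below is the standard Frieze-Jerrum-style relaxation of minimum k-partition (diagonal fixed to one, entries bounded below by -1/(k-1)); the thesis's strengthened relaxation adds further valid inequalities on top of this. The graph data are arbitrary.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)

n, k = 8, 3
W = np.triu(rng.uniform(-1, 1, (n, n)), k=1)   # edge weights, possibly negative
W = W + W.T

# SDP relaxation of minimum k-partition:
#   minimize  sum_{i<j} W_ij * ((k-1)*X_ij + 1) / k      (weight kept inside parts)
#   subject to X PSD, X_ii = 1, X_ij >= -1/(k-1).
X = cp.Variable((n, n), symmetric=True)
within = ((k - 1) * X + 1) / k
objective = cp.Minimize(0.5 * cp.sum(cp.multiply(W, within)))
constraints = [X >> 0, cp.diag(X) == 1, X >= -1.0 / (k - 1)]
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value)   # a lower bound on the weight of the best k-partition
```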
50

Spectrum Sharing in Cognitive Radio Systems Under Outage Probability Constraint

Cai, Pei Li December 2009 (has links)
For traditional wireless communication systems, static spectrum allocation is the major spectrum allocation methodology. However, according to recent investigations by the FCC, this has led to more than 70 percent of the allocated spectrum in the United States being under-utilized. Cognitive radio (CR) technology, which supports opportunistic spectrum sharing, is one idea proposed to improve the overall utilization efficiency of the radio spectrum. In this thesis we consider a CR communication system based on spectrum sharing schemes, where a secondary user (SU) link with multiple transmitting antennas and a single receiving antenna coexists with a primary user (PU) link with a single receiving antenna. At the SU transmitter (SU-Tx), the channel state information (CSI) of the SU link is assumed to be perfectly known, while the interference channel from the SU-Tx to the PU receiver (PU-Rx) is not perfectly known due to the limited cooperation between the SU and the PU. As such, the SU-Tx is only assumed to know that the interference channel gain can take values from a finite set with certain probabilities. Considering an SU transmit power constraint, our design objective is to determine the transmit covariance matrix that maximizes the SU rate, while we protect the PU by enforcing both a PU average interference constraint and a PU outage probability constraint. This problem is first formulated as a non-convex optimization problem with a non-explicit probabilistic constraint, which is then approximated as a mixed binary integer programming (MBIP) problem and solved with the Branch and Bound (BB) algorithm. The complexity of the BB algorithm is analyzed and numerical results are presented to validate the effectiveness of the proposed algorithm. A key result proved in this thesis is that the rank of the optimal transmit covariance matrix is one, i.e., CR beamforming is optimal under PU outage constraints. Finally, a heuristic algorithm is proposed to provide a suboptimal solution to our MBIP problem by efficiently (in polynomial time) solving a particularly-constructed convex problem.
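The outage-probability constraint and the branch-and-bound step are what make the thesis's problem non-convex; as a simplified, hedged sketch, the code below keeps only the transmit-power and average-interference constraints, for which the covariance optimization is convex, and then inspects the rank of the solution. Channel vectors are synthetic and real-valued for simplicity, and cvxpy is assumed.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)

n_tx = 4                              # transmit antennas at the SU transmitter
h = rng.standard_normal(n_tx)         # SU-Tx -> SU-Rx channel (real-valued for simplicity)
g = rng.standard_normal(n_tx)         # SU-Tx -> PU-Rx interference channel
P_max, I_max = 1.0, 0.1               # transmit power and average interference limits

# Maximizing log(1 + h^T Q h / sigma^2) is equivalent to maximizing h^T Q h,
# which is linear in the covariance Q, so the problem below is convex.
Q = cp.Variable((n_tx, n_tx), symmetric=True)
prob = cp.Problem(
    cp.Maximize(h @ Q @ h),
    [Q >> 0, cp.trace(Q) <= P_max, g @ Q @ g <= I_max],
)
prob.solve()

eigvals = np.linalg.eigvalsh(Q.value)
print(np.sum(eigvals > 1e-6 * eigvals.max()))   # effective rank; 1 means a beamforming solution
```

In this simplified setting the optimum is attained by a rank-one covariance, which is consistent with (though not a proof of) the beamforming-optimality result the abstract states for the full problem with outage constraints.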
