11

Aspects of Design and Analysis of Cognitive Radios and Networks

Hanif, Muhammad Fainan January 2010 (has links)
Recent survey campaigns have shown a tremendous underutilization of the bandwidth allocated to various wireless services. Motivated by this and the ever increasing demand for wireless applications, the concept of cognitive radio (CR) systems has raised hope of ending the so-called spectrum scarcity. This thesis presents various facets of the design and analysis of CR systems in a unified way. We begin the thesis by presenting an information theoretic study of cognitive systems working in the so-called low interference regime of the overlay mode. We show that as long as the coverage area of a CR is less than that of a primary user (PU) device, the probability of the cognitive terminal inflicting small interference at the PU is overwhelmingly high. We have also analyzed the effect of a key parameter governing the amount of power allocated to relaying the PU message in the overlay mode of operation in realistic environments by presenting a simple and accurate approximation. Then, we explore the possibilities of statistical modeling of the cumulative interference due to multiple interfering CRs. We show that although it is possible to obtain a closed form expression for the interference due to a single CR, the problem is particularly difficult when it comes to the total CR interference in lognormally faded environments. In particular, we demonstrate that fitting a two or three parameter lognormal is not a feasible option for all scenarios. We also explore the second-order characteristics of the cumulative interference by evaluating its level crossing rate (LCR) and average exceedance duration (AED) in Rayleigh and Rician channel conditions. We show that the LCRs in both these cases can be evaluated by modeling the interference process with gamma and noncentral χ2 processes, respectively. By exploiting radio environment map (REM) information, we present two CR scheduling schemes and compare their performance with the naive primary exclusion zone (PEZ) technique. The results demonstrate the significance of using an intelligent allocation method to reap the benefits of the rich information available in REM-based methods. At this juncture, we divert our attention to multiple-input multiple-output (MIMO) CR systems operating in the underlay mode. Using an antenna selection philosophy, we formulate and solve a convex optimization problem and show via analysis and simulations that antenna selection can be a viable option for CRs operating in relatively sparse PU environments. Finally, we study the impact of imperfect channel state information (CSI) on the downlink of an underlay multiple antenna CR network designed to achieve signal-to-interference-plus-noise ratio (SINR) fairness among the CR terminals. By employing a newly developed convex iteration technique, we solve the relevant optimization problem exactly, without performing any relaxation on the variables involved.
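To make the aggregate-interference question above concrete, here is a rough Python sketch (not taken from the thesis) that simulates the cumulative interference from several lognormally shadowed CR transmitters and checks how well a simple two-parameter lognormal fit captures its tail; the number of radios, path-loss exponent, and shadowing spread are hypothetical illustrative values.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(0)

# Hypothetical scenario (illustrative values only): N_CR cognitive radios at random
# distances from a primary receiver, each subject to lognormal shadowing.
N_CR = 10            # number of interfering cognitive radios (assumed)
PATH_LOSS_EXP = 3.5  # path-loss exponent (assumed)
SHADOW_DB = 8.0      # shadowing standard deviation in dB (assumed)
TRIALS = 100_000

def cumulative_interference(trials):
    """Sum of lognormally shadowed interference powers from N_CR transmitters."""
    d = rng.uniform(50.0, 500.0, size=(trials, N_CR))          # distances in metres
    shadow_db = rng.normal(0.0, SHADOW_DB, size=(trials, N_CR))
    p_rx = d ** (-PATH_LOSS_EXP) * 10.0 ** (shadow_db / 10.0)  # linear received power
    return p_rx.sum(axis=1)

I = cumulative_interference(TRIALS)

# Simple two-parameter lognormal fit of the aggregate interference (MLE on log-data).
mu_fit, sigma_fit = np.log(I).mean(), np.log(I).std()

# The threshold is the empirical 99th percentile, so the empirical tail probability is
# 0.01 by construction; the fitted lognormal's tail at the same point measures how well
# (or poorly) a single lognormal captures the tail of the sum.
threshold = np.quantile(I, 0.99)
fit_tail = lognorm.sf(threshold, s=sigma_fit, scale=np.exp(mu_fit))
print(f"empirical P(I > t) = 0.0100, lognormal-fit P(I > t) = {fit_tail:.4f}")
```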
12

Learning algorithms and statistical software, with applications to bioinformatics / Algorithmes d'apprentissage et logiciels pour la statistique, avec applications à la bioinformatique

Hocking, Toby Dylan 20 November 2012 (has links)
Statistical machine learning is a branch of mathematics concerned with developing algorithms for data analysis. This thesis presents new mathematical models and statistical software, and is organized into two parts. In the first part, I present several new algorithms for clustering and segmentation. Clustering and segmentation are classes of techniques that attempt to find structures in data. I discuss the following contributions, with a focus on applications to cancer data from bioinformatics. In the second part, I focus on statistical software contributions which are practical for use in everyday data analysis.
13

Image restoration in the presence of Poisson-Gaussian noise / Restauration d'images dégradées par un bruit Poisson-Gauss

Jezierska, Anna Maria 13 May 2013 (has links)
This thesis deals with the restoration of images corrupted by blur and noise, with emphasis on confocal microscopy and macroscopy applications. Due to low photon count and high detector noise, the Poisson-Gaussian model is well suited to this context. However, up to now it had not been widely utilized because of theoretical and practical difficulties. In view of this, we formulate the image restoration problem in the presence of Poisson-Gaussian noise in a variational framework, where we express and study the exact data fidelity term. The solution to the problem can also be interpreted as a Maximum A Posteriori (MAP) estimate. Using recent primal-dual convex optimization algorithms, we obtain results that outperform methods relying on a variety of approximations. Turning our attention to the regularization term in the MAP framework, we study both discrete and continuous approximations of the $\ell_0$ pseudo-norm. This useful measure, well known for promoting sparsity, is difficult to optimize due to its non-convexity and its non-smoothness. We propose an efficient graph-cut procedure for optimizing energies with truncated quadratic priors. Moreover, we develop a majorize-minimize memory gradient algorithm to optimize various smooth versions of the $\ell_2$-$\ell_0$ norm, with guaranteed convergence properties. In particular, good results are achieved on deconvolution problems. One difficulty with variational formulations is the necessity to tune the model hyperparameters automatically. In this context, we propose to estimate the Poisson-Gaussian noise parameters in two realistic scenarios: from a time series of images, taking into account bleaching effects, and from a single image. These estimations are grounded in an Expectation-Maximization (EM) approach. Overall, this thesis proposes and evaluates various methodologies for tackling difficult image noise and blur cases, which should be useful in various applicative contexts within and beyond microscopy.
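As a deliberately simplified illustration of the Poisson-Gaussian acquisition model discussed above, the following Python sketch simulates observations y = alpha·Poisson(x) + Gaussian noise and recovers the gain and Gaussian variance with a method-of-moments fit; this is not the EM estimator developed in the thesis, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Poisson-Gaussian acquisition: y = alpha * Poisson(x) + n, n ~ N(mu, sigma^2).
# alpha (detector gain), mu and sigma are assumed values chosen for illustration.
alpha, mu, sigma = 2.0, 10.0, 5.0

x = rng.uniform(5.0, 200.0, size=200_000)                  # "true" intensities
y = alpha * rng.poisson(x) + rng.normal(mu, sigma, size=x.shape)

# Moment identities for this model:
#   E[y | x]   = alpha * x + mu
#   Var[y | x] = alpha^2 * x + sigma^2
# hence Var[y | x] = alpha * (E[y | x] - mu) + sigma^2.  Binning by intensity and
# regressing the conditional variance on the conditional mean therefore recovers
# alpha (slope) and sigma^2 (from the intercept), with mu assumed known here.
bins = np.quantile(x, np.linspace(0.0, 1.0, 51))
idx = np.digitize(x, bins[1:-1])
m = np.array([y[idx == k].mean() for k in range(50)])
v = np.array([y[idx == k].var() for k in range(50)])

slope, intercept = np.polyfit(m, v, 1)
alpha_hat = slope
sigma2_hat = intercept + slope * mu
print(f"alpha ≈ {alpha_hat:.2f}, sigma^2 ≈ {sigma2_hat:.1f}")
```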
14

Maximum Likelihood Identification of an Information Matrix Under Constraints in a Corresponding Graphical Model

Li, Nan 22 January 2017 (has links)
We address the problem of identifying the neighborhood structure of an undirected graph, whose nodes are labeled with the elements of a multivariate normal (MVN) random vector. A semi-definite program is given for estimating the information matrix under arbitrary constraints on its elements. More importantly, a closed-form expression is given for the maximum likelihood (ML) estimator of the information matrix, under the constraint that the information matrix has pre-specified elements in a given pattern (e.g., in a principal submatrix). The results apply to the identification of dependency labels in a graphical model with neighborhood constraints. This neighborhood structure excludes nodes which are conditionally independent of a given node, and the graph is determined by the nonzero elements in the information matrix for the random vector. A cross-validation principle is given for determining whether the constrained information matrix returned from this procedure is an acceptable model for the information matrix, and as a consequence for the neighborhood structure of the Markov Random Field (MRF) that is identified with the MVN random vector.
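A minimal numerical sketch of the setting described above: for a small MVN vector with a known chain-structured information matrix, the unconstrained ML estimate (the inverse sample covariance) already reveals the neighborhood of a node through its nonzero entries. The constrained closed-form estimator of the thesis is not reproduced here; the graph, sample size, and threshold are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 5-node MVN vector with a chain-structured information (precision) matrix:
# each node is conditionally dependent only on its chain neighbours.
K_true = np.array([[ 2., -1.,  0.,  0.,  0.],
                   [-1.,  2., -1.,  0.,  0.],
                   [ 0., -1.,  2., -1.,  0.],
                   [ 0.,  0., -1.,  2., -1.],
                   [ 0.,  0.,  0., -1.,  2.]])
Sigma = np.linalg.inv(K_true)
X = rng.multivariate_normal(np.zeros(5), Sigma, size=50_000)

# Unconstrained ML estimate of the information matrix: the inverse sample covariance.
# (The constrained closed-form estimator of the thesis is not reproduced here.)
S = np.cov(X, rowvar=False)
K_hat = np.linalg.inv(S)

# Neighbourhood of node 0 in the MRF: nodes with a non-negligible entry in row 0.
tol = 0.1   # illustrative threshold, not a principled model-selection rule
neighbours = [j for j in range(1, 5) if abs(K_hat[0, j]) > tol]
print("estimated neighbours of node 0:", neighbours)   # expected: [1]
```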
15

Computational Experience and the Explanatory Value of Condition Numbers for Linear Optimization

Ordóñez, Fernando, Freund, Robert M. 25 September 2003 (has links)
The goal of this paper is to develop some computational experience and test the practical relevance of the theory of condition numbers C(d) for linear optimization, as applied to problem instances that one might encounter in practice. We used the NETLIB suite of linear optimization problems as a test bed for condition number computation and analysis. Our computational results indicate that 72% of the NETLIB suite problem instances are ill-conditioned. However, after pre-processing heuristics are applied, only 19% of the post-processed problem instances are ill-conditioned, and log C(d) of the finitely-conditioned post-processed problems is fairly nicely distributed. We also show that the number of IPM iterations needed to solve the problems in the NETLIB suite varies roughly linearly (and monotonically) with log C(d) of the post-processed problem instances. Empirical evidence yields a positive linear relationship between IPM iterations and log C(d) for the post-processed problem instances, significant at the 95% confidence level. Furthermore, 42% of the variation in IPM iterations among the NETLIB suite problem instances is accounted for by log C(d) of the problem instances after pre-processing.
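The reported linear relationship between IPM iterations and log C(d) is an ordinary regression statement; the sketch below shows, on made-up stand-in numbers (not the NETLIB measurements), how the slope, R², and significance of such a fit would be computed.

```python
import numpy as np
from scipy import stats

# Made-up stand-in data (NOT the NETLIB measurements): log condition numbers of
# post-processed instances and the corresponding IPM iteration counts.
log_C = np.array([3.1, 4.0, 4.8, 5.5, 6.2, 7.0, 7.9, 8.5, 9.3, 10.1])
iters = np.array([14, 17, 18, 22, 23, 27, 29, 30, 35, 38])

# Ordinary least-squares fit of iterations on log C(d).  r**2 plays the role of the
# "fraction of variation explained", and the slope's p-value is what a claim of
# significance at the 95% level refers to.
res = stats.linregress(log_C, iters)
print(f"slope = {res.slope:.2f}, R^2 = {res.rvalue**2:.2f}, p-value = {res.pvalue:.2g}")
```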
16

An Improved Convex Optimization Model for Two-Dimensional Facility Layout

Jankovits, Ibolya 22 January 2007 (has links)
The facility layout design problem is a fundamental optimization problem encountered in many manufacturing and service organizations that was originally formulated in 1963 by Armour & Buffa. This thesis derives a convex programming model, IBIMODEL, that is designed to improve upon the ModCoAR model of Anjos & Vannelli for the facility layout problem with unequal areas. The purpose of IBIMODEL is to find 'good' initial locations for the departments that a second model then uses to produce a detailed solution to the facility layout problem. The proposed model has four ideas behind it: unlike ModCoAR, it does not improve the objective function as the departments start overlapping, it takes into account the aspect ratio requirements, it introduces a systematic approach to making parameter choices, and it uses a new second stage recently proposed by Luo, Anjos & Vannelli to obtain the actual facility layouts. In this way, IBIMODEL efficiently generates a reasonably diverse set of superior solutions that allow the second stage to provide a wide variety of layouts with relatively low aspect ratios and total cost. The proposed methodology was implemented and numerical results are presented on well-known large layout problems from the literature. To demonstrate the potential of the combination of IBIMODEL with Luo, Anjos & Vannelli's model, our results are compared with the best layouts found to date for these well-known large facility layout problems. The results support the conclusion that the proposed methodology consistently produces competitive, and often improved, layouts for large instances when compared with other approaches in the literature.
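To give a flavor of how area and aspect-ratio requirements can be kept convex in a first-stage layout model, here is a deliberately simplified Python/CVXPY sketch that stacks departments in a fixed vertical order inside a strip; it is not IBIMODEL or ModCoAR, and the areas, strip width, and aspect-ratio bound are hypothetical.

```python
import cvxpy as cp
import numpy as np

# Deliberately simplified single-strip layout: departments are stacked in a fixed
# vertical order inside a strip of width W, so non-overlap reduces to the stacking
# itself and the model stays convex.  Areas, W and the aspect-ratio bound beta are
# hypothetical values, and this is not IBIMODEL or ModCoAR.
areas = np.array([16.0, 9.0, 25.0, 12.0])
W, beta = 6.0, 3.0
n = len(areas)

w = cp.Variable(n, pos=True)   # department widths
h = cp.Variable(n, pos=True)   # department heights

constraints = [w <= W, h <= beta * w, w <= beta * h]
# Area requirement w_i * h_i >= a_i written convexly via the geometric mean.
constraints += [cp.geo_mean(cp.hstack([w[i], h[i]])) >= np.sqrt(areas[i]) for i in range(n)]

# Departments stacked in a fixed order: the strip height is the sum of the heights.
problem = cp.Problem(cp.Minimize(cp.sum(h)), constraints)
problem.solve()
print("widths :", np.round(w.value, 2))
print("heights:", np.round(h.value, 2))
```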
18

Convex relaxation for the planted clique, biclique, and clustering problems

Ames, Brendan January 2011 (has links)
A clique of a graph G is a set of pairwise adjacent nodes of G. Similarly, a biclique (U, V) of a bipartite graph G is a pair of disjoint, independent vertex sets such that each node in U is adjacent to every node in V in G. We consider the problems of identifying the maximum clique of a graph, known as the maximum clique problem, and identifying the biclique (U, V) of a bipartite graph that maximizes the product |U| · |V|, known as the maximum edge biclique problem. We show that finding a clique or biclique of a given size in a graph is equivalent to finding a rank-one matrix satisfying a particular set of linear constraints. These problems can be formulated as rank minimization problems and relaxed to convex programming by replacing rank with its convex envelope, the nuclear norm. Both problems are NP-hard, yet we show that our relaxation is exact in the case that the input graph contains a large clique or biclique plus additional nodes and edges. For each problem, we provide two analyses of when our relaxation is exact. In the first, the diversionary edges are added deterministically by an adversary. In the second, each potential edge is added to the graph independently at random with fixed probability p. In the random case, our bounds match the earlier bounds of Alon, Krivelevich, and Sudakov, as well as Feige and Krauthgamer for the maximum clique problem. We extend these results and techniques to the k-disjoint-clique problem. The maximum node k-disjoint-clique problem is to find a set of k disjoint cliques of a given input graph containing the maximum number of nodes. Given input graph G and nonnegative edge weights w, the maximum mean weight k-disjoint-clique problem seeks to identify the set of k disjoint cliques of G that maximizes the sum of the average weights of the edges, with respect to w, of the complete subgraphs of G induced by the cliques. These problems may be considered as a way to pose the clustering problem. In clustering, one wants to partition a given data set so that the data items in each partition or cluster are similar and the items in different clusters are dissimilar. For the graph G such that the set of nodes represents a given data set and any two nodes are adjacent if and only if the corresponding items are similar, clustering the data into k disjoint clusters is equivalent to partitioning G into k disjoint cliques. Similarly, given a complete graph with nodes corresponding to a given data set and edge weights indicating similarity between each pair of items, the data may be clustered by solving the maximum mean weight k-disjoint-clique problem. We show that both instances of the k-disjoint-clique problem can be formulated as rank constrained optimization problems and relaxed to semidefinite programs using the nuclear norm relaxation of rank. We also show that when the input instance corresponds to a collection of k disjoint planted cliques plus additional edges and nodes, this semidefinite relaxation is exact for both problems. We provide theoretical bounds that guarantee exactness of our relaxation and provide empirical examples of successful applications of our algorithm to synthetic data sets, as well as data sets from clustering applications.
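In the spirit of the nuclear-norm relaxation described above, the following Python/CVXPY sketch attempts to recover a hypothetical planted clique by minimizing the nuclear norm of a matrix supported on the edges of the graph; the exact program, constraint set, and recovery guarantees of the thesis are not reproduced, and all instance parameters are illustrative.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical instance: a planted 6-clique on nodes 0..5 inside a 15-node graph,
# plus sparse random "diversionary" edges.  All parameters are illustrative.
n, k = 15, 6
A = np.zeros((n, n), dtype=bool)
A[:k, :k] = True                               # planted clique (diagonal included)
extra = np.triu(rng.random((n, n)) < 0.1, 1)   # random extra edges
A |= extra | extra.T
np.fill_diagonal(A, True)

# Nuclear-norm relaxation in the spirit sketched above (not the exact program from
# the thesis): minimise ||X||_* over matrices supported on the edge set, with entries
# in [0, 1] and total mass k^2.  For a sufficiently dominant planted clique the
# minimiser is the rank-one indicator matrix of the clique.
X = cp.Variable((n, n))
non_edges = (~A).astype(float)
constraints = [cp.multiply(non_edges, X) == 0, cp.sum(X) == k * k, X >= 0, X <= 1]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()

recovered = np.where(np.diag(X.value) > 0.5)[0]
print("recovered clique nodes:", recovered)    # ideally the planted nodes 0..5
```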
19

Iterative Methods for Common Fixed Points of Nonexpansive Mappings in Hilbert spaces

Lai, Pei-lin 16 May 2011 (has links)
The aim of this work is to propose viscosity-like methods for finding a specific common fixed point of a finite family T = {T_i}_{i=1}^N of nonexpansive self-mappings of a closed convex subset C of a Hilbert space H. We propose two schemes: one implicit and the other explicit. The implicit scheme determines a set {x_t : 0 < t < 1} through the fixed point equation x_t = t f(x_t) + (1 − t) T x_t, where f : C → C is a contraction. The explicit scheme is the discretization of the implicit scheme and defines a sequence {x_n} by the recursion x_{n+1} = α_n f(x_n) + (1 − α_n) T x_n for n ≥ 0, where {α_n} ⊂ (0,1). It has been shown in the literature that both the implicit and explicit schemes converge in norm to a fixed point of T (with additional conditions imposed on the sequence {α_n} in the explicit scheme). We will extend both schemes to the case of a finite family of nonexpansive mappings. Our proposed schemes converge in norm to a common fixed point of the family which, in addition, solves a variational inequality.
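A minimal numerical illustration of the explicit scheme, assuming a single nonexpansive mapping T (the projection onto the unit ball), a simple contraction f, and the step sequence α_n = 1/(n+2); these choices are illustrative and not taken from the thesis.

```python
import numpy as np

# Explicit viscosity iteration x_{n+1} = a_n f(x_n) + (1 - a_n) T x_n for a single
# nonexpansive mapping.  T is the metric projection onto the closed unit ball and
# f is a contraction with constant 1/2; both choices, and a_n = 1/(n+2), are
# illustrative and not taken from the thesis.

def T(x):
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm      # projection onto the unit ball

def f(x):
    return 0.5 * x + np.array([0.3, 0.1])      # contraction, fixed point (0.6, 0.2)

x = np.array([5.0, -4.0])
for n in range(2000):
    a_n = 1.0 / (n + 2)                        # a_n -> 0 and sum a_n = infinity
    x = a_n * f(x) + (1.0 - a_n) * T(x)

# The iterates approach the fixed point of T selected by f, here approximately
# (0.6, 0.2): the point of Fix(T) solving the associated variational inequality.
print("iterate after 2000 steps:", np.round(x, 3))
```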
20

Information, complexity and structure in convex optimization

Guzman Paredes, Cristobal 08 June 2015 (has links)
This thesis is focused on the limits of performance of large-scale convex optimization algorithms. Classical theory of oracle complexity, first proposed by Nemirovski and Yudin in 1983, successfully established the worst-case behavior of methods based on local oracles (a generalization of the first-order oracle for smooth functions) for nonsmooth convex minimization, both in the large-scale and low-scale regimes; and the complexity of approximately solving linear systems of equations (equivalent to convex quadratic minimization) over Euclidean balls, under a matrix-vector multiplication oracle. Our work extends the applicability of lower bounds in two directions.

Worst-Case Complexity of Large-Scale Smooth Convex Optimization: We generalize lower bounds on the complexity of first-order methods for convex optimization, considering classes of convex functions with Hölder continuous gradients. Our technique relies on the existence of a smoothing kernel, which defines a smooth approximation for any convex function via infimal convolution. As a consequence, we derive lower bounds for \ell_p/\ell_q-setups, where 1 \leq p,q \leq \infty, and extend them to the matrix analogue: smooth convex minimization (with respect to the Schatten q-norm) over matrices with bounded Schatten p-norm. The major consequences of this result are the near-optimality of the Conditional Gradient method over box-type domains (p=q=\infty), and the near-optimality of Nesterov's accelerated method over the cross-polytope (p=q=1).

Distributional Complexity of Nonsmooth Convex Optimization: In this work, we prove average-case lower bounds for the complexity of nonsmooth convex optimization. We introduce an information-theoretic method to analyze the complexity of oracle-based algorithms solving a random instance, based on the reconstruction principle. Our technique shows that all known lower bounds for nonsmooth convex optimization can be derived by an emulation procedure from a common String-Guessing Problem, which is combinatorial in nature. The derived average-case lower bounds extend to hold with high probability, and for algorithms with bounded probability error, via Fano's inequality. Finally, from the proposed technique we establish the equivalence (up to constant factors) of distributional, randomized, and worst-case complexity for black-box convex optimization. In particular, there is no gain from randomization in this setup.
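The smoothing-by-infimal-convolution idea mentioned above can be seen in the textbook case of the Moreau envelope: smoothing f(x) = |x| with a quadratic kernel yields the Huber function. The short Python sketch below checks the closed form against direct minimization on a grid; it illustrates the general construction, not the specific kernel used in the thesis.

```python
import numpy as np

# Smoothing a nonsmooth convex function by infimal convolution with a quadratic
# kernel (the Moreau envelope): for f(x) = |x| the result is the Huber function.
# This is the generic textbook construction, not the specific kernel of the thesis.
mu = 0.5
x = np.linspace(-2.0, 2.0, 9)
y = np.linspace(-3.0, 3.0, 20001)        # grid for the inner minimisation

# e_mu f(x) = min_y ( |y| + (x - y)^2 / (2 mu) ), computed by brute force on the grid.
env_numeric = np.min(np.abs(y)[None, :] + (x[:, None] - y[None, :]) ** 2 / (2 * mu), axis=1)

# Closed form: x^2 / (2 mu) when |x| <= mu, and |x| - mu / 2 otherwise (the Huber function).
env_closed = np.where(np.abs(x) <= mu, x ** 2 / (2 * mu), np.abs(x) - mu / 2)

print(np.allclose(env_numeric, env_closed, atol=1e-4))   # True: the smoothed values match
```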
