161

Convex regression and its extensions to learning a Bregman divergence and difference of convex functions

Siahkamari, Ali 26 January 2022 (has links)
Nonparametric convex regression has been extensively studied over the last two decades. It has been shown that any Lipschitz convex function can be approximated to arbitrary accuracy by a maximum of linear functions. Using this framework, in this thesis we generalize convex regression to learning an arbitrary Bregman divergence and to learning a difference of convex functions. We provide approximation guarantees and sample complexity bounds for both of these extensions. Furthermore, we provide algorithms for solving the resulting optimization problems based on the 2-block alternating direction method of multipliers (ADMM). For this algorithm, we provide convergence guarantees with iteration complexity O(n√d/ε) for a dataset X ∈ ℝ^(n×d) and arbitrary ε > 0. Finally, we provide experiments on UCI datasets for both Bregman divergence learning and difference-of-convex-functions learning that demonstrate state-of-the-art performance on regression and classification tasks.
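
The underlying estimator admits a compact convex-program formulation. Below is a minimal sketch of least-squares convex regression as a max of fitted hyperplanes, written with cvxpy on synthetic data rather than with the thesis's 2-block ADMM solver; the variable names and the quadratic test function are illustrative assumptions.

```python
# Convex regression as a QP: fit values g_i and subgradients s_i so that
# the data are interpolated by a convex, piecewise-linear function.
# Sketch only; the thesis solves this class of problems with ADMM.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d = 30, 2
X = rng.standard_normal((n, d))
y = np.sum(X**2, axis=1) + 0.1 * rng.standard_normal(n)  # noisy convex target

g = cp.Variable(n)        # fitted values g_i = f(x_i)
S = cp.Variable((n, d))   # subgradients s_i at each x_i

# Convexity constraints: g_j >= g_i + s_i^T (x_j - x_i) for all pairs (i, j).
constraints = [g[j] >= g[i] + S[i] @ (X[j] - X[i])
               for i in range(n) for j in range(n) if i != j]
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - g)), constraints)
prob.solve()

def f_hat(x):
    # The estimate is the max of the n fitted supporting hyperplanes.
    return np.max(g.value + (S.value * (x - X)).sum(axis=1))
```

The fit is convex and piecewise linear by construction; a generic QP solver handles small n, which is where a tailored ADMM scheme with O(n√d/ε) iteration complexity becomes relevant.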
162

The Surface Area Deviation of the Euclidean Ball and a Polytope

Hoehner, Steven Douglas 01 June 2016 (has links)
No description available.
163

Localization algorithms for passive sensor networks

Ismailova, Darya 23 January 2017 (has links)
Locating a radiating source based on range or range-difference measurements obtained from a network of passive sensors has been a subject of research over the past two decades, due to the problem's importance in applications in wireless communications, surveillance, navigation, geosciences, and several other fields. In this thesis, we develop new solution methods for the problem of localizing a single radiating source from range and range-difference measurements. Iterative re-weighting algorithms are developed for both range-based and range-difference-based least-squares localization. We then propose a penalty convex-concave procedure for finding an approximate solution to the nonlinear least-squares problems associated with the range measurements. Finally, sequential convex relaxation procedures are proposed to obtain the nonlinear least-squares estimate of the source coordinates. Localization in wireless sensor networks, where RF signals are used to derive the ranging measurements, is the primary application area of this work. However, the proposed solution methods are general and can be applied to range and range-difference measurements derived from other types of signals.
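
As a point of reference for the range-based setting, here is a minimal sketch of localizing a source by nonlinear least squares over range residuals, using a generic solver from scipy rather than the re-weighting or convex-concave procedures developed in the thesis; the sensor layout and noise level are assumed for illustration.

```python
# Range-based localization: minimize sum_i (||x - a_i|| - r_i)^2
# with a generic trust-region least-squares solver.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 7.0])  # unknown in practice; used here to simulate data

# Noisy range measurements r_i = ||source - a_i|| + noise.
ranges = np.linalg.norm(source - anchors, axis=1) + 0.05 * rng.standard_normal(len(anchors))

def residuals(x):
    # One residual per sensor: predicted range minus measured range.
    return np.linalg.norm(x - anchors, axis=1) - ranges

x0 = anchors.mean(axis=0)        # crude initial guess at the sensor centroid
est = least_squares(residuals, x0)
print("estimated source:", est.x)
```

This nonconvex problem is sensitive to the initial guess, which is one motivation for the convex relaxation and re-weighting approaches the thesis pursues.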
164

Identification using Convexification and Recursion

Dai, Liang January 2016 (has links)
System identification studies how to construct mathematical models of dynamical systems from input and output data, which finds applications in many scenarios, such as predicting the future output of a system or building model-based controllers for regulating its output. Among many other methods, convex optimization is becoming an increasingly useful tool for solving system identification problems. The reason is that many identification problems can be formulated as, or transformed into, convex optimization problems. This transformation is commonly referred to as the convexification technique.

The first theme of the thesis is to understand the efficacy of the convexification idea by examining two specific examples. We first establish that an l1-norm based approach can indeed help in exploiting the sparsity of the underlying parameter vector under certain persistent excitation assumptions. After that, we analyze how the nuclear norm minimization heuristic performs on a low-rank Hankel matrix completion problem. The underlying key is to construct the dual certificate based on the structural information that is available in the problem setting.

Recursive algorithms are ubiquitous in system identification. The second theme of the thesis is the study of some existing recursive algorithms, establishing new connections and giving new insights and interpretations. We first establish a connection between a basic property of the convolution operator and score function estimation. Based on this relationship, we show how certain recursive Bayesian algorithms can be exploited to estimate the score function for systems with intractable transition densities. We also provide a new derivation and interpretation of the recursive direct weight optimization method, by exploiting certain structural information present in the algorithm. Finally, we study how an improved randomization strategy can be found for the randomized Kaczmarz algorithm, and how the convergence rate of the classical Kaczmarz algorithm can be studied through the stability analysis of a related time-varying linear dynamical system.
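
For concreteness, the following is a minimal sketch of the randomized Kaczmarz baseline with row-norm-squared sampling (the Strohmer-Vershynin strategy); the thesis studies improved randomization beyond this, so the sketch only fixes notation. The overdetermined consistent test system is assumed.

```python
# Randomized Kaczmarz for Ax = b: repeatedly project the iterate onto a
# randomly chosen row's hyperplane, sampling rows prop. to ||a_i||^2.
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.sum(A**2, axis=1)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Orthogonal projection onto the hyperplane a_i^T x = b_i.
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

A = np.random.default_rng(2).standard_normal((200, 20))
x_true = np.ones(20)
x_hat = randomized_kaczmarz(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))  # should be close to zero
```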
165

Aspects of Design and Analysis of Cognitive Radios and Networks

Hanif, Muhammad Fainan January 2010 (has links)
Recent survey campaigns have shown tremendous underutilization of the bandwidth allocated to various wireless services. Motivated by this, and by the ever increasing demand for wireless applications, the concept of cognitive radio (CR) systems has offered hope of ending the so-called spectrum scarcity. This thesis presents various facets of the design and analysis of CR systems in a unified way.

We begin the thesis with an information-theoretic study of cognitive systems working in the so-called low-interference regime of the overlay mode. We show that as long as the coverage area of a CR is less than that of a primary user (PU) device, the probability of the cognitive terminal inflicting small interference at the PU is overwhelmingly high. We also analyze the effect of a key parameter governing the amount of power allocated to relaying the PU message in the overlay mode of operation in realistic environments, by presenting a simple and accurate approximation.

We then explore the possibilities of statistically modeling the cumulative interference due to multiple interfering CRs. We show that although it is possible to obtain a closed-form expression for the interference due to a single CR, the problem is particularly difficult for the total CR interference in lognormally faded environments. In particular, we demonstrate that fitting a two- or three-parameter lognormal is not a feasible option for all scenarios (see the sketch below). We also explore the second-order characteristics of the cumulative interference by evaluating its level crossing rate (LCR) and average exceedance duration (AED) under Rayleigh and Rician channel conditions. We show that the LCRs in these two cases can be evaluated by modeling the interference process with gamma and noncentral χ2 processes, respectively.

By exploiting radio environment map (REM) information, we present two CR scheduling schemes and compare their performance with the naive primary exclusion zone (PEZ) technique. The results demonstrate the significance of using an intelligent allocation method to reap the benefits of the tremendous information available in REM-based methods.

At this juncture, we divert our attention to multiple-input multiple-output (MIMO) CR systems operating in the underlay mode. Using an antenna selection philosophy, we solve a convex optimization problem that accomplishes the task and show, via analysis and simulations, that antenna selection can be a viable option for CRs operating in relatively sparse PU environments. Finally, we study the impact of imperfect channel state information (CSI) on the downlink of an underlay multiple-antenna CR network designed to achieve signal-to-interference-plus-noise ratio (SINR) fairness among the CR terminals. By employing a newly developed convex iteration technique, we solve the relevant optimization problem exactly, without performing any relaxation of the variables involved.
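
To make the interference-modeling question concrete, here is a minimal Monte Carlo sketch, under assumed shadowing parameters, of summing lognormally faded interference powers from several CRs and moment-matching a single lognormal to the result (a Fenton-Wilkinson style fit); the thesis's point is precisely that such a fit is not adequate in all scenarios.

```python
# Sum of K i.i.d. lognormal interference powers, plus a lognormal fit
# obtained by matching the empirical mean and variance of the sum.
import numpy as np

rng = np.random.default_rng(3)
K, trials = 8, 100_000
sigma_db = 8.0                       # shadowing std in dB (assumed)
sigma = sigma_db * np.log(10) / 10   # convert to natural-log scale

# Total interference per trial: sum of K lognormal powers.
I = np.exp(sigma * rng.standard_normal((trials, K))).sum(axis=1)

# Moment-match a single lognormal LN(mu, sig^2) to mean m and variance v:
# sig^2 = ln(1 + v/m^2), mu = ln(m^2 / sqrt(v + m^2)).
m, v = I.mean(), I.var()
sig_fit = np.sqrt(np.log(1 + v / m**2))
mu_fit = np.log(m**2 / np.sqrt(v + m**2))
print(f"fitted lognormal: mu={mu_fit:.3f}, sigma={sig_fit:.3f}")
```

Comparing the empirical distribution of I against the fitted lognormal (e.g., in the tails) is the kind of check that exposes where the two- or three-parameter fit breaks down.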
166

Ordinal and convex assumptions in phylogenetic tree reconstruction

Candy, Robin January 2014 (has links)
Phylogenetics is a field primarily concerned with the reconstruction of the evolutionary history of present-day species. Evolutionary history is often modeled by a phylogenetic tree, similar to a family tree. To recreate a phylogenetic tree from information about current species, one needs to make assumptions about the evolutionary process. These assumptions range from fully parametrised models of evolution to simple observations. This thesis looks at the reconstruction of phylogenetic trees under two different assumptions. The first, known as the ordinal assumption, has been studied previously and asserts that as species evolve, they become more dissimilar. The second, the convex assumption, has not previously been studied in this context; it asserts that the changes species undergo as they diverge are progressively larger than the current differences between those species. This thesis presents an overview of mathematical results on tree reconstruction from dissimilarity maps (also known as distance matrices) and develops techniques for reasoning about the ordinal and convex assumptions. In particular, three main results are presented: a complete classification of phylogenetic trees with four leaves under the ordinal assumption; a partial classification of phylogenetic trees with four leaves under the convex assumption; and an independent proof of a result on the relationship between ultrametrics and the ordinal assumption.
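
As background for reasoning about dissimilarity maps, the sketch below checks two classical conditions on a dissimilarity matrix: the three-point (ultrametric) condition and the four-point (tree-metric) condition. These are standard tools of the area, not the thesis's ordinal or convex criteria.

```python
# Classical characterizations: D is an ultrametric iff, for every triple,
# the two largest pairwise values are equal; D is a tree metric iff, for
# every quadruple, the two largest of the three pairwise sums are equal.
import itertools

def is_ultrametric(D, tol=1e-9):
    n = len(D)
    for i, j, k in itertools.combinations(range(n), 3):
        a, b, c = sorted([D[i][j], D[i][k], D[j][k]])
        if c - b > tol:
            return False
    return True

def is_tree_metric(D, tol=1e-9):
    n = len(D)
    for i, j, k, l in itertools.combinations(range(n), 4):
        s = sorted([D[i][j] + D[k][l], D[i][k] + D[j][l], D[i][l] + D[j][k]])
        if s[2] - s[1] > tol:
            return False
    return True
```

Every ultrametric is a tree metric, which is why results relating ultrametrics to the ordinal assumption slot naturally into this framework.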
167

Modern technologies of fabrication and testing of large convex secondary mirrors

Oh, Chang Jin, Lowman, Andrew E., Dubin, Matt, Smith, Greg, Frater, Eric, Zhao, Chunyu, Burge, James H. 22 July 2016 (has links)
Modern large telescopes such as TAO, LSST, TMT and EELT require 0.9m-4m monolithic convex secondary mirrors. The fabrication and testing of these large convex secondary mirrors for astronomical telescopes becomes more challenging as mirror apertures grow. The biggest challenge in fabricating these large convex aspheric mirrors is measuring the surface figure to a few nanometers while keeping the testing and fabrication cycle efficient enough to minimize downtime. Over the last couple of decades there has been major advancement in the metrology and fabrication of large aspheric secondary mirrors. The College of Optical Sciences at the University of Arizona has developed a complete fabrication and metrology process with extremely high accuracy and efficiency for manufacturing large convex secondary mirrors. In this paper, modern metrology systems are presented, including a Swing-Arm Optical Coordinate Measuring System (SOCMM), which is comparable to interferometry, and a sub-aperture stitching interferometer scalable to several meters. A computer-controlled fabrication process that produces extremely fine surface figure and finish is also demonstrated. These recent developments have been applied to the fabrication and testing of a 0.9m aspheric convex secondary mirror for the Tokyo Atacama Observatory's 6.5m telescope, and the results are presented.
168

Learning algorithms and statistical software, with applications to bioinformatics

Hocking, Toby Dylan 20 November 2012 (has links)
Statistical machine learning is a branch of mathematics concerned with developing algorithms for data analysis. This thesis presents new mathematical models and statistical software, and is organized into two parts. In the first part, I present several new algorithms for clustering and segmentation, a class of techniques that attempt to find structures in data. I discuss the following contributions, with a focus on applications to cancer data from bioinformatics. In the second part, I focus on statistical software contributions which are practical for use in everyday data analysis.
169

Image restoration in the presence of Poisson-Gaussian noise

Jezierska, Anna Maria 13 May 2013 (has links)
This thesis deals with the restoration of images corrupted by blur and noise, with emphasis on confocal microscopy and macroscopy applications. Due to low photon counts and high detector noise, the Poisson-Gaussian model is well suited to this context. However, up to now it has not been widely utilized because of theoretical and practical difficulties. In view of this, we formulate the image restoration problem in the presence of Poisson-Gaussian noise in a variational framework, where we express and study the exact data fidelity term. The solution to the problem can also be interpreted as a Maximum A Posteriori (MAP) estimate. Using recent primal-dual convex optimization algorithms, we obtain results that outperform methods relying on a variety of approximations. Turning our attention to the regularization term in the MAP framework, we study both discrete and continuous approximations of the $\ell_0$ pseudo-norm. This useful measure, well known for promoting sparsity, is difficult to optimize due to its non-convexity and non-smoothness.
We propose an efficient graph-cut procedure for optimizing energies with truncated quadratic priors. Moreover, we develop a majorize-minimize memory gradient algorithm to optimize various smooth versions of the $\ell_2$-$\ell_0$ norm, with guaranteed convergence properties. In particular, good results are achieved on deconvolution problems. One difficulty with variational formulations is the need to tune the model hyperparameters automatically. In this context, we propose to estimate the Poisson-Gaussian noise parameters in two realistic scenarios: one from a time series of images, taking bleaching effects into account, and another from a single image. These estimates are grounded in an Expectation-Maximization (EM) approach. Overall, this thesis proposes and evaluates various methodologies for tackling difficult image noise and blur cases, which should prove useful in a variety of applicative contexts within and beyond microscopy.
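
To illustrate the measurement model, here is a minimal sketch that simulates scaled-Poisson-plus-Gaussian data and recovers the gain and Gaussian variance by a simple mean-variance regression across repeated frames; this moment-based shortcut stands in for the thesis's EM estimators, and all parameter values are assumed.

```python
# Poisson-Gaussian model: y = alpha * Poisson(x) + N(0, sigma^2).
# Per pixel, E[y] = alpha*x and Var[y] = alpha^2*x + sigma^2, so the
# variance is affine in the mean: Var = alpha * mean + sigma^2.
import numpy as np

rng = np.random.default_rng(4)
alpha, sigma = 2.5, 3.0            # gain and Gaussian noise std (assumed)
x = rng.uniform(5, 50, size=500)   # true pixel intensities (assumed)

T = 200                            # number of repeated frames
y = alpha * rng.poisson(x, size=(T, x.size)) \
    + sigma * rng.standard_normal((T, x.size))

# Fit a line to (mean, variance) pairs: slope -> alpha, intercept -> sigma^2.
mean, var = y.mean(axis=0), y.var(axis=0)
alpha_hat, sigma2_hat = np.polyfit(mean, var, 1)
print(f"alpha ~ {alpha_hat:.2f}, sigma ~ {np.sqrt(sigma2_hat):.2f}")
```

The single-image scenario in the thesis is harder precisely because these repeated-frame statistics are unavailable.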
170

Surface and volumetric parametrisation using harmonic functions in non-convex domains

Klein, Richard 29 July 2013 (has links)
A Dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfillment of the requirements for the degree of Master of Science. Johannesburg, 2013 / Many problems in mathematics have very elegant solutions. As complex, real-world geometries come into play, however, this elegance is often lost. This is particularly the case with meshes of physical, real-world problems. Domain mapping helps to move problems from a geometrically complex domain to a regular, easy-to-use domain. Shape transformation, specifically, allows one to do this in 2D domains where mesh construction can be difficult. Numerical methods usually work over some mesh on the target domain. The structure and detail of these meshes affect the overall computation and accuracy immensely. Unfortunately, building a good mesh is not always a straightforward task. Finite element analysis, for example, typically requires 4-10 times the number of tetrahedral elements to achieve the same accuracy as the corresponding hexahedral mesh. Constructing a hexahedral mesh, however, is a difficult task, so in practice many people use tetrahedral meshes instead. By mapping the geometrically complex domain to a regular domain, one can easily construct elegant meshes with useful properties. Once a domain has been mapped to a regular domain, the mesh can be constructed and calculations can be performed in the new domain; results from these calculations can later be transferred back to the original domain. Using harmonic functions, source domains can be parametrised to spaces with many different desired properties. This allows one to perform calculations that would otherwise be expensive or inaccurate. This research implements and extends the methods developed in Voruganti et al. [2006, 2008] for domain mapping using harmonic functions. The method was extended to handle cases where there are voids in the source domain, allowing the user to map domains that are not topologically equivalent to the hypersphere of the corresponding dimension. This is accomplished through the use of various boundary conditions as the void is mapped to the target domain, which allow the user to reshape and shrink the void: voids can be reduced to arcs, radial lines, and even single points. The algorithms were implemented in two and three dimensions and ultimately parallelised to run on the Centre for High Performance Computing clusters. The parallel code also allows for arbitrary-dimension genus-0 source domains. Finally, applications such as remeshing and robot path planning were investigated and illustrated.
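
For intuition about the harmonic machinery, the sketch below computes a single harmonic coordinate on a square grid by Jacobi relaxation of Laplace's equation with fixed boundary values; it is a toy stand-in for the dissertation's mapped domains with voids, and the grid setup is assumed.

```python
# Solve Laplace's equation on a grid: iterate the discrete mean-value
# property (each interior value becomes the average of its 4 neighbours)
# while holding the boundary values fixed.
import numpy as np

def harmonic(interior, boundary, iters=5000):
    """interior: True on cells to solve for; boundary: fixed values elsewhere."""
    u = boundary.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[interior] = avg[interior]   # Jacobi update on interior cells only
    return u

# Unit square with u = 0 on the left edge and u = 1 on the right edge:
# the solution is one harmonic coordinate of a rectangle parametrisation.
n = 64
u0 = np.zeros((n, n)); u0[:, -1] = 1.0
mask = np.zeros((n, n), bool); mask[1:-1, 1:-1] = True
u = harmonic(mask, u0)
```

Pairing two such coordinates with suitable boundary conditions yields a harmonic map of the domain, which is the basic building block the dissertation extends to domains with voids and to higher dimensions.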
