About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Fast Rates for Regularized Least-squares Algorithm

Caponnetto, Andrea, Vito, Ernesto De 14 April 2005 (has links)
We develop a theoretical analysis of the generalization performance of regularized least-squares on reproducing kernel Hilbert spaces for supervised learning. We show that the effective dimension of an integral operator plays a central role in defining a criterion for choosing the regularization parameter as a function of the number of samples. A minimax analysis is then performed, showing the asymptotic optimality of this criterion.
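As a rough illustration of the estimator this abstract analyzes, here is a minimal numerical sketch of regularized least-squares in an RKHS, together with the effective dimension of the kernel operator. The RBF kernel, toy data, and parameter values are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gram matrix of a Gaussian RBF kernel (an assumed choice of RKHS)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rls_fit(X, y, lam):
    # Regularized least-squares in the RKHS: alpha = (K + n*lam*I)^{-1} y
    n = X.shape[0]
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def effective_dimension(K, lam):
    # N(lam) = tr(K (K + n*lam*I)^{-1}), the quantity that governs the
    # sample-dependent choice of lam in analyses of this kind
    n = K.shape[0]
    return float(np.trace(K @ np.linalg.inv(K + n * lam * np.eye(n))))
```

Note that `effective_dimension` decreases as the regularization parameter grows, which is what makes it usable as a complexity measure when balancing it against the sample size.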
12

Probabilistic Topic Models for Human Emotion Analysis

January 2015 (has links)
While discrete emotions like joy, anger, and disgust are quite popular, continuous emotion dimensions like arousal and valence are gaining popularity within the research community due to an increase in the availability of datasets annotated with these emotions. Unlike discrete emotions, continuous emotions allow modeling of subtle and complex affect dimensions, but they are difficult to predict. Dimension reduction techniques form the core of emotion recognition systems and help create a new feature space that is more helpful in predicting emotions. These techniques do not, however, necessarily guarantee better predictive capability, as most of them are unsupervised, especially in regression learning. Supervised dimension reduction techniques have not been explored much in the emotion recognition literature, and in this work a solution is provided through probabilistic topic models. Topic models provide a strong probabilistic framework in which to embed new learning paradigms and modalities. In this thesis, the graphical structure of Latent Dirichlet Allocation (LDA) has been explored, and new models tuned to emotion recognition and change detection have been built. It is shown that the double mixture structure of topic models helps 1) to visualize feature patterns, and 2) to project features onto a topic simplex that is more predictive of human emotions than popular techniques like PCA and KernelPCA. Traditionally, topic models have been used on quantized features, but in this work a continuous topic model, the Dirichlet Gaussian Mixture model (DGMM), is proposed. Evaluation of DGMM has shown that, when modeling videos, the performance of LDA models can be replicated even without quantizing the features. Until now, topic models have not been explored in a supervised context for video analysis, and thus a regularized supervised topic model (RSLDA) that models video and audio features is introduced.
The RSLDA learning algorithm performs dimension reduction and regularized linear regression simultaneously, and it has outperformed supervised dimension reduction techniques like SPCA and correlation-based feature selection algorithms. In a first of its kind, two new topic models, the adaptive temporal topic model (ATTM) and SLDA for change detection (SLDACD), have been developed for predicting concept drift in time series data. These models do not assume independence of consecutive frames and outperform traditional topic models in detecting local and global changes, respectively. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2015
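The core operation the abstract describes, projecting features onto a topic simplex, can be sketched with off-the-shelf LDA (scikit-learn's implementation, not the thesis's RSLDA/DGMM/ATTM models; the toy counts and sizes are arbitrary assumptions):

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Toy term-count matrix: 6 "documents" (think quantized frame features)
# over a 12-symbol codebook.
rng = np.random.RandomState(0)
counts = rng.poisson(2.0, size=(6, 12))

# Project each document onto a 3-topic simplex.  These simplex coordinates
# are the kind of low-dimensional, probability-valued features the thesis
# argues can out-predict PCA/KernelPCA projections.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(counts)
```

Each row of `theta` is a point on the probability simplex (non-negative, summing to one), which is what makes the projection interpretable as a mixture over topics.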
13

Sparse Representations and Nonlinear Image Processing for Inverse Imaging Solutions

Ram, Sundaresh January 2017 (has links)
This work applies sparse representations and nonlinear image processing to two inverse imaging problems. The first problem involves image restoration, where the aim is to reconstruct an unknown high-quality image from a low-quality observed image. Sparse representations of images have drawn a considerable amount of interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. The standard sparse representation, however, does not consider the intrinsic geometric structure present in the data, leading to sub-optimal results. Using the concept that a signal is block sparse in a given basis, i.e., that the non-zero elements occur in clusters of varying sizes, we present a novel and efficient algorithm for learning a sparse representation of natural images, called graph regularized block sparse dictionary (GRBSD) learning. We apply the proposed method to two image restoration applications: 1) single-image super-resolution, where we propose a local regression model that uses dictionaries learned with the GRBSD algorithm to super-resolve a low-resolution image without any external training images, and 2) image inpainting, where we use the GRBSD algorithm to learn a multiscale dictionary that generates visually plausible pixels to fill missing regions in an image. Experimental results validate the performance of the GRBSD learning algorithm for single-image super-resolution and image inpainting. The second problem addressed in this work involves image enhancement for the detection and segmentation of objects in images. We exploit the concept that even though data from various imaging modalities have high dimensionality, the data are sufficiently well described by low-dimensional geometrical structures.
To facilitate the extraction of objects having such structure, we have developed general structure enhancement methods that can be used to detect and segment various curvilinear structures in images across different applications. We use the proposed methods to detect and segment objects of different sizes and shapes in three applications: 1) segmentation of lamina cribrosa microstructure in the eye from second-harmonic-generation microscopy images, 2) detection and segmentation of primary cilia in confocal microscopy images, and 3) detection and segmentation of vehicles in wide-area aerial imagery. Quantitative and qualitative results show that the proposed methods provide improved detection and segmentation accuracy and computational efficiency compared to other recent algorithms.
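The sparse-coding step underlying dictionary-based restoration can be sketched with Orthogonal Matching Pursuit over a fixed dictionary. This is a generic stand-in, not the thesis's GRBSD method (which additionally learns the dictionary with a graph regularizer and block-sparsity); the random dictionary and 3-sparse signal are assumptions for illustration:

```python
import numpy as np

def omp(D, x, k):
    # Orthogonal Matching Pursuit: greedy k-sparse coding over dictionary D
    residual, support = x.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None) # refit on support
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

# A fixed random dictionary stands in for a learned (e.g. GRBSD) one.
rng = np.random.RandomState(0)
D = rng.randn(64, 128)
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms

x_true = D[:, [5, 40, 99]] @ np.array([1.5, -2.0, 1.0])  # 3-sparse test signal
code = omp(D, x_true, k=3)
x_rec = D @ code
```

In inpainting, the same routine is run on the observed pixels only (rows of `D` restricted to a mask), and the full atom combination fills in the missing region.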
14

Regularization Techniques for Linear Least-Squares Problems

Suliman, Mohamed Abdalla Elhag 04 1900 (has links)
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Over the years, several optimization criteria have been used to achieve this task. The most celebrated among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings, and alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms for finding the regularization parameter in linear least-squares problems: the constrained perturbation regularization algorithm (COPRA) for random matrices, and COPRA for linear discrete ill-posed problems. In both, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix, so that the modified model provides a more stable solution when used to estimate the original signal by minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian with centered, unit-variance (standard), independent and identically distributed (i.i.d.) entries.
The second proposed COPRA method deals with discrete ill-posed problems in which the singular values of the linear transformation matrix decay very quickly to significantly small values. For both proposed algorithms, the regularization parameter is obtained as the solution of a non-linear characteristic equation. We provide a detailed study of the general properties of these functions and address the existence and uniqueness of the root. To demonstrate the performance of the derivations, the first proposed COPRA method is applied to estimate different signals with various characteristics, while the second proposed COPRA method is applied to a large set of different real-world discrete ill-posed problems. Simulation results demonstrate that the two proposed methods outperform a set of benchmark regularization algorithms in most cases. In addition, the algorithms are shown to have the lowest run time.
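The problem COPRA targets, picking the regularization parameter that minimizes the MSE against the unknown signal, can be made concrete with an oracle sweep on synthetic data. This is not COPRA itself (which solves a characteristic equation without seeing the signal); the Gaussian model matrix and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.RandomState(0)
n, p = 50, 20
A = rng.randn(n, p)                  # i.i.d. Gaussian model matrix (first COPRA setting)
x = rng.randn(p)                     # unknown signal
y = A @ x + 0.5 * rng.randn(n)       # corrupted measurements

def ridge(A, y, lam):
    # Regularized least-squares estimate for a given regularization parameter
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)

# Oracle sweep: pick the lambda minimizing the true MSE against x.  COPRA's
# aim is to approximate this choice *without* access to x, via a
# perturbation-based characteristic equation; the sweep is only a reference.
lams = np.logspace(-3, 2, 30)
mses = [float(np.mean((ridge(A, y, l) - x) ** 2)) for l in lams]
best_lam = lams[int(np.argmin(mses))]
```

The point of the sweep is that the MSE-optimal lambda is generally neither zero nor large, which is why data-driven selection rules matter.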
15

Regularized Methods to Study Multivariate Data in High-Dimensional Settings: Theory and Applications

Perrot-Dockès, Marie 08 October 2019 (has links)
In this PhD thesis we study the general linear model (multivariate linear model) in high-dimensional settings. We propose a novel variable selection approach in the framework of multivariate linear models that takes into account the dependence that may exist between the responses. It consists in first estimating the covariance matrix of the responses and then plugging this estimator into a Lasso criterion, in order to obtain a sparse estimator of the coefficient matrix. The properties of our approach are investigated both from a theoretical and a numerical point of view.
More precisely, we give general conditions that the estimators of the covariance matrix and its inverse have to satisfy in order to recover the positions of the zero and non-zero entries of the coefficient matrix when the number of responses is not fixed and can tend to infinity. We also propose novel, efficient, and fully data-driven approaches for estimating Toeplitz and large block-structured sparse covariance matrices in the case where the number of variables is much larger than the number of samples, without limiting ourselves to block-diagonal matrices. These approaches are applied to different biological issues in metabolomics, proteomics, and immunology.
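The plug-in idea described above, whiten the responses with an estimate of their covariance and then run a Lasso, can be sketched as follows. The Lasso is solved here with plain ISTA, and for illustration the *true* covariance is used in place of the thesis's estimators; the model sizes and penalty level are assumptions:

```python
import numpy as np

def ista(X, y, lam, n_iter=1000):
    # Proximal gradient (ISTA) for the Lasso: 0.5*||y - X b||^2 + lam*||b||_1
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L                          # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return b

# Multivariate linear model Y = X B + E with correlated responses.
rng = np.random.RandomState(0)
n, p, q = 80, 10, 3
X = rng.randn(n, p)
B = np.zeros((p, q)); B[0] = 2.0; B[3] = -1.5     # row-sparse coefficients
Sigma = 0.5 * np.ones((q, q)) + 0.5 * np.eye(q)   # response covariance
Y = X @ B + rng.randn(n, q) @ np.linalg.cholesky(Sigma).T

# Whitening: multiply the responses by Sigma^{-1/2} so the Lasso sees
# uncorrelated errors, then solve column by column.
w, V = np.linalg.eigh(Sigma)
Yw = Y @ (V @ np.diag(w ** -0.5) @ V.T)
B_hat = np.column_stack([ista(X, Yw[:, j], 10.0) for j in range(q)])
```

Right-multiplying by `Sigma^{-1/2}` preserves the row-sparsity pattern of the coefficient matrix, so the non-zero rows recovered after whitening are the active predictors of the original model.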
16

Time Series Forecasting using Temporal Regularized Matrix Factorization and Its Application to Traffic Speed Datasets

Zeng, Jianfeng 30 September 2021 (has links)
No description available.
17

Inverse Scattering Image Quality with Noisy Forward Data

Sorensen, Thomas J. 15 July 2008 (has links) (PDF)
Image quality metrics for several inverse scattering methods and algorithms are presented. Analytical estimates and numerical simulations provide a basis for diagnosing poor image quality. The limitations and noise behavior of reconstructed images are explored analytically and empirically using a contrast ratio. Theoretical contrast-ratio estimates are derived using the canonical PEC circular cylinder. Empirical studies are conducted to confirm the theoretical estimates and to provide examples of image quality versus SNR for more complex scatterer profiles. Regularized sampling is shown to be more noise-sensitive than tomographic reconstruction methods.
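A contrast ratio of the kind used above can be sketched on a synthetic reconstruction; the circular target, intensity levels, and noise model below are illustrative assumptions, not the thesis's setup:

```python
import numpy as np

def contrast_ratio(img, mask):
    # Mean reconstructed intensity on the target region over the background
    # mean: a simple stand-in for the contrast-ratio metric described above.
    return float(img[mask].mean() / img[~mask].mean())

# Synthetic "reconstruction": bright circular scatterer on a dim background.
n = 64
yy, xx = np.mgrid[:n, :n]
mask = (xx - n // 2) ** 2 + (yy - n // 2) ** 2 < 8 ** 2
clean = np.where(mask, 1.0, 0.1)

# A magnitude image at finite SNR: additive noise lifts the background
# level, so the measured contrast ratio degrades.
rng = np.random.RandomState(0)
noisy = np.abs(clean + 0.3 * rng.randn(n, n))
```

Plotting this ratio against SNR reproduces, in miniature, the kind of image-quality-versus-SNR curves the study reports.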
18

Community Detection in Directed Networks and its Application to Analysis of Social Networks

Kim, Sungmin 09 July 2014 (has links)
No description available.
19

Asymptotic and Factorization Analysis for Inverse Shape Problems in Tomography and Scattering Theory

Govanni Granados (18283216) 01 April 2024 (has links)
<p dir="ltr">Developing non-invasive and non-destructive testing in complex media continues to be a rich field of study (see e.g. [22, 28, 36, 76, 89]). These types of tests have applications in medical imaging, geophysical exploration, and engineering, where one would like to detect an interior region or estimate a model parameter. With the current rapid development of this enabling technology, there is a growing demand for new mathematical theory and computational algorithms for inverse problems in partial differential equations. Here the physical models are given by boundary value problems stemming from Electrical Impedance Tomography (EIT), Diffuse Optical Tomography (DOT), and acoustic scattering problems. Important mathematical questions arise regarding existence, uniqueness, and continuity with respect to measured surface data. Rather than determining the solution of a given boundary value problem, we are concerned with using surface data to develop and implement numerical algorithms that recover unknown subregions within a known domain. A unifying theme of this thesis is the development of qualitative methods for solving inverse shape problems from measured surface data. These methods require very few a priori assumptions on the regions of interest, the boundary conditions, and the model parameters. Their counterpart, iterative methods, typically requires a priori information that may not be readily available and can be more computationally expensive, although qualitative methods usually require more data.</p><p dir="ltr">This thesis expands the library of qualitative methods for elliptic problems coming from tomography and scattering theory. We consider inverse shape problems where the goal is to recover extended and small-volume regions. For extended regions, we apply a modified version of the well-known Factorization Method [73], whereas for small-volume regions we develop a Multiple Signal Classification (MUSIC)-type algorithm (see e.g. [3, 5]). In all of our problems, we derive an imaging functional that effectively recovers the region of interest. The results of this thesis form part of the theoretical forefront of these physical applications. Furthermore, they extend the mathematical theory at the intersection of mathematics, physics, and engineering, and advance knowledge and understanding of imaging techniques for non-invasive and non-destructive testing.</p>
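A generic MUSIC-type imaging functional of the kind mentioned above can be sketched for small scatterers under the Born approximation. The 2-D free-space geometry, wave number, receiver array, and Green's-function surrogate below are all illustrative assumptions, not taken from the thesis:

```python
import numpy as np

# MUSIC-type sketch: locate small-volume scatterers from a rank-deficient
# multistatic response matrix via its noise subspace.
k = 2 * np.pi
receivers = np.stack([np.linspace(-2.0, 2.0, 16), np.full(16, 5.0)], axis=1)

def steering(z):
    # Green's-function-like vector g(z): exp(ikr)/sqrt(r) at each receiver
    r = np.linalg.norm(receivers - z, axis=1)
    return np.exp(1j * k * r) / np.sqrt(r)

targets = [np.array([0.5, 0.0]), np.array([-1.0, 0.5])]
M = sum(np.outer(steering(z), steering(z)) for z in targets)   # rank-2 data matrix

U, s, _ = np.linalg.svd(M)
noise = U[:, len(targets):]          # noise subspace (orthogonal to span{g(z_j)})

def imaging_functional(z):
    # Blows up exactly when g(z) lies in the signal subspace, i.e. at a scatterer
    g = steering(z)
    g = g / np.linalg.norm(g)
    return 1.0 / np.linalg.norm(noise.conj().T @ g)
```

Evaluating `imaging_functional` on a sampling grid produces an image whose sharp peaks mark the scatterer locations, which is the qualitative-method pattern: no iteration, and almost no a priori information about the targets.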
20

Inverse Optimization via Online Learning

LUISA SILVEIRA ROSA 02 April 2020 (has links)
We demonstrate how to learn the objective function and constraints of optimization problems while observing their optimal solutions over multiple rounds. Our approach is based on online learning techniques and works for linear objective functions over arbitrary feasible sets, generalizing previous work. The two algorithms, one to learn the objective function and the other to learn the constraints, converge at a rate of O(1/√T), which allows us to produce solutions as good as the optimal ones after few observations. Finally, we show the efficacy and possible applications of our methods in an extensive computational study.
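The objective-learning half of this setup can be sketched as online subgradient descent on the suboptimality loss: each round, predict with the current objective estimate, observe the true optimum, and move the estimate toward what would have made the observation optimal. The hidden objective, random polytopes (given by their vertices), and step sizes are illustrative assumptions, not the thesis's experiments:

```python
import numpy as np

rng = np.random.RandomState(0)
d, T = 3, 500
c_true = np.array([1.0, 2.0, 0.5])   # hidden linear objective (illustrative)
c_hat = np.ones(d)                   # learner's running estimate

regret = []
for t in range(T):
    V = rng.randn(8, d)                      # vertices of round-t feasible set
    x_star = V[np.argmax(V @ c_true)]        # observed optimal solution
    x_hat = V[np.argmax(V @ c_hat)]          # learner's prediction
    regret.append(float(c_true @ (x_star - x_hat)))
    # Online subgradient step on the convex suboptimality loss
    # l_t(c) = max_{x in V} c.x - c.x_star, with subgradient x_hat - x_star.
    c_hat += (x_star - x_hat) / np.sqrt(t + 1)
```

With a 1/√t step size, the average of `regret` shrinks over time, mirroring the O(1/√T) rate stated in the abstract.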
