41

Feature distribution learning for covariate shift adaptation using sparse filtering

Zennaro, Fabio January 2017 (has links)
This thesis studies a family of unsupervised learning algorithms called feature distribution learning and their extension to perform covariate shift adaptation. Unsupervised learning is one of the most active areas of research in machine learning, and a central challenge in this field is to develop simple and robust algorithms able to work in real-world scenarios. A traditional assumption of machine learning is that data are independent and identically distributed. Unfortunately, in realistic conditions this assumption is often unmet, and the performance of traditional algorithms may be severely compromised. Covariate shift adaptation has thus developed as a lively sub-field concerned with designing algorithms that can account for covariate shift, that is, for a difference between the distributions of training and test samples.

The first part of this dissertation focuses on the study of a family of unsupervised learning algorithms that has been recently proposed and has shown promise: feature distribution learning. In particular, sparse filtering, the most representative feature distribution learning algorithm, has commanded interest because of its simplicity and state-of-the-art performance. Despite its success and its frequent adoption, sparse filtering lacks any strong theoretical justification. This research asks how feature distribution learning can be rigorously formalized and how the dynamics of sparse filtering can be explained. These questions are answered by first putting forward a new definition of feature distribution learning based on concepts from information theory and optimization theory; relying on this, a theoretical analysis of sparse filtering is carried out and validated on both synthetic and real-world data sets.

In the second part, the use of feature distribution learning algorithms to perform covariate shift adaptation is considered. Because of their definition and apparent insensitivity to the problem of modelling data distributions, feature distribution learning algorithms seem particularly well suited to dealing with covariate shift. This research asks whether and how feature distribution learning may be fruitfully employed to perform covariate shift adaptation. After making explicit the conditions of success for performing covariate shift adaptation, a theoretical analysis of sparse filtering and of a novel algorithm, periodic sparse filtering, is carried out; this allows for the determination of the specific conditions under which these algorithms work successfully. Finally, a comparison of these sparse filtering-based algorithms against other traditional covariate shift adaptation algorithms is offered, showing that the novel algorithm achieves competitive performance.

In conclusion, this thesis provides a new rigorous framework to analyse and design feature distribution learning algorithms; it sheds light on the hidden assumptions behind sparse filtering, offering a clear understanding of its conditions of success; and it uncovers the potential and the limitations of sparse filtering-based algorithms in performing covariate shift adaptation. These results are relevant both for researchers interested in furthering the understanding of unsupervised learning algorithms and for practitioners interested in deploying feature distribution learning in an informed way.
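For context, the core mechanism the thesis analyses can be summarised in a few lines. The sketch below is a minimal rendering of standard sparse filtering (Ngiam et al., 2011): apply a linear feature map, doubly L2-normalise the (soft) absolute features, then minimise their L1 norm. The function names, the soft-absolute epsilon, and the synthetic data are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def sparse_filtering_objective(w_flat, X, n_features, eps=1e-8):
    """L1 norm of doubly L2-normalised features F = |W @ X| (soft absolute value)."""
    W = w_flat.reshape(n_features, X.shape[0])
    F = np.sqrt((W @ X) ** 2 + eps)                              # soft absolute value
    F = F / np.sqrt((F ** 2).sum(axis=1, keepdims=True) + eps)   # normalise each feature row
    F = F / np.sqrt((F ** 2).sum(axis=0, keepdims=True) + eps)   # normalise each example column
    return F.sum()                                               # L1 norm to minimise

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 200))        # 20 input dimensions, 200 examples
n_features = 10
w0 = rng.standard_normal(n_features * X.shape[0])
res = minimize(sparse_filtering_objective, w0, args=(X, n_features),
               method="L-BFGS-B", options={"maxiter": 50})
W_learned = res.x.reshape(n_features, X.shape[0])   # learned feature weights
```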
42

An Equivalence Between Sparse Approximation and Support Vector Machines

Girosi, Federico 01 May 1997 (has links)
In the first part of this paper we show a similarity between the principle of Structural Risk Minimization (SRM) (Vapnik, 1982) and the idea of Sparse Approximation, as defined in (Chen, Donoho and Saunders, 1995) and (Olshausen and Field, 1996). Then we focus on two specific (approximate) implementations of SRM and Sparse Approximation that have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and require the solution of the same quadratic programming problem.
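For concreteness, the Basis Pursuit De-Noising problem referenced above minimises 0.5·||y − Dx||² + λ·||x||₁. The sketch below solves it with a plain iterative soft-thresholding loop rather than the quadratic programming formulation used in the paper; all names and data are illustrative.

```python
import numpy as np

def bpdn_ista(D, y, lam, n_iter=500):
    """Minimise 0.5*||y - D @ x||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - y) / L             # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200))                # overcomplete dictionary
x_true = np.zeros(200); x_true[[7, 60]] = [1.5, -1.0]
y = D @ x_true + 0.01 * rng.standard_normal(50)
x_hat = bpdn_ista(D, y, lam=0.1)                  # sparse coefficients close to x_true
```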
43

Application of L1 reconstruction of sparse signals to ambiguity resolution in radar

Shaban, Fahad 13 May 2013 (has links)
The objective of the proposed research is to develop a new algorithm for range and Doppler ambiguity resolution in radar detection data using L1 minimization methods for sparse signals and to investigate the properties of such techniques. This novel approach to ambiguity resolution exploits the sparse measurement structure of the post-detection data in multiple pulse repetition frequency radars and the resulting equivalence of the computationally intractable L0 minimization and the surrogate L1 minimization methods. The ambiguity resolution problem is cast as a linear system of equations, which is then solved for the unique sparse solution in the absence of errors. It is shown that the new technique successfully resolves range and Doppler ambiguities and that the recovery is exact in the ideal case of no errors in the system. The behavior of the technique is then investigated in the presence of real-world data errors encountered in the radar measurement and detection process. Examples of such errors include blind zone effects, collisions, false alarms and missed detections. It is shown that the mathematical model consisting of a linear system of equations developed for the ideal case can be adjusted to account for data errors. Empirical results show that the L1 minimization approach also works well in the presence of errors with minor extensions to the algorithm. Several examples are presented to demonstrate the successful implementation of the new technique for range and Doppler ambiguity resolution in pulse Doppler radars.
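As a hedged illustration of the sparse-recovery step described above, the sketch below solves min ||x||₁ subject to Ax = b as a linear program, the standard surrogate for the intractable L0 problem. The radar-specific system matrix is not reproduced; A and b are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min_equality(A, b):
    """min ||x||_1  s.t.  A @ x = b, via the standard LP split x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                        # minimise sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])                 # A @ (u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))            # underdetermined linear system
x_true = np.zeros(100); x_true[[5, 42, 77]] = [1.0, -2.0, 0.5]   # sparse targets
x_hat = l1_min_equality(A, A @ x_true)        # exact recovery in the error-free case
```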
44

Image/Video Deblocking via Sparse Representation

Chiou, Yi-Wen 08 September 2012 (has links)
Blocking artifact, characterized by visually noticeable changes in pixel values along block boundaries, is a common problem in block-based image/video compression, especially at low-bitrate coding. Various post-processing techniques have been proposed to reduce blocking artifacts, but they usually introduce excessive blurring or ringing effects. This paper proposes a self-learning-based image/video deblocking framework that properly formulates deblocking as an MCA (morphological component analysis)-based image decomposition problem via sparse representation. The proposed method first decomposes an image/video frame into low-frequency and high-frequency parts by applying the BM3D (block-matching and 3D filtering) algorithm. The high-frequency part is then decomposed into a "blocking component" and a "non-blocking component" by performing dictionary learning and sparse coding based on MCA. As a result, the blocking component can be removed from the image/video frame successfully while preserving most original image/video details. Experimental results demonstrate the efficacy of the proposed algorithm.
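To make the decomposition idea concrete, here is a toy MCA-style separation of a 1-D signal into two morphological components by alternating soft-thresholding in two transforms (DCT for smooth content, identity for spiky, block-edge-like content). The paper's learned dictionaries and BM3D stage are not reproduced; thresholds and signals are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mca_split(y, n_iter=100, thresh=0.5):
    """Alternate thresholded updates: one component sparse in DCT, one in identity."""
    smooth, spiky = np.zeros_like(y), np.zeros_like(y)
    for _ in range(n_iter):
        smooth = idct(soft(dct(y - spiky, norm="ortho"), thresh), norm="ortho")
        spiky = soft(y - smooth, thresh)
    return smooth, spiky

t = np.linspace(0.0, 1.0, 256)
y = np.sin(2 * np.pi * 4 * t)      # smooth content
y[::32] += 1.0                     # spiky, block-edge-like content
smooth, spiky = mca_split(y)       # the two recovered morphological components
```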
45

DSJM : a software toolkit for direct determination of sparse Jacobian matrices

Hasan, Mahmudul January 2011 (has links)
DSJM is a software toolkit written in portable C++ that enables direct determination of sparse Jacobian matrices whose sparsity pattern is a priori known. Using the seed matrix S ∈ R^(n×p), the Jacobian A ∈ R^(m×n) can be determined by solving AS = B, where B ∈ R^(m×p) has been obtained via finite difference approximation or forward automatic differentiation. The seed matrix S is defined by the nonzero unknowns in A. DSJM includes well-known as well as new column ordering heuristics. Numerical testing is highly promising both in terms of running time and the number of matrix-vector products needed to determine A. / x, 71 leaves : ill. ; 29 cm
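A small worked example of the seed-matrix idea: columns of A with non-overlapping row support can share one column of S, so every nonzero of A can be read off directly from B = AS. The greedy grouping below is a simple stand-in for DSJM's ordering heuristics, which are not reproduced here.

```python
import numpy as np

def greedy_groups(pattern):
    """Greedily group structurally orthogonal columns (no shared row support)."""
    groups = []
    for j in range(pattern.shape[1]):
        for g in groups:
            if not np.any(pattern[:, j] & np.any(pattern[:, g], axis=1)):
                g.append(j)
                break
        else:
            groups.append([j])
    return groups

pattern = np.array([[1, 0, 0],
                    [1, 1, 0],
                    [0, 1, 1]], dtype=bool)        # known sparsity pattern of A
groups = greedy_groups(pattern)                    # here [[0, 2], [1]], so p = 2
S = np.zeros((pattern.shape[1], len(groups)))
for k, g in enumerate(groups):
    S[g, k] = 1.0                                  # seed matrix: one column per group
A = np.array([[2.0, 0, 0], [1.0, 3.0, 0], [0, 4.0, 5.0]])
B = A @ S                                          # in practice from finite differences or AD
# every nonzero A[i, j] is recovered as B[i, k] where column j belongs to group k
```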
46

Towards Scalable Analysis of Images and Videos

Zhao, Bin 01 September 2014 (has links)
With the widespread availability of low-cost devices capable of photo shooting and high-volume video recording, we are facing an explosion of both image and video data. The sheer volume of such visual data poses both challenges and opportunities in machine learning and computer vision research.

In image classification, most previous research has focused on small to medium-scale data sets containing objects from dozens of categories. However, we can easily access images spanning thousands of categories. Unfortunately, despite the well-known advantages and recent advancements of multi-class classification techniques in machine learning, complexity concerns have driven most research on such super large-scale data sets back to simple methods such as nearest neighbor search and the one-vs-one or one-vs-rest approach. Facing an image classification problem with such a huge task space, it is no surprise that these classical algorithms, often favored for their simplicity, are brought to their knees, not only because of the training time and storage cost they incur, but also because of the conceptual awkwardness of such algorithms in massive multi-class paradigms. Therefore, it is our goal to directly address the bigness of image data: not only the large number of training images and high-dimensional image features, but also the large task space. Specifically, we present algorithms capable of efficiently and effectively training classifiers that can differentiate tens of thousands of image classes.

Similar to images, one of the major difficulties in video analysis is also the huge amount of data, in the sense that videos can be hours long or even endless. However, it is often true that only a small portion of a video contains important information. Consequently, algorithms that can automatically detect unusual events within streaming or archival video would significantly improve the efficiency of video analysis and save valuable human attention for only the most salient content. Moreover, given lengthy recorded videos, such as those captured by digital cameras on mobile phones or by surveillance cameras, most users do not have the time or energy to edit the video so that only the most salient and interesting parts are kept. To this end, we also develop an algorithm for automatic video summarization without human intervention.

Finally, we further extend our research on video summarization into a supervised formulation, where users are asked to generate summaries for a subset of a class of videos of similar nature. Given such manually generated summaries, our algorithm learns the preferred storyline within the given class of videos and automatically generates summaries for the rest of the videos in the class, capturing a similar storyline to those manually summarized videos.
47

Structured Sparse Methods for Imaging Genetics

January 2017 (has links)
Imaging genetics is an emerging and promising technique that investigates how genetic variations affect brain development, structure, and function. By exploiting disorder-related neuroimaging phenotypes, this class of studies provides a novel direction to reveal and understand complex genetic mechanisms. Imaging genetics studies are often challenging due to the relatively small number of subjects but extremely high dimensionality of both imaging and genomic data. In this dissertation, I carry on my research on imaging genetics with particular focus on two tasks---building predictive models between neuroimaging data and genomic data, and identifying disorder-related genetic risk factors through image-based biomarkers. To this end, I consider a suite of structured sparse methods---which can produce interpretable models and are robust to overfitting---for imaging genetics. With carefully designed sparsity-inducing regularizers, different biological priors are incorporated into the learning models. More specifically, in the Allen brain image--gene expression study, I adopt an advanced sparse coding approach for image feature extraction and employ a multi-task learning approach for multi-class annotation. Moreover, I propose a label-structure-based two-stage learning framework, which utilizes the hierarchical structure among labels, for multi-label annotation. In the Alzheimer's disease neuroimaging initiative (ADNI) imaging genetics study, I employ Lasso together with EDPP (enhanced dual polytope projections) screening rules to quickly identify Alzheimer's disease risk SNPs. I also adopt the tree-structured group Lasso with MLFre (multi-layer feature reduction) screening rules to incorporate linkage disequilibrium information into the modeling. Moreover, I propose a novel absolute fused Lasso model for ADNI imaging genetics. This method utilizes SNP spatial structure and is robust to the choice of reference alleles in genotype coding. In addition, I propose a two-level structured sparse model that incorporates gene-level networks through a graph penalty into SNP-level model construction. Lastly, I explore a convolutional neural network approach for accurately predicting Alzheimer's disease-related imaging phenotypes. Experimental results on real-world imaging genetics applications demonstrate the efficiency and effectiveness of the proposed structured sparse methods. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2017
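As a hedged illustration of the Lasso step mentioned above, the sketch below fits a Lasso model to synthetic genotype-like data with scikit-learn and reads off the selected SNPs. The EDPP and MLFre screening rules themselves are not re-implemented; all data and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(150, 1000)).astype(float)    # 150 subjects, 1000 SNPs coded 0/1/2
w = np.zeros(1000); w[[10, 250, 700]] = [0.8, -1.1, 0.5]  # a few hypothetical causal SNPs
y = X @ w + 0.1 * rng.standard_normal(150)                # stand-in imaging phenotype

model = Lasso(alpha=0.1).fit(X, y)                        # L1 penalty zeroes out most coefficients
risk_snps = np.flatnonzero(model.coef_)                   # indices of selected SNPs
```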
48

Sparse Methods for Hyperspectral Unmixing and Image Fusion

Bieniarz, Jakub 02 March 2016 (has links)
In recent years, the substantial increase in the number of spectral channels in optical remote sensing sensors has enabled more detailed spectroscopic analysis of objects on the Earth's surface. Modern hyperspectral sensors are able to sample the sunlight reflected from a target on the ground with hundreds of adjacent narrow spectral channels. However, the increased spectral resolution comes at the price of a lower spatial resolution; e.g., the forthcoming German hyperspectral sensor Environmental Mapping and Analysis Program (EnMAP) will have 244 spectral channels and a pixel size on ground as large as 30 m x 30 m. The main aim of this thesis is to deal with the problem of reduced spatial resolution in hyperspectral sensors. This is addressed first as an unmixing problem, i.e., the extraction and quantification of the spectra of pure materials mixed in a single pixel, and second as a resolution enhancement problem based on fusion of multispectral and hyperspectral imagery. This thesis proposes novel methods for hyperspectral unmixing using sparse approximation techniques and external spectral dictionaries, which, unlike traditional least squares-based methods, do not require a pure-material spectrum selection step and are thus able to simultaneously estimate the underlying active materials along with their respective abundances. However, previous work has shown that these methods suffer from some drawbacks, mainly from intra-dictionary coherence. To improve the performance of sparse spectral unmixing, the use of a derivative transformation and a novel two-step group unmixing algorithm are proposed. Additionally, the spatial homogeneity of abundance vectors is exploited by introducing a multi-look model for spectral unmixing. Based on the above findings, a new method for fusion of hyperspectral images with higher spatial resolution multispectral images is proposed. The algorithm exploits the spectral information of the hyperspectral image and the spatial information of the multispectral image by means of sparse spectral unmixing to form a new hyperspectral image with high spatial and spectral resolution. The introduced method is robust when applied to highly mixed scenarios as it relies on external spectral dictionaries. Both the proposed sparse spectral unmixing algorithms and the resolution enhancement approach are evaluated quantitatively and qualitatively. The algorithms developed in this thesis are significantly faster than state-of-the-art methods and yield better or similar results.
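To illustrate the dictionary-based unmixing formulation described above: each pixel spectrum y is approximated as Da with a sparse, non-negative abundance vector a. The sketch below uses a plain non-negative ISTA loop; the proposed derivative transformation and two-step group algorithm are not reproduced, and the dictionary and data are synthetic.

```python
import numpy as np

def sparse_unmix(D, y, lam=0.05, n_iter=500):
    """Non-negative ISTA for 0.5*||y - D @ a||^2 + lam*||a||_1 with a >= 0."""
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        a = np.maximum(a - (grad + lam) / L, 0.0)   # gradient step + non-negative soft threshold
    return a

rng = np.random.default_rng(0)
D = np.abs(rng.standard_normal((100, 40)))          # spectral library: 100 bands, 40 materials
a_true = np.zeros(40); a_true[[3, 17]] = [0.6, 0.4] # two active materials in the pixel
a_hat = sparse_unmix(D, D @ a_true)                 # recovered sparse abundances
```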
49

Sparse Optimal Control for Continuous-Time Dynamical Systems

Ikeda, Takuya 25 March 2019 (has links)
Kyoto University / 0048 / New doctoral program / Doctor of Informatics / Kou No. 21916 / Joho-haku No. 699 / 新制||情||120 (University Library) / Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University / (Chief examiner) Associate Professor Kenji Kashima, Professor Yoshito Ohta, Professor Nobuo Yamashita / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
50

Temporal Sparse Encoding and Decoding of Arrays in Systems Based on the High Level Architecture Standard

Severinsson, Viktor, Thörnblom, Johan January 2022 (has links)
In this thesis, a method for encoding and decoding arrays in systems based on the High Level Architecture standard is presented. High Level Architecture is a standard in the simulation industry that enables interoperability between different simulation systems. When simulations share specific data with other simulations, they always send all parts of the data. This can become quite inefficient when the data is of an array type and only one or a few of its elements' values have changed: the whole array is always transmitted regardless of whether the other simulations in the system need all elements or just the ones that have been modified since the last transmission. In these cases there may therefore be more traffic on the network than needed. The proposed method, named Temporal Sparse Encoding, encodes only the modified elements, plus some additional bytes as overhead, so that only updated elements need to be sent. The method is based on the concept of sparse arrays and matrices and is inspired by the Coordinate format, which uses extra arrays with indices referring to the specific elements of interest. In a small simulation system acting as a testing environment, it is shown how Temporal Sparse Encoding can save both time and, above all, bandwidth when sharing updates. Each test was carried out 10 times, and in each test case 1 000 updates were transmitted. In each test case the transmission time was measured, and the compression ratio was calculated by dividing the number of bytes in the encoding containing all elements by the number of bytes in the encoding containing just the updated ones. The biggest compression ratio, 750.13, came from the case where 1 out of 1 000 elements was updated and transmitted. The smallest compression ratio was 1.00 and came from the cases where all of the array's elements were updated and transmitted. Among the conclusions were that Temporal Sparse Encoding can save up to 33% of the time compared to the standard encoding, and that much of the transmission time is spent on extracting elements once they have been decoded. These findings suggest that optimization efforts should be focused at the language level, specifically on the management of data, rather than on the transmission of data when there is not a lot of traffic on the network.
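A minimal sketch of the coordinate-format idea behind Temporal Sparse Encoding: transmit a count followed by (index, value) pairs for the modified elements only. The authors' HLA wire format and overhead layout are not reproduced; this byte layout is an illustrative assumption.

```python
import struct

def encode_updates(old, new):
    """Pack a count followed by (uint32 index, float64 value) pairs for changed elements."""
    changed = [(i, v) for i, (o, v) in enumerate(zip(old, new)) if o != v]
    payload = struct.pack("<I", len(changed))
    for i, v in changed:
        payload += struct.pack("<Id", i, v)       # 4-byte index + 8-byte value per element
    return payload

def decode_updates(payload, target):
    """Apply packed (index, value) updates to the receiver's copy of the array."""
    (count,) = struct.unpack_from("<I", payload, 0)
    for k in range(count):
        i, v = struct.unpack_from("<Id", payload, 4 + k * 12)
        target[i] = v

old = [0.0] * 1000
new = list(old); new[42] = 3.14                   # a single modified element
msg = encode_updates(old, new)                    # 16 bytes instead of 8000 for the full array
decode_updates(msg, old)                          # receiver's array now matches `new`
```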
