  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Graph-Based Sparse Learning: Models, Algorithms, and Applications

January 2014 (has links)
abstract: Sparse learning is a powerful tool for building interpretable models of high-dimensional data, with important applications in areas such as bioinformatics, medical image processing, and computer vision. Recently, a priori structural information has been shown to be effective for improving the performance of sparse learning models, and a graph is a fundamental way to represent the structural relations among features. This dissertation focuses on graph-based sparse learning. The first part integrates a graph into sparse learning to improve performance. Specifically, the problem of feature grouping and selection over a given undirected graph is considered, and three models are proposed, along with efficient solvers, to achieve simultaneous feature grouping and selection and enhance estimation accuracy. A major remaining challenge is that solving large-scale graph-based sparse learning problems is computationally demanding; an efficient, scalable, and parallel algorithm is therefore proposed for one widely used graph-based approach, anisotropic total variation regularization, by explicitly exploiting the structure of the graph. The second part focuses on uncovering the graph structure from the data. Two issues in graphical modeling are considered: the joint estimation of multiple graphical models using a fused lasso penalty, and the estimation of hierarchical graphical models. The key technical contribution is establishing the necessary and sufficient condition for the graphs to be decomposable. Based on this property, a simple screening rule is presented that reduces the size of the optimization problem and dramatically lowers the computational cost. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2014
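For readers who want a concrete picture of the objective in the first part, here is a minimal NumPy sketch of a graph-guided fused-lasso objective: squared loss plus an l1 sparsity term plus an anisotropic total-variation term summed over the edges of a feature graph. The function, toy chain graph, and parameter values are illustrative assumptions; the dissertation's three models and its parallel solver are not reproduced here.

```python
import numpy as np

def graph_fused_lasso_objective(X, y, w, edges, lam1, lam2):
    """Squared loss + l1 sparsity + anisotropic total variation over a feature graph.

    edges: iterable of (i, j) feature-index pairs of an undirected graph; the
    fusion term sum_{(i,j)} |w_i - w_j| encourages connected features to share
    a common coefficient (grouping), while the l1 term selects features.
    """
    loss = 0.5 * np.sum((X @ w - y) ** 2)
    sparsity = lam1 * np.sum(np.abs(w))
    fusion = lam2 * sum(abs(w[i] - w[j]) for i, j in edges)
    return loss + sparsity + fusion

# toy chain graph over 5 features: the first three should be grouped together
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
w_true = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=20)
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(graph_fused_lasso_objective(X, y, w_true, edges, lam1=0.1, lam2=0.1))
```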
82

Effective Gene Expression Annotation Approaches for Mouse Brain Images

January 2016 (has links)
abstract: Understanding the complexity of the temporal and spatial characteristics of gene expression over brain development is one of the crucial research topics in neuroscience. An accurate description of the locations and expression status of the relevant genes requires extensive experimental resources. The Allen Developing Mouse Brain Atlas provides a large number of in situ hybridization (ISH) images of gene expression over seven mouse brain developmental stages, and studying mouse brain models helps us understand gene expression in human brains. The atlas covers thousands of genes, which are currently annotated manually by biologists. Because of the high labor cost of manual annotation, an efficient approach to automated gene expression annotation on mouse brain images is needed. In this thesis, a novel, efficient machine-learning approach is proposed. Features are extracted from raw brain images, and both binary and multi-class classification models are built with supervised learning methods. To generate features, one of the most widely adopted methods in current research is the bag-of-words (BoW) algorithm; however, BoW is neither especially efficient nor especially accurate on large-scale data. Thus, an augmented sparse coding method called Stochastic Coordinate Coding is adopted to generate high-level features. In addition, a new multi-label classification model is proposed, in which a label hierarchy is built from the given brain ontology. Experiments conducted on the atlas show that the approach is efficient and classifies the images with relatively high accuracy. / Dissertation/Thesis / Masters Thesis Computer Science 2016
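Stochastic Coordinate Coding itself is not reproduced below; as a rough analogue, this sketch shows generic lasso coordinate descent for encoding one feature vector against a fixed dictionary, which is the basic operation a sparse-coding feature generator performs. The dictionary size, descriptor, and penalty value are invented for illustration.

```python
import numpy as np

def sparse_code(D, x, lam, n_iters=50):
    """Coordinate descent for min_a 0.5*||x - D a||^2 + lam*||a||_1.

    D: (d, k) dictionary; x: (d,) feature vector. Returns a sparse code of
    length k. This is a generic lasso solver, used only to illustrate
    sparse-coding-based feature generation.
    """
    d, k = D.shape
    a = np.zeros(k)
    col_norms = np.sum(D ** 2, axis=0)
    for _ in range(n_iters):
        for j in range(k):
            r_j = x - D @ a + D[:, j] * a[j]          # residual excluding atom j
            rho = D[:, j] @ r_j
            a[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norms[j]
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
x = rng.normal(size=64)               # stand-in for a local image descriptor
code = sparse_code(D, x, lam=0.5)
print("non-zero coefficients:", int(np.count_nonzero(code)))
```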
83

Drosophila Stage Annotation using Sparse Learning Method

January 2012 (has links)
abstract: Drosophila melanogaster, as an important model organism, is used to explore the mechanisms that govern cell differentiation and embryonic development. Understanding these mechanisms helps reveal the effects of genes in other species, including humans. Digital imaging techniques now make high-quality Drosophila gene expression imaging possible, and, thanks to advances in biology, gene expression images that reveal spatiotemporal patterns are generated at a high-throughput pace. Thus, an automated and efficient system that can analyze gene expression will become a necessary tool for investigating gene functions, interactions, and developmental processes. One investigation method is to compare the expression patterns of different developmental stages. Currently, however, expression patterns are annotated manually with only rough stage ranges, and this annotation requires professional knowledge from experienced biologists. Hence, transferring this biological domain knowledge into a system that can annotate the patterns automatically is a challenging problem for computer scientists. In this thesis, the problem of stage annotation for Drosophila embryos is modeled in a machine learning framework. Three sparse learning algorithms and one ensemble algorithm are used to attack the problem: the sparse algorithms are Lasso, group Lasso, and sparse group Lasso, and the ensemble algorithm is based on a voting method. The proposed algorithms annotate the patterns to individual stages, rather than stage ranges, with high accuracy; moreover, the decimal stage annotation algorithm presents a novel way to annotate the patterns to decimal stages. In addition, the algorithms' performance is analyzed and corresponding explanations are given. Finally, with the proposed system, all the lateral-view BDGP and FlyFish images are annotated and several interesting applications of the decimal stage values are revealed. / Dissertation/Thesis / M.S. Computer Science 2012
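For context on the penalties named above, the sparse group lasso combines an l1 term with group-wise l2 terms; its proximal operator has the closed form sketched below (element-wise soft-thresholding followed by group shrinkage). The groups and numbers are illustrative only, and the thesis's voting ensemble and decimal-stage scheme are not shown.

```python
import numpy as np

def prox_sparse_group_lasso(w, groups, lam1, lam2):
    """Proximal operator of lam1*||w||_1 + lam2*sum_g ||w_g||_2.

    groups: list of index arrays partitioning the features. The l1 prox
    (soft-thresholding) is applied first, then group-wise shrinkage, which
    is the standard closed form for the (non-overlapping) sparse group lasso.
    """
    v = np.sign(w) * np.maximum(np.abs(w) - lam1, 0.0)   # lasso part
    out = np.zeros_like(v)
    for g in groups:
        norm_g = np.linalg.norm(v[g])
        if norm_g > lam2:
            out[g] = (1.0 - lam2 / norm_g) * v[g]        # group shrinkage
    return out

w = np.array([0.9, -0.2, 0.05, 1.5, -1.2, 0.01])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
print(prox_sparse_group_lasso(w, groups, lam1=0.1, lam2=0.3))
```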
84

Structured Sparse Learning and Its Applications to Biomedical and Biological Data

January 2013 (has links)
abstract: Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Via l1-norm-based regularization, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. I then present three real-world applications that benefit from group-structured sparse learning. In the first application, I study the Alzheimer's Disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotating the developmental stage of Drosophila embryos in gene expression images. In addition, it provides a stage score that enables one to annotate each embryo more finely, dividing embryos into early and late periods of development within standard stage demarcations. Stage scores help illuminate global gene activities and changes, and more refined stage annotations improve our ability to interpret results when expression pattern matches are discovered between genes. / Dissertation/Thesis / Ph.D. Computer Science 2013
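As a small, hedged illustration of the general (possibly overlapping) group structure mentioned above, the snippet below merely evaluates a group lasso penalty in which two groups share a feature; the groups and weights are invented, and the thesis's actual solver for overlapping groups is not reproduced.

```python
import numpy as np

def overlapping_group_penalty(w, groups, weights):
    """sum_g weights[g] * ||w[groups[g]]||_2, where the index sets may overlap."""
    return sum(wt * np.linalg.norm(w[np.asarray(g)]) for g, wt in zip(groups, weights))

w = np.array([1.0, 0.0, 0.5, -0.3, 0.0])
groups = [[0, 1, 2], [2, 3, 4]]       # the two groups overlap at feature 2
print(overlapping_group_penalty(w, groups, weights=[1.0, 1.0]))
```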
85

Acceleration structures for real-time ray tracing: a comparative study / Estruturas de aceleração para Ray Tracing em tempo real: um estudo comparativo

Lira dos Santos, Artur 31 January 2011 (has links)
Made available in DSpace on 2014-06-12 (GMT). Previous issue date: 2011 / Conselho Nacional de Desenvolvimento Científico e Tecnológico / The computational power of current GPUs makes it possible to run complex, massively parallel algorithms, such as search algorithms over data structures specialized for real-time ray tracing, commonly known as acceleration structures. This dissertation describes in detail the study and implementation of sixteen different acceleration-structure traversal algorithms using NVIDIA's CUDA framework. The goal of this comparative study was to determine the advantages and disadvantages of each technique in terms of performance, memory consumption, degree of branch divergence, and scalability across multiple GPUs. A new acceleration structure, called the Sparse Box Grid, is also proposed, together with two new traversal algorithms aimed at improving performance. These algorithms achieve speedups of up to 2.5x compared with recent GPU traversal implementations. As a result, real-time rendering of scenes with millions of primitives is possible at a resolution of 1408x768.
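The CUDA traversal kernels studied in the dissertation are not reproduced here; as a rough, language-agnostic illustration of what every acceleration-structure traversal (uniform grid, BVH, kd-tree) evaluates at each node, below is a plain-Python slab test for ray/axis-aligned-box intersection. The scene values are arbitrary.

```python
import numpy as np

def ray_aabb_intersect(origin, direction, box_min, box_max):
    """Slab test: return (hit, t_near) for a ray against an axis-aligned box.

    This per-node test is the basic primitive that grid, BVH, and kd-tree
    traversals evaluate while walking an acceleration structure.
    """
    inv_d = 1.0 / direction               # assumes no exactly-zero components
    t0 = (box_min - origin) * inv_d
    t1 = (box_max - origin) * inv_d
    t_near = np.max(np.minimum(t0, t1))   # latest entry across the three slabs
    t_far = np.min(np.maximum(t0, t1))    # earliest exit
    return t_far >= max(t_near, 0.0), t_near

origin = np.array([0.0, 0.0, -5.0])
direction = np.array([0.05, 0.05, 1.0])   # not normalized; t is in direction units
box_min, box_max = np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0])
print(ray_aabb_intersect(origin, direction, box_min, box_max))
```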
86

Grassmannian Fusion Frames for Block Sparse Recovery and Its Application to Burst Error Correction

Mukund Sriram, N January 2013 (has links) (PDF)
Fusion frames and block sparse recovery are of interest in signal processing and communication applications. In these applications it is required that the fusion frame have some desirable properties. One such requirement is that the fusion frame be tight and that its subspaces form an optimal packing in a Grassmannian manifold; such fusion frames are called Grassmannian fusion frames. Grassmannian frames are known to be optimal dictionaries for sparse recovery, as they have minimum coherence. By analogy, Grassmannian fusion frames are potential candidates for optimal dictionaries in block sparse processing. This work studies fusion frames in finite-dimensional vector spaces, assuming a specific structure useful in block sparse signal processing. The main focus is the design of Grassmannian fusion frames and their implications for block sparse recovery. We consider burst error correction as an application of block sparsity and fusion frame concepts. We propose two new algebraic methods for designing Grassmannian fusion frames. The first method uses the Fourier matrix and difference sets to obtain a partial Fourier matrix that forms a Grassmannian fusion frame. This fusion frame has a specific structure, and its parameters are determined by the type of difference set used. The second method constructs Grassmannian fusion frames from Grassmannian frames that meet the Welch bound, using existing constructions of optimal Grassmannian frames. The method, while fairly general, requires that the dimension of the vector space be divisible by the dimension of the subspaces. A lower bound, an analog of the Welch bound, is derived for the block coherence of dictionaries, along with the conditions required to meet it. From these results we conclude that the constructed matrices are optimal for block sparse recovery from the block coherence viewpoint. There is a strong relation between sparse signal processing and error control coding, and burst errors are known to be block sparse in nature. We therefore attempt to solve the burst error correction problem using block sparse signal recovery methods. Using the constructed Grassmannian fusion frames as optimal dictionaries allows correction of the maximum possible number of errors when combined with reconstruction algorithms that exploit block sparsity. We also suggest a modification to improve the applicability of the technique and point out its relationship with a method that appeared previously in the literature. As an application example, we consider the use of the burst error correction technique for impulse noise cancellation in an OFDM system. Impulse noise is bursty in nature and severely degrades OFDM performance. The Grassmannian fusion frames constructed from the Fourier matrix and difference sets are ideal for this application, as they can be easily incorporated into the OFDM system.
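As a quick numerical illustration of the flavour of the first construction (frame-level only; the fusion-frame and block-coherence analysis of the thesis is not reproduced), the snippet below selects the rows of a 7x7 DFT matrix indexed by the (7, 3, 1) difference set {1, 2, 4} and checks that the resulting unit-norm columns are equiangular at the Welch bound.

```python
import numpy as np

n, D = 7, [1, 2, 4]                             # (7, 3, 1) difference set mod 7
F = np.exp(2j * np.pi * np.outer(D, np.arange(n)) / n) / np.sqrt(len(D))
G = np.abs(F.conj().T @ F)                      # magnitudes of pairwise column inner products
np.fill_diagonal(G, 0.0)
welch_bound = np.sqrt((n - len(D)) / (len(D) * (n - 1)))
print(round(G.max(), 4), round(welch_bound, 4))  # both ~0.4714: equiangular at the bound
```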
87

Sparse Multiclass And Multi-Label Classifier Design For Faster Inference

Bapat, Tanuja 12 1900 (has links) (PDF)
Many real-world problems, like hand-written digit recognition or semantic scene classification, are treated as multiclass or multi-label classification problems. Solutions to these problems using support vector machines (SVMs) are well studied in the literature. In this work, we focus on building sparse max-margin classifiers for multiclass and multi-label classification. Sparse representation of the resulting classifier is important from both the efficient-training and fast-inference viewpoints, especially when the training and test sets are large. Very few of the existing multiclass and multi-label classification algorithms give importance to directly controlling the sparsity of the designed classifiers, and these algorithms were not found to be scalable. Motivated by this, we propose new formulations for sparse multiclass and multi-label classifier design and give efficient algorithms to solve them. The formulation for sparse multi-label classification also incorporates prior knowledge of label correlations. In both cases, the classification model is designed using a common set of basis vectors across all the classes. These basis vectors are greedily added to an initially empty model to approximate the target function. The sparsity of the classifier can be controlled by a user-defined parameter, d_max, which indicates the maximum number of common basis vectors. The computational complexity of these algorithms for multiclass and multi-label classifier design is O(l k^2 d_max^2), where l is the number of training examples and k is the number of classes. The inference time for the proposed multiclass and multi-label classifiers is O(k d_max). Numerical experiments on various real-world benchmark datasets demonstrate that the proposed algorithms result in sparse classifiers that require fewer basis vectors than state-of-the-art algorithms to attain the same generalization performance. A very small value of d_max results in a significant reduction in inference time. Thus, the proposed algorithms provide useful alternatives to the existing algorithms for sparse multiclass and multi-label classifier design.
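To make the quoted O(k d_max) inference cost concrete, here is a hedged sketch of prediction under a shared-basis kernel model: the d_max kernel evaluations are computed once and reused by all k classes. The RBF kernel, dimensions, and random weights are assumptions for illustration; the greedy training procedure of the thesis is not shown.

```python
import numpy as np

def rbf(x, b, gamma=0.5):
    return np.exp(-gamma * np.sum((x - b) ** 2))

def predict(x, basis, alpha):
    """Score all k classes with a shared set of d_max basis vectors.

    basis: (d_max, d) shared basis vectors; alpha: (k, d_max) per-class weights.
    The kernels are evaluated once and reused by every class, which is what
    gives the O(k * d_max) inference cost quoted in the abstract.
    """
    kvec = np.array([rbf(x, b) for b in basis])   # d_max kernel evaluations
    scores = alpha @ kvec                         # shared across the k classes
    return int(np.argmax(scores))

rng = np.random.default_rng(2)
basis = rng.normal(size=(10, 4))      # d_max = 10 shared basis vectors
alpha = rng.normal(size=(3, 10))      # k = 3 classes
print(predict(rng.normal(size=4), basis, alpha))
```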
88

Graph Theory for the Discovery of Non-Parametric Audio Objects

Srinivasa, Christopher January 2011 (has links)
A novel framework based on cluster co-occurrence and graph theory for structure discovery is applied to audio to find new types of audio objects that enable the compression of an input signal. These new objects differ from those found in current object coding schemes, as their shape is not restricted by any a priori psychoacoustic knowledge. The framework is novel from an application perspective, as it marks the first time that graph theory is applied to audio, and with regard to theoretical developments, as it involves new extensions to unsupervised learning algorithms and frequent subgraph mining methods. Tests are performed using a corpus of audio files spanning a wide range of sounds. Results show that the framework discovers new types of audio objects that yield average overall and relative compression gains of 15.90% and 23.53%, respectively, while maintaining very good average audio quality with imperceptible changes.
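The frequent-subgraph mining stage is beyond a short snippet, but as a loose illustration of the cluster co-occurrence graph such a framework starts from, the sketch below counts how often pairs of cluster labels appear in the same time-frequency frame. The frame and cluster labels are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_graph(frame_clusters):
    """Build a weighted co-occurrence graph from per-frame cluster labels.

    frame_clusters: list of sets of cluster labels active in each frame.
    Edge weights count how often two clusters appear in the same frame;
    recurring subgraphs of such a graph are the kind of structure a
    frequent-subgraph miner would search for.
    """
    edges = Counter()
    for clusters in frame_clusters:
        for a, b in combinations(sorted(clusters), 2):
            edges[(a, b)] += 1
    return edges

frames = [{"c1", "c2"}, {"c1", "c2", "c5"}, {"c3", "c4"}, {"c1", "c2"}]
print(cooccurrence_graph(frames))
```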
89

Deconvolution of images in centimeter-band radio astronomy for the exploitation of new radio interferometers: characterization of non-thermal components in galaxy clusters / Déconvolution d'images en radioastronomie centimétrique pour l'exploitation des nouveaux interféromètres radio : caractérisation du milieu non thermique des amas de galaxies

Dabbech, Arwa 28 April 2015 (has links)
Within the framework of the preparation for the Square Kilometre Array (SKA), the world's largest radio telescope, new imaging challenges have to be met. The data acquired by SKA will have to be processed in real time because of their enormous rate. In addition, thanks to its unprecedented resolution and sensitivity, SKA images will have very high dynamic range over wide fields of view. Hence, there is an urgent need for new imaging techniques that are robust, efficient, and fully automated. The goal of this thesis is to develop a new technique to reconstruct a model image of the radio sky from the observations. The method has been designed to estimate images with high dynamic range, with particular attention to recovering faint extended emission that is usually completely buried in the PSF sidelobes of brighter sources and in the noise. We propose a new approach, based on sparse representations, called MORESANE. The radio sky is assumed to be a summation of sources, considered as atoms of an unknown synthesis dictionary; these atoms are learned using analysis priors from the observed image. Results obtained on realistic simulations show that MORESANE is very promising for the restoration of radio images; it outperforms the standard tools and is very competitive with recently proposed methods in the literature. MORESANE is also applied to simulated SKA1 observations of galaxy clusters with the aim of investigating the detectability of the intracluster non-thermal component. Our results indicate that these diffuse sources, characterized by very low surface brightness, will be observable with the SKA up to the epoch of massive cluster formation.
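MORESANE's analysis priors and learned dictionary are not reproduced here; as a generic baseline of the same family, the sketch below runs plain ISTA deconvolution of a "dirty" image with a known PSF and an l1 prior on the sky model. The Gaussian PSF, point-source sky, and parameter values are assumptions for illustration.

```python
import numpy as np

def ista_deconvolve(dirty, psf, lam=0.01, step=0.5, n_iters=100):
    """Generic ISTA for min_x 0.5*||psf * x - dirty||^2 + lam*||x||_1.

    Convolution is done in the Fourier domain with a centered PSF. This is
    only a baseline l1 deconvolution sketch, not MORESANE's analysis-prior
    scheme; with a unit-sum PSF the step size 0.5 is safely below 1/L.
    """
    P = np.fft.rfft2(np.fft.ifftshift(psf))
    X = np.zeros_like(dirty)
    for _ in range(n_iters):
        resid = np.fft.irfft2(P * np.fft.rfft2(X), s=dirty.shape) - dirty
        grad = np.fft.irfft2(np.conj(P) * np.fft.rfft2(resid), s=dirty.shape)
        X = X - step * grad
        X = np.sign(X) * np.maximum(np.abs(X) - step * lam, 0.0)   # soft-threshold
    return X

# toy example: two point sources blurred by a Gaussian PSF
n = 64
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
sky = np.zeros((n, n))
sky[20, 20] = 1.0
sky[40, 45] = 0.5
dirty = np.fft.irfft2(np.fft.rfft2(np.fft.ifftshift(psf)) * np.fft.rfft2(sky), s=sky.shape)
model = ista_deconvolve(dirty, psf)
print(float(model.max()))
```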
90

Bayesian Recovery of Clipped OFDM Signals: A Receiver-based Approach

Al-Rabah, Abdullatif R. 05 1900 (has links)
Recently, orthogonal frequency-division multiplexing (OFDM) has been adopted for high-speed wireless communications due to its robustness against multipath fading. However, one of the fundamental drawbacks of OFDM systems is their high peak-to-average power ratio (PAPR). Several techniques have been proposed for PAPR reduction, most of which require transmitter-based (pre-compensation) processing; receiver-based alternatives, on the other hand, would save power and reduce transmitter complexity. With this in mind, a possible approach is to limit the amplitude of the OFDM signal to a predetermined threshold, which is equivalent to adding a sparse clipping signal, and then to estimate this clipping signal at the receiver to recover the original signal. In this work, we propose a Bayesian receiver-based low-complexity clipping signal recovery method for PAPR reduction. The method is able to i) effectively reduce the PAPR via a simple clipping scheme at the transmitter side, ii) reconstruct the clipping signal at the receiver side with a Bayesian recovery algorithm that measures a subset of the subcarriers, iii) perform well in the absence of statistical information about the signal (e.g., clipping level) and the noise (e.g., noise variance), and iv) remain energy efficient owing to its low complexity. Specifically, the proposed recovery technique is implemented in a data-aided fashion: it collects clipping information by measuring reliable data subcarriers, thus making full use of the spectrum for data transmission without the need for tone reservation. The study is extended to discuss how to improve the recovery of the clipping signal by exploiting features of practical OFDM systems, namely oversampling and the presence of multiple receivers. Simulation results demonstrate the superiority of the proposed technique over other recovery algorithms. The overall objective is to show that the receiver-based Bayesian technique is an effective and practical alternative to state-of-the-art PAPR reduction techniques.
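As a small illustration of the transmitter-side clipping step described above (the Bayesian recovery at the receiver is not sketched), the snippet below generates a QPSK OFDM symbol, clips its magnitude at an arbitrary threshold, and shows that the difference between the clipped and original signals is the sparse clipping signal the receiver would estimate. The clipping ratio of 1.6 and the symbol length are assumptions.

```python
import numpy as np

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

rng = np.random.default_rng(3)
N = 256
symbols = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)  # QPSK
x = np.fft.ifft(symbols) * np.sqrt(N)                # time-domain OFDM symbol, unit average power

threshold = 1.6 * np.sqrt(np.mean(np.abs(x) ** 2))   # illustrative clipping level
clipped = np.where(np.abs(x) > threshold, threshold * x / np.abs(x), x)
c = clipped - x                                      # the sparse "clipping signal"
print(papr_db(x), papr_db(clipped), int(np.count_nonzero(np.abs(c) > 1e-12)))
```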
