  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Multi-color Fluorescence In-situ Hybridization (m-fish) Image Analysis Based On Sparse Representation Models

January 2015 (has links)
There are a variety of chromosomal abnormalities, such as translocation, duplication, deletion, insertion and inversion, which may cause severe diseases, e.g., cancers and birth defects. Multi-color fluorescence in-situ hybridization (M-FISH) is an imaging technique widely used for simultaneously detecting and visualizing these complex abnormalities in a single hybridization. In spite of the advancement of fluorescence microscopy for chromosomal abnormality detection, the quality of the fluorescence images is still limited, owing to spectral overlap, uneven intensity levels across multiple channels, background variations and intensity inhomogeneity within individual channels. It is therefore critical, but challenging, to distinguish the different types of chromosomes accurately in order to detect chromosomal abnormalities from M-FISH images. The main contribution of this dissertation is an M-FISH image analysis pipeline that takes full advantage of the spatial and spectral information in M-FISH imaging. In addition, novel image analysis approaches such as sparse representation are applied in this work. The pipeline starts with image preprocessing, which extracts the background to improve the quality of the raw images via a low-rank plus group lasso decomposition. Image segmentation is then performed by incorporating both spatial and spectral information through total variation (TV) and row-wise constraints. Finally, image classification is conducted by considering the structural information of neighboring pixels with a row-wise sparse representation model. 
In each step, new methods and sophisticated algorithms were developed and compared with several widely used methods. The results show that (1) the preprocessing model improves the quality of the raw images; (2) the segmentation model outperforms both the fuzzy c-means (FCM) and improved adaptive fuzzy c-means (IAFCM) models in terms of correct ratio and false rate; and (3) the classification model corrects misclassifications, improving the accuracy of chromosomal abnormality detection, especially for complex inter-chromosomal rearrangements. / 1 / Jingyao Li
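The low-rank plus sparse background extraction in the preprocessing step can be illustrated with a simplified robust-PCA style alternation: singular-value thresholding for the low-rank background and entrywise soft-thresholding for the sparse part. This is a generic sketch, not the group-lasso variant used in the dissertation, and the function name and parameters are illustrative:

```python
import numpy as np

def lowrank_sparse_split(Y, lam=0.1, n_iter=50):
    """Split Y into a low-rank part L (background) plus a sparse part S.

    Block coordinate descent on 0.5*||Y - L - S||_F^2 + lam*||L||_* + lam*||S||_1,
    a simplified stand-in for the low-rank plus group-lasso model in the text.
    """
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        # low-rank update: soft-threshold the singular values of the residual
        U, sv, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = U @ np.diag(np.maximum(sv - lam, 0.0)) @ Vt
        # sparse update: entrywise soft-threshold the remaining residual
        R = Y - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```

At a fixed point of the sparse update, every entry of the unexplained residual `Y - L - S` is bounded by the threshold `lam`, which is what makes the split well defined.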
2

Sparse Models For Multimodal Imaging And Omics Data Integration

January 2015 (has links)
1 / DONGDONG LIN
3

Real-time 3D visualization of organ deformations based on structured dictionary

Wang, Dan 11 July 2012 (has links)
Minimally invasive surgery (MIS) revolutionized the field of surgery with its shorter hospitalization times, lower complication rates, and ultimately reduced morbidity and mortality. However, one of the critical challenges preventing it from reaching its full potential is the restricted visualization offered by traditional monocular camera systems in the presence of tissue deformations. This dissertation aims to design a new approach that can provide surgeons with real-time 3D visualization of complete organ deformations during an MIS operation. The approach even allows the surgeon to see through the wall of an organ rather than just looking at its surface. The proposed design consists of two stages. The first, training stage identifies the deformation subspaces from a training data set in the transformed spherical harmonic domain, such that each surface can be sparsely represented in a structured dictionary with low dimensionality. This idea is based on our experimental discovery that the spherical harmonic coefficients of any organ surface lie in specific low-dimensional subspaces. The second, reconstruction stage reconstructs the complete deformations in real time using surface samples obtained with an optical device from a limited field of view while applying the structured dictionary. The sparse surface representation algorithm is also applied to ultrasound image enhancement and efficient surgical simulation. The former is achieved by fusing ultrasound samples with optical data under proper weighting strategies. The high speed of surgical simulation is obtained by decreasing the computational cost, exploiting the high compactness of the surface representation algorithm. To verify the proposed approaches, we first use computer models to demonstrate that the proposed approach matches the accuracy of complex mathematical modeling techniques. 
Then ex-vivo experiments are conducted on freshly excised porcine kidneys utilizing a 3D MRI machine, a 3D optical device and an ultrasound machine to further test the feasibility under practical settings. / text
4

Application of Sparse Representation to Radio Frequency Emitter Geolocation from an Airborne Antenna Array

Compaleo, Jacob January 2022 (has links)
No description available.
5

Distortion Robust Biometric Recognition

January 2018 (has links)
abstract: Information forensics and security have come a long way in just a few years thanks to the recent advances in biometric recognition. The main challenge remains a proper design of a biometric modality that can be resilient to unconstrained conditions, such as quality distortions. This work presents a solution to face and ear recognition under unconstrained visual variations, with a main focus on recognition in the presence of blur, occlusion and additive noise distortions. First, the dissertation addresses the problem of scene variations in the presence of blur, occlusion and additive noise distortions resulting from capture, processing and transmission. Despite their excellent performance, 'deep' methods are susceptible to visual distortions, which significantly reduce their performance. Sparse representations, on the other hand, have shown strong capabilities in handling problems such as occlusion and corruption. In this work, an augmented SRC (ASRC) framework is presented to improve the performance of the Sparse Representation Classifier (SRC) in the presence of blur, additive noise and block occlusion, while preserving its robustness to scene-dependent variations. Different feature types are considered in the performance evaluation, including raw image pixels, HoG and deep learning VGG-Face features. The proposed ASRC framework is shown to outperform the conventional SRC in terms of recognition accuracy, in addition to other existing sparse-based methods and blur-invariant methods at medium to high levels of distortion, particularly when used with discriminative features. In order to assess the quality of features in improving both the sparsity of the representation and the classification accuracy, a feature sparse coding and classification index (FSCCI) is proposed and used for feature ranking and selection within both the SRC and ASRC frameworks. 
The second part of the dissertation presents a method for unconstrained ear recognition using deep learning features. The unconstrained ear recognition is performed using transfer learning with deep neural networks (DNNs) as a feature extractor followed by a shallow classifier. Data augmentation is used to improve the recognition performance by augmenting the training dataset with image transformations. The recognition performance of the feature extraction models is compared with an ensemble of fine-tuned networks. The results show that, in the case where long training time is not desirable or a large amount of data is not available, the features from pre-trained DNNs can be used with a shallow classifier to give a comparable recognition accuracy to the fine-tuned networks. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2018
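The minimum-residual decision rule at the heart of the SRC mentioned above can be sketched as follows. For brevity the sparse code is computed with greedy Orthogonal Matching Pursuit rather than the l1 minimization SRC classically uses; the function names are illustrative, not from the dissertation:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse code of y over dictionary D."""
    resid, idx = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        # re-fit the coefficients on all selected atoms
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def src_classify(D, labels, y, k=5):
    """SRC rule: assign y to the class whose atoms best reconstruct it
    (minimum class-wise reconstruction residual)."""
    x = omp(D, y, k)
    best, best_r = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        r = np.linalg.norm(y - D @ xc)
        if r < best_r:
            best, best_r = c, r
    return best
```

The columns of `D` are training feature vectors, `labels` records the class of each column, and a test sample is labeled by whichever class explains it with the smallest residual.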
6

Compressed Sampling for High Frequency Receivers Applications

bi, xiaofei January 2011 (has links)
In the digital signal processing field, traditional signal sampling must satisfy the Shannon sampling theorem to recover a signal without distortion. In some practical applications, however, this becomes an obstacle, since storage and transmission costs grow dramatically with the sampling frequency. Therefore, reducing the number of samples required in analog-to-digital conversion (ADC) for wideband signals, and compressing large data volumes effectively, have become major subjects of study. Recently, a novel technique called "compressed sampling" (CS) has been proposed to solve this problem. It captures and represents compressible signals at a sampling rate significantly lower than the Nyquist rate.   This paper not only surveys the theory of compressed sampling but also simulates CS in Matlab. The error between the recovered signal and the original signal in simulation is around -200 dB. Attempts were also made to apply CS; the error between the recovered signal and the original one in the experiment is around -40 dB, which means CS was realized to a certain extent. Furthermore, some related applications and suggestions for further work are discussed.
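A minimal CS simulation along the lines surveyed above: a k-sparse signal is measured with a random Gaussian sensing matrix at well below the Nyquist rate, then recovered with ISTA, a basic solver for the l1-regularized least-squares problem. Dimensions and parameters are illustrative, not those of the thesis:

```python
import numpy as np

def ista_lasso(A, y, lam=0.01, n_iter=500):
    """Recover a sparse x from y = A @ x by iterative soft-thresholding (ISTA),
    i.e. proximal gradient descent on 0.5*||A x - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                            # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
y = A @ x_true                                  # m << n sub-Nyquist measurements
x_hat = ista_lasso(A, y)
```

With only 64 measurements of a length-256 signal, the 5-sparse vector is recovered to small relative error, which is the essential point of CS.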
7

Motor imagery classification using sparse representation of EEG signals

Saidi, Pouria 01 January 2015 (has links)
The human brain is unquestionably the most complex organ of the body, as it controls the body's movement and processes its senses. A healthy brain is able to generate responses to the signals it receives, and transmit messages to the body. Some neural disorders can impair the communication between the brain and the body, preventing the transmission of these messages. Brain Computer Interfaces (BCIs) are devices that hold immense potential to assist patients with such disorders by analyzing brain signals, translating and classifying various brain responses, and relaying them to external devices and potentially back to the body. Classifying motor imagery brain signals, where the signals are obtained based on imagined movement of the limbs, is a major, yet very challenging, step in developing BCIs. Of primary importance is to use less data and computationally efficient algorithms to support real-time BCI. To this end, in this thesis we explore and develop algorithms that exploit the sparse characteristics of EEGs to classify these signals. Different feature vectors are extracted from EEG trials recorded by electrodes placed on the scalp. Here, features from a small spatial region are approximated by a sparse linear combination of a few atoms from a multi-class dictionary constructed from the features of the EEG training signals for each class. This is used to classify the signals based on the pattern of their sparse representation using a minimum-residual decision rule. We first attempt to use all the available electrodes to verify the effectiveness of the proposed methods. To support real-time BCI, the electrodes are then reduced to those near the sensorimotor cortex, which are believed to be crucial for motor preparation and imagination. In a second approach, we try to incorporate the effect of spatial correlation across the neighboring electrodes near the sensorimotor cortex. 
To this end, instead of considering one feature vector at a time, we use a collection of feature vectors simultaneously to find the joint sparse representation of these vectors. Although we were not able to see much improvement with respect to the first approach, we envision that such improvements could be achieved using more refined models that can be subject of future works. The performance of the proposed approaches is evaluated using different features, including wavelet coefficients, energy of the signals in different frequency sub-bands, and also entropy of the signals. The results obtained from real data demonstrate that the combination of energy and entropy features enable efficient classification of motor imagery EEG trials related to hand and foot movements. This underscores the relevance of the energies and their distribution in different frequency sub-bands for classifying movement-specific EEG patterns in agreement with the existence of different levels within the alpha band. The proposed approach is also shown to outperform the state-of-the-art algorithm that uses feature vectors obtained from energies of multiple spatial projections.
8

Loughborough University Spontaneous Expression Database and baseline results for automatic emotion recognition

Aina, Segun January 2015 (has links)
The study of facial expressions in humans dates back to the 19th century, and the study of the emotions that these facial expressions portray dates back even further. It is a natural part of non-verbal communication for humans to pass messages using facial expressions, either consciously or subconsciously; it is equally routine for other humans to recognize these facial expressions and understand or deduce the underlying emotions which they represent. Over two decades ago, following technological advances, particularly in the area of image processing, research began into the use of machines for the recognition of facial expressions from images with the aim of inferring the corresponding emotion. Given a previously unknown test sample, the supervised learning problem is to accurately determine the facial expression class to which the test sample belongs, using the knowledge of the known class memberships of each image from a set of training images. The solution to this problem, building an effective classifier to recognize the facial expression, hinges on the availability of representative training data. To date, much of the research in the area of Facial Expression Recognition (FER) is still based on posed (acted) facial expression databases, which are often exaggerated and therefore not representative of real-life affective displays; as such there is a need for more publicly accessible spontaneous databases that are well labelled. This thesis therefore reports on the development of the newly collected Loughborough University Spontaneous Expression Database (LUSED), designed to bolster the development of new recognition systems and to provide a benchmark for researchers to compare results with more natural expression classes than most existing databases. To collect the database, an experiment was set up where volunteers were discreetly videotaped while they watched a selection of emotion-inducing video clips. 
The utility of the new LUSED dataset is validated using both traditional and more recent pattern recognition techniques. (1) Baseline results are presented using the combination of Principal Component Analysis (PCA), Fisher Linear Discriminant Analysis (FLDA) and their kernel variants, Kernel Principal Component Analysis (KPCA) and Kernel Fisher Discriminant Analysis (KFDA), with a Nearest Neighbour-based classifier. These results are compared to the performance on an existing natural expression database, the Natural Visible and Infrared Expression (NVIE) database. A scheme for the recognition of encrypted facial expression images is also presented. (2) Benchmark results are presented by combining PCA, FLDA, KPCA and KFDA with a Sparse Representation-based Classifier (SRC). A maximum accuracy of 68% was obtained recognizing five expression classes, which compares favourably with the known maximum for a natural database: around 70% (from recognizing only three classes) obtained on NVIE.
9

Transfer Learning for Image Classification

Lu, Ying 09 November 2017 (has links)
When learning a classification model for a new target domain with only a small amount of training samples, brute-force application of machine learning algorithms generally leads to over-fitted classifiers with poor generalization skills. On the other hand, collecting a sufficient number of manually labeled training samples may prove very expensive. Transfer Learning methods aim to solve this kind of problem by transferring knowledge from a related source domain, which has much more data, to help classification in the target domain. Depending on the assumptions about the target domain and source domain, transfer learning can be categorized into three categories: Inductive Transfer Learning (ITL), Transductive Transfer Learning (Domain Adaptation) and Unsupervised Transfer Learning. We focus on the first, which assumes that the target task and source task are different but related. More specifically, we assume that both the target task and the source task are classification tasks, while the target categories and source categories are different but related. We propose two different methods to approach this ITL problem. In the first work we propose a new discriminative transfer learning method, namely DTL, combining a series of hypotheses made by both the model learned with target training samples and the additional models learned with source category samples. Specifically, we use the sparse reconstruction residual as a basic discriminant, and enhance its discriminative power by comparing two residuals from a positive and a negative dictionary. On this basis, we make use of similarities and dissimilarities by choosing both positively correlated and negatively correlated source categories to form additional dictionaries. 
A new Wilcoxon-Mann-Whitney statistic based cost function is proposed to choose the additional dictionaries with unbalanced training data. Also, two parallel boosting processes are applied to both the positive and negative data distributions to further improve classifier performance. On two different image classification databases, the proposed DTL consistently outperforms other state-of-the-art transfer learning methods, while at the same time maintaining very efficient runtime. In the second work we combine the power of Optimal Transport and Deep Neural Networks to tackle the ITL problem. Specifically, we propose a novel method to jointly fine-tune a Deep Neural Network with source data and target data. By adding an Optimal Transport loss (OT loss) between source and target classifier predictions as a constraint on the source classifier, the proposed Joint Transfer Learning Network (JTLN) can effectively learn useful knowledge for target classification from source data. Furthermore, by using different kinds of metrics as the cost matrix for the OT loss, JTLN can incorporate different prior knowledge about the relatedness between target categories and source categories. We carried out experiments with JTLN based on Alexnet on image classification datasets, and the results verify the effectiveness of the proposed JTLN in comparison with standard consecutive fine-tuning. To the best of our knowledge, the proposed JTLN is the first work to tackle ITL with Deep Neural Networks while incorporating prior knowledge on the relatedness between target and source categories. This Joint Transfer Learning with OT loss is general and can also be applied to other kinds of neural networks.
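An OT loss like the one described above is usually made differentiable and cheap via entropic regularization (Sinkhorn iterations). The sketch below is a generic numpy stand-in for such a loss between two prediction histograms, not necessarily the exact formulation used in JTLN:

```python
import numpy as np

def sinkhorn_ot(a, b, C, eps=0.1, n_iter=200):
    """Entropy-regularized OT cost between histograms a and b under cost matrix C,
    computed with Sinkhorn-Knopp scaling iterations."""
    K = np.exp(-C / eps)                 # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # alternate scaling to match marginal b
        u = a / (K @ v)                  # ...and marginal a
    P = np.diag(u) @ K @ np.diag(v)      # approximate optimal transport plan
    return float(np.sum(P * C))          # transport cost <P, C>
```

When the two histograms coincide, mass stays put and the cost is near zero; moving all mass between bins with unit cost yields a cost near one, which is the behaviour a transfer-learning constraint exploits.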
10

Kernelized Supervised Dictionary Learning

Jabbarzadeh Gangeh, Mehrdad 24 April 2013 (has links)
The representation of a signal using a learned dictionary, instead of predefined operators such as wavelets, has led to state-of-the-art results in various applications such as denoising, texture analysis, and face recognition. The area of dictionary learning is closely associated with sparse representation, meaning that the signal is represented using a few atoms of the dictionary. Despite recent advances in the computation of a dictionary using fast algorithms such as K-SVD, online learning, and cyclic coordinate descent, which make computing a dictionary from millions of data samples feasible, the dictionary is mainly computed using unsupervised approaches such as k-means. These approaches learn the dictionary by minimizing the reconstruction error without taking the category information into account, which is not optimal for classification tasks. In this thesis, we propose a supervised dictionary learning (SDL) approach by incorporating information on class labels into the learning of the dictionary. To this end, we propose to learn the dictionary in a space where the dependency between the signals and their corresponding labels is maximized. To maximize this dependency, the recently introduced Hilbert-Schmidt independence criterion (HSIC) is used. The learned dictionary is compact and has closed form; the proposed approach is fast. We show that it outperforms other unsupervised and supervised dictionary learning approaches in the literature on real-world data. Moreover, the proposed SDL approach has the key advantage that it can easily be kernelized, in particular by incorporating a data-driven kernel, such as a compression-based kernel, into the formulation. In this thesis, we propose a novel compression-based (dis)similarity measure. The proposed measure utilizes a 2D MPEG-1 encoder, which takes into consideration the spatial locality and connectivity of pixels in the images. 
The proposed formulation has been carefully designed based on MPEG encoder functionality. By design, it solely uses P-frame coding to find the (dis)similarity among patches/images. We show that the proposed measure works properly on both small and large patch sizes on textures. Experimental results show that incorporating the proposed measure as a kernel into our SDL significantly improves the performance of supervised pixel-based texture classification on Brodatz and outdoor images compared to other compression-based dissimilarity measures, as well as state-of-the-art SDL methods. It also improves the computation speed by about 40% compared to its closest rival. Finally, we extended the proposed SDL to multiview learning, where more than one representation is available for a dataset. We propose two different multiview approaches: one fusing the feature sets in the original space and then learning the dictionary and sparse coefficients on the fused set; and the other learning one dictionary and the corresponding coefficients in each view separately, and then fusing the representations in the space of the learned dictionaries. We show that the proposed multiview approaches benefit from the complementary information in multiple views, and investigate the relative performance of these approaches in the application of emotion recognition.
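The empirical HSIC on which the dependency maximization above relies can be computed directly from two kernel (Gram) matrices over the same samples. A minimal sketch, with an RBF kernel whose bandwidth is an illustrative choice, not one from the thesis:

```python
import numpy as np

def hsic(K, L):
    """Empirical Hilbert-Schmidt Independence Criterion from two Gram
    matrices K and L computed over the same n samples."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

def rbf_gram(X, gamma=1.0):
    """RBF kernel Gram matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)
```

HSIC is non-negative for positive semi-definite kernels and is larger when the two views are statistically dependent, which is what makes it usable as a supervision signal for dictionary learning.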
