  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Real-time 3D visualization of organ deformations based on structured dictionary

Wang, Dan 11 July 2012 (has links)
Minimally invasive surgery (MIS) revolutionized the field of surgery with its shorter hospitalization times, lower complication rates, and ultimately reduced morbidity and mortality. However, one of the critical challenges preventing it from reaching its full potential is the restricted visualization offered by traditional monocular camera systems in the presence of tissue deformations. This dissertation aims to design a new approach that provides surgeons with real-time 3D visualization of complete organ deformations during an MIS operation. The new approach even allows the surgeon to see through the wall of an organ rather than just looking at its surface. The proposed design consists of two stages. The first, training stage identifies the deformation subspaces from a training data set in the transformed spherical harmonic domain, such that each surface can be sparsely represented in a structured dictionary with low dimensionality. This idea is based on our experimental discovery that the spherical harmonic coefficients of any organ surface lie in specific low-dimensional subspaces. The second, reconstruction stage reconstructs the complete deformations in real time by applying the structured dictionary to surface samples obtained with an optical device from a limited field of view. The sparse surface representation algorithm is also applied to ultrasound image enhancement and efficient surgical simulation. The former is achieved by fusing ultrasound samples with optical data under proper weighting strategies; the latter achieves high simulation speed by decreasing the computational cost, exploiting the high compactness of the surface representation. To verify the proposed approaches, we first use computer models to demonstrate that the proposed approach matches the accuracy of complex mathematical modeling techniques.
Ex-vivo experiments are then conducted on freshly excised porcine kidneys, using a 3D MRI machine, a 3D optical device, and an ultrasound machine, to further test feasibility under practical settings.
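The reconstruction stage described above can be sketched in a few lines: given a basis for the low-dimensional subspace in which the surfaces live, the complete surface is extrapolated from samples in a limited field of view by a least-squares fit of the subspace coefficients. This is a toy stand-in, assuming a random basis and a synthetic surface rather than the dissertation's spherical harmonic dictionary; all names are illustrative.

```python
import numpy as np

# Hedged sketch: recover a full "surface" vector from partial samples,
# assuming its coefficients lie in a known low-dimensional subspace (B).
# B, n_full, and idx are illustrative stand-ins, not from the dissertation.

rng = np.random.default_rng(0)
n_full, k = 200, 5                    # full surface size, subspace dimension
B = rng.standard_normal((n_full, k))  # stand-in for the learned subspace basis
x_true = B @ rng.standard_normal(k)   # a surface that truly lies in the subspace

idx = rng.choice(n_full, size=40, replace=False)  # limited field of view
samples = x_true[idx]

# Least-squares fit of the subspace coefficients to the visible samples,
# then extrapolation to the complete surface.
coef, *_ = np.linalg.lstsq(B[idx], samples, rcond=None)
x_rec = B @ coef

print(np.allclose(x_rec, x_true, atol=1e-8))  # exact in the noiseless case
```

With 40 samples and only 5 subspace coefficients, the fit is overdetermined, which is why a limited field of view suffices in the noiseless case.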
2

Development of statistical methods for DNA copy number analysis in cancerology

Pierre-Jean, Morgane 02 December 2016 (has links)
Genomic data from DNA microarray or sequencing technologies have two major characteristics: their high dimension (the number of markers exceeds the number of observations by several orders of magnitude) and their strong structure (notably the dependence between markers). Taking this structure into account is a key challenge for developing efficient high-dimensional methods. This thesis focuses on data with a strong structure along the genome: DNA copy number data in tumor samples, but also genotype data. The work covers statistical model development, software implementation, and the application of the developed methods to real data sets. In particular, we explored segmentation models and dictionary learning methods. All software implementations of these methods are freely available as R packages.
3

Fast methods for Magnetic Resonance Angiography (MRA)

Vafadar, Bahareh January 2014 (has links)
Magnetic resonance imaging (MRI) is a highly flexible and non-invasive medical imaging modality based on the concept of nuclear magnetic resonance (NMR). Compared to other imaging techniques, a major limitation of MRI is its relatively long acquisition time, which makes MRI difficult to apply in time-sensitive clinical applications. Acquisition of MRA images with a spatial resolution close to conventional digital subtraction angiography is feasible, but at the expense of reduced temporal resolution. Parallel MRI employs multiple receiver coils to speed up the acquisition by reducing the number of data points collected. However, the images reconstructed from undersampled data sets often suffer from various types of degradation and artifacts. In contrast-enhanced magnetic resonance imaging, information is effectively measured in 3D k-space one line at a time, so the 3D data acquisition extends over several minutes even with parallel receiver coils. This limits the assessment of high-flow lesions and some vascular tumors in patients. To improve spatio-temporal resolution in contrast-enhanced magnetic resonance angiography (CE-MRA), this thesis considers incorporating prior knowledge into the image recovery process. There are five contributions in this thesis. The first is a modification of generalized unaliasing using support and sensitivity encoding (GUISE). GUISE was introduced by this group to explore incorporating prior knowledge of the image to be reconstructed in parallel MRI. In order to provide improved time-resolved MRA image sequences of the blood vessels, the GUISE method requires an accurate segmentation of the relatively noisy 3D data set into vessel and background.
The method originally used to define the effective region of support was primitive and produced a segmented image with many false detections, because of the effect of overlying structures and the relatively noisy background in the images. We propose using the statistical principle employed in the modified maximum intensity projection (MIP) to achieve better 3D segmentation and optimal visualization of blood vessels. Compared with the previous region of support (ROS), the new one enables more highly accelerated MRA reconstructions, due to the decreased volume of the ROS, and leads to less computationally expensive reconstruction. In the second contribution, we demonstrate the impact of imposing the Karhunen-Loeve transform (KLT) basis on the temporal changes, based on the prior expectation of how contrast concentration changes with time. Compared with other transforms, the KLT of the temporal variation achieves a better contrast-to-noise ratio (CNR). By incorporating a data-ordering step into compressed sensing (CS), prior estimate based compressed sensing (PECS) had previously shown improved image quality when reconstructing parallel MR images; however, that method requires a prior estimate of the image to be available. The third contribution is a singular value decomposition (SVD) modification of the PECS algorithm (SPECS), which explores ways of using the data-ordering step without requiring a prior estimate. By employing the SVD as the sparsifying transform in the CS algorithm, the recovered image is used to derive the data ordering for PECS. The preliminary results outperform the PECS results. The fourth contribution is a novel approach for training a dictionary for sparse recovery in CE-MRA. The experimental results demonstrate improved reconstructions on clinical undersampled dynamic images.
A recently developed method exploits the structure of the signal in its sparse representation. Group sparse compressed sensing (GSCS) allows the efficient reconstruction of signals whose support is contained in the union of a small number of groups (sets) drawn from a collection of pre-defined disjoint groups. Exploiting CS applications in dynamic MR imaging, a group sparse method is introduced for our contrast-enhanced data set. Instead of incorporating a data ordering derived from prior information, pre-defined sparsity patterns are used in the PECS recovery algorithm, resulting in a suppression of noise in the reconstruction.
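The group-sparse idea above can be illustrated with a minimal sketch: when the support of the coefficient vector is known to be the union of a few pre-defined disjoint groups, recovery can proceed by scoring each group against the measurements and re-solving on the selected groups. This is a simplified stand-in under synthetic data, not the GSCS algorithm from the thesis; the dictionary, group layout, and two-group selection rule are assumptions for illustration.

```python
import numpy as np

# Toy group-sparse recovery: the support is the union of 2 of 10
# pre-defined disjoint groups. Score groups by correlation energy,
# keep the strongest, and re-fit by least squares on that support.

rng = np.random.default_rng(1)
m, n, g = 200, 120, 12                # measurements, coefficients, group size
D = rng.standard_normal((m, n))
groups = [np.arange(i, i + g) for i in range(0, n, g)]  # 10 disjoint groups

a_true = np.zeros(n)
a_true[groups[2]] = 1.0               # support = union of groups 2 and 7
a_true[groups[7]] = 1.0
y = D @ a_true

scores = [np.linalg.norm(D[:, grp].T @ y) for grp in groups]
keep = np.argsort(scores)[-2:]        # the 2 most energetic groups
support = np.concatenate([groups[i] for i in keep])

a_hat = np.zeros(n)
a_hat[support], *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
print(sorted(int(i) for i in keep))   # selected group indices
```

Selecting whole groups rather than individual atoms is what distinguishes this from plain sparse recovery: the pre-defined pattern acts as the prior, much as the pre-defined sparsity patterns replace data ordering in the text above.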
4

Exploiting Sparsity and Dictionary Learning to Efficiently Classify Materials in Hyperspectral Imagery

Pound, Andrew E. 01 May 2014 (has links)
Hyperspectral imaging (HSI) produces spatial images whose pixels, instead of consisting of three colors, consist of hundreds of spectral measurements. Because there are so many measurements per pixel, analysis of HSI is difficult. Frequently, standard techniques are used to make analysis more tractable by representing the HSI data in a different manner. This research explores the utility of representing HSI data in a learned dictionary basis for the express purpose of material identification and classification. Multiclass classification is performed on the transformed data using the RandomForests algorithm, and performance results are reported. In addition to classification, single-material detection is also considered. The performance of commonly used detection algorithms is demonstrated both on raw radiance pixels and on HSI represented in dictionary-learned bases. Comparison results indicate that detection on dictionary-learned sparse representations performs as well as detection on radiance. In addition, a different method of performing detection, capitalizing on dictionary learning, is established, and performance comparisons show gains over traditional detection methods.
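One simple way dictionary representations support material classification is a residual rule: give each material class its own dictionary and assign a pixel spectrum to the class whose dictionary reconstructs it best. The thesis pairs learned-dictionary representations with RandomForests; the per-class residual rule below is a hedged, simpler stand-in, with random stand-in dictionaries and synthetic spectra.

```python
import numpy as np

# Residual-based material classification sketch: each of 3 "materials"
# has a stand-in dictionary; a pixel goes to the class with the smallest
# least-squares reconstruction residual.

rng = np.random.default_rng(2)
bands, atoms = 50, 6                 # spectral bands per pixel, atoms per class
dicts = [rng.standard_normal((bands, atoms)) for _ in range(3)]

def classify(pixel):
    # Residual of the best least-squares fit under each class dictionary.
    residuals = []
    for D in dicts:
        coef, *_ = np.linalg.lstsq(D, pixel, rcond=None)
        residuals.append(np.linalg.norm(pixel - D @ coef))
    return int(np.argmin(residuals))

# A pixel synthesized from class 1's dictionary should come back as class 1.
pixel = dicts[1] @ rng.standard_normal(atoms)
print(classify(pixel))  # → 1
```

In practice the dictionaries would be learned from labeled training spectra, and the residuals (or the sparse codes themselves) could instead be fed to a classifier such as RandomForests, as the abstract describes.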
5

Unconstrained Periocular Face Recognition: From Reconstructive Dictionary Learning to Generative Deep Learning and Beyond

Juefei-Xu, Felix 01 April 2018 (has links)
Many real-world face recognition tasks operate under unconstrained conditions such as off-angle pose variations, illumination variations, facial occlusion, and facial expression. In this work, we focus on real-world scenarios where only the periocular region of a face is visible, such as in the ISIS case. In Part I of the dissertation, we showcase face recognition based on the periocular region, which we call periocular face recognition. We demonstrate that matching the periocular region directly is more robust than matching the full face for age-tolerant, expression-tolerant, and pose-tolerant face recognition, and that the region contains more cues for determining the gender of a subject. We study direct periocular matching comprehensively and systematically using both shallow and deep learning methods. Building on this, in Parts II and III of the dissertation, we explore an indirect way of carrying out periocular face recognition: periocular-based full face hallucination. The motivation is to capitalize on powerful commercial face matchers and deep learning based face recognition engines, which are all trained on large-scale full face images. Re-training such systems for a specific facial region, such as the periocular region, has low reproducibility and feasibility, due to the non-open-source nature of commercial face matchers as well as the amount of training data and computational power required by deep learning based models.
We carry out periocular-based full face hallucination using two proposed reconstructive dictionary learning methods: the dimensionally weighted K-SVD (DW-KSVD) dictionary learning approach, and its kernel feature space counterpart, which uses the Fastfood kernel expansion approximation to reconstruct high-fidelity full face images from the periocular region. We also propose two generative deep learning approaches that build upon deep convolutional generative adversarial networks (DCGAN) to generate the full face from periocular observations: the Gang of GANs (GoGAN) method, and the discriminant nonlinear many-to-one generative adversarial networks (DNMM-GAN), with applications such as generative open-set landmark-free frontalization (Golf) for faces and universal face optimization (UFO), which tackle an even broader set of problems than periocular-based full face hallucination. Throughout Parts I-III, we study how to handle challenging real-world scenarios such as unconstrained pose variations, unconstrained illumination conditions, and unconstrained low resolution of the periocular and facial images. Together, these aim to achieve unconstrained periocular face recognition through both direct periocular face matching and indirect periocular-based full face hallucination. In the final Part IV of the dissertation, we go beyond and explore several new deep learning methods that are statistically efficient for general-purpose image recognition: local binary convolutional neural networks (LBCNN), perturbative neural networks (PNN), and polynomial convolutional neural networks (PolyCNN).
6

Adaptive sparse coding and dictionary selection

Yaghoobi Vaighan, Mehrdad January 2010 (has links)
Sparse coding is the approximation/representation of signals with a minimum number of coefficients using an overcomplete set of elementary functions. This kind of approximation/representation has found numerous applications in source separation, denoising, coding, and compressed sensing. This thesis investigates the adaptation of the sparse approximation framework to the signal coding problem. Open problems include the selection of appropriate models and their orders, coefficient quantization, and the sparse approximation method itself. Some of these questions are addressed in this thesis and novel methods are developed. Because almost all recent communication and storage systems are digital, an easy method for computing quantized sparse approximations is introduced in the first part. The model selection problem is investigated next. The linear model can be adapted to better fit a given signal class, and it can also be designed based on a priori information about the model. Two novel dictionary selection methods are presented separately in the second part of the thesis. The proposed model adaptation algorithm, called Dictionary Learning with the Majorization Method (DLMM), is much more general than current methods. This generality allows it to be used with different constraints on the model. In particular, two important cases are considered in this thesis for the first time: Parsimonious Dictionary Learning (PDL) and Compressible Dictionary Learning (CDL). When the generative model order is not given, PDL not only adapts the dictionary to the given class of signals but also reduces the model order redundancies. When a fast dictionary is needed, the CDL framework helps us find a dictionary adapted to the given signal class without greatly increasing the computational cost. Sometimes a priori information about the linear generative model is given in the form of a parametric function.
Parametric Dictionary Design (PDD) generates a suitable dictionary for sparse coding using the parametric function. Essentially, PDD finds a parametric dictionary with minimal dictionary coherence, which has been shown to be suitable for sparse approximation and exact sparse recovery. Theoretical analyses are accompanied by experiments that validate them. This research was primarily aimed at audio applications, as audio can be shown to have sparse structure; therefore, most of the experiments use audio signals.
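The core operation the thesis builds on, sparse approximation over an overcomplete dictionary, can be sketched with orthogonal matching pursuit (OMP), a standard greedy method: repeatedly pick the atom most correlated with the residual and re-fit by least squares. This is a minimal illustrative implementation on synthetic data, not the thesis's own algorithms; the dictionary size and sparsity level are assumptions.

```python
import numpy as np

def omp(D, y, n_atoms):
    """Greedy sparse coding: select the atom most correlated with the
    residual, refit the selected atoms by least squares, repeat."""
    support, residual = [], y.copy()
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms (overcomplete set)
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [2.0, -1.5, 1.0]
y = D @ x_true                          # signal built from 3 atoms

x_hat = omp(D, y, n_atoms=3)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

Because the refit makes the residual orthogonal to every selected atom, no atom is chosen twice, and for well-spread random dictionaries a 3-sparse signal like this is typically recovered exactly.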
7

Novel dictionary learning algorithm for accelerating multi-dimensional MRI applications

Bhave, Sampada Vasant 01 December 2016 (has links)
The clinical utility of multi-dimensional MRI applications like multi-parameter mapping and 3D dynamic lung imaging is limited by long acquisition times. Quantification of multiple tissue MRI parameters has been shown to be useful for the early detection and diagnosis of various neurological diseases and psychiatric disorders, and it also provides useful information about disease progression and treatment efficacy. Dynamic lung imaging enables the diagnosis of abnormalities in respiratory mechanics in dyspnea and of regional lung function in pulmonary diseases like chronic obstructive pulmonary disease (COPD) and asthma. However, the need to acquire multiple contrast-weighted images, as in multi-parameter mapping, or multiple time points, as in pulmonary imaging, increases the scan time considerably and makes these applications less practical in the clinical setting. To achieve reasonable scan times, there are often tradeoffs between SNR and resolution. Since most MR images are sparse in a known transform domain, they can be recovered from fewer samples. Several compressed sensing schemes have been proposed that exploit the sparsity of the signal in pre-determined transform domains (e.g., the Fourier domain) or exploit the low-rank structure of the data. However, these methods perform sub-optimally in the presence of inter-frame motion, since the pre-determined dictionary does not account for the motion and the rank of the data is considerably higher. These methods rely on a two-step approach: they first estimate the dictionary from low-resolution data, and they then estimate the coefficients by fitting the measured data to the signal model in terms of these basis functions. The main focus of this thesis is accelerating multi-parameter mapping and 3D dynamic lung imaging to achieve the desired volume coverage and spatio-temporal resolution.
We propose a novel dictionary learning framework, the blind compressed sensing (BCS) scheme, to recover the underlying data from undersampled measurements; the underlying signal is represented as a sparse linear combination of basis functions from a learned dictionary. We also provide an efficient implementation using a variable splitting technique that reduces the computational complexity by up to 15 fold. In both multi-parameter mapping and 3D dynamic lung imaging, comparisons of the BCS scheme with other schemes indicate superior performance, as it provides a richer representation of the data. The reconstructions from the BCS scheme yield highly accurate parameter maps in parameter imaging, and diagnostically relevant image series for characterizing respiratory mechanics in pulmonary imaging.
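The "blind" part of blind compressed sensing means the dictionary and the coefficients are estimated jointly from the undersampled data itself. A toy version of this joint estimation can be sketched as an alternation of masked least-squares updates on a low-rank factorization; this is a hedged stand-in under synthetic data and random undersampling, with the thesis's sparsity penalty and variable splitting algorithm omitted for brevity.

```python
import numpy as np

# Toy joint estimation in the spirit of blind compressed sensing:
# alternately update a small dictionary U and coefficients V so that
# U @ V matches the observed (masked) entries of a dynamic data matrix.

rng = np.random.default_rng(4)
n_vox, n_frames, r = 60, 40, 3
U_true = rng.standard_normal((n_vox, r))
V_true = rng.standard_normal((r, n_frames))
X = U_true @ V_true                       # fully sampled "dynamic" data
mask = rng.random(X.shape) < 0.5          # keep ~50% of the measurements
Y = np.where(mask, X, 0.0)

U = rng.standard_normal((n_vox, r))
for _ in range(200):
    # Fix U: solve each frame's coefficients over its observed voxels.
    V = np.column_stack([
        np.linalg.lstsq(U[mask[:, j]], Y[mask[:, j], j], rcond=None)[0]
        for j in range(n_frames)])
    # Fix V: solve each voxel's dictionary row over its observed frames.
    U = np.vstack([
        np.linalg.lstsq(V[:, mask[i]].T, Y[i, mask[i]], rcond=None)[0]
        for i in range(n_vox)])

rel_err = np.linalg.norm(U @ V - X) / np.linalg.norm(X)
print(rel_err < 1e-2)
```

Even with half the entries missing, the alternation typically recovers the full matrix here because the model has far fewer degrees of freedom than there are observations; the actual BCS scheme adds a sparsity prior on the coefficients and an efficient variable splitting solver.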
8

Novel adaptive reconstruction schemes for accelerated myocardial perfusion magnetic resonance imaging

Lingala, Sajan Goud 01 December 2013 (has links)
Coronary artery disease (CAD) is one of the leading causes of death in the world. In the United States alone, it is estimated that approximately every 25 seconds a new CAD event occurs, and approximately every minute someone dies of one. Detecting CAD in its early stages is critical to reducing mortality rates. Magnetic resonance imaging of myocardial perfusion (MR-MPI) has received significant attention over the last decade due to its ability to provide a unique view of the microcirculatory blood flow in the myocardial tissue through the coronary vascular network. The ability of MR-MPI to detect changes in microcirculation during the early stages of ischemic events makes it a useful tool for identifying myocardial tissue that is alive but at risk of dying. However, this technique is not yet fully established clinically due to fundamental limitations imposed by MRI device physics. The limitations of current MRI schemes often make it challenging to simultaneously achieve high spatio-temporal resolution, sufficient spatial coverage, and good image quality in myocardial perfusion MRI. Furthermore, the acquisitions are typically set up to acquire images during breath holding, which often results in motion artifacts due to improper breath-hold patterns. This dissertation develops novel image reconstruction methods, in conjunction with non-Cartesian sampling, for the reconstruction of dynamic MRI data from highly accelerated / under-sampled Fourier measurements. The reconstruction methods are based on adaptive signal models that represent the dynamic data using few model coefficients. Three novel adaptive reconstruction methods are developed and validated: (a) low rank and sparsity based modeling, (b) blind compressed sensing, and (c) motion compensated compressed sensing. The developed methods are applicable to a wide range of dynamic imaging problems.
In the context of MR-MPI, this dissertation demonstrates the feasibility of using the developed methods to enable free-breathing myocardial perfusion MRI acquisitions with high spatio-temporal resolution (< 2 mm × 2 mm, 1 heartbeat) and slice coverage (up to 8 slices).
9

Cardiac motion estimation in ultrasound images using a sparse representation and dictionary learning

Ouzir, Nora 16 October 2018 (has links)
Cardiovascular diseases have become a major healthcare issue, and improving their diagnosis and analysis has thus become a primary concern in cardiology. The heart is a moving organ that undergoes complex deformations, so the quantification of cardiac motion from medical images, particularly ultrasound, is a key part of the techniques used for diagnosis in clinical practice. Significant research efforts have therefore been directed toward developing new cardiac motion estimation methods that improve the quality and accuracy of the estimated motions. However, these methods still face many challenges due to the complexity of cardiac motion and the quality of ultrasound images. Recently, learning-based techniques have received growing interest in the field of image processing. More specifically, sparse representations and dictionary learning strategies have shown their efficiency in regularizing different ill-posed inverse problems. This thesis investigates the benefits that such sparsity- and learning-based techniques can bring to cardiac motion estimation. Three main contributions are presented, addressing different aspects and challenges that arise in echocardiography. Firstly, a method for cardiac motion estimation using a sparsity-based regularization is introduced. The motion estimation problem is formulated as an energy minimization whose data fidelity term is built using the assumption that the images are corrupted by multiplicative Rayleigh noise. In addition to a classical spatial smoothness constraint, the proposed method exploits the sparse properties of cardiac motion to regularize the solution via an appropriate dictionary learning step. Secondly, a fully robust optical flow method is proposed. The aim of this work is to make the first method less sensitive to outliers, taking into account the limitations of ultrasound imaging and the violations of the regularization constraints; two regularization terms imposing spatial smoothness and sparsity of the motion field in an appropriate cardiac motion dictionary are again exploited. In order to ensure robustness to outliers, an iteratively re-weighted minimization strategy is proposed, with weighting functions based on M-estimator theory. As a last contribution, we investigate a cardiac motion estimation method using a combination of sparse, spatial, and temporal regularizations. The problem is formulated within a general optical flow framework. The proposed temporal regularization enforces smoothness of the motion trajectories between consecutive images. Furthermore, an iterative groupwise motion estimation allows us to incorporate the three regularization terms while enabling the processing of the image sequence as a whole. Throughout this thesis, the proposed contributions are validated using synthetic and realistic simulated cardiac ultrasound images. These datasets, with available ground truth, are used to evaluate the accuracy of the proposed approaches and to show their competitiveness with state-of-the-art algorithms. In order to demonstrate clinical feasibility, in vivo sequences of healthy and pathological subjects are considered for the first two methods. A preliminary investigation is conducted for the last contribution, i.e., the one exploiting temporal smoothness, using simulated data.
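The iteratively re-weighted minimization strategy described above can be sketched on a plain robust regression problem: at each iteration, residuals are re-weighted by an M-estimator weight function that down-weights outliers, and a weighted least-squares problem is re-solved. The toy linear model, the Cauchy weight function, and the scale value below are illustrative assumptions standing in for the motion-estimation data term.

```python
import numpy as np

# Iteratively re-weighted least squares (IRLS) with Cauchy M-estimator
# weights on a toy line-fitting problem contaminated by gross outliers.

rng = np.random.default_rng(6)
n = 100
A = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
beta_true = np.array([1.0, 2.0])
y = A @ beta_true + 0.01 * rng.standard_normal(n)
y[:5] += 50.0                                  # gross outliers

beta = np.linalg.lstsq(A, y, rcond=None)[0]    # ordinary LS start (biased)
for _ in range(50):
    r = y - A @ beta
    w = 1.0 / (1.0 + r ** 2)                   # Cauchy weights, scale 1
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]

print(np.allclose(beta, beta_true, atol=0.05))
```

The five contaminated points end up with near-zero weights, so the final fit is essentially the least-squares fit to the inliers, which is the behavior the robust optical flow method relies on for ultrasound artifacts.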
10

Kernelized Supervised Dictionary Learning

Jabbarzadeh Gangeh, Mehrdad 24 April 2013 (has links)
The representation of a signal using a learned dictionary instead of predefined operators, such as wavelets, has led to state-of-the-art results in various applications such as denoising, texture analysis, and face recognition. The area of dictionary learning is closely associated with sparse representation, meaning that the signal is represented using few atoms of the dictionary. Despite recent advances in computing dictionaries with fast algorithms such as K-SVD, online learning, and cyclic coordinate descent, which make computing a dictionary from millions of data samples feasible, the dictionary is mainly computed using unsupervised approaches such as k-means. These approaches learn the dictionary by minimizing the reconstruction error without taking category information into account, which is not optimal for classification tasks. In this thesis, we propose a supervised dictionary learning (SDL) approach that incorporates class label information into the learning of the dictionary. To this end, we learn the dictionary in a space where the dependency between the signals and their corresponding labels is maximized, using the recently introduced Hilbert-Schmidt independence criterion (HSIC). The learned dictionary is compact and has a closed form, so the proposed approach is fast. We show that it outperforms other unsupervised and supervised dictionary learning approaches in the literature on real-world data. Moreover, the main advantage of the proposed SDL approach is that it can easily be kernelized, particularly by incorporating a data-driven kernel, such as a compression-based kernel, into the formulation. In this thesis, we propose a novel compression-based (dis)similarity measure. The proposed measure utilizes a 2D MPEG-1 encoder, which takes into consideration the spatial locality and connectivity of pixels in images.
The proposed formulation has been carefully designed around the MPEG encoder's functionality; by design, it solely uses P-frame coding to find the (dis)similarity among patches/images. We show that the proposed measure works properly on textures for both small and large patch sizes. Experimental results show that incorporating the proposed measure as a kernel into our SDL significantly improves the performance of supervised pixel-based texture classification on Brodatz and outdoor images, compared to other compression-based dissimilarity measures as well as state-of-the-art SDL methods. It also improves the computation speed by about 40% compared to its closest rival. Finally, we extend the proposed SDL to multiview learning, where more than one representation is available for a dataset. We propose two different multiview approaches: one fuses the feature sets in the original space and then learns the dictionary and sparse coefficients on the fused set; the other learns one dictionary and the corresponding coefficients in each view separately, and then fuses the representations in the space of the learned dictionaries. We show that the proposed multiview approaches benefit from the complementary information in multiple views, and we investigate their relative performance in the application of emotion recognition.
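The dependency measure at the heart of the SDL formulation, the Hilbert-Schmidt independence criterion, has a simple biased empirical estimate: trace(K H L H) / (n-1)^2, where K and L are kernel matrices on the signals and labels and H is the centering matrix. The sketch below computes this estimate on toy data with linear kernels; the data and kernel choices are illustrative assumptions, not the thesis's compression-based kernel.

```python
import numpy as np

# Biased empirical HSIC estimate: dependent data should score higher
# than independent data under the same kernel on x.

def hsic(K, L):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(7)
n = 200
x = rng.standard_normal(n)
y_dep = x + 0.1 * rng.standard_normal(n)      # strongly dependent on x
y_ind = rng.standard_normal(n)                # independent of x

K = np.outer(x, x)                            # linear kernel on x
L_dep, L_ind = np.outer(y_dep, y_dep), np.outer(y_ind, y_ind)

print(hsic(K, L_dep) > hsic(K, L_ind))        # dependence scores higher
```

In the SDL setting, K would be built from the (possibly compression-based) kernel on the signals and L from the class labels, and the dictionary is chosen so that this quantity is maximized.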
