11. Image/Video Deblocking via Sparse Representation. Chiou, Yi-Wen, 08 September 2012.
Blocking artifacts, characterized by visually noticeable discontinuities in pixel values along block boundaries, are a common problem in block-based image/video compression, especially at low-bitrate coding. Various post-processing techniques have been proposed to reduce blocking artifacts, but they usually introduce excessive blurring or ringing effects. This paper proposes a self-learning-based image/video deblocking framework that formulates deblocking as an MCA (morphological component analysis)-based image decomposition problem solved via sparse representation. The proposed method first decomposes an image/video frame into low-frequency and high-frequency parts by applying the BM3D (block-matching and 3D filtering) algorithm. The high-frequency part is then decomposed into a "blocking component" and a "non-blocking component" by performing dictionary learning and sparse coding based on MCA. As a result, the blocking component can be removed from the image/video frame while preserving most of the original image/video details. Experimental results demonstrate the efficacy of the proposed algorithm.
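As a rough illustration of the decompose-then-sparse-code machinery described above (and not the authors' implementation), the following Python sketch splits a frame into low- and high-frequency parts with a Gaussian low-pass standing in for BM3D, then learns a patch dictionary on the high-frequency part with scikit-learn; the separation of the codes into blocking and non-blocking sub-dictionaries via MCA is omitted.

```python
# Illustrative decompose-then-sparse-code pipeline (not the authors' implementation):
# a Gaussian low-pass stands in for BM3D, and scikit-learn handles dictionary
# learning / sparse coding of high-frequency patches.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def deblock_sketch(frame, patch_size=(8, 8), n_atoms=128, sparsity=1.0):
    frame = frame.astype(float)
    low = gaussian_filter(frame, sigma=2.0)        # stand-in for the BM3D low-frequency part
    high = frame - low                             # residual carrying details and blocking
    patches = extract_patches_2d(high, patch_size)
    X = patches.reshape(len(patches), -1)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=sparsity)
    dico.fit(X[::20])                              # learn atoms on a subsample of patches
    codes = dico.transform(X)                      # sparse codes for every patch
    X_hat = codes @ dico.components_               # reconstruction from the sparse codes
    recon = reconstruct_from_patches_2d(X_hat.reshape(patches.shape), high.shape)
    return low + recon
```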
12. Contributions to generic visual object categorization. Fu, Huanzhang, 14 December 2010.
This thesis is dedicated to the active research topic of generic Visual Object Categorization (VOC), which can be widely used in many applications such as video indexing and retrieval, video monitoring, security access control, automobile driving support, etc. Due to many practical difficulties, it is still considered one of the most challenging problems in computer vision and pattern recognition. In this context, we propose contributions concerning the two main components of methods addressing the VOC problem, namely feature selection and image representation.

Firstly, an Embedded Sequential Forward feature Selection algorithm (ESFS) has been proposed for VOC. Its aim is to select the most discriminant features in order to obtain good categorization performance. It is mainly based on the commonly used sub-optimal search method Sequential Forward Selection (SFS), which relies on the simple principle of incrementally adding the most relevant features. However, ESFS not only adds the most relevant features incrementally at each step but also merges them in an embedded way, thanks to the concept of combined mass functions from evidence theory, which also offers the benefit of a computational cost much lower than that of the original SFS.

Secondly, we have proposed novel image representations to model the visual content of an image, namely Polynomial Modeling and Statistical Measures based Image Representation, called PMIR and SMIR respectively. They allow overcoming the main drawback of the popular "bag of features" method, which is the difficulty of fixing the optimal size of the visual vocabulary. They have been tested along with our proposed region-based features and SIFT. Two different fusion strategies, early and late, have also been considered to merge information from different "channels" represented by the different types of features.

Thirdly, we have proposed two approaches for VOC relying on sparse representation, including a reconstructive method (R_SROC) as well as a reconstructive and discriminative one (RD_SROC). Indeed, the sparse representation model was originally used in signal processing as a powerful tool for acquiring, representing and compressing high-dimensional signals, and we have adapted these principles to the VOC problem. R_SROC relies on the intuitive assumption that an image can be represented by a linear combination of training images from the same category. The sparse representations of images are therefore first computed by solving the ℓ1-norm minimization problem and are then used as new feature vectors for images to be classified by traditional classifiers such as SVM. To improve the discrimination ability of the sparse representation and better fit the classification problem, we have also proposed RD_SROC, which adds a discrimination term, such as the Fisher discrimination measure or the output of an SVM classifier, to the standard sparse representation objective function in order to learn a reconstructive and discriminative dictionary. Moreover, we have also proposed to combine the reconstructive and discriminative dictionary with an adapted purely reconstructive dictionary for a given category so that the discrimination power can be further increased.

The efficiency of all the methods proposed in this thesis has been evaluated on popular image datasets including SIMPLIcity, Caltech101 and Pascal2007.
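To make the R_SROC idea above concrete, here is a minimal, hedged sketch (not the thesis code): each image is sparsely coded over a dictionary whose columns are the training images, via an ℓ1-regularized fit, and the resulting coefficient vector is used as the feature vector for an SVM. The dictionary normalization, the alpha value and the use of LinearSVC are illustrative choices.

```python
# Hedged sketch of the R_SROC idea: sparse-code each image over a dictionary whose
# columns are the training images (an l1-regularized fit) and use the coefficient
# vector as the feature for an SVM.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import LinearSVC

def sparse_codes(D, X, alpha=0.01):
    """D: (n_features, n_train) dictionary with l2-normalized columns; X: (n_samples, n_features)."""
    codes = []
    for x in X:
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(D, x)                  # min ||x - D c||^2 + alpha * ||c||_1
        codes.append(lasso.coef_)
    return np.array(codes)

# usage sketch: the dictionary is simply the (normalized) training set itself
# D = X_train.T / np.linalg.norm(X_train.T, axis=0)
# clf = LinearSVC().fit(sparse_codes(D, X_train), y_train)
# y_pred = clf.predict(sparse_codes(D, X_test))
```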
13. Nuclei/Cell Detection in Microscopic Skeletal Muscle Fiber Images and Histopathological Brain Tumor Images Using Sparse Optimizations. Su, Hai, 01 January 2014.
Nuclei/Cell detection is usually a prerequisite procedure in many computer-aided biomedical image analysis tasks. In this thesis we propose two automatic nuclei/cell detection frameworks. One is for nuclei detection in skeletal muscle fiber images and the other is for brain tumor histopathological images.
For skeletal muscle fiber images, the major challenges include: i) shape and size variations of the nuclei, ii) overlapping nuclear clumps, and iii) a series of z-stack images with out-of-focus regions. We propose a novel automatic detection algorithm consisting of the following components: 1) the original z-stack images are first converted into one all-in-focus image; 2) a sufficient number of hypothetical ellipses are then generated for each nucleus contour; 3) a set of representative training samples and discriminative features is selected by a two-stage sparse model; 4) a classifier is trained using the refined training data; 5) final nuclei detection is obtained by mean-shift clustering based on inner distance. The proposed method was tested on a set of images containing over 1500 nuclei, and the results outperform current state-of-the-art approaches.
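As a small illustration of the final step (5) only, a plain Euclidean mean shift from scikit-learn is shown below on synthetic candidate points; the thesis uses an inner-distance-based variant, and the bandwidth value here is an illustrative assumption.

```python
# Illustration of step 5 only: cluster candidate centre hypotheses with mean shift.
import numpy as np
from sklearn.cluster import MeanShift

candidates = np.random.rand(200, 2) * 512     # hypothetical (x, y) centre candidates
ms = MeanShift(bandwidth=15.0)                # bandwidth ~ expected nucleus radius in pixels
labels = ms.fit_predict(candidates)
centres = ms.cluster_centers_                 # one detection per cluster
print(len(centres), "detections")
```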
For brain tumor histopathological images, the major challenges are handling significant variations in cell appearance and splitting touching cells. The proposed automatic cell detection framework consists of: 1) sparse reconstruction for splitting touching cells, and 2) adaptive dictionary learning for handling cell appearance variations. The proposed method was extensively tested on a data set with over 2000 cells and outperforms other state-of-the-art algorithms, with an F1 score of 0.96.
14. HIV Drug Resistant Prediction and Featured Mutants Selection using Machine Learning Approaches. Yu, Xiaxia, 16 December 2014.
HIV/AIDS is widespread and ranks as the sixth leading cause of death worldwide. Moreover, due to the rapid replication rate of HIV and its lack of a proofreading mechanism, drug resistance is commonly found and is one of the reasons treatments fail. Even though drug resistance tests are available to patients and help in choosing more effective drugs, such experiments may take up to two weeks to complete and are expensive. With the rapid growth of computing power, drug resistance prediction using machine learning has become feasible.
In order to accurately predict HIV drug resistance, two main tasks need to be solved: how to encode the protein structure, extracting the most useful information and feeding it into the machine learning tools; and which kind of machine learning tool to choose. In our research, we first proposed a new protein encoding algorithm that can convert proteins of various sizes into a fixed-size vector. This algorithm makes it possible to feed protein structure information into most state-of-the-art machine learning algorithms. In the next step, we also proposed a new classification algorithm based on sparse representation. Following that, mean shift and quantile regression were included to help extract feature information from the data. Our results show that encoding protein structure using our newly proposed method is very efficient and yields consistently higher accuracy regardless of the type of machine learning tool. Furthermore, our new classification algorithm based on sparse representation is the first application of sparse representation to biological data of this kind, and its results are comparable to other state-of-the-art classification algorithms, for example ANN, SVM and multiple regression. Finally, mean shift and quantile regression provided us with the potentially most important drug-resistant mutants, and such results might help biologists and chemists determine which mutants are the most representative candidates for further research.
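For illustration of what a sparse-representation classifier of this kind looks like, the sketch below implements a generic SRC-style rule (assign a sample to the class whose training columns best reconstruct it sparsely). It is a standard formulation under illustrative parameters, not the thesis's exact algorithm or encoding.

```python
# Generic SRC-style classifier, shown only to illustrate "classification based on
# sparse representation"; the thesis's own encoding and classifier are not reproduced.
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(X_train, y_train, x, alpha=0.01):
    """Assign x to the class whose training columns best reconstruct it sparsely."""
    D = X_train.T.astype(float)
    D = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)   # l2-normalize columns
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(D, x)                                              # sparse code over all training samples
    c = lasso.coef_
    best_label, best_res = None, np.inf
    for label in np.unique(y_train):
        mask = (y_train == label)
        res = np.linalg.norm(x - D[:, mask] @ c[mask])           # class-wise reconstruction residual
        if res < best_res:
            best_label, best_res = label, res
    return best_label
```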
15. Kernelized Supervised Dictionary Learning. Jabbarzadeh Gangeh, Mehrdad, 24 April 2013.
The representation of a signal using a learned dictionary instead of predefined operators, such as wavelets, has led to state-of-the-art results in various applications such as denoising, texture analysis, and face recognition. The area of dictionary learning is closely associated with sparse representation, which means that the signal is represented using few atoms in the dictionary. Despite recent advances in the computation of a dictionary using fast algorithms such as K-SVD, online learning, and cyclic coordinate descent, which make the computation of a dictionary from millions of data samples computationally feasible, the dictionary is mainly computed using unsupervised approaches such as k-means. These approaches learn the dictionary by minimizing the reconstruction error without taking into account the category information, which is not optimal in classification tasks.
In this thesis, we propose a supervised dictionary learning (SDL) approach that incorporates class-label information into the learning of the dictionary. To this end, we propose to learn the dictionary in a space where the dependency between the signals and their corresponding labels is maximized. To maximize this dependency, the recently introduced Hilbert-Schmidt independence criterion (HSIC) is used. The learned dictionary is compact and has a closed-form solution, which makes the proposed approach fast. We show that it outperforms other unsupervised and supervised dictionary learning approaches from the literature on real-world data.
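For reference, a minimal empirical HSIC estimator is sketched below; the RBF kernel on the signals and the delta kernel on the labels are illustrative assumptions, not necessarily the kernels used in the thesis.

```python
# Minimal (biased) empirical HSIC estimator between data and labels.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def hsic(X, y, gamma=None):
    """Empirical HSIC between data X (n, d) and labels y (n,)."""
    n = X.shape[0]
    K = rbf_kernel(X, gamma=gamma)                      # kernel on the signals
    L = (y[:, None] == y[None, :]).astype(float)        # delta kernel on the labels
    H = np.eye(n) - np.ones((n, n)) / n                 # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```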
Moreover, the main advantage of the proposed SDL approach is that it can be easily kernelized, particularly by incorporating a data-driven kernel, such as a compression-based kernel, into the formulation. In this thesis, we propose a novel compression-based (dis)similarity measure. The proposed measure utilizes a 2D MPEG-1 encoder, which takes into consideration the spatial locality and connectivity of pixels in images. The formulation has been carefully designed around the MPEG encoder's functionality: by design, it uses only P-frame coding to find the (dis)similarity among patches/images. We show that the proposed measure works properly on both small and large patch sizes for textures. Experimental results show that incorporating the proposed measure as a kernel into our SDL significantly improves the performance of supervised pixel-based texture classification on Brodatz and outdoor images compared with other compression-based dissimilarity measures, as well as with state-of-the-art SDL methods. It also improves computation speed by about 40% compared to its closest rival.
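The measure above is built on a 2D MPEG-1 P-frame coder precisely because generic compressors ignore the spatial structure of image patches. As a rough, generic analogue of a compression-based dissimilarity, the sketch below computes a normalized compression distance with zlib on raw patch bytes; it only illustrates the principle and is not the proposed measure.

```python
# Normalized compression distance (NCD) with zlib as a generic stand-in for a
# compression-based dissimilarity; unlike the MPEG-1-based measure, zlib ignores
# 2D spatial locality.
import zlib

def ncd(a: bytes, b: bytes) -> float:
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

# usage sketch, with patches as uint8 numpy arrays:
# d = ncd(patch1.tobytes(), patch2.tobytes())
```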
Finally, we have extended the proposed SDL to multiview learning, where more than one representation is available for a dataset. We propose two different multiview approaches: one fuses the feature sets in the original space and then learns the dictionary and sparse coefficients on the fused set; the other learns one dictionary and the corresponding coefficients in each view separately and then fuses the representations in the space of the learned dictionaries. We show that the proposed multiview approaches benefit from the complementary information in multiple views, and we investigate their relative performance in the application of emotion recognition.
16. Compartimentation et transfert de contaminants dans les milieux souterrains : interaction entre transport physique, réactivité chimique et activité biologique / Compartmentalization and contaminant transfer in underground media: interaction between transport processes, chemical reactivity and biological activity. Babey, Tristan, 08 December 2016.
Modelling of contaminant transfer in the subsurface classically relies on a detailed representation of transport processes (groundwater flow controlled by geological structures) coupled to chemical and biological reactivity (immobilization, degradation). Calibration of such detailed models is, however, often limited by the small amount of available data on subsurface structures and characteristics. In this thesis, we develop an alternative approach based on parsimonious models built as simple graphs of interconnected compartments, taken as generalized multiple interacting continua (MINC) and multi-rate mass transfer (MRMT) models. We show that this approach is well suited to systems in which diffusion-like processes dominate over advection, for instance soils or highly heterogeneous aquifers such as fractured aquifers. The homogenization induced by diffusion reduces concentration gradients, speeds up mixing between chemical species and makes residence-time distributions excellent proxies for reactivity. Indeed, simplified structures calibrated solely from transit-time information prove to provide consistent estimations of non-linear reactivity (e.g. sorption and precipitation/dissolution). Finally, we show how these models can be automatically fitted to tracer-test observations or to biodegradation reactions. Two important advantages of these parsimonious approaches are their ease of development and application. They help identify the major controls on exchanges between advective and diffusive zones or between inert and reactive zones, and they make it possible to extrapolate reactive processes to larger scales. The use of isotopic fractionation data is proposed to help discriminate between structure-induced effects and reactivity.
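A minimal sketch of the compartment-graph idea follows: solute concentrations exchange linearly between a few interconnected compartments, dC/dt = A C. The rates and topology below are purely illustrative and are not calibrated to any data from the thesis.

```python
# Minimal compartment-graph sketch: one mobile and two immobile compartments
# exchanging solute at first-order rates, dC/dt = A @ C.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.5, 0.05                       # fast and slow exchange rates (1/time)
A = np.array([[-(k1 + k2), k1,   k2 ],
              [ k1,        -k1,  0.0],
              [ k2,         0.0, -k2]])  # columns sum to zero: mass is conserved

sol = solve_ivp(lambda t, c: A @ c, (0.0, 100.0), [1.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 100.0, 200))
print(sol.y[:, -1])                      # concentrations converge towards the well-mixed limit
```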
17. Sparse Representations and Nonlinear Image Processing for Inverse Imaging Solutions. Ram, Sundaresh, January 2017.
This work applies sparse representations and nonlinear image processing to two inverse imaging problems. The first problem involves image restoration, where the aim is to reconstruct an unknown high-quality image from a low-quality observed image. Sparse representations of images have drawn a considerable amount of interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such data. The standard sparse representation, however, does not consider the intrinsic geometric structure present in the data, thereby leading to sub-optimal results. Using the concept that a signal is block sparse in a given basis, i.e., the non-zero elements occur in clusters of varying sizes, we present a novel and efficient algorithm for learning a sparse representation of natural images, called graph-regularized block sparse dictionary (GRBSD) learning. We apply the proposed method to two image restoration applications: 1) single-image super-resolution, where we propose a local regression model that uses dictionaries learned with the GRBSD algorithm to super-resolve a low-resolution image without any external training images, and 2) image inpainting, where we use the GRBSD algorithm to learn a multiscale dictionary that generates visually plausible pixels to fill missing regions in an image. Experimental results validate the performance of the GRBSD learning algorithm for single-image super-resolution and image inpainting. The second problem addressed in this work involves image enhancement for the detection and segmentation of objects in images. We exploit the concept that even though data from various imaging modalities have high dimensionality, the data are sufficiently well described by low-dimensional geometric structures. To facilitate the extraction of objects having such structure, we have developed general structure-enhancement methods that can be used to detect and segment various curvilinear structures in images across different applications. We use the proposed methods to detect and segment objects of different sizes and shapes in three applications: 1) segmentation of the lamina cribrosa microstructure in the eye from second-harmonic-generation microscopy images, 2) detection and segmentation of primary cilia in confocal microscopy images, and 3) detection and segmentation of vehicles in wide-area aerial imagery. Quantitative and qualitative results show that the proposed methods provide improved detection and segmentation accuracy and computational efficiency compared with other recent algorithms.
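The structure-enhancement methods themselves are specific to this work; as a standard, generic baseline for curvilinear-structure enhancement, the sketch below applies the well-known Frangi vesselness filter from scikit-image to a synthetic curved structure and thresholds the result. It is a point of comparison, not the methods developed in the thesis.

```python
# Generic curvilinear-structure enhancement with the Frangi filter on a synthetic image.
import numpy as np
from skimage import filters, morphology

rng = np.random.default_rng(0)
image = np.zeros((128, 128))
rows = np.arange(128)
cols = (40 + 20 * np.sin(rows / 15.0)).astype(int)
image[rows, cols] = 1.0                                    # a thin, curved bright structure
image = filters.gaussian(image, sigma=1.5) + 0.05 * rng.random((128, 128))

enhanced = filters.frangi(image, sigmas=range(1, 6), black_ridges=False)
mask = enhanced > filters.threshold_otsu(enhanced)         # crude segmentation of the structure
mask = morphology.remove_small_objects(mask, min_size=30)
```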
18. Contribution to dimension reduction techniques: application to object tracking / Contribution aux techniques de la réduction de dimension : application au suivi d'objet. Lu, Weizhi, 16 July 2014.
This thesis studies three popular dimension reduction techniques, compressed sensing, random projection and sparse representation, and brings significant improvements to each of them. In compressed sensing, the construction of a sensing matrix with both good performance and a hardware-friendly structure has been a significant challenge. In this thesis, we explicitly propose the optimal zero-one binary matrix by searching for the best Restricted Isometry Property (RIP). In practice, an efficient greedy algorithm is then developed to construct the optimal binary matrix of arbitrary size. Moreover, we also study another interesting problem in compressed sensing, namely the performance of sensing matrices at high compression rates. For the first time, the performance floor of random Bernoulli matrices over increasing compression rates is observed and effectively estimated. Random projection is mainly used for classification, where the construction of the random projection matrix is also critical in terms of both performance and complexity. This thesis presents what is, so far, the sparsest random projection matrix, which is proven to achieve better feature selection performance than other, denser random matrices. The theoretical result is confirmed by extensive experiments. As a novel technique for feature or sample selection, sparse representation has recently been widely applied in image processing. In this thesis, we mainly focus on its applications to visual object tracking. To reduce the computational load related to sparse representation, a simple but efficient scheme is proposed for the tracking of a single object. Subsequently, the potential of sparse representation for multi-object tracking is investigated.
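For illustration, a very sparse random projection of the {+1, 0, -1} type referred to above can be sketched as follows; the sparsity level s and scaling follow the classical Achlioptas/Li construction, and the thesis's own optimal construction is not reproduced here.

```python
# Very sparse {+1, 0, -1} random projection (classical Achlioptas/Li style).
import numpy as np

def sparse_projection_matrix(d, k, s=3, seed=None):
    """(k, d) matrix with entries +-sqrt(s/k), each w.p. 1/(2s), and 0 w.p. 1 - 1/s."""
    rng = np.random.default_rng(seed)
    vals = rng.choice([1.0, 0.0, -1.0], size=(k, d), p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return np.sqrt(s / k) * vals

X = np.random.rand(100, 1000)               # 100 samples in 1000 dimensions
R = sparse_projection_matrix(1000, 50)      # project down to 50 dimensions
X_low = X @ R.T                             # pairwise distances are approximately preserved
```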
19. Advances in RGB and RGBD Generic Object Trackers. Bibi, Adel, 04 1900.
Visual object tracking is a classical and very popular problem in computer vision, with a plethora of applications such as vehicle navigation, human-computer interfaces, human motion analysis, surveillance, automatic control systems and many more. Given the initial state of a target in the first frame, the goal of tracking is to predict the states of the target over time, where each state describes a bounding box covering the target. Despite the numerous object tracking methods proposed in recent years [1-4], most trackers suffer a degradation in performance, mainly because of several challenges that include illumination changes, motion blur, complex motion, out-of-plane rotation, and partial or full occlusion; occlusion is usually the factor that contributes most to degrading the majority of trackers, if not all of them. This thesis is devoted to the advancement of generic object trackers, tackling different challenges through different proposed methods. The work presented proposes four new state-of-the-art trackers. One of them is a 3D-based tracker in a particle filter framework in which both the synchronization and the registration of the RGB and depth streams are adjusted automatically; the other three are correlation-filter trackers that achieve state-of-the-art accuracy while maintaining reasonable speeds.
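To give a flavour of the correlation-filter trackers mentioned above, here is a minimal single-channel, MOSSE-style sketch: the filter is learned in closed form in the Fourier domain and applied by correlation. Real correlation-filter trackers add multi-channel features, cosine windows, scale estimation and online updates, so this is an illustration only.

```python
# Minimal single-channel, MOSSE-style correlation filter sketch.
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-3):
    """Learn a filter whose response to `patch` is a Gaussian peak at the patch centre."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    G, F = np.fft.fft2(g), np.fft.fft2(patch)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)   # per-frequency ridge-regression solution

def locate(H, patch):
    """Correlation response map; the target displacement is given by the peak position."""
    resp = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(resp), resp.shape)
```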
20. Restaurace signálu s omezenou okamžitou hodnotou pro vícekanálový audio signál / Restoration of signals with limited instantaneous value for the multichannel audio signal. Hájek, Vojtěch, January 2019.
This master's thesis deals with the restoration of clipped multichannel audio signals based on sparse representations. First, the general theory of clipping and of sparse representations of audio signals is described, together with a short overview of existing restoration methods. Subsequently, two declipping algorithms are introduced and implemented in the Matlab environment as part of the thesis. The first one, SPADE, is considered a state-of-the-art method for declipping mono audio signals, and the second one, CASCADE, which is derived from SPADE, is designed for the restoration of multichannel signals. In the last part of the thesis, both algorithms are tested and the results are compared using the objective measures SDR and PEAQ as well as the subjective listening test MUSHRA.
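A much-simplified, frame-level sketch of the sparsity-plus-consistency idea behind SPADE follows (in Python rather than Matlab): DFT coefficients are hard-thresholded with a sparsity budget that is relaxed at each iteration, and the result is projected back onto the clipping constraints. The real algorithm uses an ADMM formulation and overlapping Gabor frames, so this is only an illustration.

```python
# Simplified sparsity + consistency declipping of a single frame (not the SPADE code).
import numpy as np

def declip_frame(y, theta, n_iter=100):
    """y: clipped frame, theta: clipping level; returns an estimate of the clean frame."""
    reliable = np.abs(y) < theta
    hi, lo = y >= theta, y <= -theta
    x = y.astype(float).copy()
    for k in range(1, n_iter + 1):
        X = np.fft.rfft(x)
        if k < X.size:
            X[np.argsort(np.abs(X))[:-k]] = 0.0   # keep only the k largest coefficients
        x = np.fft.irfft(X, n=y.size)
        x[reliable] = y[reliable]                 # stay consistent with the reliable samples
        x[hi] = np.maximum(x[hi], theta)          # clipped-high samples must exceed the threshold
        x[lo] = np.minimum(x[lo], -theta)
    return x
```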