DATA-SELECTIVE ADAPTIVE LINEAR AND KERNEL-BASED ALGORITHMS

ANDRÉ ROBERT FLORES MANRIQUE, 18 July 2017
In this dissertation, several data-selective adaptive signal processing algorithms are derived and investigated for solving two different problems.
The first problem involves scenarios with sparse systems, where we introduce a framework in which a general penalty function is incorporated into the cost function to exploit the sparsity of the model. Under this scope, we propose three algorithms with an adjustable penalty function. The first, based on the l1-norm, we term the zero-attracting SM-NLMS with adjustable penalty function (ZA-SM-NLMS-ADP). The second algorithm is based on the log-sum penalty function and the third on the l0-norm, named the reweighted ZA-SM-NLMS (RZA-SM-NLMS-ADP) and the exponential ZA-SM-NLMS (EZA-SM-NLMS-ADP), respectively. We also carry out a statistical analysis of the sparsity-aware SM-NLMS algorithms with a general penalty function, arriving at mathematical expressions for the steady-state mean-square error. The second problem addressed considers nonlinear adaptive algorithms based on kernel functions. In this context, we develop two data-selective algorithms, the Set-Membership Normalized Kernel Least Mean Squares (SM-NKLMS) algorithm and the Set-Membership Kernel Affine Projection (SM-KAP) algorithm, which naturally limit the growing structure created by the kernels, addressing one of the major problems that arises when working with kernel algorithms. The kernel algorithms developed were tested on a time-series prediction task. A statistical analysis of the proposed SM-NKLMS algorithm is also developed. Simulation results show that the proposed algorithms outperform standard linear and nonlinear adaptive algorithms in both convergence rate and steady-state performance.
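To make the set-membership idea concrete, the following is a minimal, hypothetical sketch of a zero-attracting SM-NLMS update (not the thesis's exact formulation); the error bound `gamma`, shrinkage factor `rho`, and regularizer `eps` are illustrative parameters:

```python
import random

def za_sm_nlms_update(w, x, d, gamma=0.1, rho=1e-4, eps=1e-8):
    """One data-selective update of a zero-attracting SM-NLMS filter.

    The filter updates only when the a priori error exceeds the bound
    gamma (the set-membership criterion); the l1-penalty term shrinks
    small taps toward zero to exploit sparsity of the system.
    """
    e = d - sum(wi * xi for wi, xi in zip(w, x))  # a priori error
    if abs(e) <= gamma:
        return w, False                           # data point discarded
    mu = 1.0 - gamma / abs(e)                     # set-membership step size
    norm2 = sum(xi * xi for xi in x) + eps        # regularized input energy
    sign = lambda v: 1 if v > 0 else -1 if v < 0 else 0
    w_new = [wi + mu * e * xi / norm2 - rho * sign(wi)
             for wi, xi in zip(w, x)]
    return w_new, True
```

Because updates stop once the error enters the bound, only a fraction of the incoming samples trigger computation, which is the data-selection property the abstract refers to.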
Sparse RNA folding revisited: space‑efficient minimum free energy structure prediction

Will, Sebastian; Jabbari, Hosna. January 2016
Background: RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Results: Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible by a trivial extension of the method for simple energy models. Then, we present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold, which guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by n², but are typically much smaller. The time complexity of RNA folding is reduced from O(n³) to O(n² + nZ); the space complexity, from O(n²) to O(n + T + Z). Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). Conclusions: The presented technique is generalizable to complex prediction algorithms; due to their high space demands, algorithms such as pseudoknot prediction and RNA-RNA interaction prediction are expected to benefit even more strongly than "standard" MFE folding.
SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.
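For readers unfamiliar with the base-pair-based pseudo-energy models mentioned above, the classic (non-sparsified) Nussinov recursion below maximizes the number of base pairs by dynamic programming. This is a minimal illustrative sketch of the simplest folding model, not SparseMFEFold's energy model or its sparsification:

```python
def nussinov(seq, min_loop=3):
    """Maximize base pairs over nested structures (Nussinov DP).

    N[i][j] holds the best pair count for subsequence i..j; position j
    is either unpaired or paired with some k, splitting the problem.
    A hairpin loop must enclose at least min_loop unpaired bases.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = N[i][j - 1]                   # j unpaired
            for k in range(i, j - min_loop):     # j pairs with k
                if (seq[k], seq[j]) in pairs:
                    left = N[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + N[k + 1][j - 1])
            N[i][j] = best
    return N[0][n - 1] if n else 0
```

SparseMFEFold's contribution is precisely that the inner split-point loop need not visit every k: only "candidate" positions can contribute to an optimum, and trace arrows for reconstruction are garbage-collected to keep space near-linear.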
Sparse instances of hard problems

Dell, Holger. 01 September 2011
In this thesis, we use and refine methods of computational complexity theory to analyze the complexity of sparse instances, such as graphs with few edges or formulas with few constraints of bounded width. Two natural questions arise in this context: (a) Is there an efficient algorithm that reduces arbitrary instances of an NP-hard problem to equivalent, sparse instances? (b) Is there an algorithm that solves sparse instances of an NP-hard problem significantly faster than general instances can be solved? We formalize these questions for different problems and show that positive answers for these formalizations would lead to consequences in complexity theory that are considered unlikely. Question (a) is modeled by a communication process in which two players want to cooperatively decide an NP-hard language while communicating as little as possible. Under the complexity-theoretic hypothesis that coNP is not contained in NP/poly, our results imply surprisingly tight lower bounds for parameters of interest in several areas, namely sparsification, kernelization in parameterized complexity, lossy compression, and probabilistically checkable proofs. We study question (b) for counting problems in the exponential-time setting. Assuming (variants of) the exponential time hypothesis (ETH), we obtain asymptotically tight, exponential lower bounds for well-studied #P-hard problems: computing the number of satisfying assignments of a 2-CNF formula, computing the number of all independent sets in a graph, computing the permanent of a 0/1 matrix, and evaluating the Tutte polynomial at fixed evaluation points.
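To illustrate the kind of counting problem these lower bounds concern, here is a brute-force counter for independent sets, exhaustive over all 2^n vertex subsets. It is a didactic sketch only; the thesis's point is that, under (variants of) ETH, no algorithm can beat this exponential behaviour by more than a bounded amount in the exponent:

```python
from itertools import combinations

def count_independent_sets(n, edges):
    """Count all independent sets (including the empty set) of an
    n-vertex graph by checking every one of the 2^n vertex subsets."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            # A subset is independent if no edge has both endpoints in it.
            if all(frozenset((u, v)) not in edge_set
                   for u, v in combinations(subset, 2)):
                count += 1
    return count
```

For the path on three vertices the independent sets are {}, {0}, {1}, {2}, and {0, 2}, so the count is 5; this is the #P-hard counting problem (here at toy scale) for which the thesis proves exponential lower bounds.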
Structural Similarity: Applications to Object Recognition and Clustering

Curado, Manuel. 03 September 2018
In this thesis, we propose several developments in the context of structural similarity, addressing both node (local) similarity and graph (global) similarity. Concerning node similarity, we focus on improving the diffusive process used to compute this similarity (e.g., commute times) by modifying or rewiring the structure of the graph (graph densification), although some advances in Laplacian-based ranking are also included in this document. Graph densification is a particular case of what we call graph rewiring, i.e., a novel field (analogous to image processing) in which input graphs are rewired to be better conditioned for subsequent pattern recognition tasks (e.g., clustering). In the thesis, we contribute a scalable and effective method driven by Dirichlet processes, proposing both a completely unsupervised and a semi-supervised approach to Dirichlet densification. We also contribute new random walkers (return random walks) that serve as structural filters as well as asymmetry detectors in directed brain networks used to make early predictions of Alzheimer's disease (AD). Graph similarity is addressed by designing structural information channels as a means of measuring the mutual information between graphs. To this end, we first embed the graphs by means of commute times. Commute-time embeddings have good properties for Delaunay triangulations (the typical representation for graph matching in computer vision). This means that these embeddings can act as encoders in the channel as well as decoders (since they are invertible). Consequently, structural noise can be modeled by the deformation introduced in one of the manifolds to fit the other one. This methodology leads to a highly discriminative similarity measure, since the mutual information is measured on the manifolds (vectorial domain) through copulas and bypass entropy estimators.
This is consistent with the methodology of decoupling the measurement of graph similarity into two steps: (a) linearizing the Quadratic Assignment Problem (QAP) by means of the embedding trick, and (b) measuring similarity in vector spaces. The QAP is also investigated in this thesis. More precisely, we analyze the behaviour of m-best graph matching methods. These methods usually start from a couple of best solutions and then locally expand the search space by excluding previously clamped variables. The next variable to clamp is usually selected randomly, but we show that this reduces performance when structural noise (outliers) arises. Alternatively, we propose several heuristics for spanning the search space and evaluate all of them, showing that they are usually better than random selection. These heuristics are particularly interesting because they exploit the structure of the affinity matrix, and efficiency is improved as well. Concerning the application domains explored in this thesis, we focus on object recognition (graph similarity), clustering (rewiring), compression/decompression of graphs (with links to extremal graph theory), 3D shape simplification (sparsification), and early prediction of AD. / Ministerio de Economía, Industria y Competitividad (Referencia TIN2012-32839 BES-2013-064482)
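As a sketch of the commute-times machinery underlying the embeddings described above, the standard construction computes commute-time distances from the pseudoinverse of the graph Laplacian, C(u, v) = vol(G) · (L⁺ᵤᵤ + L⁺ᵥᵥ − 2 L⁺ᵤᵥ). This is illustrative textbook code, not the thesis's implementation:

```python
import numpy as np

def commute_times(adj):
    """Commute-time distance matrix of an undirected graph.

    Uses the Moore-Penrose pseudoinverse of the combinatorial
    Laplacian; the square roots of these distances give a Euclidean
    embedding of the nodes (the 'commute-times embedding').
    """
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A   # combinatorial Laplacian L = D - A
    Lp = np.linalg.pinv(L)           # pseudoinverse (L is singular)
    vol = A.sum()                    # graph volume = 2 * number of edges
    d = np.diag(Lp)
    return vol * (d[:, None] + d[None, :] - 2 * Lp)
```

Since commute time equals the graph volume times effective resistance, on the path 0-1-2 the distance between adjacent nodes is 4 and between the endpoints is 8, which makes the matrix easy to sanity-check.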
