101

O Método Primal Dual Barreira Logarítmica aplicado ao problema de fluxo de carga ótimo / Optimal power flow by a Logarithmic-Barrier Primal-Dual method

Souza, Alessandra Macedo de 18 February 1998 (has links)
In this thesis an interior point algorithm is presented for the solution of the optimal power flow (OPF) problem. The approach proposed here is the logarithmic barrier primal-dual method. The inequality constraints of the OPF problem are transformed into equalities by slack variables, which are incorporated into the objective function through the logarithmic barrier function. The sparsity of the Lagrangian matrix is exploited and the factorization process is carried out by elements rather than by submatrices. Numerical test results obtained with systems of 3, 14, 30 and 118 buses are presented to show the efficiency of the method.
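The abstract describes the mechanics of a primal-dual logarithmic-barrier interior-point method: inequalities become equalities via slack variables, a logarithmic barrier keeps them strictly feasible, and Newton steps are taken on the perturbed KKT conditions. The sketch below illustrates those mechanics on a standard-form linear program rather than the nonlinear OPF problem of the thesis; the function name, the dense Newton solve and all parameter values are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def primal_dual_barrier_lp(A, b, c, mu=10.0, sigma=0.2, tol=1e-8, max_iter=100):
    """Minimal primal-dual logarithmic-barrier solver for
    min c'x  s.t.  Ax = b, x >= 0  (a toy stand-in for the OPF problem)."""
    m, n = A.shape
    x = np.ones(n)          # primal variables (strictly positive start)
    y = np.zeros(m)         # dual variables for Ax = b
    z = np.ones(n)          # dual variables for x >= 0

    for _ in range(max_iter):
        r_dual = A.T @ y + z - c          # dual feasibility residual
        r_prim = A @ x - b                # primal feasibility residual
        r_cent = x * z - mu               # perturbed complementarity x_i z_i = mu

        if max(np.linalg.norm(r_dual), np.linalg.norm(r_prim), mu) < tol:
            break

        # Newton system on the perturbed KKT conditions (dense here;
        # the thesis exploits sparsity and factorizes element by element).
        X, Z = np.diag(x), np.diag(z)
        K = np.block([
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [Z,                np.zeros((n, m)), X],
        ])
        rhs = -np.concatenate([r_dual, r_prim, r_cent])
        d = np.linalg.solve(K, rhs)
        dx, dy, dz = d[:n], d[n:n + m], d[n + m:]

        # Step length that keeps x > 0 and z > 0.
        alpha = 1.0
        for v, dv in ((x, dx), (z, dz)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))

        x, y, z = x + alpha * dx, y + alpha * dy, z + alpha * dz
        mu *= sigma                       # shrink the barrier parameter

    return x

# Tiny example: min -x1 - 2*x2  s.t.  x1 + x2 + s = 4, x >= 0
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])
c = np.array([-1.0, -2.0, 0.0])
print(primal_dual_barrier_lp(A, b, c))    # expect roughly [0, 4, 0]
```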
102

Geometric algorithms for component analysis with a view to gene expression data analysis

Journée, Michel 04 June 2009 (has links)
The research reported in this thesis addresses the problem of component analysis, which aims at reducing large data sets to lower dimensions in order to reveal their essential structure. This problem is encountered in almost all areas of science - from physics and biology to finance, economics and psychometrics - where large data sets need to be analyzed. Several paradigms for component analysis are considered, e.g., principal component analysis, independent component analysis and sparse principal component analysis, each of which is naturally formulated as an optimization problem subject to constraints that endow it with a well-characterized matrix manifold structure. Component analysis is thus cast in the realm of optimization on matrix manifolds. Algorithms for component analysis are subsequently derived that take advantage of the geometric structure of the problem. When formalizing component analysis in an optimization framework, three main classes of problems are encountered, for which methods are proposed. We first consider the problem of optimizing a smooth function on the set of n-by-p real matrices with orthonormal columns. Then, a method is proposed to maximize a convex function on a compact manifold, which generalizes to this context the well-known power method for computing the dominant eigenvector of a matrix. Finally, we address the issue of solving problems defined in terms of large positive semidefinite matrices in a numerically efficient manner by using low-rank approximations of such matrices. The efficiency of the proposed algorithms for component analysis is evaluated on the analysis of gene expression data related to breast cancer, which encode the expression levels of thousands of genes obtained from experiments on hundreds of cancerous cells. Such data provide a snapshot of the biological processes that occur in tumor cells and offer huge opportunities for an improved understanding of cancer. Thanks to an original framework for evaluating the biological significance of a set of components, well-known as well as novel knowledge is inferred about the biological processes that underlie breast cancer. Hence, to summarize the thesis in one sentence: we adopt a geometric point of view to propose optimization algorithms that perform component analysis and, applied to large gene expression data, reveal novel biological knowledge.
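A minimal sketch of the kind of manifold-based iteration described above: the generalized power method maximizes a function over matrices with orthonormal columns by repeatedly replacing the iterate with the polar factor of the gradient. The particular objective f(X) = 0.5*||AX||_F^2, the function names and the toy data are assumptions for illustration, not the thesis's algorithms; with this objective the fixed point spans the dominant principal subspace, recovering PCA.

```python
import numpy as np

def generalized_power_method(A, p, iters=200, seed=0):
    """Maximize the convex function f(X) = 0.5 * ||A X||_F^2 over n-by-p matrices
    with orthonormal columns (the Stiefel manifold) by iterating
    X <- argmax_{Y'Y = I} <Y, grad f(X)>, i.e. the polar factor of the gradient."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # random orthonormal start
    for _ in range(iters):
        G = A.T @ (A @ X)                  # gradient of f at X
        U, _, Vt = np.linalg.svd(G, full_matrices=False)
        X = U @ Vt                         # polar factor: maximizer of <Y, G> on the manifold
    return X

# Toy usage: compare with the top-p right singular vectors of a data matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20)) @ np.diag(np.linspace(3, 0.1, 20))
X = generalized_power_method(A, p=3)
_, _, Vt = np.linalg.svd(A, full_matrices=False)
# The two 3-dimensional subspaces should agree up to rotation:
print(np.linalg.norm(X @ X.T - Vt[:3].T @ Vt[:3]))    # close to 0
```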
103

Non-uniform sampling: algorithms and architectures

Luo, Chenchi 09 November 2012 (has links)
Modern signal processing applications emerging in the telecommunication and instrumentation industries have placed increasing demands on analog-to-digital converters (ADCs) for higher speed and resolution. The most fundamental challenge to this progress lies at the heart of classic signal processing: the Shannon-Nyquist sampling theorem, which states that when a signal is sampled uniformly, there is no way to increase the upper frequency in its spectrum and still represent it unambiguously except by raising the sampling rate. This thesis is dedicated to exploring ways to break through the Shannon-Nyquist sampling rate by applying non-uniform sampling techniques. Time interleaving is probably the most intuitive way to parallelize the uniform sampling process in order to achieve a higher sampling rate. Unfortunately, channel mismatches make the time-interleaved ADC (TIADC) an instance of a recurrent non-uniform sampling system whose non-uniformities are detrimental to performance and need to be calibrated. Accordingly, this thesis proposes a flexible and efficient architecture to compensate for the channel mismatches in the TIADC system. As a key building block in the calibration architecture, the design of the Farrow-structured adjustable fractional delay (FD) filter is investigated in detail. A new modified Farrow structure is proposed to design adjustable FD filters that are optimized for a given range of bandwidths and fractional delays. The application of the Farrow structure is not limited to the design of adjustable fractional delay filters; it can also be used to implement adjustable lowpass, highpass and bandpass filters as well as adjustable multirate filters. This thesis further extends the Farrow structure to the design of filters with adjustable polynomial phase responses. Inspired by the theory of compressive sensing, another contribution of this thesis is to use randomization as a means to overcome the limit of the Nyquist rate. This thesis investigates the impact of random sampling intervals or jitters on the power spectrum of the sampled signal. It shows that the aliases of the original signal can be well shaped by choosing an appropriate probability distribution of the sampling intervals or jitters, such that the aliases can be viewed as a source of noise in the signal power spectrum. A new theoretical framework has been established to associate the probability mass function of the random sampling intervals or jitters with this alias-shaping effect. Based on the theoretical framework, this thesis proposes three random sampling architectures, i.e., SAR ADC, ramp ADC and level-crossing ADC, that can be easily implemented on top of the corresponding standard ADC architectures. Detailed models and simulations are established to verify the effectiveness of the proposed architectures. A new reconstruction algorithm, called successive sine matching pursuit, has also been proposed to recover a class of spectrally sparse signals from a sparse set of non-uniform samples onto a denser uniform time grid so that classic signal processing techniques can be applied afterwards.
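As a rough illustration of the Farrow structure mentioned above, the sketch below builds a cubic-Lagrange fractional-delay interpolator whose taps are polynomials in the delay parameter d, so the filter reduces to fixed FIR subfilters combined by Horner's rule. This is a textbook Lagrange design, not the thesis's optimized modified Farrow structure; the function names and the test signal are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def cubic_lagrange_farrow_matrix():
    """Farrow coefficient matrix C (4x4) for cubic Lagrange fractional delay.
    Tap k responds as h_k(d) = sum_m C[m, k] * d**m, and the filter approximates
    a total delay of D = 1 + d samples for d in [0, 1)."""
    C = np.zeros((4, 4))
    for k in range(4):
        # Lagrange basis L_k evaluated at D = 1 + d, expanded as a polynomial in d.
        poly = np.array([1.0])
        for j in range(4):
            if j != k:
                # factor ((1 + d) - j) / (k - j), coefficients in increasing powers of d
                poly = P.polymul(poly, np.array([1.0 - j, 1.0]) / (k - j))
        C[: len(poly), k] = poly
    return C

def farrow_fractional_delay(x, d, C):
    """Apply the Farrow structure: fixed FIR subfilters combined by Horner's rule in d."""
    sub = [np.convolve(x, C[m]) for m in range(C.shape[0])]   # fixed subfilter outputs
    y = sub[-1]
    for m in range(C.shape[0] - 2, -1, -1):                   # Horner evaluation in d
        y = y * d + sub[m]
    return y[: len(x)]

# Toy check: delay a slow sinusoid by 1.3 samples and compare with the exact shift.
n = np.arange(200)
x = np.sin(2 * np.pi * 0.02 * n)
C = cubic_lagrange_farrow_matrix()
y = farrow_fractional_delay(x, d=0.3, C=C)                    # total delay D = 1.3
exact = np.sin(2 * np.pi * 0.02 * (n - 1.3))
print(np.max(np.abs(y[10:-10] - exact[10:-10])))              # small approximation error
```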
104

Electromagnetic induction spectroscopy for the detection of subsurface targets

Wei, Mu-Hsin 06 November 2012 (has links)
This thesis presents a robust method for estimating the relaxations of a metallic object from its electromagnetic induction (EMI) response. The EMI response of a metallic object can be accurately modeled by a sum of real decaying exponentials. However, it is difficult to obtain the model parameters from measurements when the number of exponentials in the sum is unknown or the terms are strongly correlated. Traditionally, the relaxation constants are estimated by a nonlinear iterative search that often leads to unsatisfactory results. An effective EMI modeling technique is developed by first linearizing the problem through enumeration and then solving the linearized model using a sparsity-regularized minimization. This approach overcomes several long-standing challenges in EMI signal modeling, including finding the unknown model order as well as handling the ill-posed nature of the problem. The resulting algorithm does not require a good initial guess to converge to a satisfactory solution. This new modeling technique is extended to incorporate multiple measurements in a single parameter estimation step. More accurate estimates are obtained by exploiting an invariance property of the EMI response, which states that the relaxation frequencies do not change for different locations and orientations of a metallic object. Using tests on synthetic data and laboratory measurements of known targets, the proposed multiple-measurement method is shown to provide accurate and stable estimates of the model parameters. The ability to estimate the relaxation constants of targets enables more robust subsurface target discrimination using the relaxations. A simple relaxation-based subsurface target detection algorithm is developed to demonstrate the potential of the estimated relaxations.
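The modeling idea described above (linearize by enumerating candidate relaxations, then solve a sparsity-regularized problem) can be sketched as follows. The grid of candidate relaxation times, the nonnegative ISTA solver and all parameter values are illustrative assumptions rather than the thesis's algorithm.

```python
import numpy as np

def estimate_relaxations(t, y, taus, lam=1e-3, iters=5000):
    """Sparsity-regularized estimate of decay constants: enumerate a grid of candidate
    relaxation times `taus`, build a dictionary of decaying exponentials, and solve a
    nonnegative L1-regularized least-squares problem with proximal gradient (ISTA)."""
    D = np.exp(-t[:, None] / taus[None, :])           # dictionary: one exponential per column
    step = 1.0 / np.linalg.norm(D, 2) ** 2            # 1 / Lipschitz constant of the gradient
    a = np.zeros(len(taus))
    for _ in range(iters):
        grad = D.T @ (D @ a - y)
        a = np.maximum(a - step * (grad + lam), 0.0)  # nonnegative soft-threshold
    return a

# Synthetic test: two decays at tau = 2e-4 s and 1.5e-3 s (arbitrary illustrative values).
t = np.linspace(0, 5e-3, 400)
y = 1.0 * np.exp(-t / 2e-4) + 0.5 * np.exp(-t / 1.5e-3)
taus = np.logspace(-5, -2, 60)                        # candidate relaxation times
a = estimate_relaxations(t, y, taus)
print(taus[a > 0.05])                                 # nonzero weights cluster near 2e-4 and 1.5e-3
```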
105

Dynamics and correlations in sparse signal acquisition

Charles, Adam Shabti 08 June 2015 (has links)
One of the most important capabilities of engineered and biological systems is the ability to acquire and interpret information from the surrounding world accurately and on time-scales relevant to the tasks critical to system performance. This classical concept of efficient signal acquisition has been a cornerstone of signal processing research, spawning traditional sampling theorems (e.g. Shannon-Nyquist sampling), efficient filter designs (e.g. the Parks-McClellan algorithm), novel VLSI chipsets for embedded systems, and optimal tracking algorithms (e.g. Kalman filtering). Traditional techniques have made minimal assumptions about the signals being measured and interpreted, essentially only assuming a limited bandwidth. While these assumptions underpin the foundational works in signal processing, the recent ability to collect and analyze large datasets has allowed researchers to see that many important signal classes have much more regularity than finite bandwidth alone. One of the major advances of modern signal processing is to greatly improve on classical results by leveraging more specific signal statistics. Even by assuming very broad classes of signals, signal acquisition and recovery can be greatly improved in regimes where classical techniques are extremely pessimistic. One of the most successful signal assumptions to have gained popularity in recent years is the notion of sparsity. Under the sparsity assumption, the signal is assumed to be composed of a small number of atomic signals from a potentially large dictionary. This limit on the underlying degrees of freedom (the number of atoms used), as opposed to the ambient dimension of the signal, has allowed for improved signal acquisition, in particular when the number of measurements is severely limited. While techniques for leveraging sparsity have been explored extensively in many contexts, work in this regime typically concentrates on static measurement systems that yield static measurements of static signals. Many systems, however, have non-trivial dynamic components, either in the measurement system's operation or in the nature of the signal being observed. Due to the promising prior work leveraging sparsity for signal acquisition and the large number of dynamical systems and signals in important applications, it is critical to understand whether sparsity assumptions are compatible with dynamical systems. Therefore, this work seeks to understand how dynamics and sparsity can be used jointly in various aspects of signal measurement and inference. Specifically, this work looks at three different ways that dynamical systems and sparsity assumptions can interact. In terms of measurement systems, we analyze a dynamical neural network that accumulates signal information over time. We prove a series of bounds on the length of the input signal driving the network that can be recovered from the values at the network nodes [1-9]. We also analyze sparse signals that are generated via a dynamical system (i.e. a series of correlated, temporally ordered, sparse signals). For this class of signals, we present a series of inference algorithms that leverage both dynamics and sparsity information, improving the potential for signal recovery in a host of applications [10-19]. As an extension of dynamical filtering, we show how these dynamic filtering ideas can be expanded to the broader class of spatially correlated signals.
Specifically, we explore how sparsity and spatial correlations can improve the inference of material distributions and spectral super-resolution in hyperspectral imagery [20-25]. Finally, we analyze dynamical systems that perform optimization routines for sparsity-based inference. We analyze a networked system driven by a continuous-time differential equation and show that such a system is capable of recovering a large variety of sparse signal classes [26-30].
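As a hedged illustration of combining dynamics with sparsity in inference (one flavor of the dynamic filtering ideas referenced above, not necessarily the thesis's exact algorithms), the sketch below propagates the previous sparse estimate through an assumed dynamics model and uses it to reweight an L1-regularized recovery of the current frame; the identity dynamics and all parameter values are illustrative.

```python
import numpy as np

def ista_weighted_l1(A, y, w, lam, iters=300):
    """Solve min_x 0.5*||y - A x||^2 + lam * sum_i w_i |x_i| by proximal gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = x - step * A.T @ (A @ x - y)
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam * w, 0.0)
    return x

def dynamic_sparse_filter(A, F, Y, lam=0.02, eps=0.5):
    """Track a time-varying sparse signal: propagate the previous estimate through the
    dynamics F and use it to reweight the L1 penalty for the current measurements."""
    x_prev = np.zeros(A.shape[1])
    estimates = []
    for y in Y:                                   # Y: sequence of measurement vectors
        pred = F @ x_prev                         # dynamics-based prediction
        w = 1.0 / (np.abs(pred) + eps)            # small weight where the prediction is active
        x_prev = ista_weighted_l1(A, y, w, lam)
        estimates.append(x_prev)
    return np.array(estimates)

# Toy usage: a persistent sparse state observed through random Gaussian projections.
rng = np.random.default_rng(0)
n, m, T = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
F = np.eye(n)                                     # assumed dynamics: the support persists
x = np.zeros(n); x[[3, 20, 77]] = [1.0, -0.8, 0.6]
Y = [A @ x + 0.01 * rng.standard_normal(m) for _ in range(T)]
X_hat = dynamic_sparse_filter(A, F, Y)
print(np.argsort(np.abs(X_hat[-1]))[-3:])         # should recover indices 3, 20, 77
```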
106

Interactive Object Retrieval using Interpretable Visual Models

Rebai, Ahmed 18 May 2011 (has links) (PDF)
This thesis is an attempt to improve visual object retrieval by allowing users to interact with the system. Our solution lies in constructing an interactive system that allows users to define their own visual concept from a concise set of visual patches given as input. These patches, which represent the most informative clues of a given visual category, are trained beforehand with a supervised learning algorithm in a discriminative manner. Then, in order to specialize their models, users can give feedback on the model itself by choosing and weighting the patches they are confident of. The real challenge consists in how to generate concise and visually interpretable models. Our contribution rests on two points. First, in contrast to the state-of-the-art approaches that use bag-of-words, we propose embedding local visual features without any quantization, which means that each component of the high-dimensional feature vectors used to describe an image is associated with a unique and precisely localized image patch. Second, we suggest using regularization constraints in the loss function of our classifier to favor sparsity in the models produced. Sparsity is indeed preferable for concision (a reduced number of patches in the model) as well as for decreasing prediction time. To meet these objectives, we developed a multiple-instance learning scheme using a modified version of the BLasso algorithm. BLasso is a boosting-like procedure that behaves in the same way as Lasso (Least Absolute Shrinkage and Selection Operator). It efficiently regularizes the loss function with an additive L1-constraint by alternating between forward and backward steps at each iteration. The method we propose here is generic in the sense that it can be used with any local features or feature sets representing the content of an image region.
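A minimal sketch of a BLasso-style iteration as described above: fixed-size boosting (forward) steps, backward steps whenever shrinking an active coefficient lowers the L1-penalized loss, and gradual relaxation of the regularization level. This simplified version uses a plain squared loss and ignores the multiple-instance aspect of the thesis; the names, parameters and toy data are illustrative assumptions.

```python
import numpy as np

def blasso(X, y, eps=0.02, xi=1e-6, iters=400):
    """Simplified sketch of the BLasso iteration (in the spirit of Zhao & Yu):
    boosting-style forward steps of size eps, interleaved with backward steps whenever
    they decrease the L1-penalized loss, while lambda is relaxed along the path."""
    n, p = X.shape
    loss = lambda b: 0.5 * np.sum((y - X @ b) ** 2)
    beta = np.zeros(p)

    # Initial forward step and initial lambda.
    j0, s0 = min(((j, s) for j in range(p) for s in (eps, -eps)),
                 key=lambda js: loss(beta + js[1] * np.eye(p)[js[0]]))
    new = beta.copy(); new[j0] += s0
    lam = (loss(beta) - loss(new)) / eps
    beta = new

    for _ in range(iters):
        # Backward step: shrink one active coefficient toward zero by eps if that
        # decreases the penalized objective by more than the tolerance xi.
        best_back, best_delta = None, np.inf
        for j in np.flatnonzero(beta):
            cand = beta.copy()
            cand[j] -= eps * np.sign(beta[j])
            delta = loss(cand) - loss(beta) - lam * eps   # change in penalized loss
            if delta < best_delta:
                best_back, best_delta = cand, delta
        if best_back is not None and best_delta < -xi:
            beta = best_back
            continue

        # Forward step: the usual boosting move of size eps on the best coordinate.
        j, s = min(((j, s) for j in range(p) for s in (eps, -eps)),
                   key=lambda js: loss(beta + js[1] * np.eye(p)[js[0]]))
        cand = beta.copy(); cand[j] += s
        drop = loss(beta) - loss(cand)
        if drop < lam * eps + xi:         # forward step no longer pays for its L1 penalty:
            lam = drop / eps              # relax the regularization level
        beta = cand
        if lam <= 0:                      # reached the unregularized end of the path
            break
    return beta, lam

# Toy usage: a sparse linear model; BLasso should keep most coefficients at zero.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 20))
true = np.zeros(20); true[[2, 11]] = [1.5, -2.0]
y = X @ true + 0.05 * rng.standard_normal(80)
beta, lam = blasso(X, y)
print(np.round(beta, 2))
```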
107

Surface related multiple prediction from incomplete data

Herrmann, Felix J. January 2007 (has links)
Incomplete data, unknown source-receiver signatures and free-surface reflectivity represent challenges for the successful prediction and subsequent removal of multiples. In this paper, a new method is presented that tackles these challenges by combining what we know about wavefield (de-)focussing, via weighted convolutions/correlations, with recently developed curvelet-based recovery by sparsity-promoting inversion (CRSI). With this combination, we are able to leverage recent insights from wave physics towards a nonlinear formulation of the multiple-prediction problem that works for incomplete data and without detailed knowledge of the surface effects.
108

Seismic data processing with curvelets: a multiscale and nonlinear approach

Herrmann, Felix J. January 2007 (has links)
In this abstract, we present a nonlinear curvelet-based sparsity-promoting formulation of a seismic processing flow, consisting of the following steps: seismic data regularization and the restoration of migration amplitudes. We show that the curvelet's wavefront detection capability and invariance under the migration-demigration operator lead to a formulation that is stable under noise and missing data.
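Both abstracts above rest on sparsity-promoting inversion: recover transform-domain coefficients that explain incomplete data while remaining sparse. The sketch below uses iterative soft thresholding with an orthonormal DCT as an illustrative stand-in for the curvelet transform, applied to a single randomly decimated trace; it shows the principle only, not the papers' actual operators or parameters.

```python
import numpy as np
from scipy.fft import dct, idct

def sparsity_promoting_recovery(y, mask, lam=0.05, iters=300):
    """Recover a signal from incomplete samples by promoting sparsity in a transform
    domain, solved approximately with iterative soft thresholding (ISTA).
    The DCT stands in for the curvelet transform of the abstracts above."""
    x = np.zeros_like(y)                              # transform-domain coefficients
    for _ in range(iters):
        resid = mask * (idct(x, norm='ortho') - y)    # restriction operator R = mask
        x = x - dct(resid, norm='ortho')              # gradient step (operator has unit norm)
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # soft threshold
    return idct(x, norm='ortho')

# Toy usage: a smooth "trace" with roughly half of its samples missing at random.
rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
signal = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 10 * t / n)
mask = (rng.random(n) < 0.5).astype(float)
y = mask * signal
rec = sparsity_promoting_recovery(y, mask)
print(np.linalg.norm(rec - signal) / np.linalg.norm(signal))   # small relative error
```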
109

Near real-time estimation of the seismic source parameters in a compressed domain

Vera Rodriguez, Ismael A. Unknown Date
No description available.
110

Classification in high dimensional feature spaces / by H.O. van Dyk

Van Dyk, Hendrik Oostewald January 2009 (has links)
In this dissertation we developed theoretical models to analyse Gaussian and multinomial distributions. The analysis is focused on classification in high dimensional feature spaces and provides a basis for dealing with issues such as data sparsity and feature selection (for Gaussian and multinomial distributions, two frequently used models for high dimensional applications). A Naïve Bayesian philosophy is followed to deal with issues associated with the curse of dimensionality. The core treatment of Gaussian and multinomial models consists of finding analytical expressions for classification error performances. Exact analytical expressions were found for calculating error rates of binary class systems with Gaussian features of arbitrary dimensionality and using any type of quadratic decision boundary (except for degenerate paraboloidal boundaries). Similarly, computationally inexpensive (and approximate) analytical error rate expressions were derived for classifiers with multinomial models. Additional issues with regard to the curse of dimensionality that are specific to multinomial models (feature sparsity) were dealt with and tested on a text-based language identification problem for all eleven official languages of South Africa. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2009.
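The multinomial model and its feature-sparsity issue can be made concrete with a small text-based language identification example: a multinomial naive Bayes classifier over character trigram counts, where Laplace (add-alpha) smoothing keeps the log-likelihoods finite for trigrams never seen in a class. The tiny corpora, vocabulary and parameters below are illustrative assumptions, not the dissertation's data or exact models.

```python
import numpy as np
from collections import Counter

def char_trigram_counts(text, vocab):
    """Count character trigrams of `text` over a fixed vocabulary."""
    grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
    return np.array([grams[g] for g in vocab], dtype=float)

def train_multinomial_nb(docs, labels, vocab, alpha=1.0):
    """Multinomial naive Bayes with Laplace (add-alpha) smoothing, which keeps
    log-probabilities finite even for trigrams never seen in a class (feature sparsity)."""
    classes = sorted(set(labels))
    X = np.array([char_trigram_counts(d, vocab) for d in docs])
    log_prior = np.log(np.array([np.mean([l == c for l in labels]) for c in classes]))
    log_lik = []
    for c in classes:
        counts = X[[l == c for l in labels]].sum(axis=0) + alpha
        log_lik.append(np.log(counts / counts.sum()))
    return classes, log_prior, np.array(log_lik)

def predict(text, vocab, classes, log_prior, log_lik):
    x = char_trigram_counts(text, vocab)
    return classes[int(np.argmax(log_prior + log_lik @ x))]

# Toy usage with two of the official languages mentioned above (tiny, illustrative corpora).
docs = ["the cat sat on the mat", "she sells sea shells",
        "die kat sit op die mat", "sy verkoop seeskulpe"]
labels = ["en", "en", "af", "af"]
vocab = sorted({d[i:i + 3] for d in docs for i in range(len(d) - 2)})
model = train_multinomial_nb(docs, labels, vocab)
print(predict("the shells on the mat", vocab, *model))   # expected: "en"
```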
