491

Correction des effets de volume partiel en tomographie d'émission / Partial volume effect correction in emission tomography

Le Pogam, Adrien 29 April 2010 (has links)
Partial Volume Effects (PVE) designate the blur commonly found in nuclear medicine images, and this PhD work is dedicated to their correction, with the objective of qualitative and quantitative improvement of such images. PVE arise from the limited spatial resolution of functional imaging with either Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT). They can be defined as a signal loss in tissues of size similar to the Full Width at Half Maximum (FWHM) of the point spread function (PSF) of the imaging device. In addition, PVE induce activity cross-contamination between adjacent structures with different tracer uptakes, which can lead to under- or over-estimation of the real activity of the analyzed regions. Various methodologies currently exist to compensate or even correct for PVE, and they may be classified by their place in the processing chain (before, during or after the image reconstruction process) as well as by their dependency on co-registered anatomical images of higher spatial resolution, for instance Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). The voxel-based, post-reconstruction approach was chosen for this work to avoid region-of-interest definition and dependency on the proprietary reconstruction developed by each manufacturer. Two different contributions were carried out: the first is a multi-resolution methodology in the wavelet domain that uses the higher-resolution details of a co-registered anatomical image associated with the functional dataset to be corrected; the second improves iterative deconvolution-based methodologies with tools such as directional wavelets and their curvelet extensions. These approaches were applied and validated on synthetic, simulated and clinical images, with neurology and oncology applications in mind. Finally, as currently available PET/CT scanners incorporate more and more spatial-resolution corrections in their reconstruction algorithms, we compared such approaches in SPECT and PET with the iterative deconvolution methodology developed in this work.
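The post-reconstruction, voxel-based correction described above is, at its core, a deconvolution of the reconstructed image by the scanner's PSF. A minimal sketch of that idea (not the thesis's algorithm: the Gaussian PSF model, FWHM value and iteration count are illustrative assumptions) is the Lucy-Richardson iteration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lucy_richardson(image, fwhm_vox, n_iter=20, eps=1e-8):
    """Deconvolve `image` assuming a stationary, isotropic Gaussian PSF."""
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))    # FWHM -> sigma
    estimate = image.copy()
    for _ in range(n_iter):
        blurred = gaussian_filter(estimate, sigma)           # forward blur H x
        ratio = image / np.maximum(blurred, eps)             # y / (H x)
        estimate = estimate * gaussian_filter(ratio, sigma)  # x * H^T(ratio)
    return estimate

# Example: a synthetic hot cube blurred by a PSF of 3 voxels FWHM.
img = np.zeros((48, 48, 48)); img[20:28, 20:28, 20:28] = 1.0
observed = gaussian_filter(img, 3.0 / (2.0 * np.sqrt(2.0 * np.log(2.0))))
restored = lucy_richardson(observed, fwhm_vox=3.0)
```

The Gaussian kernel is symmetric, so the adjoint blur in the correction step is the same filter; the wavelet and curvelet tools developed in the thesis act as regularizers on top of such an iteration.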
492

Análise de formas usando wavelets em grafos / Shape analysis using wavelets on graphs

Jorge de Jesus Gomes Leandro 11 February 2014 (has links)
This document describes the PhD thesis entitled Shape Analysis using Wavelets on Graphs. The theme is related to Computer Vision, particularly to the topics of shape characterization, description and classification. Among the methods in the extensive literature on 2D shape analysis, graph-based methods with arbitrary, irregular topologies have a smaller presence; the contributions of this thesis aim at filling this gap. A methodology based on the following pipeline is proposed: (i) sampling of the shape, (ii) structuring the samples in graphs, (iii) a base function defined on the vertices, (iv) multiscale analysis of the graph by means of the Spectral Graph Wavelet Transform, (v) feature extraction from the wavelet transform, and (vi) classification. For each of stages (i), (ii), (iii), (v) and (vi) there are numerous possible approaches, and one challenge is to find a combination, among the many alternatives, that yields an effective pipeline for our purposes. In particular, for stage (iii), given a graph representing a shape, the challenge is to identify a feature associated with the samples that can be defined over the graph vertices. This feature should capture the underlying influence of the combinatorial structure of the entire network on each vertex, at multiple scales; the Spectral Graph Wavelet Transform reveals this underlying influence at each vertex. Results are presented from experiments on 2D shapes from benchmarks well known in the literature, as well as from astronomy applications to the analysis of galaxy shapes from the Sloan Digital Sky Survey, both unlabeled and labeled by the Galaxy Zoo 2 project, demonstrating the success of the proposed technique compared with classic approaches such as the 2D Fourier Transform and the 2D Continuous Wavelet Transform.
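Stage (iv) has a compact definition: filter the graph Fourier coefficients of the vertex function with a band-pass kernel at several scales. A minimal sketch for small graphs (assumptions: a full eigendecomposition is affordable, and the toy kernel g(x) = x e^{-x} stands in for a properly designed wavelet kernel; practical implementations use Chebyshev polynomial approximations instead):

```python
import numpy as np

def sgwt(adjacency, f, scales):
    """Spectral graph wavelet coefficients of vertex signal f, one row per scale."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency                 # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(laplacian)             # graph spectrum
    f_hat = U.T @ f                                # graph Fourier transform of f
    g = lambda x: x * np.exp(-x)                   # toy band-pass wavelet kernel
    # W_f(s, n) = sum_l g(s * lam_l) * f_hat_l * u_l(n)
    return np.stack([U @ (g(s * lam) * f_hat) for s in scales])

# Example: impulse at vertex 0 of a 4-cycle graph, three scales.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
coeffs = sgwt(A, np.array([1.0, 0.0, 0.0, 0.0]), scales=[0.5, 1.0, 2.0])
```

Each row of `coeffs` describes how the combinatorial structure around every vertex responds at one scale, which is the kind of multiscale vertex feature that stage (v) extracts descriptors from.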
493

Processamento Inteligente de Sinais de Pressão e Temperatura Adquiridos Através de Sensores Permanentes em Poços de Petróleo / Intelligent Processing of Pressure and Temperature Signals Acquired through Permanent Sensors in Oil Wells

Pires, Paulo Roberto da Motta 06 February 2012 (has links)
Originally aimed at operational objectives, the continuous measurement of bottomhole pressure and temperature recorded by permanent downhole gauges (PDG) finds vast applicability in reservoir management: it contributes to the monitoring of well performance and makes it possible to estimate reservoir parameters over the long term. However, notwithstanding its unquestionable value, PDG data are characterized by a large noise content, and the presence of outliers among valid signal measurements is a major problem as well. In this work, the initial treatment of PDG pressure and temperature signals is addressed, based on curve smoothing, self-organizing maps and the discrete wavelet transform. Additionally, a system based on the coupling of fuzzy clustering with feed-forward neural networks is proposed for detecting, within the long record history, the transients that are relevant for analysis. The obtained results were considered quite satisfactory for offshore wells and met real requirements for the utilization of PDG-recorded signals.
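As a hedged illustration of the wavelet step (not the thesis implementation: the db4 wavelet, decomposition depth and universal-threshold rule are assumptions for the example), denoising a PDG pressure record with the PyWavelets package could look like this:

```python
import numpy as np
import pywt

def denoise_pdg(signal, wavelet="db4", level=5):
    """Soft-threshold the detail coefficients of a 1-D record."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # noise scale estimated from the finest detail band (robust MAD estimate)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))     # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Example: synthetic bottomhole pressure with a shut-in-like step and noise.
t = np.linspace(0.0, 1.0, 4096)
pressure = 250.0 - 5.0 * (t > 0.5) + 0.5 * np.random.randn(t.size)
smoothed = denoise_pdg(pressure)
```

The step transient survives thresholding far better than it would a plain moving average, which is one reason wavelet smoothing pairs naturally with the transient-detection stage.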
494

Predictability of Nonstationary Time Series using Wavelet and Empirical Mode Decomposition Based ARMA Models

Lanka, Karthikeyan January 2013 (has links) (PDF)
The idea behind time series forecasting techniques is that the past carries information about the future. How the information encoded in the past can be interpreted, and then used to extrapolate future events, constitutes the crux of time series analysis and forecasting. Several methods, such as qualitative techniques (e.g., the Delphi method), causal techniques (e.g., least squares regression) and quantitative techniques (e.g., smoothing methods, time series models), have been developed in the past; the common concept is to establish a model, either theoretically or mathematically, from past observations and to estimate the future from it. Of all the models, time series methods such as the autoregressive moving average (ARMA) process have gained popularity because of their simplicity of implementation and the accuracy of their forecasts. However, these models were formulated based on certain properties that a time series is assumed to possess. Classical decomposition techniques were developed to supplement the requirements of time series models: they define a time series in terms of simple patterns called trend, cyclical and seasonal patterns, along with noise. The idea of decomposing a time series into component patterns, modeling each component with a forecasting process and finally combining the component forecasts yielded superior performance over standard forecasting techniques. All these methods involve the basic principle of moving average computation. The classical decomposition methods are disadvantageous, however, in that they impose a fixed number of components on any time series and produce data-independent decompositions. Moreover, during moving average computation the edges of the time series may not be modeled properly, which affects long-range forecasting. These issues are addressed by more efficient and advanced decomposition techniques such as wavelets and Empirical Mode Decomposition (EMD). Wavelets and EMD are among the most innovative concepts in time series analysis and are aimed at processing nonlinear and nonstationary time series. Hence, this research was undertaken to ascertain the predictability of nonstationary time series using wavelet- and EMD-based ARMA models. The development of wavelets builds on the concepts of Fourier analysis and the windowed Fourier transform. Accordingly, the necessity for wavelets is presented first, followed by a discussion of the advantages they provide. Wavelets were originally defined for continuous time series; later, to match real-world requirements, wavelet analysis was defined in the discrete setting as the Discrete Wavelet Transform (DWT). The current thesis uses the DWT for time series decomposition. A detailed discussion of the theory behind time series decomposition is presented, followed by a mathematical description of decomposition using the DWT, including the decomposition algorithm. EMD belongs to the same class as wavelets with respect to time series decomposition; it grew out of the fact that most time series in nature contain multiple frequencies, leading to the simultaneous existence of different scales.
This method, compared with standard Fourier analysis and wavelet algorithms, has greater scope of adaptation in processing various nonstationary time series. It decomposes any complicated time series into a small, finite number of empirical modes (Intrinsic Mode Functions, IMFs), where each mode carries information about the original time series. The algorithm of time series decomposition using EMD is presented after its conceptual elucidation. Then the proposed forecasting algorithm coupling EMD with an ARMA model is presented, which also accounts for the number of time steps ahead for which forecasting is to be performed. To test the wavelet- and EMD-based algorithms on time series with non-stationarity, streamflow data from the USA and rainfall data from India are used: four non-stationary streamflow sites (USGS data resources) with monthly total volumes and two non-stationary gridded rainfall sites (IMD) with monthly total rainfall. Predictability is checked in two scenarios, six-months-ahead and twelve-months-ahead forecasts. The Normalized Root Mean Square Error (NRMSE) and the Nash-Sutcliffe Efficiency Index (Ef) are used to evaluate performance. Based on these measures, the results indicate that the wavelet-based analyses generate good variations for six-months-ahead forecasts, maintaining harmony with the observed values at most sites. Although both methods capture the minima of the time series effectively for both six- and twelve-months-ahead predictions, better forecasts are obtained with the wavelet-based method than with the EMD-based method for twelve-months-ahead predictions. It is therefore inferred that the wavelet-based method has better prediction capabilities, despite some limitations of time series methods and of the manner in which the decomposition takes place. Finally, the study concludes that the wavelet-based time series algorithm could be used to model events such as droughts with reasonable accuracy, and modifications that could extend its applicability to other areas of hydrology are suggested.
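The decompose-model-recombine idea can be sketched in a few lines (assumptions: the third-party PyEMD and statsmodels packages, and a fixed ARMA(2,1) order for every mode, chosen purely for illustration, whereas order selection would normally be done per component):

```python
import numpy as np
from PyEMD import EMD                       # pip install EMD-signal
from statsmodels.tsa.arima.model import ARIMA

def emd_arma_forecast(series, horizon=6, order=(2, 0, 1)):
    """Decompose into IMFs, fit an ARMA model per mode, sum the forecasts."""
    imfs = EMD()(series)                    # intrinsic mode functions (+ residue)
    forecast = np.zeros(horizon)
    for imf in imfs:                        # model each simpler component ...
        fit = ARIMA(imf, order=order).fit()
        forecast += fit.forecast(steps=horizon)   # ... and recombine
    return forecast

# Example: synthetic monthly streamflow with seasonality, trend and noise.
t = np.arange(360)
flow = 100.0 + 30.0 * np.sin(2 * np.pi * t / 12) + 0.1 * t \
       + 5.0 * np.random.randn(t.size)
print(emd_arma_forecast(flow, horizon=6))   # six-months-ahead forecast
```

Each IMF is closer to stationary than the raw series, which is what lets a fixed-order ARMA model fit it acceptably.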
495

Wave Transmission Characteristics in Honeycomb Sandwich Structures using the Spectral Finite Element Method

Murthy, MVVS January 2014 (has links) (PDF)
Wave propagation is a phenomenon resulting from high transient loadings, where the duration of the load is in the microsecond range. In the aerospace and spacecraft industries it is important to understand high-frequency behavior, as it aids structural health monitoring and wave transmission/attenuation for vibration and noise reduction. The wave propagation problem can be approached by the conventional Finite Element Method (FEM); but at higher frequencies, the wavelengths being small, the size of the finite element must be reduced to capture the response accurately, increasing the number of equations to be solved and leading to high computational cost. Alternatively, such problems are handled in the frequency domain using Fourier transforms; one such method is the Spectral Finite Element Method (SFEM). This method was first introduced by Doyle for the isotropic case and later popularized by Gopalakrishnan, who developed special-purpose elements for structural diagnostics of inhomogeneous materials. The general approach is that the partial differential wave equations are reduced to a set of ordinary differential equations (ODEs) by transforming them to another space (the transformed domain, say the Fourier domain). The reduced ODEs are usually solved exactly, and their solution gives the dynamic shape functions. Since the interpolating functions are exact solutions of the governing differential equations, the exact elemental dynamic stiffness matrix is derived. Thus, in the absence of any discontinuities, one element is sufficient to model a 1-D waveguide of any length. The elemental stiffness matrices can be assembled to obtain the global matrix as in FEM, but in the transformed space; after obtaining the solution, the original-domain responses are recovered by the inverse transform. The Fourier-transform-based spectral finite element (FSFE) has the inherent aliasing problem that is persistent in the application of Fourier series/Fourier transforms; this is alleviated by using an additional throw-off element and/or introducing slight damping into the system. More recently, the wavelet-transform-based spectral finite element (WSFE) has been formulated, which alleviates the aliasing problem but has a limitation in obtaining the frequency characteristics: the group speeds are accurate only up to a certain fraction of the Nyquist (central) frequency. In this thesis, Laplace-transform-based spectral finite elements (LSFE) are developed for sandwich members. The advantages and limitations of the different transforms within the spectral finite element framework are presented in detail in Chapter 1. Sandwich structures are used in the spacecraft industry due to their high stiffness-to-weight ratio. Many issues in the design and analysis of sandwich structures are discussed in well-known books (by Zenkert, Beitzer). Typically the main load-bearing structures are modeled as beams and plates. Plate structures with kh < 1 are analysed based on the Kirchhoff plate theory/Classical Plate Theory (CPT); when the bending wavelength is small compared to the plate thickness, the effects of shear deformation and rotary inertia need to be included, where k is the wavenumber and h is the thickness of the plate.
Many works on wave propagation in sandwich structures have been published, covering propagation in infinite sandwich structures and giving the complete dispersion relation with no restriction on frequency or wavelength. More recently, an exact analytical solution for a simply supported sandwich plate has been derived. Comparison of the dispersion curves obtained with the exact theory (3D formulation of the theory of elasticity) and with simplified theories (2D formulation as a generalization of the Timoshenko theory) on an infinite domain shows that the simplified theory can be reliably used to assess the waveguide properties of a sandwich plate in the frequency range of interest. To approach problems on finite domains and implement them in general-purpose codes, a finite number of degrees of freedom is enforced. Displacement-based theories provide the flexibility of assuming different kinematic deformations for these problems; many of them adopt the Equivalent Single Layer (ESL) approach, which captures the global behavior with relative ease. Chapter 2 presents the Laplace spectral finite element for thick beams based on the First-order Shear Deformation Theory (FSDT). The effect of different choices of the real part of the Laplace variable is demonstrated, and it is shown that the real part acts as a numerical damping factor. The spectrum and dispersion relations are obtained and their use is demonstrated by an example. For sandwich members based on FSDT, an appropriate choice of the correction factor, which arises from the inconsistency between the kinematic hypothesis and the desired accuracy, is presented. Finally, the response obtained with the element is validated against experimental results. Under high shock loading, the core flexibility induces predominant local effects that can lead to debonding of the face sheets. The ESL theories mentioned above cannot capture these effects because they compute equivalent through-the-thickness section properties. Higher-order theories, such as layer-wise theories, are therefore required to capture the local behavior. One such theory for sandwich panels is the Higher-order Sandwich Plate Theory (HSaPT), in which the in-plane stress in the core is neglected; it gives a good approximation for sandwich construction with soft cores. Including the axial inertia terms of the core does not yield a constant shear stress distribution through the height of the core, and hence the Extended Higher-order Sandwich Plate Theory (EHSaPT) has more recently been proposed. The LSFE based on this theory has been formulated and is presented in Chapter 4. Detailed 3D orthotropic properties of typical sandwich construction are considered, and the core-compressibility effect on local behavior under high shock loading is clearly brought out. As detailed local behavior is sought, the number of degrees of freedom per element is high, and the specific need for such a theory compared with the ESL theories is discussed. Chapter 4 presents the spectral finite element for plates based on FSDT. Here, a multi-transform method is used to solve the partial differential equations of the plate. The effect of shear deformation is brought out in the spectrum and dispersion relation plots. Response results obtained with the formulated element are compared and validated against several experimental results.
Structures are generally built up by connecting many different sub-structures. The connecting members, called joints, play a very important role in wave transmission/attenuation. These joints are usually modeled as rigid; in reality they are flexible, exhibit non-linear characteristics, and offer high damping to the energy flow in the connected structures. Chapter 5 presents the attenuation and transmission of wave energy using the power flow approach for rigid joints in different configurations. A flexible spectral joint model is then developed, and the transmission/attenuation across flexible joints is studied. The thesis ends with conclusions and the future scope based on the developments reported.
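The "one element per waveguide" property follows from using the exact frequency-domain solution as the shape function. A minimal sketch for the simplest waveguide, a rod in the Fourier domain (material values are illustrative, and the small damping term merely mimics the role that the real part of the Laplace variable plays in the LSFE formulation):

```python
import numpy as np

def rod_dynamic_stiffness(omega, E, A, rho, L, eta=0.01):
    """Exact 2x2 dynamic stiffness of an elastic rod element at frequency omega."""
    k = omega * np.sqrt(rho / E) * (1.0 - 1j * eta)   # lightly damped wavenumber
    kL = k * L
    c = (E * A * k) / np.sin(kL)
    return c * np.array([[np.cos(kL), -1.0],
                         [-1.0, np.cos(kL)]], dtype=complex)

# Example: tip response spectrum of a 1 m aluminium rod under unit tip force.
E, A, rho, L = 70e9, 1e-4, 2700.0, 1.0
omegas = 2 * np.pi * np.linspace(100.0, 50e3, 500)
tip = [np.linalg.solve(rod_dynamic_stiffness(w, E, A, rho, L),
                       np.array([0.0, 1.0]))[1] for w in omegas]
```

A single element spans the full rod at every frequency; a conventional FEM mesh at roughly ten elements per wavelength would need on the order of a hundred elements at the upper end of this band.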
496

Generalized Analytic Signal Construction and Modulation Analysis

Venkitaraman, Arun January 2013 (has links) (PDF)
This thesis deals with generalizations of the analytic signal (AS) construction proposed by Gabor. Functional extensions of the fractional Hilbert transform (FrHT) are proposed, using which families of analytic signals are obtained. The construction is further applied in the design of a secure communication scheme. A demodulation scheme is developed based on the generalized AS, motivated by perceptual experiments in binaural hearing. Demodulation is achieved using a signal and its arbitrarily phase-shifted version, which in turn translates to demodulation using a pair of flat-top bandpass filters that form an FrHT pair. A new family of wavelets based on the popular gammatone auditory model is proposed and is shown to lead to a good characterization of singularities/transients in a signal. The allied problems of computing smooth amplitude, phase and frequency modulations from the AS, constructing FrHT pairs of wavelets, and fitting the temporal envelope of transient audio signals are also addressed.
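One standard definition of the FrHT that the thesis generalizes is a pure spectral phase shifter, and it already shows how a signal plus an arbitrarily phase-shifted copy supports envelope demodulation. A sketch under that definition (the functional extensions of the thesis go beyond it):

```python
import numpy as np

def frht(x, alpha):
    """Fractional Hilbert transform: multiply spectrum by exp(-i*alpha*sign(w))."""
    X = np.fft.fft(x)
    w = np.fft.fftfreq(x.size)
    return np.fft.ifft(X * np.exp(-1j * alpha * np.sign(w))).real

# alpha = pi/2 recovers the classical Hilbert transform; the signal and its
# shifted copy form a quadrature pair from which the envelope follows.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
x = (1.0 + 0.5 * np.cos(2 * np.pi * 5 * t)) * np.cos(2 * np.pi * 100 * t)
x_shift = frht(x, np.pi / 2)
envelope = np.sqrt(x**2 + x_shift**2)       # recovered AM envelope
```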
497

Analysis of Local Field Potential and Gamma Rhythm Using Matching Pursuit Algorithm

Chandran, Subash K S January 2016 (has links) (PDF)
Signals recorded from the brain often show rhythmic patterns at different frequencies, which are tightly coupled to the external stimuli as well as the internal state of the subject. These signals also have transient structures related to spiking or the sudden onset of a stimulus, with durations not exceeding tens of milliseconds. Further, brain signals are highly non-stationary because both behavioral state and external stimuli can change over a short time scale. It is therefore essential to study brain signals using techniques that can represent both rhythmic and transient components of the signal. In Chapter 2, we describe a multi-scale decomposition technique based on an over-complete dictionary, called matching pursuit (MP), and show that it is able to capture both the sharp stimulus-onset transient and the sustained gamma rhythm in local field potentials recorded from the primary visual cortex. The gamma rhythm (30 to 80 Hz), often associated with high-level cortical functions, has been proposed to provide a temporal reference frame ("clock") for spiking activity, for which it should have minimal center-frequency variation and consistent phase for extended durations. However, recent studies have proposed that gamma occurs in short bursts and therefore cannot act as a reference. In Chapter 3, we propose a gamma duration estimator based on the matching pursuit algorithm, which is tested on synthetic brain signals and found to estimate gamma duration efficiently. Applying this algorithm to real data from awake monkeys, we show that the median gamma duration is more than 330 ms, which could be long enough to support some cortical computations.
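Matching pursuit itself is a short greedy loop over an over-complete dictionary. A bare-bones sketch (a generic unit-norm dictionary is assumed; the studies above use large Gabor dictionaries and read gamma duration off the time support of the selected atoms):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedily decompose `signal` over unit-norm columns of `dictionary`."""
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_atoms):
        corr = dictionary.T @ residual           # projection on every atom
        best = int(np.argmax(np.abs(corr)))      # most correlated atom wins
        atoms.append(best)
        coeffs.append(corr[best])
        residual -= corr[best] * dictionary[:, best]
    return atoms, coeffs, residual

# Example: Gabor-like atoms of one frequency and several durations.
n, t = 256, np.arange(256)
D = np.stack([np.exp(-0.5 * ((t - 128) / s) ** 2) * np.cos(2 * np.pi * 0.2 * t)
              for s in (4, 8, 16, 32, 64)], axis=1)
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
sig = 3.0 * D[:, 2] + 0.1 * np.random.randn(n)
print(matching_pursuit(sig, D)[0])               # atom 2 should be picked first
```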
498

Modèles de covariance pour l'analyse et la classification de signaux électroencéphalogrammes / Covariance models for electroencephalogram signal analysis and classification

Spinnato, Juliette 06 July 2015 (has links)
This thesis is set in the context of analyzing and classifying electroencephalogram (EEG) signals with discriminant analysis methods. These multi-sensor signals, which are by nature strongly correlated spatially and temporally, are considered in the time-frequency domain. In particular, we focus on low-frequency evoked-potential-type signals (ERPs), which are well represented in the wavelet domain. We therefore consider signals represented by multi-scale coefficients, with a matrix structure of electrodes × coefficients. EEG signals are regarded as a mixture of the activity of interest to be extracted and the spontaneous activity (or "background noise"), which largely dominates. The main problem is to distinguish signals arising from different experimental conditions (classes). In the binary case, we focus on the probabilistic approach to discriminant analysis, and Gaussian mixture models are considered, describing the signals in each class in terms of fixed (mean) and random components. The latter, characterized by its covariance matrix, makes it possible to model different sources of variability. The estimation of this matrix (and of its inverse) is essential to the implementation of discriminant analysis and can deteriorate with high-dimensional data and/or small learning samples, the application framework of this thesis. We are interested in alternatives based on the definition of particular covariance model(s) that reduce the number of parameters to estimate.
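The point of a structured covariance model is easiest to see in a toy version (this is not the thesis's models: shrinkage toward a diagonal target with a fixed, hypothetical weight stands in for the parameter-reducing structures it develops):

```python
import numpy as np

def shrunk_cov(X, lam=0.5):
    """Shrink the sample covariance toward its diagonal to keep it invertible."""
    S = np.cov(X, rowvar=False)
    return (1.0 - lam) * S + lam * np.diag(np.diag(S))

def llr(x, mean0, mean1, cov):
    """Gaussian discriminant score; > 0 favors class 1 under equal priors."""
    P = np.linalg.inv(cov)
    return (x - mean0) @ P @ (x - mean0) - (x - mean1) @ P @ (x - mean1)

# Example: 20 trials per class of 50-dimensional vectorized
# electrodes-by-coefficients features (fewer trials than dimensions, so the
# raw sample covariance is singular but the shrunk one is not).
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, (20, 50))
X1 = rng.normal(0.3, 1.0, (20, 50))
cov = shrunk_cov(np.vstack([X0, X1]))
score = llr(X1[0], X0.mean(axis=0), X1.mean(axis=0), cov)
```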
499

Diagnosis of electric induction machines in non-stationary regimes working in randomly changing conditions

Vedreño Santos, Francisco Jose 02 December 2013 (has links)
Traditionally, fault detection in electrical machines has relied on the Fast Fourier Transform, since most faults can be reliably diagnosed with it when the machines operate in steady-state conditions for a reasonable time interval. However, for applications in which machines operate under fluctuating load and speed (non-stationary conditions), such as wind turbines, the Fast Fourier Transform must be replaced by other techniques. This thesis develops a new methodology for the diagnosis of squirrel-cage and wound-rotor induction machines operating in non-stationary conditions, based on the analysis of the fault components of the currents in the slip-frequency plane. The technique is applied to the diagnosis of stator asymmetries, rotor asymmetries and mixed eccentricity. Diagnosing electrical machines in the slip-frequency domain gives the methodology a universal character, since it can diagnose machines regardless of their characteristics, of how the machine speed varies, and of the operating mode (motor or generator). The development of the methodology comprises the following stages: (i) Characterization of the evolution of the stator-asymmetry, rotor-asymmetry and mixed-eccentricity fault components for squirrel-cage and wound-rotor induction machines as a function of speed (slip) and of the supply frequency of the grid to which the machine is connected. (ii) Given the importance of signal processing, an introduction to its basic concepts is provided before focusing on current signal-processing techniques for the diagnosis of electrical machines. (iii) Extraction of the fault components using three different filtering techniques: filters based on the Discrete Wavelet Transform, filters based on the Wavelet Packet Transform, and a new filtering technique proposed in this thesis, Spectral Filtering. The first two extract the fault components in the time domain, while the new technique performs the extraction in the frequency domain. (iv) In some cases, the extraction of the fault components requires shifting their frequency, which is done with two techniques: the frequency-shifting theorem and the Hilbert transform. (v) Unlike previously developed techniques, the proposed methodology does not rely exclusively on computing the energy of the fault component; it also studies the evolution of its instantaneous frequency, computed with two different techniques (the Hilbert transform and the Teager-Kaiser operator), against slip. Plotting the instantaneous frequency against slip eliminates the possibility of false-positive diagnoses, improving diagnostic accuracy and quality; it also enables qualitative diagnoses that are fast and computationally light.
(vi) Finally, given the importance of automation in industrial processes, and to avoid the possible divergence inherent in qualitative diagnosis, three objective diagnostic parameters are developed: the energy parameter, the similarity coefficient and the regression parameters. The energy parameter quantifies the severity of the fault by its value and is computed in both the time domain and the frequency domain (a consequence of extracting the fault components in the frequency domain). The similarity coefficient and the regression parameters are objective parameters that make it possible to discard false-positive diagnoses, increasing the robustness of the proposed methodology. The proposed diagnosis methodology is validated experimentally for stator and rotor asymmetries and for mixed eccentricity in squirrel-cage and wound-rotor induction machines fed from the grid and from frequency converters, under stochastic non-stationary conditions. / Vedreño Santos, FJ. (2013). Diagnosis of electric induction machines in non-stationary regimes working in randomly changing conditions [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34177
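Stage (v) can be illustrated compactly. The sketch below (a synthetic chirp stands in for an extracted fault component; the sampling rate and signal are assumptions) computes the instantaneous frequency both ways named in the text:

```python
import numpy as np
from scipy.signal import hilbert

fs = 5000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
comp = np.cos(2 * np.pi * (40.0 * t + 10.0 * t**2))   # chirp: 40 Hz -> 80 Hz

# (a) Hilbert transform: derivative of the unwrapped analytic phase.
phase = np.unwrap(np.angle(hilbert(comp)))
f_hilbert = np.gradient(phase) * fs / (2.0 * np.pi)

# (b) Teager-Kaiser energy operator (discrete DESA-1-style estimate).
def tk(x):
    return x[1:-1] ** 2 - x[:-2] * x[2:]

psi_x = tk(comp)                    # energy of the component
psi_y = tk(np.diff(comp))           # energy of its first difference
cos_w = 1.0 - psi_y / (2.0 * np.maximum(psi_x[:-1], 1e-12))
f_teager = np.arccos(np.clip(cos_w, -1.0, 1.0)) * fs / (2.0 * np.pi)
```

Plotting either estimate against slip (here, against time for the synthetic chirp) gives the trajectory whose shape the methodology checks to reject false positives.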
500

On the design of fast and efficient wavelet image coders with reduced memory usage

Oliver Gil, José Salvador 06 May 2008 (has links)
Image compression is of great importance in multimedia systems and applications because it drastically reduces bandwidth requirements for transmission and memory requirements for storage. Although earlier standards for image compression were based on the Discrete Cosine Transform (DCT), a more recently developed mathematical technique, the Discrete Wavelet Transform (DWT), has been found to be more efficient for image coding. Despite the improvements in compression efficiency, wavelet image coders significantly increase memory usage and complexity compared with DCT-based coders. A major reason for the high memory requirements is that the usual algorithm to compute the wavelet transform requires the entire image to be in memory. Although some proposals reduce the memory usage, they present problems that hinder their implementation. In addition, some wavelet image coders, like SPIHT (which has become a benchmark for wavelet coding), always need to hold the entire image in memory. Regarding complexity, SPIHT can be considered quite complex because it performs bit-plane coding with multiple image scans. The wavelet-based JPEG 2000 standard is more complex still, because it improves coding efficiency through time-consuming methods such as an iterative optimization algorithm based on the Lagrange multiplier method and high-order context modeling. In this thesis, we aim to reduce memory usage and complexity in wavelet-based image coding while preserving compression efficiency. To this end, a run-length encoder and a tree-based wavelet encoder are proposed. In addition, a new algorithm to efficiently compute the wavelet transform is presented. This algorithm achieves low memory consumption using line-by-line processing, and it employs recursion to automatically schedule the order in which the wavelet transform is computed, solving synchronization problems that have not been tackled by previous proposals. The proposed encode / Oliver Gil, JS. (2006). On the design of fast and efficient wavelet image coders with reduced memory usage [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1826
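The building block of such a line-based transform is a lifting step that only ever touches a few rows at a time. A sketch of one level of the LeGall 5/3 lifting scheme on a single line (this is not the thesis's scheduling algorithm, whose recursive buffer management across levels is the actual contribution):

```python
import numpy as np

def lifting_53_forward(line):
    """One level of the integer 5/3 lifting DWT applied in place to a 1-D line."""
    x = line.astype(np.int64)
    even, odd = x[0::2], x[1::2]                  # views: lifting works in place
    # predict step: detail = odd - floor((left + right) / 2)
    odd -= (even + np.append(even[1:], even[-1])) >> 1
    # update step: approx = even + floor((d_left + d_right + 2) / 4)
    even += (np.append(odd[0], odd[:-1]) + odd + 2) >> 2
    return even, odd                              # low-pass and high-pass halves

lo, hi = lifting_53_forward(np.arange(16) % 7)
```

Because each output sample depends on at most two neighbors per step, a full 2-D decomposition level needs only a handful of buffered image lines rather than the whole frame, which is the property a line-by-line scheduler exploits.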
