141

Investigation on Gauss-Markov Image Modeling

You, Jhih-siang 30 August 2006 (has links)
Image modeling is a foundation for many image processing applications. The compound Gauss-Markov (CGM) image model has proven useful in the restoration of natural images, whereas other Markov random fields (MRF), such as Gaussian MRF models, specialize in the segmentation of texture images. A CGM image is restored in two steps applied iteratively: the line field is estimated from the assumed image field, and the image field is then restored using the just-computed line field. The line field is the most important element of successful CGM modeling, and a convincing line field should treat both directions fairly: horizontal and vertical lines. The working order and the update occasions strongly affect the line fields produced by the iterative procedure; these two techniques are the basis of our search for the best CGM modeling. In addition, we impose an extra condition for a line to exist, to compensate for the bias of the line fields; this condition requires a brightness contrast at the line site. Our best modeling is verified by its image restoration results, in both visual quality and numerical measures, on natural images. Furthermore, an artificial image generated by the CGM model is tested to confirm that our best modeling is correct.
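The two-step iteration described above can be sketched as follows. This is an illustrative simplification, not the thesis's model: the line field is reduced to a plain brightness-contrast test (the extra condition mentioned above), the image-field step is a simple neighbour average, and `iters`, `threshold`, and `weight` are hypothetical parameters.

```python
import numpy as np

def restore_cgm(noisy, iters=10, threshold=20.0, weight=0.5):
    """Alternate between a line-field estimate and an image-field update.
    Illustrative sketch only: lines are declared where the brightness
    contrast exceeds a threshold, and smoothing is blocked across them."""
    img = noisy.astype(float).copy()
    for _ in range(iters):
        # Line field: a vertical line separates horizontal neighbours whose
        # contrast exceeds the threshold; similarly for horizontal lines.
        v_line = np.abs(np.diff(img, axis=1)) > threshold
        h_line = np.abs(np.diff(img, axis=0)) > threshold
        # Image field: average each pixel with its right neighbour unless a
        # line separates them (the symmetric directions are omitted here
        # for brevity).
        mix = np.where(v_line, img[:, :-1], (img[:, :-1] + img[:, 1:]) / 2)
        smoothed = img.copy()
        smoothed[:, :-1] = (1 - weight) * img[:, :-1] + weight * mix
        img = smoothed
    return img, v_line, h_line
```

The alternation converges quickly in practice because the line field stabilizes once the smoothing stops moving pixel values across detected edges.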
142

Image Restoration Based upon Gauss-Markov Random Field

Sheng, Ming-Cheng 20 June 2000 (has links)
Images are liable to be corrupted by noise when they are processed for applications such as sampling, storage, and transmission. In this thesis, we propose a method of restoring images corrupted by white Gaussian noise, based upon a Gauss-Markov random field model combined with image segmentation, so that the image can be restored by MAP (maximum a posteriori) estimation. In the Gauss-Markov random field approach, the MAP estimate is computed by simulated annealing or by deterministic search methods. Image segmentation supplies, for every region, the region parameters and the power of the generating noise; these parameters are essential to MAP estimation under the Gauss-Markov random field model. In summary, we first segment the image to find the region parameters, then restore the image by MAP estimation using those parameters; finally, the intermediate image is restored again by the conventional Gauss-Markov random field method. The advantage of our method is the clear edges produced by the first restoration and the deblurred images produced by the second.
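The two-stage idea above (segmentation supplying region parameters, then MAP estimation) can be illustrated with a scalar simplification. This sketch replaces the Gauss-Markov prior with an i.i.d. per-region Gaussian, so the MAP estimate reduces to a precision-weighted blend of each pixel and its region mean; `labels` and `noise_var` are assumed inputs, not quantities the thesis computes this way.

```python
import numpy as np

def map_restore(noisy, labels, noise_var):
    """Per-region MAP estimate under white Gaussian noise, assuming each
    segmented region is i.i.d. Gaussian with its own mean and variance
    (a simplification of a Gauss-Markov prior)."""
    restored = np.empty_like(noisy, dtype=float)
    for r in np.unique(labels):
        mask = labels == r
        mu = noisy[mask].mean()              # region parameter
        var = max(noisy[mask].var(), 1e-9)   # power of the generating noise
        # Gaussian prior + Gaussian likelihood: precision-weighted mean
        w = var / (var + noise_var)
        restored[mask] = w * noisy[mask] + (1 - w) * mu
    return restored
```

Because the weight `w` shrinks toward the region mean when the observation noise dominates, edges between regions stay sharp while the interiors are denoised, matching the "clear edges from the first restoration" behaviour described above.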
143

Parameter Estimation for Compound Gauss-Markov Random Field and its application to Image Restoration

Hsu, I-Chien 20 June 2001 (has links)
The restoration of degraded images is an important application of image processing. Classical approaches, such as low-pass filtering, usually emphasize numerical error at the expense of visual quality, yielding blurred texture. A newer method of image restoration, based on a Compound Gauss-Markov (CGM) random field image model and a MAP (maximum a posteriori probability) approach that preserves image texture, has proved helpful. However, the contours of the restored image and the numerical error of that method are poor, because the conventional CGM model uses fixed global parameters for the whole image. To remedy these disadvantages, we adopt an adjustable-parameters method to estimate the model parameters and restore the image. Parameter estimation for the CGM model is difficult, since the model has 80 interdependent parameters; we therefore first apply a parameter-reduction step to lower the complexity of the estimation. Finally, the initial values of the parameters are important: different initial values may produce different results. The experimental results show that the proposed method with adjustable parameters achieves better numerical error and visual quality than the conventional methods with fixed parameters.
144

Investigation of Compound Gauss-Markov Image Field

Lin, Yan-Li 05 August 2002 (has links)
The Compound Gauss-Markov (CGM) image model has proven helpful in image restoration. In this model, a pixel in the image random field is determined by the surrounding pixels according to a predetermined line field. In this thesis, we restore the noisy image based upon the traditional CGM image field, without the constraints on the model parameters introduced in the original work. The image is restored in two steps applied iteratively: the line field is estimated from the assumed image field, and the image field is then restored using the just-computed line field. We propose two methods to replace the traditional way of solving for the line field: a probability method and a vector method. In the probability method, we break away from the limitations of the energy function Vcl(L) and of the empirically chosen system parameters Ckll(m,n) and σw². In the vector method, the resulting line field appears more reasonable than that of the original method. The images restored by our methods have similar visual quality but better numerical values than those of the original method.
145

Polynomial modeling of ECG signals. Application to compression.

Tchiotsop, Daniel 15 November 2007 (has links) (PDF)
ECG signal compression has gained further importance with the development of telemedicine: compression considerably reduces the cost of transmitting medical information over telecommunication channels. The objective of this thesis is to develop new ECG compression methods based on orthogonal polynomials. We first studied the characteristics of ECG signals and the processing operations commonly applied to them. We also gave an exhaustive, comparative description of existing ECG compression algorithms, with emphasis on those based on polynomial approximation and interpolation. We then addressed the theoretical foundations of orthogonal polynomials, studying in turn their mathematical nature, their many interesting properties, and the characteristics of some particular families. Polynomial modeling of the ECG signal first segments the signal into cardiac cycles by detecting the QRS complexes; the signal windows obtained from the segmentation are then decomposed in polynomial bases. The coefficients produced by the decomposition are used to synthesize the signal segments in the reconstruction phase. Compression amounts to representing a segment made up of a large number of samples with a small number of coefficients. Our experiments established that Laguerre and Hermite polynomials do not lead to good reconstruction of the ECG signal, whereas Legendre and Chebyshev polynomials gave interesting results. We therefore designed our first ECG compression algorithm using Jacobi polynomials.
When this algorithm is optimized by suppressing boundary effects, it becomes universal and is no longer dedicated to ECG signals alone. Although neither Laguerre polynomials nor Hermite functions individually model ECG segments well, we devised a combination of the two function systems to represent a cardiac cycle. In this scheme, the ECG segment corresponding to one cardiac cycle is split into two parts: the isoelectric line, decomposed in series of Laguerre polynomials, and the P-QRS-T waves, modeled by Hermite functions. This yields a second, robust and efficient ECG compression algorithm.
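A minimal sketch of the Legendre decomposition step, using NumPy's `polynomial.legendre` fitting on a segment mapped to [-1, 1]; the truncation order here is an illustrative choice, not the thesis's tuned value, and the synthetic "heartbeat" in the usage is a crude stand-in for a real QRS window.

```python
import numpy as np

def compress_segment(segment, order=20):
    """Represent one heartbeat window by the coefficients of a truncated
    Legendre series fitted on [-1, 1] (least squares)."""
    t = np.linspace(-1.0, 1.0, len(segment))
    return np.polynomial.legendre.legfit(t, segment, order)

def reconstruct_segment(coeffs, n_samples):
    """Synthesize the window back from its Legendre coefficients."""
    t = np.linspace(-1.0, 1.0, n_samples)
    return np.polynomial.legendre.legval(t, coeffs)
```

Compression comes from storing `order + 1` coefficients instead of the full sample vector: a 200-sample window reduced to 21 coefficients is roughly a 10:1 ratio before any quantization or entropy coding.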
146

Evaluation of certain exponential sums of quadratic functions over finite fields of odd characteristic

Draper, Sandra D 01 June 2006 (has links)
Let p be an odd prime, and define f(x) = Σ_{i=1}^{k} a_i x^{p^{α_i}+1} ∈ F_{p^n}[x], where 0 ≤ α_1 < α_2 < ... < α_k = α. We consider the exponential sum S(f, n) = Σ_{x ∈ F_{p^n}} ζ_p^{Tr_n(f(x))}, where ζ_p = e^{2πi/p} and Tr_n is the trace from F_{p^n} to F_p. We provide the necessary background from number theory and review the basic facts about quadratic forms over F_p, through both the multivariable and the single-variable approach. Our main objective is to compute S(f, n) explicitly. The sum S(f, n) is determined by two quantities: the nullity and the type of the quadratic form Tr_n(f(x)). We give an effective algorithm for the computation of the nullity; tables of numerical values of the nullity are included. The type, however, is more subtle and more difficult to determine, and most of our investigation concerns it. We obtain "relative formulas" for S(f, mn) in terms of S(f, n) when the p-adic order of m is at most the minimum p-adic order of the α_i. The formulas are obtained in three separate cases, using different methods: (i) m = q^s, where q is a prime different from 2 and p; (ii) m = 2^s; and (iii) m = p. In case (i), we use a congruence relation resulting from a suitable Galois action. For case (ii), in addition to the congruence of case (i), a special partition of F_{p^{2n}} is needed. In case (iii), the congruence method does not work.
However, the Artin-Schreier Theorem allows us to compute the trace of the extension from F_{p^{pn}} to F_{p^n} rather explicitly. When the 2-adic order of each of the α_i is equal and less than the 2-adic order of n, we are able to determine S(f, n) explicitly. As a special case, we have explicit formulas for the monomial sum S(a x^{1+p^α}, n). Most of the results of the thesis are new and generalize previous results by Carlitz, Baumert, McEliece, and Hou.
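For the prime-field case n = 1, where Tr_1 is the identity, S(f, 1) can be brute-forced directly; the classical quadratic Gauss sum fact |S(ax², 1)| = √p provides a check. This toy sketch does not touch the extension fields F_{p^n} where the thesis's real work lies.

```python
import cmath

def exp_sum_prime_field(coeffs, p):
    """Compute S(f, 1) = sum over x in F_p of zeta_p^f(x), where
    f(x) = sum of c * x**e over the (c, e) pairs in coeffs and
    zeta_p = e^(2*pi*i/p)."""
    zeta = cmath.exp(2j * cmath.pi / p)
    return sum(zeta ** (sum(c * pow(x, e, p) for c, e in coeffs) % p)
               for x in range(p))
```

For example, `exp_sum_prime_field([(1, 2)], 7)` evaluates the quadratic Gauss sum mod 7, whose magnitude is √7.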
147

Analyses of the Gauss algorithm. Applications to the analysis of the LLL algorithm.

Vera, Antonio 17 July 2009 (has links) (PDF)
This thesis is devoted to the probabilistic analysis of Euclidean lattice reduction algorithms. A Euclidean lattice is the set of integer linear combinations of a basis (b_1, ..., b_n) ⊂ R^n. Reducing a lattice consists in finding, from a given input basis, a basis formed of reasonably short and reasonably orthogonal vectors. The celebrated LLL algorithm solves this problem efficiently in arbitrary dimension. It is widely used, but poorly understood. We concentrate on its analysis in the case n = 2, where LLL becomes the Gauss algorithm, since this instance is a basic building block for the case n ≥ 3. We analyze the Gauss algorithm precisely, both from the point of view of its execution (number of iterations, bit complexity, "additive" costs) and of the geometry of the output basis (Hermite defect, first minimum, and orthogonalized second minimum). We work in a very general probabilistic model, which covers both easy and hard instances. This model also allowed us to study the transition toward the Euclid algorithm, which corresponds to the case where the input basis vectors are collinear. We use dynamical methods: the algorithms are viewed as dynamical systems, and the relevant generating series are expressed in terms of the transfer operator. These very precise results in dimension 2 are a first step toward the analysis of the LLL algorithm in the general case.
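The n = 2 building block, Gauss's reduction, is short enough to sketch in full. This integer-vector version returns a reduced basis with the shortest vector first; it assumes the two input vectors are linearly independent, and the floating-point division inside `round` is adequate for small integer inputs.

```python
def gauss_reduce(u, v):
    """Gauss's lattice-basis reduction in dimension 2: repeatedly subtract
    the nearest-integer multiple of the shorter vector from the longer one
    and swap, until no further shortening is possible."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u                       # keep u as the shorter vector
    while True:
        m = round(dot(u, v) / dot(u, u))  # nearest integer to <u,v>/<u,u>
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v                   # v no shorter than u: reduced
        u, v = v, u
```

When the input vectors are (nearly) collinear, the subtraction step degenerates into the Euclidean algorithm on their lengths, which is exactly the transition studied above.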
148

A study of modified Hermite polynomials

Khan, Mumtaz Ahmad, Khan, Abdul Hakim, Ahmad, Naeem 25 September 2017 (has links)
The present paper is a study of the modified Hermite polynomials Hn(x; a), which reduce to the Hermite polynomials Hn(x) for a = e.
149

Determination of relaxation spectra and molecular weight distribution of linear polymers by rheometry

Farias, Thais Machado January 2009 (has links)
The molecular weight distribution (MWD) and its parameters are of fundamental importance in the characterization of polymers. The development of techniques for faster and cheaper determination of the MWD is therefore of great practical relevance. The goals of this work were to implement some of the relaxation models from double reptation theory proposed in the literature, to evaluate these implementations, and to analyze two key steps in the recovery of the MWD from rheological data: the calculation of the relaxation spectrum based on the Maxwell model, and the numerical strategy for evaluating the integrals appearing in the relaxation models. The inverse problem, i.e., the determination of the MWD from rheological data using a specified relaxation model and an imposed distribution function, was solved. The Generalized Exponential (GEX) was used as the distribution function, and two approaches were considered: i) explicit calculation of the relaxation spectrum, and ii) the parametric approximations of Schwarzl, which avoid the need for the explicit calculation of the relaxation spectrum. Applied to commercial polyethylene samples with polydispersity below 10, the methodology produced distributions that represent the experimental GPC data well. Regarding the calculation of the relaxation spectrum, discrete and continuous spectra were compared in order to establish criteria for the optimal number of Maxwell modes; the discrete spectrum was found to yield better-conditioned systems and thus more reliable parameter estimates. Finally, a modification of the MWD determination methodology is proposed, in which Gauss-Hermite quadrature with a new change of variables is applied to the numerical evaluation of the integrals of the relaxation models.
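The Gauss-Hermite quadrature mentioned above can be sketched with NumPy's built-in nodes and weights; the thesis's change of variables and its relaxation-model integrands are not reproduced here.

```python
import numpy as np

def gauss_hermite_integral(g, n_nodes=40):
    """Approximate the integral over R of exp(-x**2) * g(x) dx
    using Gauss-Hermite nodes and weights."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    return np.sum(w * g(x))
```

The rule is exact for polynomial `g` up to degree 2·n_nodes − 1, which is why it suits relaxation integrals whose spectrum, after a suitable change of variables, decays like a Gaussian in log-relaxation-time.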
150

Selection of pilot symbols in wireless communication systems

Santos, Daniel Matias Silva dos 19 July 2016 (has links)
SANTOS, D. M. S. Seleção de símbolos piloto em sistemas de comunicação sem fio. 2016. 65 f. Dissertação (Mestrado em Engenharia de Teleinformática) – Centro de Tecnologia, Universidade Federal do Ceará, Fortaleza, 2016. / In order to achieve gains in transmission capacity with lower error probability, meeting the current requirements of mobile communication applications, the way data is processed is crucial to system performance. To improve transmission quality in multi-antenna systems, this work applies preprocessing techniques to the transmitted signal so as to improve the performance measured by the SNR (Signal-to-Noise Ratio), under a space-time transmit antenna array channel model in which the temporal dynamics of the channel are modeled by a Gauss-Markov process and the spatial correlation by a Kronecker model. Based on the statistical properties of the channel, the channel is estimated by the optimal linear algorithm, also known as the Kalman filter, driven by the transmitted pilot symbols. From several pilot symbol sequences defined in a codeword set, this work proposes an algorithm capable of selecting the pilot sequences that maximize the received SNR. In the numerical simulations, we analyze the performance of the proposed pilot selection method and, as a benchmark, that of random pilot selection. The results show that the proposed method achieves a better received SNR than random selection.
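A scalar sketch of the Kalman tracking step described above: the channel follows a Gauss-Markov recursion h[k] = a·h[k-1] + w[k] and is observed through pilots as y[k] = s[k]·h[k] + v[k]. All parameters (`a`, `q`, `r`) are illustrative; the thesis works with vector channels, Kronecker spatial correlation, and optimized pilot sequences, none of which appear here.

```python
import numpy as np

def kalman_channel_track(pilots, obs, a=0.95, q=0.01, r=0.1):
    """Scalar Kalman filter tracking a Gauss-Markov channel
    h[k] = a*h[k-1] + w[k] (process noise variance q), observed as
    y[k] = pilot[k]*h[k] + v[k] (observation noise variance r)."""
    h_hat, p = 0.0, 1.0
    estimates = []
    for s, y in zip(pilots, obs):
        # predict through the Gauss-Markov dynamics
        h_pred = a * h_hat
        p_pred = a * a * p + q
        # update with the pilot observation
        k = p_pred * s / (s * s * p_pred + r)
        h_hat = h_pred + k * (y - s * h_pred)
        p = (1 - k * s) * p_pred
        estimates.append(h_hat)
    return np.array(estimates)
```

The pilot value `s[k]` enters the Kalman gain directly, which is why the choice of pilot sequence affects estimation quality, and hence the received SNR that the selection algorithm above maximizes.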
