21

Gibbs/Equilibrium Measures for Functions of Multidimensional Shifts with Countable Alphabets

Muir, Stephen R. 05 1900
Consider a multidimensional shift space with a countably infinite alphabet, which serves in mathematical physics as a classical lattice gas or lattice spin system. A new definition of a Gibbs measure is introduced for suitable real-valued functions of the configuration space, which play the physical role of specific internal energy. The variational principle is proved for a large class of functions, and a more restrictive modulus-of-continuity condition is then provided that guarantees a function's Gibbs measures form a nonempty, weakly compact, convex set of measures coinciding with the set of measures obeying a form of the DLR equations (adapted so as to be stated entirely in terms of specific internal energy rather than the Hamiltonians of an interaction potential). The variational equilibrium measures for such a function are then characterized as the shift-invariant Gibbs measures of finite entropy, and a condition is provided to determine whether a function's Gibbs measures have infinite entropy. Moreover, the spatially averaged limiting Gibbs measures, i.e. constructive equilibria, are shown to exist, and their weakly closed convex hull is shown to coincide with the set of true variational equilibrium measures. It follows that the "pure thermodynamic phases", which correspond to the extreme points of the convex set of equilibrium measures, must be constructive equilibria. Finally, for an even smoother class of functions, a method is presented to construct a compatible interaction potential, and it is checked that the two structures generate the same sets of Gibbs and equilibrium measures, respectively.
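The variational principle referred to in this abstract can be written, in standard thermodynamic-formalism notation, roughly as follows (a sketch only; the precise class of admissible functions f and the adaptations needed for a countable alphabet are spelled out in the thesis itself):

```latex
% Topological pressure of a suitable function f (specific internal energy),
% the supremum running over shift-invariant probability measures:
P(f) = \sup_{\mu \in \mathcal{M}_\sigma} \left( h(\mu) + \int f \, d\mu \right)
% A shift-invariant measure \mu is an equilibrium measure for f when it
% attains the supremum:
h(\mu) + \int f \, d\mu = P(f)
```

Here h(μ) denotes the measure-theoretic entropy; the thesis characterizes the equilibrium measures attaining this supremum as exactly the shift-invariant Gibbs measures of finite entropy.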
22

Modeling of Item Response Data under the Effect of Speededness

Campos, Joelson da Cruz 08 April 2016
In tests where a considerable number of examinees do not have enough time to answer all items, we observe what is called the speededness effect. Using the unidimensional Item Response Theory (IRT) model on speeded tests can lead to a series of misleading conclusions, since that model assumes respondents have sufficient time to answer every item. In this work we develop a Bayesian analysis of the three-dimensional IRT model proposed by Wollack and Cohen (2005), introducing a dependence structure between the prior distributions of the latent traits, which we model with copulas. We present an MCMC estimation procedure for the proposed model and run a comparative simulation study against the analysis of Bazan et al. (2010), which assumed independent priors for the latent traits. Finally, we carry out a sensitivity analysis of the model and present an application to real data from an EGRA subtest called Nonsense Words, administered in Peru in 2007, in which students read aloud a sequence of 50 nonsense words within 60 seconds, a design that induces the speededness effect.
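A Gaussian copula of the kind used here to couple the priors of the latent traits can be sketched as follows. This is a minimal illustration, not the thesis's estimation code; the correlation value and the choice of marginals are hypothetical:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, rho = 20000, 0.6  # hypothetical copula correlation between two latent traits

# Gaussian copula: draw correlated standard normals, push them through the
# normal CDF to get dependent uniforms, then apply any marginal quantile.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
phi = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))
u = phi(z)

ability = z[:, 0]               # keep a standard-normal marginal
speed = -np.log(1.0 - u[:, 1])  # exponential marginal for the second trait

# The traits remain dependent even though their marginals differ
print(f"sample correlation: {np.corrcoef(ability, speed)[0, 1]:.2f}")
```

The point of the copula is visible here: the dependence structure (set by rho) is specified separately from the marginal distributions of the two traits.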
24

The effects of three different priors for variance parameters in the normal-mean hierarchical model

Chen, Zhu, 1985- 01 December 2010
Many prior distributions have been suggested for variance parameters in hierarchical models. The supposedly "non-informative" limit of the conjugate inverse-gamma prior can cause problems. I consider three priors for the variance parameters, conjugate inverse-gamma, log-normal, and truncated normal, and carry out a numerical analysis on Gelman's 8-schools data. Using the posterior draws, I compare the Bayesian credible intervals of the parameters under the three priors, then use predictive distributions to make predictions and discuss the differences among the three priors.
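The three priors being compared can be sketched as log-density functions. This is a minimal illustration with arbitrary hyperparameters (the thesis's actual choices are not reproduced here), all written on the same positive scale for comparability:

```python
import math

def log_invgamma(x, a=0.001, b=0.001):
    """Log density of the conjugate Inverse-Gamma(a, b) prior at x > 0."""
    return a * math.log(b) - math.lgamma(a) - (a + 1.0) * math.log(x) - b / x

def log_lognormal(x, mu=0.0, sigma=1.0):
    """Log density of a log-normal prior at x > 0."""
    z = (math.log(x) - mu) / sigma
    return -math.log(x * sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z

def log_truncnormal(x, sigma=5.0):
    """Log density of a Normal(0, sigma^2) prior truncated to x > 0."""
    return (math.log(2.0) - math.log(sigma * math.sqrt(2.0 * math.pi))
            - 0.5 * (x / sigma) ** 2)

# The priors disagree most strongly near zero, which is exactly where a
# between-group variance parameter is often plausible.
for x in (0.01, 1.0, 10.0):
    print(x, log_invgamma(x), log_lognormal(x), log_truncnormal(x))
```

Near zero, the truncated normal stays flat while the Inverse-Gamma(0.001, 0.001) and log-normal densities fall away, which is one source of the sensitivity the abstract alludes to.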
25

Using Gibbs energy minimization to predict the composition of exhaust gases from a boiler fired with by-products of the steel industry

Turetta, Leticia Fabri 26 February 2016
In steelmaking, the production processes generate gases that can usually be reused as fuels within the plant itself. Blast furnace gas, coke oven gas, steelmaking (BOF) gas and tar frequently make up the fuel mixture fed to the boilers of many steel plants. Burning different fuel mixes in the boiler can generate high levels of unoxidized gases, especially CO, and high concentrations of these gases cause undesirable environmental problems. As an attempted remedy, significant levels of excess air are fed into the system; however, excess air can reduce the energy efficiency of the process and may not solve the problem. In this context, the objectives of this work are: (i) to test different feed compositions for a boiler operating in a steel plant and compute the composition of the exhaust gases; (ii) to investigate the effect of increasing excess air on the exhaust-gas composition and on the energy efficiency of the process. To this end, minimization of the Gibbs free energy is used. This methodology is commonly applied to compute the chemical composition of a closed system at chemical equilibrium with one or more phases, since the Gibbs free energy is minimal when the system reaches chemical equilibrium. Obtaining the chemical composition of the system therefore requires solving a constrained optimization problem whose decision variables represent the equilibrium composition. The optimization problem was formulated and solved with the commercial software Matlab®, and its solution yields the composition of the exhaust gases. The work evaluates the impact of changes in the air feed rate and operating temperature on the exhaust-gas composition; the methodology satisfactorily reproduces the information provided by industry and found in the literature on the combustion of steel-industry by-products, and is expected to indicate which operating conditions maximize the energy efficiency of the process while minimizing the emission of unoxidized gases.
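The core idea, finding the equilibrium composition by minimizing the total Gibbs energy, can be sketched for a single ideal-gas reaction, CO + ½O₂ → CO₂ at 1 bar, by minimizing G along the reaction coordinate. The temperature and the standard chemical potentials below are illustrative assumptions, not values from the thesis (real ones would come from thermochemical tables such as NIST-JANAF):

```python
import math

R = 8.314   # J/(mol K)
T = 1500.0  # K, illustrative furnace temperature

# Hypothetical standard chemical potentials (J/mol), chosen so that the
# reaction Gibbs energy of CO + 1/2 O2 -> CO2 is -150 kJ/mol.
mu0 = {"CO": 0.0, "O2": 0.0, "CO2": -150e3}

def gibbs(xi):
    """Total Gibbs energy along the reaction extent xi, starting from
    1 mol CO + 0.5 mol O2 (ideal-gas mixture at 1 bar)."""
    n = {"CO": 1.0 - xi, "O2": 0.5 * (1.0 - xi), "CO2": xi}
    ntot = sum(n.values())
    return sum(ni * (mu0[s] + R * T * math.log(ni / ntot))
               for s, ni in n.items() if ni > 0)

# Ternary search for the minimum of G over the physical range of xi;
# element balances are built into n(xi), so no explicit constraints remain.
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(200):
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if gibbs(m1) < gibbs(m2):
        hi = m2
    else:
        lo = m1
xi_eq = 0.5 * (lo + hi)
print(f"equilibrium extent xi = {xi_eq:.4f}")
```

With a strongly negative reaction Gibbs energy the minimum sits very close to complete combustion; the thesis tackles the same minimization for a full multicomponent exhaust-gas mixture, where a general constrained optimizer replaces this one-dimensional search.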
26

Evaluation of Chemistry Textbooks and the Conceptions of Secondary and Higher Education Students about Spontaneous Phenomena

Bruno Peixoto de Oliveira 02 October 2014
Thermodynamics, as an experimental and applied branch of science, can become an important tool in the teaching and learning process, since its applied character can help students visualize the concepts studied in class. This work set out to analyze and evaluate how the topic "spontaneous processes" is treated in the chemistry textbooks currently recommended by the Brazilian Ministry of Education (MEC) through its Textbook Guide. The recommended textbooks were analyzed to understand how spontaneous processes are presented and whether that treatment complies with the guidelines in the MEC's official documents. Questionnaires with objective and open-ended questions about everyday phenomena were used to probe the conceptions of students in regular and vocational secondary schools, as well as of students newly admitted to chemistry teaching degree programs, in the cities of Fortaleza and Itapipoca. Judged against the standards of the National Textbook Program, only one of the five currently recommended books was considered adequate on spontaneous processes, because it connects entropy to the Second Law of Thermodynamics through everyday examples. This book also differs from the others in covering the thermodynamic function that describes spontaneous processes under the conditions most commonly found in laboratories, constant temperature and pressure: the Gibbs energy. The study showed that students can quite easily classify a process as spontaneous or not through common sense; however, when asked which quantity determines whether a given phenomenon occurs spontaneously (the Gibbs energy), misconceptions and confusion become evident. This may be linked to the gap left by the books currently adopted in the schools investigated here. Based on these results, the work proposes including this content in textbooks with an approach built on visualizing and understanding everyday phenomena, in line with MEC guidelines.
27

Gibbs energy minimization for modeling simultaneous chemical and phase equilibrium in the biodiesel reaction system

Yancy Caballero, Daison Manuel, 1986- 20 August 2018
Master's thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Química; advisor: Reginaldo Guirardello. This work applies Gibbs energy minimization to phase-equilibrium calculations, with and without chemical reaction, for the biodiesel reaction system, using global optimization techniques together with the GAMS software. Several algorithms for simultaneous chemical and phase equilibrium were formulated as non-linear programs and tested on different case studies. The first case involves only the formation of possible liquid phases; results obtained by minimizing the Gibbs function are compared with literature data for systems containing the components involved in biodiesel formation. The second case, phase equilibrium with chemical reaction, involves a possible vapor phase and possible liquid phases: the transesterification of vegetable oils to produce biodiesel was simulated, both with and without lumping species into pseudo-components. The NRTL, UNIQUAC and UNIFAC thermodynamic models were used to represent the non-idealities of the liquid phases, with the binary interaction parameters of the NRTL and UNIQUAC models fitted by the maximum-likelihood principle, also in GAMS. The results show that global optimization techniques combined with GAMS are useful and efficient tools for computing chemical and phase equilibrium by Gibbs energy minimization, with reasonably small computational times in all systems studied; where comparisons with literature data were possible, good agreement between simulated and experimental data was observed.
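The NRTL model mentioned above gives liquid-phase activity coefficients for a binary mixture from a small set of interaction parameters. A minimal sketch follows; the parameter values in the usage line are hypothetical, not fitted values from the thesis:

```python
import math

def nrtl_gammas(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) of a binary liquid mixture
    from the NRTL model; tau_ij are dimensionless interaction parameters
    and alpha is the non-randomness factor."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Hypothetical parameters: a pure component behaves ideally (gamma -> 1),
# while positive tau values give positive deviations from Raoult's law.
g1, g2 = nrtl_gammas(0.5, 1.0, 1.2)
print(f"gamma1 = {g1:.3f}, gamma2 = {g2:.3f}")
```

In the thesis these activity coefficients enter the Gibbs energy expression of each candidate liquid phase, and the interaction parameters are what the maximum-likelihood fit adjusts.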
28

Advances in computational Bayesian statistics and the approximation of Gibbs measures

Ridgway, James 17 September 2015
This thesis gathers several methods for computing estimators in Bayesian statistics, and considers two estimation approaches. First, within the standard Bayesian paradigm, estimators take the form of integrals with respect to the posterior distribution. Second, the modelling assumptions are relaxed: without modelling the data-generating process, we study estimators that replicate the statistical properties of the minimizer of the theoretical classification or ranking risk, which leads to a Gibbs posterior. Despite their differences, both approaches require the numerical computation of high-dimensional integrals, and the greater part of this thesis is devoted to developing such methods in a few specific contexts.
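The Gibbs posterior idea can be sketched in a toy setting: instead of a likelihood, the posterior reweights a prior by the exponentiated negative empirical risk. The data, candidate classifiers, and temperature below are all invented for illustration:

```python
import math

# Toy data: (feature, label) pairs, and candidate threshold classifiers
# predicting 1 when x >= t. A uniform prior over candidates is implicit.
data = [(0.1, 0), (0.4, 0), (0.35, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
thresholds = [0.0, 0.2, 0.5, 0.7, 1.0]

def risk(t):
    """Empirical misclassification rate of the threshold classifier t."""
    return sum((x >= t) != bool(y) for x, y in data) / len(data)

lam = 10.0  # temperature: larger lam concentrates mass on low-risk classifiers
weights = [math.exp(-lam * risk(t)) for t in thresholds]
Z = sum(weights)
gibbs_post = [w / Z for w in weights]
print([round(p, 3) for p in gibbs_post])
```

No generative model for the data is assumed anywhere; the Gibbs posterior is defined directly from the empirical risk, which is the point of the second approach described above.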
29

Neuronal networks, spike train statistics and Gibbs distributions

Cofré, Rodrigo 05 November 2014
Sensory neurons respond to external stimuli with sequences of action potentials ("spikes"), collectively conveying information about the stimulus to the brain through spatio-temporal spike patterns (spike trains) that constitute a neural code. Since spike patterns occur irregularly (yet highly structured) both within and across repeated trials, it is natural to characterize them with probabilistic descriptions and statistical methods. However, the statistical characterization of experimental data faces several major constraints: beyond those inherent to empirical statistics, such as finite-size sampling, 'the' underlying statistical model is unknown. In this thesis we adopt an approach complementary to experiments: we consider neuro-mimetic models that allow us to study collective spike-train statistics and how they depend on network architecture and history, as well as on the stimulus. First, we consider a conductance-based integrate-and-fire model with chemical and electric synapses. We show that the spike-train statistics are characterized by a non-stationary, infinite-memory distribution consistent with the conditional probabilities (left interval specifications), which is continuous and non-null, hence a Gibbs distribution. We then present a novel method that unifies spatio-temporal maximum-entropy models (whose invariant measures are Gibbs distributions in the Bowen sense) with neuro-mimetic models, providing solid ground toward a biophysical explanation of the spatio-temporal correlations observed in experimental data. Finally, using these tools, we discuss the stimulus response of retinal ganglion cells and the possible generalization of the co…
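A spatio-temporal maximum-entropy model of the kind discussed above reduces, in its simplest spatial form, to an Ising-like Gibbs distribution over binary spike patterns. The sketch below enumerates all patterns of three neurons with one pairwise coupling; the field and coupling values are arbitrary illustrations:

```python
import itertools
import math

def gibbs_probs(h, J):
    """Maximum-entropy (Ising-like) distribution over binary spike patterns
    of 3 neurons, with firing fields h[i] and one pairwise coupling J
    between neurons 0 and 1."""
    pats = list(itertools.product([0, 1], repeat=3))
    w = [math.exp(sum(h[i] * p[i] for i in range(3)) + J * p[0] * p[1])
         for p in pats]
    Z = sum(w)  # partition function
    return pats, [x / Z for x in w]

def corr01(h, J):
    """Covariance of neurons 0 and 1 under the Gibbs distribution."""
    pats, probs = gibbs_probs(h, J)
    e0 = sum(p[0] * q for p, q in zip(pats, probs))
    e1 = sum(p[1] * q for p, q in zip(pats, probs))
    e01 = sum(p[0] * p[1] * q for p, q in zip(pats, probs))
    return e01 - e0 * e1

print(corr01([-1.0, -1.0, -1.0], 0.0), corr01([-1.0, -1.0, -1.0], 1.0))
```

With J = 0 the distribution factorizes and the neurons are independent; a positive coupling induces exactly the kind of pairwise correlation that maximum-entropy fits to spike data are designed to capture.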
30

Polynomial modelling of ECG signals with applications to data compression

Tchiotsop, Daniel 15 November 2007
Compression of ECG signals has become even more important with the development of telemedicine, since compression can considerably reduce the cost of transmitting medical data over telecommunication networks. The aim of this thesis is to develop new ECG compression methods based on orthogonal polynomials. We first study the physiological origin and characteristics of the ECG signal, along with the processing operations commonly applied to it, and give an exhaustive comparative review of existing ECG compression algorithms, with emphasis on those based on polynomial approximation or interpolation. We then address the theoretical foundations of orthogonal polynomials: their mathematical construction, their many interesting properties, and the characteristics of several particular families. Polynomial modelling of the ECG proceeds in two stages: first, the signal is segmented into cardiac cycles after detection of the QRS complexes; second, the signal windows obtained from the segmentation are decomposed in polynomial bases. The coefficients produced by the decomposition are used to synthesize the signal segments during reconstruction, so compression amounts to representing a segment of many samples with a small number of coefficients. Our experiments established that Laguerre polynomials and Hermite functions do not individually lead to good ECG reconstruction, whereas Legendre and Chebyshev polynomials give interesting results. We therefore designed our first ECG compression algorithm using Jacobi polynomials; once optimized by suppressing boundary effects, it becomes universal and can compress other types of signal, such as audio and images. Although neither Laguerre polynomials nor Hermite functions alone model ECG segments well, we combined the two systems of functions to represent a cardiac cycle: the ECG segment corresponding to one cycle is split into the isoelectric baseline, decomposed in series of Laguerre polynomials, and the P-QRS-T waves, modelled with Hermite functions. The result is a second, robust and competitive ECG compression algorithm.
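The compress-by-truncated-expansion step can be sketched with a Legendre basis. This uses a synthetic waveform standing in for one cardiac cycle (the thesis segments real ECG at the QRS complexes); the block length, degree, and waveform are illustrative assumptions:

```python
import numpy as np

# Synthetic stand-in for one segmented cardiac cycle: a smooth bump plus
# a low-frequency baseline component, sampled on the Legendre domain [-1, 1].
t = np.linspace(-1.0, 1.0, 256)
block = np.exp(-20.0 * t**2) + 0.25 * np.sin(3.0 * np.pi * t)

# Compression: represent 256 samples with 24 Legendre coefficients.
deg = 23
coeffs = np.polynomial.legendre.legfit(t, block, deg)

# Reconstruction: evaluate the truncated Legendre series.
recon = np.polynomial.legendre.legval(t, coeffs)

# Percent RMS difference (PRD), a standard ECG compression quality measure.
prd = 100.0 * np.linalg.norm(block - recon) / np.linalg.norm(block)
print(f"ratio ~{256 / (deg + 1):.1f}:1, PRD = {prd:.2f}%")
```

The same pipeline applies per segment with any orthogonal family; the thesis's finding is precisely that the reconstruction error of this step depends strongly on which polynomial basis is chosen.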
