31

Modélisations polynomiales des signaux ECG : applications à la compression / Polynomial modelling of ECG signals with applications to data compression

Tchiotsop, Daniel 15 November 2007 (has links)
ECG compression has become even more important with the development of telemedicine, since compression considerably reduces the cost of transmitting medical information over telecommunication channels. The goal of this thesis is to develop new ECG compression methods based on orthogonal polynomials. We first study the characteristics of ECG signals and the processing operations commonly applied to them, and give an exhaustive, comparative review of existing ECG compression algorithms, with emphasis on those based on polynomial approximation and interpolation. We then present the theoretical foundations of orthogonal polynomials, covering their mathematical construction, their many useful properties, and the characteristics of several particular families. Polynomial modelling of the ECG proceeds in two stages: the signal is first segmented into cardiac cycles after detection of the QRS complexes, and the signal windows obtained from the segmentation are then decomposed in polynomial bases. The coefficients produced by the decomposition are used to synthesize the signal segments during reconstruction; compression amounts to representing a segment containing a large number of samples with a small number of coefficients. Our experiments established that Laguerre and Hermite polynomials do not yield good ECG reconstruction, whereas Legendre and Tchebychev polynomials give interesting results. We therefore designed our first ECG compression algorithm using Jacobi polynomials, which include the Legendre and Tchebychev families as special cases. When this algorithm is optimized to suppress boundary effects, it becomes universal and is no longer restricted to ECG signals; it can also compress other signal types such as audio and images. Although neither Laguerre polynomials nor Hermite functions individually model ECG segments well, we combined the two function systems to represent a cardiac cycle. The ECG segment corresponding to one cycle is split into two parts: the isoelectric baseline, decomposed in series of Laguerre polynomials, and the P-QRS-T waves, modelled by Hermite functions. This yields a second, robust and efficient ECG compression algorithm.
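The pipeline described above (segment, decompose in an orthogonal basis, keep few coefficients, reconstruct) can be sketched in a few lines. This is a minimal illustration using NumPy's Chebyshev routines on a synthetic pulse, not the thesis's actual algorithm or data; the test signal, degree, and quality metric (PRD, a standard ECG distortion measure) are all chosen for demonstration.

```python
import numpy as np

# Synthetic pulse on a slow baseline, standing in for one segmented cardiac cycle.
t = np.linspace(-1.0, 1.0, 256)
segment = np.exp(-((t - 0.1) ** 2) / 0.1) + 0.1 * np.sin(3 * t)

# Decompose in the Chebyshev basis and keep only K coefficients.
K = 30
coeffs = np.polynomial.chebyshev.chebfit(t, segment, deg=K - 1)
reconstructed = np.polynomial.chebyshev.chebval(t, coeffs)

compression_ratio = len(segment) / K   # 256 samples -> 30 coefficients
prd = 100 * np.linalg.norm(segment - reconstructed) / np.linalg.norm(segment)
print(f"CR = {compression_ratio:.1f}, PRD = {prd:.3f}%")
```

A real codec would quantize and entropy-code the retained coefficients; here the point is only that a smooth segment is captured by far fewer basis coefficients than samples.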
32

Aplicação de métodos de otimização no cálculo de equilíbrio de misturas de biodiesel com utilização de metodologias rigorosas para estimativa de propriedades termodinâmicas / Application of optimization methods in equilibrium calculation of biodiesel blends with use of rigorous methodologies for estimation of thermodynamic properties

Borghi, Daniela de Freitas, 1983- 12 December 2014 (has links)
Advisors: Reginaldo Guirardello, Charlles Rubber de Almeida Abreu / Doctoral thesis (doutorado), Universidade Estadual de Campinas, Faculdade de Engenharia Química / Biodiesel production has been studied in several countries, both because petroleum sources are expected to become depleted and because this fuel has environmental advantages, emitting fewer pollutant gases. It is a renewable energy source, and the raw materials for its production are abundant in Brazil. Biodiesel is produced by transesterification of vegetable oils or animal fat; however, the reagents (oil and ethanol/methanol) and the products (fatty-acid ethyl/methyl esters and glycerol) are only partially soluble during the process. Knowledge of the phase equilibrium of the mixtures involved in its production is therefore important for optimizing the reaction conditions and the final separation of the products. In this context, this work performs equilibrium calculations for mixtures of compounds involved in biodiesel production, using global optimization techniques. The mathematical modelling requires the thermochemical properties of all species involved and the phase equilibrium of the system. This information was obtained with a methodology based on the Gaussian 03 software and the COSMO-SAC method, and the phase equilibrium was computed by minimizing the Gibbs energy of the system. Sigma profiles for use in the COSMO-SAC method were generated with the MOPAC software and compared with the sigma profiles in the Virginia Tech database (VT-2005); despite some differences, they are qualitatively very similar. The ability of the COSMO-SAC model to predict the activity coefficients of a binary methanol and glycerol mixture was tested, and the average absolute deviations between experimental and calculated values were above 27%, which is very high. In addition, Gibbs energy minimization was applied, with the GAMS software, to the phase equilibrium of binary mixtures containing water, methanol, and glycerol; the mean absolute deviations obtained ranged from 1.55% to 7.96%, depending on the mixture. The model therefore still needs refinement to become a reliable tool.
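Phase-equilibrium computation by Gibbs energy minimization can be illustrated on a far simpler model than COSMO-SAC. The sketch below assumes a symmetric one-parameter Margules activity model (the parameter value and equimolar feed are illustrative, not from the thesis); for this symmetric model the feed splits into phases at compositions x and 1 - x, so the minimization reduces to a one-dimensional search.

```python
import numpy as np
from scipy.optimize import minimize_scalar

A = 3.0  # Margules parameter (illustrative; A > 2 produces two liquid phases)

def g_mix(x):
    """Dimensionless mixing Gibbs energy g/RT for a symmetric binary Margules model."""
    return x * np.log(x) + (1.0 - x) * np.log(1.0 - x) + A * x * (1.0 - x)

# By symmetry an equimolar feed splits into phases at x and 1 - x, so minimizing
# the total Gibbs energy reduces to minimizing g_mix over (0, 0.5).
res = minimize_scalar(g_mix, bounds=(1e-6, 0.5 - 1e-6), method="bounded")
x_alpha = res.x
x_beta = 1.0 - x_alpha
print(f"equilibrium phase compositions: x1 = {x_alpha:.4f} and {x_beta:.4f}")
```

A model like COSMO-SAC replaces the Margules term with predicted activity coefficients, and a multicomponent system requires a constrained multivariate minimization (as done in the thesis with GAMS), but the principle, minimum total Gibbs energy at equilibrium, is the same.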
33

Gibbs free energy minimization for flow in porous media

Venkatraman, Ashwin 25 June 2014 (has links)
CO₂ injection in oil reservoirs provides the dual benefit of increased oil recovery and sequestration. Compositional simulations using phase behavior calculations are used to model miscibility and estimate oil recovery. The injected CO₂, however, is known to react with brine. The resulting precipitation and dissolution reactions, especially with carbonate rocks, can have undesirable consequences. These geochemical reactions can also change the mole numbers of components and thereby affect the phase behavior of hydrocarbons. A Gibbs free energy framework that integrates phase equilibrium computations and geochemical reactions is presented in this dissertation. The framework uses the Gibbs free energy function to unify different phase descriptions: an equation of state (EOS) for hydrocarbon components and an activity coefficient model for aqueous phase components. A Gibbs free energy minimization model was developed to obtain the equilibrium composition for systems with phase equilibrium alone (no reactions) as well as combined phase and chemical equilibrium (with reactions). The model is adaptable to different reservoirs and can be incorporated into compositional simulators. The Gibbs free energy model is used for two batch calculation applications. In the first, solubility models are developed for acid gases (CO₂/H₂S) in water and in brine at high pressures (0.1-80 MPa) and high temperatures (298-393 K). These solubility models are useful for formulating acid gas injection schemes that ensure continuous production from contaminated gas fields, as well as for CO₂ sequestration. In the second, the Gibbs free energy approach is used to predict the phase behavior of the hydrocarbon mixtures CO₂-nC₁₄H₃₀ and CH₄-CO₂, and to predict the impact of geochemical reactions on the phase behavior of these two mixtures.
The Gibbs free energy model is integrated with flow using operator splitting to model cation exchange reactions between the aqueous phase and the solid surface. A 1-D numerical model predicting effluent concentrations for a system with three cations using the Gibbs free energy minimization approach was observed to be faster than an equivalent stoichiometric approach. Analytical solutions were also developed for this system using the hyperbolic theory of conservation laws and are compared with experimental results available at laboratory and field scales.
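The operator-splitting idea, alternating a transport step with a local equilibrium step in each cell, can be sketched with a deliberately simplified chemistry: a single solute with a linear sorption isotherm standing in for the Gibbs-minimization equilibrium stage. All parameter values below are hypothetical, chosen only to make the retardation of the concentration front visible.

```python
import numpy as np

nx, nt = 100, 200
dx, dt, v = 1.0, 0.5, 1.0   # grid spacing, time step, pore velocity (CFL = v*dt/dx = 0.5)
Kd = 1.0                    # linear sorption coefficient (hypothetical)

c = np.zeros(nx)            # aqueous concentration
s = np.zeros(nx)            # sorbed concentration
c_in = 1.0                  # injected concentration at the inlet

for _ in range(nt):
    # Step 1: explicit upwind advection of the aqueous phase only.
    c[1:] = c[1:] - v * dt / dx * (c[1:] - c[:-1])
    c[0] = c_in
    # Step 2: local equilibrium -- redistribute total mass in each cell so s = Kd * c.
    total = c + s
    c = total / (1.0 + Kd)
    s = Kd * c

# With retardation factor R = 1 + Kd = 2, the front travels at v/R = 0.5,
# so after nt*dt = 100 time units the front sits near cell 50.
front = int(np.argmax(c < 0.5))
print(f"front position ~ cell {front}")
```

Replacing step 2 with a Gibbs energy minimization over several exchanging cations gives the structure used in the dissertation; the splitting loop itself is unchanged.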
34

Application of a Gibbs Sampler to estimating parameters of a hierarchical normal model with a time trend and testing for existence of the global warming

Yankovskyy, Yevhen January 1900 (has links)
Master of Science / Department of Statistics / Paul I. Nelson / This research studies statistical inference implemented with the Gibbs Sampler for a hierarchical Bayesian linear model with first-order autoregressive error structure. The model was applied to global-mean monthly temperatures from January 1880 to April 2008 and used to estimate a time trend coefficient and to test for the existence of global warming. The global temperature increase estimated by the Gibbs Sampler was between 0.0203 °C and 0.0284 °C per decade with 95% credibility. The difference between the Gibbs Sampler estimate and the ordinary least squares estimate of the time trend was insignificant. A simulation study with data generated from this model was also carried out. It showed that the Gibbs Sampler estimators of the intercept and the time trend were less biased than the corresponding ordinary least squares estimators, while the reverse was true for the autoregressive parameter and the error standard deviation. The difference in precision between the two approaches was insignificant except for small sample sizes, where the Gibbs Sampler estimator of the time trend had significantly smaller mean square error than the ordinary least squares estimator. The report also describes how the software package WinBUGS can be used to carry out the simulations required to implement a Gibbs Sampler.
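A minimal Gibbs Sampler for a linear trend model (without the AR(1) error structure used in the report, and with flat/Jeffreys priors assumed purely for illustration) alternates draws of the regression coefficients and the error variance from their full conditionals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a simple linear-trend model (simplified vs. the report: no AR(1) term).
n = 200
t = np.arange(n, dtype=float)
true_a, true_b, true_sigma = 1.0, 0.02, 0.5
y = true_a + true_b * t + rng.normal(0, true_sigma, n)

X = np.column_stack([np.ones(n), t])
XtX = X.T @ X
sig2 = 1.0
draws = []
for it in range(2000):
    # Draw (a, b) | sig2, y: under a flat prior this is Gaussian around the OLS solution.
    beta_hat = np.linalg.solve(XtX, X.T @ y)
    cov = sig2 * np.linalg.inv(XtX)
    a, b = rng.multivariate_normal(beta_hat, cov)
    # Draw sig2 | a, b, y: inverse-gamma under the Jeffreys prior p(sig2) ~ 1/sig2.
    resid = y - (a + b * t)
    sig2 = (resid @ resid / 2) / rng.gamma(n / 2, 1.0)
    draws.append((a, b, np.sqrt(sig2)))

burn = np.array(draws[500:])          # discard burn-in
b_mean = burn[:, 1].mean()
sigma_mean = burn[:, 2].mean()
print(f"posterior mean trend: {b_mean:.4f}, posterior mean sigma: {sigma_mean:.3f}")
```

Adding the autoregressive parameter gives a third full-conditional draw per sweep, which is the extension the report implements in WinBUGS.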
35

topicmodels: An R Package for Fitting Topic Models

Hornik, Kurt, Grün, Bettina January 2011 (has links) (PDF)
Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.
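topicmodels is an R package; as a language-neutral illustration of the second fitting approach it interfaces, here is a small collapsed Gibbs sampler for LDA written from scratch in Python. The toy corpus, hyperparameters, and topic count are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy corpus: documents as lists of word ids over a vocabulary of size V.
docs = [[0, 1, 2, 0, 1], [1, 2, 0, 2], [3, 4, 5, 3, 4], [4, 5, 3, 5]]
V, K = 6, 2                     # vocabulary size, number of topics
alpha, beta = 0.1, 0.01         # symmetric Dirichlet hyperparameters

# Count matrices and random initial topic assignments.
ndk = np.zeros((len(docs), K))  # topic counts per document
nkw = np.zeros((K, V))          # word counts per topic
nk = np.zeros(K)                # total words per topic
z = []
for d, doc in enumerate(docs):
    zd = rng.integers(0, K, len(doc))
    z.append(zd)
    for w, k in zip(doc, zd):
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for sweep in range(200):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            # Collapsed full conditional for the topic of word i in document d.
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

# Posterior-mean per-document topic proportions.
theta = (ndk + alpha) / (ndk.sum(1, keepdims=True) + K * alpha)
print(np.round(theta, 2))
```

The variational EM route (Blei et al.) instead optimizes a deterministic lower bound; both estimate the same document-topic and topic-word distributions.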
36

Bayesian Inference of a Finite Population under Selection Bias

Xu, Zhiqing 01 May 2014 (has links)
The length-biased sampling method yields samples from a weighted distribution. Given the underlying distribution of the population, one can estimate attributes of the population by correcting the weighted samples. In this thesis, the generalized gamma distribution is taken as the underlying population distribution, and inference is made for the weighted distribution. Models with both known and unknown finite population size are considered. In the model with known finite population size, maximum likelihood estimation and bootstrapping methods are used to derive the distributions of the parameters and the population mean. For comparison, models with and without the selection bias are built; computer simulation results show that the model accounting for selection bias gives better predictions of the population mean. In the model with unknown finite population size, the distributions of the population size as well as the sample complement are derived. Bayesian analysis is performed using numerical methods: both the Gibbs sampler and a random sampling method are employed to generate the parameters from their joint posterior distribution. The fit of the size-biased samples is checked using the conditional predictive ordinate.
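The core correction for length-biased data can be shown numerically: if the sampling probability is proportional to x, the expectation of 1/X under the weighted distribution equals 1/mu, so a harmonic-mean estimator recovers the population mean. The sketch below uses a plain gamma population rather than the generalized gamma of the thesis (a convenient choice, since length-biasing a gamma just raises its shape parameter by one).

```python
import numpy as np

rng = np.random.default_rng(42)

# Population: Gamma(shape=2, scale=1), true mean mu = 2.
# Length-biased sampling (probability proportional to x) from this population
# is equivalent to drawing from Gamma(shape=3, scale=1).
shape, scale = 2.0, 1.0
biased = rng.gamma(shape + 1.0, scale, size=100_000)

naive = biased.mean()                            # ignores the bias: estimates E[X^2]/E[X] = 3
corrected = len(biased) / np.sum(1.0 / biased)   # harmonic-mean correction: estimates mu = 2

print(f"naive = {naive:.3f}, corrected = {corrected:.3f}")
```

The thesis goes further: it treats the population size as known or unknown and places the correction inside a full Bayesian model, but the weighting identity above is the mechanism being exploited.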
37

On Gibbsianness of infinite-dimensional diffusions

Dereudre, David, Roelly, Sylvie January 2004 (has links)
We analyse different Gibbsian properties of interactive Brownian diffusions X indexed by the lattice $\mathbb{Z}^d$: $X = (X_i(t),\, i \in \mathbb{Z}^d,\, t \in [0, T],\, 0 < T < +\infty)$. In a first part, these processes are characterized as Gibbs states on path spaces of the form $C([0, T], \mathbb{R})^{\mathbb{Z}^d}$. In a second part, we study the Gibbsian character on $\mathbb{R}^{\mathbb{Z}^d}$ of $\nu^t$, the law at time t of the infinite-dimensional diffusion X(t), when the initial law $\nu = \nu^0$ is Gibbsian.
38

Propagation of Gibbsianness for infinite-dimensional gradient Brownian diffusions

Dereudre, David, Roelly, Sylvie January 2004 (has links)
We study the (strong) Gibbsian character on $\mathbb{R}^{\mathbb{Z}^d}$ of the law at time t of an infinite-dimensional gradient Brownian diffusion when the initial distribution is Gibbsian.
39

On Gibbsianness of infinite-dimensional diffusions

Roelly, Sylvie, Dereudre, David January 2004 (has links)
The authors analyse different Gibbsian properties of interactive Brownian diffusions X indexed by the d-dimensional lattice. In the first part of the paper, these processes are characterized as Gibbs states on path spaces. In the second part, they study the Gibbsian character on $\mathbb{R}^{\mathbb{Z}^d}$ of the law at time t of the infinite-dimensional diffusion X(t), when the initial law is Gibbsian.

AMS Classifications: 60G15, 60G60, 60H10, 60J60
40

On the construction of point processes in statistical mechanics

Nehring, Benjamin, Poghosyan, Suren, Zessin, Hans January 2013 (has links)
By means of the cluster expansion method we show that a recent result of Poghosyan and Ueltschi (2009) combined with a result of Nehring (2012) yields a construction of point processes of classical statistical mechanics as well as processes related to the Ginibre Bose gas of Brownian loops and to the dissolution in R^d of Ginibre's Fermi-Dirac gas of such loops. The latter will be identified as a Gibbs perturbation of the ideal Fermi gas. On generalizing these considerations we will obtain the existence of a large class of Gibbs perturbations of the so-called KMM-processes as they were introduced by Nehring (2012). Moreover, it is shown that certain "limiting Gibbs processes" are Gibbs in the sense of Dobrushin, Lanford and Ruelle if the underlying potential is positive. And finally, Gibbs modifications of infinitely divisible point processes are shown to solve a new integration by parts formula if the underlying potential is positive.
