161

Bayesian Gaussian processes for sequential prediction, optimisation and quadrature

Osborne, Michael A. January 2010 (has links)
We develop a family of Bayesian algorithms built around Gaussian processes for various problems posed by sensor networks. We firstly introduce an iterative Gaussian process for multi-sensor inference problems, and show how our algorithm is able to cope with data that may be noisy, missing, delayed and/or correlated. Our algorithm can also effectively manage data that features changepoints, such as sensor faults. Extensions to our algorithm allow us to tackle some of the decision problems faced in sensor networks, including observation scheduling. Along these lines, we also propose a general method of global optimisation, Gaussian process global optimisation (GPGO), and demonstrate how it may be used for sensor placement. Our algorithms operate within a complete Bayesian probabilistic framework. As such, we show how the hyperparameters of our system can be marginalised by use of Bayesian quadrature, a principled method of approximate integration. Similar techniques also allow us to produce full posterior distributions for any hyperparameters of interest, such as the location of changepoints. We frame the selection of the positions of the hyperparameter samples required by Bayesian quadrature as a decision problem, with the aim of minimising the uncertainty we possess about the values of the integrals we are approximating. Taking this approach, we have developed sampling for Bayesian quadrature (SBQ), a principled competitor to Monte Carlo methods. We conclude by testing our proposals on real weather sensor networks. We further benchmark GPGO on a wide range of canonical test problems, over which it achieves a significant improvement on its competitors. Finally, the efficacy of SBQ is demonstrated in the context of both prediction and optimisation.
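The sequential prediction here rests on standard Gaussian process posterior updates. As a point of reference, a minimal Python sketch of GP regression with a squared-exponential kernel (our own illustrative code with fixed hyperparameters, not Osborne's iterative algorithm or its Bayesian-quadrature marginalisation):

```python
import numpy as np

def sq_exp_kernel(a, b, length_scale=1.0, signal_var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return signal_var * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_test, noise_var=1e-2):
    """Posterior mean and variance of a zero-mean GP at test inputs."""
    K = sq_exp_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = sq_exp_kernel(x_train, x_test)
    K_ss = sq_exp_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.maximum(np.diag(K_ss) - np.sum(v ** 2, axis=0), 0.0)
    return mean, var

# Toy sensor readings: noisy observations of a smooth signal.
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 20)
y = np.sin(x) + 0.1 * rng.standard_normal(20)
mu, var = gp_predict(x, y, np.linspace(0, 5, 100))
```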
162

Metody výpočtu maximálně věrohodných odhadů v zobecněném lineárním smíšeném modelu / Computational Methods for Maximum Likelihood Estimation in Generalized Linear Mixed Models

Otava, Martin January 2011 (has links)
Title: Computational Methods for Maximum Likelihood Estimation in Generalized Linear Mixed Models. Author: Bc. Martin Otava. Department: Department of Probability and Mathematical Statistics. Supervisor: RNDr. Arnošt Komárek, Ph.D., Department of Probability and Mathematical Statistics. Abstract: When the maximum likelihood method is used for generalized linear mixed models, the maximization problem can be analytically unsolvable, so iterative and approximate methods are used; the latter are the core of this thesis. The thesis gives a detailed and general introduction to the widely used methods, with emphasis on algorithms useful in practical cases, and also discusses the case of non-Gaussian random effects. The approximate methods are demonstrated on real data sets, and conclusions about bias and consistency are supported by a simulation study. Keywords: generalized linear mixed model, penalized quasi-likelihood, adaptive Gauss-Hermite quadrature
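The Gauss-Hermite quadrature named in the keywords approximates the integral over the random effect in each cluster's marginal likelihood, L_i ≈ π^(-1/2) Σ_k w_k Π_j p(y_ij | η_ij + √2 σ_b t_k). A hedged sketch of the non-adaptive version for a random-intercept logistic model (variable names and the toy data are ours):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_loglik(y, eta_fixed, sigma_b, n_quad=15):
    """Gauss-Hermite approximation to one cluster's marginal log-likelihood
    for a logistic model with a normal random intercept b ~ N(0, sigma_b^2)."""
    t, w = hermgauss(n_quad)            # nodes/weights for weight exp(-t^2)
    b = np.sqrt(2.0) * sigma_b * t      # change of variables to N(0, sigma_b^2)
    eta = eta_fixed[:, None] + b[None, :]
    p = 1.0 / (1.0 + np.exp(-eta))      # P(y_j = 1 | b_k), obs in rows, nodes in cols
    lik = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return np.log(np.dot(w, lik) / np.sqrt(np.pi))

# One cluster: 5 binary responses with fixed-effect linear predictors.
y = np.array([1, 0, 1, 1, 0])
eta_fixed = np.array([0.2, -0.5, 0.8, 0.1, -0.3])
print(cluster_loglik(y, eta_fixed, sigma_b=1.0))
```

The adaptive variant the keywords refer to additionally centers and scales the nodes at each cluster's posterior mode, which greatly reduces the number of nodes needed.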
163

Modelos de regressão beta com efeitos aleatórios normais e não normais para dados longitudinais / Beta regression models with normal and not normal random effects for longitudinal data

Usuga Manco, Olga Cecilia 01 March 2013 (has links)
The class of beta regression models has been studied extensively. However, for this class there are few studies on the inclusion of random effects, on flexible random-effects distributions, or on prediction and diagnostic methods from the random-effects point of view. In this work we propose beta regression models with normal and non-normal random effects for longitudinal data. Maximum likelihood and the empirical Bayes approach are used to obtain the parameter estimates and the best prediction of the random effects, and Gauss-Hermite quadrature is used to approximate the likelihood function. Model selection methods and residual analysis are also proposed, and the BLMM package was implemented in R to perform all the procedures. The estimation procedure and the empirical distribution of the proposed residuals were analyzed through simulation studies considering different random-effects distributions, numbers of individuals, numbers of observations per individual, and variance-covariance structures for the random effects. The simulation studies showed that the estimation procedure obtains better results as the number of individuals and the number of observations per individual increase, and that the randomized quantile residual follows an approximately normal distribution. The methodology presented is a complete tool for analyzing continuous longitudinal data restricted to the bounded interval (0, 1).
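The quantile residual examined in the simulation study is r = Φ⁻¹(F(y; μ̂, φ̂)), which is standard normal under a correctly specified model. A stand-alone sketch, assuming the usual mean-precision parameterisation Beta(μφ, (1−μ)φ) and ignoring the random-effects structure:

```python
import numpy as np
from scipy import stats

def quantile_residuals(y, mu_hat, phi_hat):
    """Quantile residuals for a beta response under the mean-precision
    parameterisation Beta(mu*phi, (1-mu)*phi). For a continuous response
    no randomisation step is needed: r = Phi^{-1}(F(y)) is N(0,1) under
    a correct model."""
    u = stats.beta.cdf(y, mu_hat * phi_hat, (1.0 - mu_hat) * phi_hat)
    return stats.norm.ppf(u)

# Well-specified toy check: residuals should look standard normal.
rng = np.random.default_rng(1)
mu, phi = 0.3, 20.0
y = rng.beta(mu * phi, (1 - mu) * phi, size=500)
r = quantile_residuals(y, np.full(500, mu), np.full(500, phi))
print(r.mean(), r.std())  # approximately 0 and 1
```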
164

Direct quadrature conditional moment closure for turbulent non-premixed combustion

Ali, Shaukat January 2014 (has links)
The accurate description of the turbulence-chemistry interactions that determine chemical conversion rates and flame stability is a challenging research area in turbulent combustion modelling. This thesis presents the development and implementation of a model for the treatment of fluctuations around the conditional mean, which govern phenomena such as auto-ignition and extinction, in computational fluid dynamics (CFD) software. The wider objective is to apply the model to advanced combustion problems, extend the analysis to larger hydrocarbon fuels, and in particular examine the ability of the model to capture the effects of particulate formation such as soot. A comprehensive approach to modelling turbulent combustion is developed in this work: a direct quadrature conditional moment closure (DQCMC) method for the treatment of realistic turbulence-chemistry interactions. The method, which couples the direct quadrature method of moments (DQMOM) with the conditional moment closure (CMC) equations, takes a simplified form that is easily implemented in existing CMC formulations for CFD codes. The observed fluctuations of scalar dissipation around the conditional mean are captured by a set of mixing environments, each with a pre-defined weight. The resulting equations are similar to those of first-order CMC, the "diffusion in mixture fraction space" term is strictly positive, and no correction factors are used. Results are presented for two mixing environments, for which the resulting DQCMC matrices can be inverted analytically. The DQCMC is first tested on a simple hydrogen flame using a chemical scheme containing nine species. The effects of fluctuations around the conditional means are captured qualitatively, and the predictions agree very well with observed trends from direct numerical simulations (DNS). To extend the analysis and validate the model for a larger hydrocarbon fuel, simulations were performed for an n-heptane flame using a detailed chemical scheme containing 67 species. The hydrocarbon fuel showed improved results compared with the simple hydrogen flame, suggesting that higher hydrocarbons are more sensitive than hydrogen to the local scalar dissipation rate and to fluctuations around the conditional means. Finally, the DQCMC is coupled with a semi-empirical soot model to study the effects of particulate formation. The model qualitatively predicts the trends from DNS and agrees very well with available shock-tube experimental data on ignition delay times. These findings suggest that the DQCMC approach is a promising framework for soot modelling.
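The analytically invertible two-environment case corresponds to a two-node Gaussian quadrature matched to the first four moments of the mixing variable. A generic sketch of that moment inversion (our illustration of the closure idea, not the full DQCMC system): standardizing the moments, the two abscissas are the roots of t² − γt − 1 = 0, where γ is the skewness.

```python
import numpy as np

def two_node_quadrature(m):
    """Weights/abscissas of a two-node quadrature matching raw moments
    m = [m0, m1, m2, m3] of a distribution (two 'environments')."""
    m0, m1, m2, m3 = m
    mu = m1 / m0
    sd = np.sqrt(m2 / m0 - mu**2)
    skew = (m3 / m0 - 3 * mu * (m2 / m0) + 2 * mu**3) / sd**3
    # Standardized abscissas are the roots of t^2 - skew*t - 1 = 0.
    r = np.sqrt(skew**2 + 4.0)
    t = np.array([(skew - r) / 2.0, (skew + r) / 2.0])
    w = m0 * np.array([t[1], -t[0]]) / (t[1] - t[0])
    return w, mu + sd * t

# Standard normal moments -> nodes -1, +1 with weights 1/2 each.
w, x = two_node_quadrature([1.0, 0.0, 1.0, 0.0])
print(w, x)  # [0.5 0.5] [-1.  1.]
```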
165

As ideias envolvidas na gênese do teorema fundamental do cálculo, de Arquimedes a Newton e Leibniz / The ideas involved in the genesis of the fundamental theorem of calculus, from Archimedes to Newton and Leibniz

Santos, Walkíria Corrêa dos 13 May 2011 (has links)
This work seeks to contribute to the study of the main ideas behind the Fundamental Theorem of Calculus (FTC), from mathematics in Ancient Greece to the contributions of Newton (1642 - 1727) and Leibniz (1646 - 1716) in the seventeenth century. Given the breadth of the theme, we focus on the question of incommensurability and, in consequence, on the definition of proportion of Eudoxus (390 BC - 320 BC). That definition led to the 'geometrization' of mathematics, shaping the ideas that culminated in the concepts of derivative and integral: quadrature problems and the calculation of volumes, through the method of exhaustion and the mechanical method of Archimedes (287 BC - 212 BC), and the tangent-tracing method of Apollonius (262 BC - 190 BC). The search for the tangent to a curve and the problem of quadrature were the precursors that allowed the work of Newton and Leibniz to establish the infinitesimal calculus. The revival of mathematical activity in the fifteenth and sixteenth centuries, driven by the need for new trade and navigation routes and covering arithmetic, algebra and trigonometry, was of great importance and formed the basis of all subsequent algebraic development. In the seventeenth century an important field was established, analytic geometry, which contributed greatly to the achievements of Newton and Leibniz in establishing, definitively, that integration and differentiation are inverse operations of one another; the result is now known as the Fundamental Theorem of Calculus. The product of this research is a text, written with didactic concerns in mind, intended to make it easier to understand the interconnection of the ideas that contributed, across centuries, to the result we now know as the Fundamental Theorem of Calculus.
166

Modelos log-Birnbaum-Saunders mistos / Log-Birnbaum-Saunders mixed models

Lobos, Cristian Marcelo Villegas 06 October 2010 (has links)
The aim of this work is to introduce log-Birnbaum-Saunders mixed models (log-BS mixed models) and to extend the results to log-Birnbaum-Saunders Student-t mixed models (log-BS-t mixed models). Log-BS models have been well known since the work of Rieck and Nedelman (1991) and have received particular attention in the last ten years, with various papers published in international journals. However, the emphasis of those works has been on fixed-effects log-BS or generalized log-BS models, with little attention given to random-effects models. We first present a review of the Birnbaum-Saunders and generalized Birnbaum-Saunders (GBS) distributions, and then discuss fixed-effects log-BS and log-BS-t models, for which we review some estimation and diagnostic results. The log-BS mixed models are then introduced, preceded by a review of Gauss-Hermite quadrature (GHQ) methods. Although parameter estimation in the log-BS mixed models is performed with the NLMIXED procedure in SAS (Littell et al., 1996), we apply non-adaptive quadrature to obtain approximations to the log-likelihood of the log-BS random-intercept model. From these approximations we derive the score functions and the Hessian matrix, as well as the normal curvatures of local influence (Cook, 1986) under some usual perturbation schemes. The same procedures are applied to the log-BS-t random-intercept models. Discussions of the prediction of the random effects, a test for the variance component of the random-intercept models, and residual analysis are also presented. Finally, we compare the fits of log-BS and log-BS mixed models to a real data set, using diagnostic methods to compare the fitted models.
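The starting point of these models is Rieck and Nedelman's result that the log of a Birnbaum-Saunders variable is sinh-normal. A minimal sketch of that density and its sampling scheme (our notation; gamma denotes the log of the BS scale parameter):

```python
import numpy as np

def logbs_pdf(y, alpha, gamma):
    """Density of Y = log T when T ~ Birnbaum-Saunders(alpha, beta),
    with gamma = log(beta); Y follows a sinh-normal distribution."""
    z = (2.0 / alpha) * np.sinh((y - gamma) / 2.0)
    dz = (1.0 / alpha) * np.cosh((y - gamma) / 2.0)
    return dz * np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)

def logbs_sample(alpha, gamma, size, rng):
    """Sample via Y = gamma + 2*asinh(alpha*Z/2), Z ~ N(0,1)."""
    z = rng.standard_normal(size)
    return gamma + 2.0 * np.arcsinh(alpha * z / 2.0)

rng = np.random.default_rng(2)
y = logbs_sample(alpha=0.5, gamma=1.0, size=10_000, rng=rng)
# Sample histogram mass should track the analytic density.
print(y.mean(), logbs_pdf(np.array([1.0]), 0.5, 1.0))
```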
168

Uncertainty Quantification for low-frequency Maxwell equations with stochastic conductivity models

Kamilis, Dimitrios January 2018 (has links)
Uncertainty Quantification (UQ) has been an active area of research in recent years, with a wide range of applications in data and imaging sciences. In many problems, the source of uncertainty is an unknown parameter in the model. In physical and engineering systems, for example, the parameters of the partial differential equation (PDE) that models the observed data may be unknown or incompletely specified. In such cases, one may use a probabilistic description based on prior information and formulate a forward UQ problem of characterising the uncertainty in the PDE solution and observations in response to that in the parameters. Conversely, inverse UQ encompasses the statistical estimation of the unknown parameters from the available observations, which can be cast as a Bayesian inverse problem. The contributions of the thesis focus on examining these forward and inverse UQ problems for the low-frequency, time-harmonic Maxwell equations, where the model uncertainty emanates from the lack of knowledge of the material conductivity parameter. The motivation comes from the Controlled-Source Electromagnetic Method (CSEM), which aims to detect and image hydrocarbon reservoirs by using electromagnetic (EM) field measurements to obtain information about the conductivity profile of the sub-seabed. Traditionally, the inverse problem in CSEM has been solved with deterministic models via optimisation and regularisation methods, which, aside from the image reconstruction, provide no quantitative information about the credibility of its features. This work instead employs stochastic models in which the conductivity is represented as a lognormal random field, with the objective of providing a more informative characterisation of the model observables and the unknown parameters. The variational formulation of these stochastic models is analysed and proved to be well-posed under suitable assumptions. For computational purposes the stochastic formulation is recast as a deterministic, parametric problem with distributed uncertainty, which leads to an infinite-dimensional integration problem with respect to the prior and posterior measures. One of the main challenges is thus the approximation of these integrals, with the standard choice being some variant of the Monte Carlo (MC) method. However, such methods typically fail to take advantage of the intrinsic properties of the model and suffer from unsatisfactory convergence rates. Based on recently developed theory on high-dimensional approximation, this thesis advocates the use of Sparse Quadrature (SQ) to tackle the integration problem. For the models considered here, and under certain assumptions, we prove that for forward UQ Sparse Quadrature can attain dimension-independent convergence rates that outperform MC. Typical CSEM models are large-scale, so additional effort is made in this work to reduce the cost of obtaining forward solutions for each sampling parameter by utilising the weighted Reduced Basis method (RB) and the Empirical Interpolation Method (EIM). The proposed variant of a combined SQ-EIM-RB algorithm is based on an adaptive selection of training sets and a primal-dual, goal-oriented formulation for the EIM-RB approximation. Numerical examples show that the suggested computational framework can alleviate the computational costs associated with forward UQ for the pertinent large-scale models, providing a viable methodology for practical applications.
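A one-dimensional caricature of the forward-UQ integration problem helps explain why quadrature is attractive here: for a smooth observable of a lognormal parameter, Gaussian quadrature converges far faster than Monte Carlo, and sparse quadrature extends the idea to many dimensions. The sketch below is ours, not the thesis code:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# E[g(kappa)] with kappa = exp(theta), theta ~ N(0,1): a toy stand-in for a
# forward map of a lognormal conductivity. For g = identity the exact value
# is E[exp(theta)] = exp(1/2).
g = np.exp
exact = np.exp(0.5)

# Gauss-Hermite quadrature: a handful of nodes reaches near machine precision.
t, w = hermgauss(10)
quad = np.dot(w, g(np.sqrt(2.0) * t)) / np.sqrt(np.pi)

# Monte Carlo: the error decays only like n^{-1/2}.
rng = np.random.default_rng(3)
mc = g(rng.standard_normal(10_000)).mean()

print(abs(quad - exact), abs(mc - exact))
```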
169

Modélisation stochastique de processus d'agrégation en chimie / Stochastic modeling of aggregation and flocculation processes in chemistry

Paredes Moreno, Daniel 27 October 2017 (has links)
We center our interest on the Population Balance Equation (PBE), which describes the time evolution of systems of particles in terms of their number density function (NDF) when aggregation and breakage processes are involved. In the first part, we investigated the formation of groups of particles and the relative importance of the available variables in forming those groups, using the data in (Vlieghe 2014) and exploratory techniques such as principal component analysis, cluster analysis and discriminant analysis. We applied this scheme of analysis to the initial population of particles as well as to the resulting populations under different hydrodynamic conditions. In the second part we studied the PBE written in terms of the standard moments of the NDF, together with the Quadrature Method of Moments (QMOM) and Generalized Minimal Extrapolation (GME), in order to recover the time evolution of a finite set of standard moments of the NDF. QMOM uses an application of the Product-Difference algorithm, while GME recovers a discrete non-negative measure from a finite set of its standard moments. In the third part, we proposed a discretization scheme to find a numerical approximation to the solution of the PBE, and used three cases where the analytical solution is known (Silva et al. 2011) to compare the theoretical solution with the approximation produced by the scheme. The last part concerns the estimation of the parameters involved in modelling the aggregation and breakage processes in the PBE. We proposed a method that estimates these parameters using the numerical approximation above together with the Extended Kalman Filter, updating the estimates iteratively at each time step with a nonlinear least squares estimator.
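For the constant-kernel aggregation case, where analytical solutions are classical, the kind of discretization involved can be sketched as explicit time stepping of the discrete PBE, dn_s/dt = (β/2) Σ_{i+j=s} n_i n_j − β n_s Σ_i n_i. The scheme below is our illustration, not the thesis's own method:

```python
import numpy as np

def smoluchowski_constant_kernel(n0, beta, dt, n_steps):
    """Explicit Euler integration of the discrete aggregation PBE with a
    constant kernel beta; n0[s-1] is the initial count of size-s clusters."""
    n = np.asarray(n0, dtype=float)
    S = len(n)
    for _ in range(n_steps):
        total = n.sum()
        birth = np.zeros(S)
        for s in range(2, S + 1):
            i = np.arange(1, s)  # size s formed by collisions of i and s-i
            birth[s - 1] = 0.5 * np.dot(n[i - 1], n[s - i - 1])
        n = n + dt * (beta * birth - beta * n * total)
    return n

# Monodisperse start: the analytic total number is N0 / (1 + beta*N0*t/2).
n0 = np.zeros(50)
n0[0] = 1.0
n = smoluchowski_constant_kernel(n0, beta=1.0, dt=1e-3, n_steps=2000)
print(n.sum(), 1.0 / (1.0 + 0.5 * 1.0 * 1.0 * 2.0))  # at t = 2.0
```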
170

Etudes numériques du spectre d'un opérateur de Schrödinger avec champ magnétique constant / Numerical studies of the spectrum of a Schrödinger operator with constant magnetic field

Janane, Rahhal 27 October 2005 (has links) (PDF)
This thesis has four parts. The first two parts concern the computation of the first eigenvalue of families of Neumann operators, first using a finite-difference method and then a finite-element approximation without numerical quadrature. For the numerical computation of the smallest eigenvalue, the inverse power method was implemented, with an LU factorization of the matrix for solving the linear systems involved. The third part deals with an eigenvalue problem involving a Schrödinger operator with constant magnetic field, arising from Ginzburg-Landau theory and concerning the superconductivity of certain materials. For the numerical solution, a finite-element method with numerical integration is used. In this part, an estimate of the bottom of the spectrum of the Neumann realization is obtained, and the existence of solutions of the spectral variational problem is established. The study of convergence and the error estimates for the approximate eigenpairs with numerical quadrature, in the case of vector-valued eigenfunctions, are similar to those obtained for real-valued eigenfunctions. In these estimates, a distinction is made between the case of a simple exact eigenvalue and that of a multiple exact eigenvalue. The fourth part concerns the implementation of the numerical solution of the preceding problem.
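The eigenvalue computation described in the first two parts, inverse power iteration with an LU factorization reused across solves, can be sketched as follows (our illustrative code on a toy finite-difference matrix, not the thesis's implementation):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def smallest_eigenpair(A, tol=1e-10, max_iter=500):
    """Inverse power iteration: factor A once with LU, then repeatedly
    solve A x_{k+1} = x_k. Converges to the eigenvalue of A of smallest
    magnitude (assumed simple) and its eigenvector."""
    lu, piv = lu_factor(A)
    x = np.random.default_rng(4).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(max_iter):
        y = lu_solve((lu, piv), x)
        x = y / np.linalg.norm(y)
        lam_new = x @ A @ x  # Rayleigh quotient
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, x

# 1-D finite-difference Laplacian as a toy check.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, v = smallest_eigenpair(A)
print(lam)  # smallest eigenvalue of the discrete operator, ~pi^2/(n+1)^2
```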
