111

Méthodes d'éléments finis et estimations d'erreur a posteriori / Finite element methods and a posteriori error estimates

Dhondt-Cochez, Sarah, 30 November 2007
In this thesis we develop a posteriori error estimators for the finite element approximation of the time-harmonic Maxwell equations and of reaction-diffusion equations. We first introduce residual-type estimators for the Maxwell system and study how the constants appearing in the lower and upper bounds depend on the variation of the coefficients of the equation, considering those coefficients first as constant and then as piecewise constant. We then construct another type of estimator, based on equilibrated fluxes and the solution of local problems, which we study for the reaction-diffusion equations and for the Maxwell system. Having introduced several estimators for the Maxwell equation, we compare them through numerical tests that show the behaviour of these estimators for particular solutions on uniform meshes as well as on meshes produced by adaptive mesh-refinement procedures. Finally, in the setting of diffusion equations, we extend the construction of the equilibrated estimators to discontinuous Galerkin finite element methods.
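As background for estimators of this residual type, a generic element indicator for a second-order problem takes the form (a sketch only, in generic notation rather than the thesis's Maxwell-specific quantities):

\[
\eta_T^2 \;=\; h_T^2\,\bigl\|R_T(u_h)\bigr\|_{L^2(T)}^2 \;+\; \sum_{E\subset\partial T} h_E\,\bigl\|R_E(u_h)\bigr\|_{L^2(E)}^2,
\qquad
\eta \;=\; \Bigl(\sum_T \eta_T^2\Bigr)^{1/2},
\]

where \(R_T\) is the residual of the PDE evaluated on the discrete solution inside element \(T\) and \(R_E\) is the jump of the discrete flux across the face \(E\). Reliability and efficiency then mean that \(\eta\) bounds the error from above and below up to constants, and the dependence of those constants on the coefficients of the equation is precisely what the first part of the thesis tracks.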
112

Finite element methods for multiscale/multiphysics problems

Söderlund, Robert, January 2011
In this thesis we focus on multiscale and multiphysics problems. We derive a posteriori error estimates for a one-way coupled multiphysics problem, using the dual weighted residual method. Such estimates can be used to drive local mesh refinement in adaptive algorithms, in order to efficiently obtain good accuracy in a desired goal quantity, which we demonstrate numerically. Furthermore, we prove existence and uniqueness of finite element solutions for a two-way coupled multiphysics problem. The possibility of deriving dual weighted a posteriori error estimates for two-way coupled problems is also addressed. For a two-way coupled linear problem, we show numerically that unless the coupling of the equations is too strong, the propagation of errors between the solvers goes to zero. We also apply a variational multiscale method to both an elliptic and a hyperbolic problem that exhibit multiscale features. The method is based on numerical solutions of decoupled local fine-scale problems on patches. For the elliptic problem we derive an a posteriori error estimate and use an adaptive algorithm to automatically tune the resolution and patch size of the local problems. For the hyperbolic problem we demonstrate the importance of how the patches of the local problems are constructed, by numerically comparing the results obtained for symmetric and directed patches.
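A sketch of the dual weighted residual idea used above (standard form, generic notation): the error in a goal functional \(J\) is represented, to leading order, by local residuals of the primal solution weighted by the solution \(z\) of an adjoint (dual) problem driven by \(J\),

\[
J(u) - J(u_h) \;\approx\; \sum_{T} \rho_T(u_h)\,\omega_T(z),
\]

where \(\rho_T(u_h)\) measures the residual of the computed solution on element \(T\) and the weight \(\omega_T(z)\) measures how strongly that residual influences the goal quantity; elements with large products \(\rho_T\,\omega_T\) are the ones marked for refinement by the adaptive algorithm.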
113

Adaptive Algorithms and High Order Stabilization for Finite Element Computation of Turbulent Compressible Flow

Nazarov, Murtazo, January 2011
This work develops finite element methods with high order stabilization, and robust and efficient adaptive algorithms, for Large Eddy Simulation of turbulent compressible flows. The equations are approximated by continuous piecewise linear functions in space, and the time discretization is done in implicit/explicit fashion: the second order Crank-Nicolson method and third/fourth order explicit Runge-Kutta methods. The full residual of the system and the entropy residual are used in the construction of the stabilization terms. These methods are consistent for the exact solution, conserve quantities such as mass, momentum and energy, and are accurate and very simple to implement. We prove convergence of the method for scalar conservation laws in the case of an implicit scheme. The convergence analysis is based on showing that the approximation is uniformly bounded, weakly consistent with all entropy inequalities, and strongly consistent with the initial data. The convergence of the explicit schemes is tested in numerical examples in 1D, 2D and 3D. To resolve the small scales of the flow, such as turbulent fluctuations, shocks, discontinuities and acoustic waves, the simulation needs very fine meshes. In this thesis, a robust adjoint-based adaptive algorithm is developed for the time-dependent compressible Euler/Navier-Stokes equations. The adaptation is driven by the minimization of the error in quantities of interest such as stresses, drag and lift forces, or the mean value of some quantity. The implementation and analysis are validated in computational tests, both with respect to the stabilization and the duality-based adaptation.
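One common way to turn an entropy residual into a stabilization term, shown here only as an illustrative sketch in the spirit of the description above (the thesis's own construction may differ in detail): for a scalar conservation law \(u_t + \nabla\cdot f(u) = 0\) with entropy pair \((E, F)\), an elementwise artificial viscosity is taken as

\[
R_E(u_h) = \partial_t E(u_h) + \nabla\cdot F(u_h), \qquad
\nu_T = \min\!\Bigl(c_{\max}\, h_T \max_T |f'(u_h)|,\;\; c_E\, h_T^2\, \frac{\max_T |R_E(u_h)|}{\operatorname{osc}(E(u_h))}\Bigr),
\]

so the viscosity is first-order large near shocks, where the entropy residual is \(O(1)\), and of higher order in smooth regions, where the residual is small; \(c_{\max}\) and \(c_E\) are tunable constants and \(\operatorname{osc}(E(u_h))\) is a normalization by the oscillation of the entropy.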
114

A posteriori error estimates and adaptive methods for convection dominated transport processes

Ohlberger, Mario
Dissertation, University of Freiburg (Breisgau), 2001. / Parallel title: A-posteriori-Fehlerabschätzungen und adaptive Methoden für konvektionsdominante Transportprozesse.
115

Computação bayesiana aproximada: aplicações em modelos de dinâmica populacional / Approximate Bayesian Computation: applications in population dynamics models

Maria Cristina Martins, 29 September 2017
Processos estocásticos complexos são muitas vezes utilizados em modelagem, com o intuito de capturar uma maior proporção das principais características dos sistemas biológicos. A descrição do comportamento desses sistemas tem sido realizada por muitos amostradores baseados na distribuição a posteriori de Monte Carlo. Modelos probabilísticos que descrevem esses processos podem levar a funções de verossimilhança computacionalmente intratáveis, impossibilitando a utilização de métodos de inferência estatística clássicos e os baseados em amostragem por meio de MCMC. A Computação Bayesiana Aproximada (ABC) é considerada um novo método de inferência com base em estatísticas de resumo, ou seja, valores calculados a partir do conjunto de dados (média, moda, variância, etc.). Essa metodologia combina muitas das vantagens da eficiência computacional de processos baseados em estatísticas de resumo com inferência estatística bayesiana uma vez que, funciona bem para pequenas amostras e possibilita incorporar informações passadas em um parâmetro e formar uma priori para análise futura. Nesse trabalho foi realizada uma comparação entre os métodos de estimação, clássico, bayesiano e ABC, para estudos de simulação de modelos simples e para análise de dados de dinâmica populacional. Foram implementadas no software R as distâncias modular e do máximo como alternativas de função distância a serem utilizadas no ABC, além do algoritmo ABC de rejeição para equações diferenciais estocásticas. Foi proposto sua utilização para a resolução de problemas envolvendo modelos de interação populacional. Os estudos de simulação mostraram melhores resultados quando utilizadas as distâncias euclidianas e do máximo juntamente com distribuições a priori informativas. Para os sistemas dinâmicos, a estimação por meio do ABC apresentou resultados mais próximos dos verdadeiros bem como menores discrepâncias, podendo assim ser utilizado como um método alternativo de estimação. / Complex stochastic processes are often used in modeling in order to capture a greater proportion of the main features of natural systems. The description of the behavior of these systems has been made by many Monte Carlo based samplers of the posterior distribution. Probabilistic models describing these processes can lead to computationally intractable likelihood functions, precluding the use of classical statistical inference methods and those based on sampling by MCMC. Approximate Bayesian Computation (ABC) is considered a new method for inference based on summary statistics, that is, values calculated from the data set (mean, mode, variance, etc.). This methodology combines many of the advantages of the computational efficiency of processes based on summary statistics with Bayesian statistical inference, since it works well for small samples and makes it possible to incorporate past information about a parameter and form a prior distribution for future analysis. In this work a comparison between the classical, Bayesian and ABC estimation methods was made in simulation studies considering simple models and in the analysis of population dynamics data. The modular and maximum distances were implemented in the R software as alternative distance functions to be used in ABC, together with the rejection ABC algorithm for stochastic differential equations, and their use was proposed for solving problems involving population interaction models. The simulation studies showed better results when the Euclidean and maximum distances were used together with informative prior distributions. For the dynamic systems, the ABC estimation gave results closer to the true values as well as smaller discrepancies, and it can therefore be used as an alternative estimation method.
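For concreteness, a minimal sketch of the rejection-ABC scheme referred to above, in generic Python (the simulator, summary statistics, prior sampler, distance and tolerance are placeholders standing in for the thesis's models, not its actual implementation, which is in R):

```python
import numpy as np

def abc_rejection(observed, simulate, summarize, sample_prior,
                  distance, tol, n_draws=10_000, seed=None):
    """Rejection ABC: keep prior draws whose simulated summary
    statistics lie within `tol` of the observed summaries."""
    rng = np.random.default_rng(seed)
    s_obs = summarize(observed)
    accepted = []
    for _ in range(n_draws):
        theta = sample_prior(rng)          # draw a parameter from the prior
        data = simulate(theta, rng)        # simulate the (stochastic) model
        if distance(summarize(data), s_obs) <= tol:
            accepted.append(theta)         # approximate posterior draw
    return np.asarray(accepted)

def max_distance(s1, s2):
    """Maximum (Chebyshev) distance between summary vectors."""
    return np.max(np.abs(np.asarray(s1, float) - np.asarray(s2, float)))
```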
116

Diagramas de influência e teoria estatística / Influence Diagrams and Statistical Theory

Rafael Bassi Stern, 09 January 2009
O objetivo principal deste trabalho foi analisar o controverso conceito de informação em estatística. Para tal, primeiramente foi estudado o conceito de informação dado por Basu. A seguir, a análise foi dividida em três partes: informação nos dados, informação no experimento e diagramas de influência. Nas duas primeiras etapas, sempre se tentou definir propriedades que uma função de informação deveria satisfazer para se enquadrar ao conceito. Na primeira etapa, foi estudado como o princípio da verossimilhança é uma classe de equivalência decorrente de acreditar que experimentos triviais não trazem informação. Também foram apresentadas métricas que satisfazem o princípio da verossimilhança e estas foram usadas para avaliar um exemplo intuitivo. Na segunda etapa, passamos para o problema da informação de um experimento. Foi apresentada a relação da suficiência de Blackwell com experimentos triviais e o conceito usual de suficiência. Também foi analisada a equivalência de Blackwell e a sua relação com o Princípio da Verossimilhança anteriormente estudado. Além disso, as métricas apresentadas para medir a informação de conjuntos de dados foram adaptadas para também medir a informação de um experimento. Finalmente, observou-se que nas etapas anteriores uma série de simetrias mostraram-se como elementos essenciais do conceito de informação. Para ganhar intuição sobre elas, estas foram reescritas através da ferramenta gráfica dos diagramas de influência. Assim, definições como suficiência, suficiência de Blackwell, suficiência mínima e completude foram reapresentadas apenas usando essa ferramenta. / The main objective of this work is to analyze the controversial concept of information in Statistics. To do so, firstly the concept of information according to Basu is presented. Next, the analysis is divided into three parts: information in a data set, information in an experiment, and influence diagrams. In the first two parts, we always tried to define properties an information function should satisfy in order to be in accordance with the concept of Basu. In the first part, it was studied how the likelihood principle is an equivalence class which follows from believing that trivial experiments do not bring information. Metrics which satisfy the likelihood principle were also presented and used to analyze an intuitive example. In the second part, the problem became that of determining the information of a particular experiment. The relation between Blackwell's sufficiency, trivial experiments and classical sufficiency was presented. Blackwell's equivalence was also analyzed and its relationship with the Likelihood Principle was exposed. The metrics presented to evaluate the information in a data set were also adapted to do so with experiments. Finally, in the previous parts a number of symmetries were shown to be essential elements of the concept of information. To gain more intuition about these elements, we tried to rewrite them using the graphic tool of influence diagrams. Thus, definitions such as sufficiency, Blackwell's sufficiency, minimal sufficiency and completeness were presented again using only influence diagrams.
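For reference, the classical notion of sufficiency that the abstract contrasts with Blackwell's sufficiency can be stated through the standard factorization criterion (textbook definition, not specific to this thesis): a statistic \(T\) is sufficient for \(\theta\) when

\[
f(x \mid \theta) \;=\; g\bigl(T(x), \theta\bigr)\, h(x),
\]

equivalently, when the conditional distribution of the data given \(T(x)\) does not depend on \(\theta\), so that \(T\) carries all the information about \(\theta\) that the data contain.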
117

Accuracy and variability of item parameter estimates from marginal maximum a posteriori estimation and Bayesian inference via Gibbs samplers

Wu, Yi-Fang, 01 August 2015
Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and variability of the item parameter estimates from the marginal maximum a posteriori estimation via an expectation-maximization algorithm (MMAP/EM) and the Markov chain Monte Carlo Gibbs sampling (MCMC/GS) approach. In the study, the various factors which have an impact on the accuracy and variability of the item parameter estimates are discussed, and then further evaluated through a large scale simulation. The factors of interest include the composition and length of tests, the distribution of underlying latent traits, the size of samples, and the prior distributions of discrimination, difficulty, and pseudo-guessing parameters. The results of the two estimation methods are compared to determine the lower limit--in terms of test length, sample size, test characteristics, and prior distributions of item parameters--at which the methods can satisfactorily recover item parameters and efficiently function in reality. For practitioners, the results help to define limits on the appropriate use of the BILOG-MG (which implements MMAP/EM) and also, to assist in deciding the utility of OpenBUGS (which carries out MCMC/GS) for item parameter estimation in practice.
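In the three-parameter logistic model studied above, the probability that an examinee with latent trait \(\theta\) answers item \(i\) correctly is (standard form)

\[
P_i(\theta) \;=\; c_i + (1 - c_i)\,\frac{1}{1 + \exp\bigl(-D\,a_i(\theta - b_i)\bigr)},
\]

where \(a_i\), \(b_i\) and \(c_i\) are the discrimination, difficulty and pseudo-guessing parameters whose recovery the study evaluates, and \(D\) is a scaling constant (commonly 1.7 or 1).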
118

Analyse a posteriori d'algorithmes itératifs pour des problèmes non linéaires. / A posteriori analysis of iterative algorithms for nonlinear problems.

Dakroub, Jad, 07 October 2014
La résolution numérique de n’importe quelle discrétisation d’équations aux dérivées partielles non linéaires requiert le plus souvent un algorithme itératif. En général, la discrétisation des équations aux dérivées partielles donne lieu à des systèmes de grandes dimensions. Comme la résolution des grands systèmes est très coûteuse en terme de temps de calcul, une question importante se pose: afin d’obtenir une solution approchée de bonne qualité, quand est-ce qu’il faut arrêter l’itération afin d’éviter les itérations inutiles ? L’objectif de cette thèse est alors d’appliquer, à différentes équations, une méthode qui nous permet de diminuer le nombre d’itérations de la résolution des systèmes en gardant toujours une bonne précision de la méthode numérique. En d’autres termes, notre but est d’appliquer une nouvelle méthode qui fournira un gain remarquable en terme de temps de calcul. Tout d’abord, nous appliquons cette méthode pour un problème non linéaire modèle. Nous effectuons l’analyse a priori et a posteriori de la discrétisation par éléments finis de ce problème et nous proposons par la suite deux algorithmes de résolution itérative correspondants. Nous calculons les estimations d’erreur a posteriori de nos algorithmes itératifs proposés et nous présentons ensuite quelques résultats d’expérience numériques afin de comparer ces deux algorithmes. Nous appliquerons de même cette approche pour les équations de Navier-Stokes. Nous proposons un schéma itératif et nous étudions la convergence et l’analyse a priori et a posteriori correspondantes. Finalement, nous présentons des simulations numériques montrant l’efficacité de notre méthode. / The numerical resolution of any discretization of nonlinear PDEs most often requires an iterative algorithm. In general, the discretization of partial differential equations leads to large systems. As the resolution of large systems is very costly in terms of computation time, an important question arises. To obtain an approximate solution of good quality, when is it necessary to stop the iteration in order to avoid unnecessary iterations? A posteriori error indicators have been studied in recent years owing to their remarkable capacity to enhance both speed and accuracy in computing. This thesis deals with a posteriori error estimation for the finite element discretization of nonlinear problems. Our purpose is to apply a new method that allows us to reduce the number of iterations of the resolution system while keeping a good accuracy of the numerical method. In other words, our goal is to apply a new method that provides a remarkable gain in computation time. For a given nonlinear equation we propose a finite element discretization relying on the Galerkin method. We solve the discrete problem using two iterative methods involving some kind of linearization. For each of them, there are actually two sources of error, namely discretization and linearization. Balancing these two errors can be very important, since it avoids performing an excessive number of iterations. Our results lead to the construction of computable upper indicators for the full error. Similarly, we apply this approach to the Navier-Stokes equations. Several numerical tests are provided to evaluate the efficiency of our indicators.
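A minimal sketch of the kind of stopping rule implied by balancing the two error sources (hypothetical interface; `step`, `estimate_disc` and `estimate_lin` stand for one linearization iteration and for computable discretization and linearization indicators, not the thesis's actual estimators):

```python
def solve_with_balanced_stopping(step, estimate_disc, estimate_lin,
                                 u0, gamma=0.1, max_iter=100):
    """Run a linearization scheme (fixed point or Newton) and stop once the
    linearization error indicator is dominated by the discretization error
    indicator, so that further iterations no longer improve the accuracy."""
    u = u0
    for _ in range(max_iter):
        u = step(u)                       # one iteration of the linearized solve
        eta_disc = estimate_disc(u)       # a posteriori discretization indicator
        eta_lin = estimate_lin(u)         # a posteriori linearization indicator
        if eta_lin <= gamma * eta_disc:   # linearization error negligible: stop
            break
    return u, eta_disc, eta_lin
```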
119

Compression Techniques for Boundary Integral Equations - Optimal Complexity Estimates

Dahmen, Wolfgang; Harbrecht, Helmut; Schneider, Reinhold, 05 April 2006
In this paper matrix compression techniques in the context of wavelet Galerkin schemes for boundary integral equations are developed and analyzed that exhibit optimal complexity in the following sense. The fully discrete scheme produces approximate solutions within the discretization error accuracy offered by the underlying Galerkin method at a computational expense that is proven to stay proportional to the number of unknowns. Key issues are the second compression, which reduces the near-field complexity significantly, and an additional a posteriori compression. The latter is based on a general result concerning an optimal work balance that applies, in particular, to the quadrature used to compute the compressed stiffness matrix with sufficient accuracy in linear time. The theoretical results are illustrated by a 3D example on a nontrivial domain.
120

Robust local problem error estimation for a singularly perturbed reaction-diffusion problem on anisotropic finite element meshes

Grosman, Serguei, 05 April 2006
Singularly perturbed reaction-diffusion problems in general exhibit solutions with anisotropic features, e.g. strong boundary and/or interior layers. This anisotropy is reflected in the discretization by using meshes with anisotropic elements. The quality of the numerical solution rests on the robustness of the a posteriori error estimator with respect to both the perturbation parameters of the problem and the anisotropy of the mesh. An estimator that has been shown to be one of the most reliable for reaction-diffusion problems is the equilibrated residual method, together with its modification by Ainsworth and Babuška for singularly perturbed problems. However, even the modified method is not robust in the case of anisotropic meshes. The present work modifies the equilibrated residual method for anisotropic meshes. The resulting error estimator is equivalent to the equilibrated residual method in the case of isotropic meshes and is proved to be robust on anisotropic meshes as well. A numerical example confirms the theory.
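Schematically, estimators of the equilibrated residual type compute, on each element \(T\), the solution \(\phi_T\) of a local Neumann problem with equilibrated boundary fluxes \(g_E\) (generic form for the model problem \(-\varepsilon\Delta u + u = f\); the anisotropic modification analyzed above changes the details, not this overall structure):

\[
\varepsilon\,(\nabla\phi_T, \nabla v)_T + (\phi_T, v)_T
\;=\; (f + \varepsilon\Delta u_h - u_h,\, v)_T
\;+\; \sum_{E\subset\partial T} (g_E - \varepsilon\,\partial_n u_h,\, v)_E
\qquad \forall\, v \in V_T,
\]

with local indicator \(\eta_T = \bigl(\varepsilon\,\|\nabla\phi_T\|_{L^2(T)}^2 + \|\phi_T\|_{L^2(T)}^2\bigr)^{1/2}\); in the standard theory, the equilibration of the fluxes \(g_E\) is what makes the local Neumann problems well posed and yields an energy-norm upper bound without unknown multiplicative constants when the local problems are solved exactly.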
