15 November 2006
This research is concerned with the self-adaptive numerical solution of the neutral particle radiation transport problem. Radiation transport is an extremely challenging computational problem: the governing equation is seven-dimensional (three in space, two in direction, one in energy, and one in time), with a high degree of coupling between these variables. Discretizing this relatively large number of independent variables without care can lead to sets of linear equations of intractable size. Although parallel computing has allowed the solution of very large problems, available computational resources will always be finite, because industry demands ever more sophisticated multiphysics models. There is thus a pressing requirement to optimize the discretizations so as to minimize the effort and maximize the accuracy. One way to achieve this goal is through adaptive phase-space refinement. Unfortunately, the quality of a discretization (and of its solution) is, in general, not known a priori; accurate error estimates can only be attained via a posteriori error analysis. In particular, in the context of the finite element method, a posteriori error analysis provides a rigorous error bound. The main difficulty in applying a well-established a posteriori error analysis, and subsequent adaptive refinement, to radiation transport is the strong coupling between the spatial and angular variables. This research addresses this issue within the context of the second-order, even-parity form of the transport equation discretized with the finite-element spherical harmonics method. The objective of this thesis is to develop a posteriori error analysis in a coupled space-angle framework, together with an efficient adaptive algorithm. Moreover, a mesh refinement strategy tuned to minimize the error in a target engineering output has been developed by employing the dual problem.
This numerical framework has been implemented in the general-purpose neutral particle code EVENT for assessment.
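The residual-based bounds alluded to above have a standard generic shape. As a purely illustrative sketch, for a scalar second-order model problem (not the coupled space-angle estimator developed in this thesis), the local indicator combines the interior residual with flux jumps across element faces:

```latex
\[
  \eta_K^2 \;=\; h_K^2 \,\bigl\| f + \Delta u_h \bigr\|_{L^2(K)}^2
  \;+\; \tfrac12 \sum_{e \subset \partial K \setminus \partial\Omega}
        h_e \,\bigl\| [\![ \partial_n u_h ]\!] \bigr\|_{L^2(e)}^2 ,
  \qquad
  \| u - u_h \|_{E} \;\le\; C \Bigl( \sum_{K} \eta_K^2 \Bigr)^{1/2} .
\]
```

Adaptive refinement then targets the elements with the largest \(\eta_K\); the constant \(C\) is independent of the mesh size.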
Cooper, Jonathan Paul
Simulating the human heart is a challenging problem: simulations are very time consuming, to the extent that some can take days to compute even on high-performance computing resources. There is considerable interest in computational optimisation techniques, with a view to making whole-heart simulations tractable. The reliability of heart model simulations is also of great concern, particularly in view of clinical applications. Simulation software should be easily testable and maintainable, which is often not the case with extensively hand-optimised software. It is thus crucial to automate and verify any optimisations. CellML is an XML language designed for describing biological cell models from a mathematical modeller's perspective, developed at the University of Auckland. It gives us an abstract format for such models, which from a computer science perspective looks like a domain-specific programming language. We are investigating the gains available from exploiting this viewpoint. We describe various static checks for CellML models, notably checking the dimensional consistency of the mathematics, and investigate the possibilities of provably correct optimisations. In particular, we demonstrate that partial evaluation is a promising technique for this purpose, and that it combines well with a lookup table technique, commonly used in cardiac modelling, which we have automated. We have developed a formal operational semantics for CellML, which enables us to mathematically prove the partial evaluation of CellML correct, in the sense that optimisation of models will not change the results of simulations. The use of lookup tables involves an approximation and thus introduces some error; we have analysed this using a posteriori techniques and shown how it may be managed. While the techniques could be applied more widely to biological models in general, this work focuses on cardiac models as an application area.
We present experimental results demonstrating the effectiveness of our optimisations on a representative sample of cardiac cell models, in a variety of settings.
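The lookup-table technique mentioned above can be sketched in a few lines. This is a minimal illustration, not the authors' automated implementation: the gating-rate function `alpha_h`, the voltage range, and the node count are illustrative assumptions chosen so that the tabulation error is easy to bound.

```python
import numpy as np

def build_lookup(f, lo, hi, n):
    """Tabulate f at n uniformly spaced nodes on [lo, hi]; evaluate by
    piecewise-linear interpolation between the precomputed values."""
    xs = np.linspace(lo, hi, n)
    ys = f(xs)
    return lambda x: np.interp(x, xs, ys)

def alpha_h(v):
    # Hodgkin-Huxley-style gating rate (illustrative choice): the exponential
    # makes it a typical candidate for tabulation over the voltage range.
    return 0.07 * np.exp(-(v + 65.0) / 20.0)

# Physiological membrane-voltage range, 0.05 mV node spacing (3001 nodes).
fast_alpha = build_lookup(alpha_h, -100.0, 50.0, 3001)

# A posteriori check of the tabulation error on a fine probe grid; for linear
# interpolation the error is bounded by h^2 * max|f''| / 8.
probe = np.linspace(-100.0, 50.0, 100001)
err = np.max(np.abs(fast_alpha(probe) - alpha_h(probe)))
```

In a real cell model the table would replace the exponential inside the right-hand-side evaluation of the ODE system; the measured `err` quantifies the approximation error that the a posteriori analysis then has to manage.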
This thesis is concerned with error estimates for energy-based quasicontinuum (QC) methods, a class of computational methods for coupling atomistic and continuum models of micro- or nano-scale materials. The thesis consists of two parts. The first part considers a priori error estimates for three energy-based QC methods. The second part deals with a posteriori error estimates for a recently developed energy-based QC method. In the first part, we develop a unified framework for the a priori error estimates and present a new and simpler proof based on negative-norm estimates, which substantially extends previous results. In the second part, we establish a posteriori error estimates for the newly developed energy-based QC method, in an energy norm and for the total energy. The analysis is based on a posteriori residual and stability estimates. Adaptive mesh refinement algorithms based on these error estimators are formulated. In both parts, numerical experiments are presented to illustrate the results of our analysis and to indicate the optimal convergence rates. The thesis is accompanied by a thorough introduction to the development of QC methods and their numerical analysis, as well as an outlook on future work in the conclusion.
Indicadores de erros a posteriori na aproximação de funcionais de soluções de problemas elípticos no contexto do método Galerkin descontínuo hp-adaptivo / A posteriori error indicators in the approximation of functionals of elliptic problem solutions in the context of the hp-adaptive discontinuous Galerkin method
Gonçalves, João Luis
19 August 2018
Advisors: Sônia Maria Gomes, Philippe Remy Bernard Devloo, Igor Mozolevski / Doctoral thesis (2011), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: In this work we study goal-oriented a posteriori error indicators for approximations by the discontinuous Galerkin method for the biharmonic and Poisson equations.
The methodology used to derive the indicators is based on the dual problem associated with the functional, which is known to generate the most effective indicators. The two main error indicators based on the dual problem, previously obtained for second-order problems, are extended here to fourth-order problems. We also propose a third indicator for second- and fourth-order problems. We study the characteristics of the different indicators in localizing the elements with the greatest error contributions and in characterizing the regularity of the solutions, as well as the consequences for indicator efficiency. We propose an hp-adaptive strategy specific to goal-oriented error indicators. The numerical experiments show that the hp-adaptive strategy works properly, and that hp-adapted approximation spaces efficiently reduce the error in the functional with fewer degrees of freedom. Moreover, in the examples studied, the quality of the results varies among the indicators, depending on the type of singularity and on the equation treated, which shows the importance of having a wider range of indicators at hand. / Doctorate in Applied Mathematics
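The dual argument these indicators rest on can be summarised, for a linear model problem \(a(u,v)=\ell(v)\) with target functional \(J\), by the standard dual-weighted-residual identity. This is a generic sketch, not the specific indicators derived in this work:

```latex
% Dual problem: find z such that  a(v, z) = J(v)  for all admissible v.
% With Galerkin orthogonality, a(u - u_h, z_h) = 0 for any discrete z_h, so
\[
  J(u) - J(u_h) \;=\; a(u - u_h,\, z)
  \;=\; \ell(z - z_h) \;-\; a(u_h,\, z - z_h).
\]
```

Localising the right-hand side element by element yields indicators of the form \(\eta_K = \rho_K\,\omega_K\): a local residual \(\rho_K\) weighted by the local influence \(\omega_K\) of the dual solution, which is why refinement driven by these indicators targets the goal functional rather than a global norm.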
The main emphasis of this thesis is the a posteriori error analysis of discontinuous Galerkin (DG) methods for elliptic variational inequalities. DG methods have become very popular in the last two decades owing to their ability to handle complex geometries, irregular meshes with hanging nodes, and different degrees of polynomial approximation on different elements. Moreover, they are high-order accurate and stable methods. Adaptive algorithms refine the mesh locally in regions where the solution exhibits irregular behaviour, and a posteriori error estimates are the main ingredient used to steer the adaptive mesh refinement. The solution of a linear elliptic problem exhibits singularities due to changes in boundary conditions, irregularity of the coefficients, and re-entrant corners in the domain. In addition, the solution of a variational inequality exhibits further irregular behaviour due to the occurrence of the free boundary (the part of the domain which is a priori unknown and must be found as a component of the solution). In the absence of full elliptic regularity of the solution, uniform refinement is inefficient and does not yield the optimal convergence rate; adaptive refinement, based on the residuals (or a posteriori error estimators) of the problem, enhances efficiency by refining the mesh locally and recovers optimal convergence. In this thesis, we derive a posteriori error estimates for DG methods for elliptic variational inequalities of the first kind and of the second kind. The thesis contains seven chapters, including an introductory chapter and a concluding chapter. In the introductory chapter, we review some fundamental preliminary results which are used in the subsequent analysis. In Chapter 2, a posteriori error estimates for a class of DG methods are derived for the second-order elliptic obstacle problem, which is a prototype for elliptic variational inequalities of the first kind.
The analysis of Chapter 2 is carried out for a general obstacle function; the error estimator obtained there therefore involves min/max functions, and its computation becomes somewhat complicated. Under a mild assumption on the trace of the obstacle, we derive a significantly simpler and easily computable error estimator in Chapter 3. Numerical experiments illustrate that this error estimator indeed behaves better than the one derived in Chapter 2. In Chapter 4, we carry out an a posteriori analysis of DG methods for the Signorini problem, which arises in the study of frictionless contact problems. A nonlinear smoothing map from the DG finite element space to a conforming finite element space is constructed and used extensively in the analyses of Chapters 2, 3, and 4. Moreover, a common property shared by all DG methods allows us to carry out the analysis in a unified setting. In Chapter 5, we study the C0 interior penalty method for the plate frictional contact problem, which is a fourth-order variational inequality of the second kind. In this chapter, we also establish a medius analysis along with the a posteriori analysis. Numerical results are presented at the end of every chapter to illustrate the theoretical results derived there. We discuss possible extensions and future directions of the work in Chapter 6. In the last chapter, we document the FEM codes used in the numerical experiments.
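The adaptive algorithms referred to in these abstracts all follow the classical SOLVE, ESTIMATE, MARK, REFINE loop. The sketch below is a minimal generic illustration with bulk (Dörfler) marking; `solve`, `estimate`, and `refine` are placeholder callables standing in for a real FEM backend, not the thesis code.

```python
import numpy as np

def dorfler_mark(eta, theta=0.5):
    """Bulk (Doerfler) marking: return the smallest set M of elements with
    sum_{K in M} eta_K^2 >= theta * sum_K eta_K^2."""
    order = np.argsort(eta)[::-1]          # largest indicators first
    cumulative = np.cumsum(eta[order] ** 2)
    target = theta * cumulative[-1]
    cutoff = np.searchsorted(cumulative, target) + 1
    return order[:cutoff]

def adaptive_loop(solve, estimate, refine, mesh, tol, max_iter=20):
    """SOLVE -> ESTIMATE -> MARK -> REFINE until the estimator is below tol."""
    for _ in range(max_iter):
        u = solve(mesh)
        eta = estimate(mesh, u)            # per-element indicators eta_K
        if np.sqrt(np.sum(eta ** 2)) <= tol:
            break
        mesh = refine(mesh, dorfler_mark(eta))
    return mesh, u
```

Dörfler marking is the standard choice for which optimal convergence rates of adaptive FEM have been proved in the conforming setting; the theta parameter trades off aggressiveness of refinement against the number of loop iterations.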
Algebraická chyba v maticových výpočtech v kontextu numerického řešení parciálních diferenciálních rovnic / Algebraic Error in Matrix Computations in the Context of Numerical Solution of Partial Differential Equations
Papež, Jan
January 2017
Title: Algebraic Error in Matrix Computations in the Context of Numerical Solution of Partial Differential Equations. Author: Jan Papež. Department: Department of Numerical Mathematics. Supervisor: prof. Ing. Zdeněk Strakoš, DrSc., Department of Numerical Mathematics. Abstract: The solution of algebraic problems is an inseparable and usually the most time-consuming part of the numerical solution of PDEs. Algebraic computations are, in general, not exact, and in many cases it is even principally desirable not to perform them to high accuracy. This has consequences that have to be taken into account in numerical analysis. This thesis investigates several closely related issues along this line. It focuses, in particular, on the spatial distribution of errors of different origin across the solution domain, the backward-error interpretation of the algebraic error in the context of function approximations, the incorporation of algebraic errors into a posteriori error analysis, the influence of algebraic errors on adaptivity, and the construction of stopping criteria for (preconditioned) iterative algebraic solvers. Progress in these issues requires, in our opinion, understanding the interconnections between the phases of the overall solution process, such as discretization and algebraic computations. Keywords: Numerical solution of partial...
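The stopping criteria mentioned above aim to balance algebraic and discretization error so that the iterative solver is not run further than the discretization accuracy justifies. The sketch below is a minimal illustration under a strong simplifying assumption: the algebraic residual norm stands in for the algebraic error estimate (the thesis argues for more refined estimates), and `eta_disc` and `gamma` are illustrative parameters.

```python
import numpy as np

def solve_with_balanced_stopping(A, b, eta_disc, gamma=0.1, max_iter=500):
    """Conjugate gradient iteration for SPD A, stopped once the algebraic
    residual norm drops below gamma * eta_disc, i.e. once the algebraic error
    is plausibly dominated by the discretization error eta_disc; further
    iterations would refine the algebra beyond what the mesh can represent."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) <= gamma * eta_disc:
            break                          # algebra resolved below eta_disc
        Ap = A @ p
        alpha = rs / (p @ Ap)              # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p          # new A-conjugate search direction
        rs = rs_new
    return x, np.sqrt(rs)
```

In an adaptive PDE code, `eta_disc` would be the current a posteriori discretization estimate, recomputed on each mesh, so that the solver works harder only when the mesh is fine enough to benefit.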
Köhler, Karoline Sophie
14 November 2016
Efficient and reliable a posteriori error estimates are a key ingredient for the efficient numerical computation of solutions of variational inequalities by the finite element method. This thesis studies such reliable and efficient error estimates for arbitrary finite element methods and three representative variational inequalities, namely the obstacle problem, the Signorini problem, and the Bingham problem in two space dimensions. The error estimates rely on a Lagrange multiplier connected to the problem, which provides a link between the variational inequality and the corresponding linear problem. Reliability and efficiency are shown with respect to a total error, under minimal regularity assumptions: the approximation of the exact solution satisfies the Dirichlet boundary conditions, and the approximation of the Lagrange multiplier is non-positive in the case of the obstacle and Signorini problems and has absolute value at most 1 for the Bingham flow problem. These general assumptions allow for reliable and efficient a posteriori error analysis even in the presence of inexact solves, which naturally occur in the context of variational inequalities. From the point of view of applications, reliability and efficiency with respect to the error of the primal variable in the energy norm are of great interest. Such estimates depend on the design of an efficient discrete Lagrange multiplier. Affirmative examples of discrete Lagrange multipliers are presented for the obstacle and Signorini problems and three different first-order finite element methods, namely the conforming Courant, the non-conforming Crouzeix-Raviart, and the mixed Raviart-Thomas FEM. Partial results exist for the Bingham flow problem. Numerical experiments highlight the theoretical results and show efficiency and reliability. The numerical tests suggest that the resulting adaptive algorithms converge with optimal convergence rates.