1. Adaptive methods for time domain boundary integral equations for acoustic scattering. Gläfke, Matthias, January 2012.
This thesis is concerned with the study of transient scattering of acoustic waves by an obstacle in an infinite domain, where the scattered wave is represented in terms of time domain boundary layer potentials. The problem of finding the unknown solution of the scattering problem is thus reduced to the problem of finding the unknown density of the time domain boundary layer operators on the obstacle’s boundary, subject to the boundary data of the known incident wave. Using a Galerkin approach, the unknown density is replaced by a piecewise polynomial approximation, the coefficients of which can be found by solving a linear system. For a two dimensional scattering problem, the entries of this system matrix involve integrals over four dimensional space-time manifolds, and an accurate computation of these integrals is crucial for the stability of the method. Using piecewise polynomials of low order, the two temporal integrals can be evaluated analytically, leading to kernel functions for the spatial integrals that are supported piecewise on complicated domains. These spatial kernel functions are generalised into a class of admissible kernel functions. A quadrature scheme for the approximation of the two dimensional spatial integrals with admissible kernel functions is presented and proven to converge exponentially, using the theory of countably normed spaces. A priori error estimates for the Galerkin approximation scheme are recalled, enhanced and discussed. In particular, the scattered wave’s energy is studied as an alternative error measure. The numerical schemes are presented in a form that allows non-uniform meshes in space and time, so that they can be combined with adaptive methods which are based on a posteriori error indicators and which modify the underlying discretisation according to the values of these indicators. The theoretical analysis of these schemes demands the study of generalised mapping properties of time domain boundary layer potentials and integral operators, analogous to the well-known results for elliptic problems. These mapping properties are shown for both two and three space dimensions. Using the generalised mapping properties, three types of a posteriori error estimators are adopted from the literature on elliptic problems and studied within the context of the two dimensional transient problem. Some comments on the three dimensional case are also given. Advantages and disadvantages of each of these a posteriori error estimators are discussed and compared to the a priori error estimates. The thesis concludes with the presentation of two adaptive schemes for the two dimensional scattering problem and corresponding numerical experiments.
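Adaptive methods of the kind described above typically follow a solve–estimate–mark–refine cycle. The Python sketch below illustrates that generic cycle only; the callables `solve`, `estimate` and `refine`, the bulk-marking strategy and the tolerance values are hypothetical placeholders and are not taken from the thesis.

```python
# Generic solve-estimate-mark-refine cycle driven by a posteriori error
# indicators. The callables `solve`, `estimate` and `refine` are hypothetical
# placeholders standing in for a space-time Galerkin boundary element solver,
# a local error indicator and a mesh refinement routine.

def adaptive_loop(mesh, solve, estimate, refine, tol=1e-4, max_iter=20, theta=0.5):
    """Refine until the total estimated error drops below `tol`.

    solve(mesh)          -> approximate density (Galerkin coefficients)
    estimate(mesh, u)    -> list of squared local error indicators, one per element
    refine(mesh, marked) -> new mesh with the marked elements refined
    theta                -> bulk (Doerfler) marking parameter in (0, 1]
    """
    u = None
    for _ in range(max_iter):
        u = solve(mesh)
        eta = estimate(mesh, u)
        total_sq = sum(eta)
        if total_sq ** 0.5 < tol:
            break
        # Bulk marking: take the elements with the largest indicators until
        # they carry a theta-fraction of the total squared estimated error.
        order = sorted(range(len(eta)), key=lambda i: -eta[i])
        marked, acc = [], 0.0
        for i in order:
            marked.append(i)
            acc += eta[i]
            if acc >= theta * total_sq:
                break
        mesh = refine(mesh, marked)
    return mesh, u
```

The bulk marking shown here is one common choice for turning local indicators into a refinement decision; the thesis's own schemes may mark and refine differently.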
2. A new approach to boundary integral simulations of axisymmetric droplet dynamics / 軸対称液滴運動の境界積分シミュレーションに対する新しいアプローチ. Koga, Kazuki, 24 November 2020.
Kyoto University / 0048 / New system, course doctorate / Doctor of Informatics / Degree No. Kō 22861 / Jōhaku No. 740 / 新制||情||127 (University Library) / Department of Advanced Mathematical Sciences, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor 青柳 富誌生, Professor 磯 祐介, Professor 田口 智清 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
3. Discontinuous Galerkin method for the solution of boundary-value problems in non-smooth domains. Bartoš, Ondřej, January 2017.
This thesis is concerned with the analysis of the finite element method and the discontinuous Galerkin method for the numerical solution of an elliptic boundary value problem with a nonlinear Newton boundary condition in a two-dimensional polygonal domain. The weak solution loses regularity in a neighbourhood of boundary singularities, which may occur at corners or at points on edges where the weak solution vanishes. The main attention is paid to the study of error estimates. It turns out that the order of convergence is not reduced by the nonlinearity if the weak solution is nonzero on a large part of the boundary. If the weak solution is zero on the whole boundary, the nonlinearity only slows down the convergence of the function values but not the convergence of the gradient. The same analysis is carried out for approximate solutions obtained with numerical integration. The theoretical results are verified by numerical experiments.
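For orientation, problems of this class are usually posed with a Robin-type boundary term whose coefficient depends nonlinearly on the solution. A representative model formulation, with parameters κ > 0 and α ≥ 0, is sketched below; the exact setting studied in the thesis may differ.

```latex
% Representative elliptic model problem with a nonlinear Newton boundary
% condition (kappa > 0, alpha >= 0); a sketch, not the thesis's exact setting.
\[
\begin{aligned}
-\Delta u &= f && \text{in } \Omega \subset \mathbb{R}^{2}, \\
\frac{\partial u}{\partial n} + \kappa\,|u|^{\alpha} u &= \varphi && \text{on } \partial\Omega,
\end{aligned}
\]
% Weak formulation: find u in H^1(Omega) such that
\[
\int_{\Omega} \nabla u \cdot \nabla v \,\mathrm{d}x
  + \kappa \int_{\partial\Omega} |u|^{\alpha} u \, v \,\mathrm{d}S
  = \int_{\Omega} f \, v \,\mathrm{d}x
  + \int_{\partial\Omega} \varphi \, v \,\mathrm{d}S
  \qquad \text{for all } v \in H^{1}(\Omega).
\]
```

Loosely speaking, the slowdown reported for weak solutions vanishing on the boundary is connected to the degeneracy of the boundary term |u|^α u near u = 0.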
4. Méthodes de Monte Carlo stratifiées pour l'intégration numérique et la simulation numériques / Stratified Monte Carlo methods for numerical integration and simulation. Fakhereddine, Rana, 26 September 2013.
Les méthodes de Monte Carlo (MC) sont des méthodes numériques qui utilisent des nombres aléatoires pour résoudre avec des ordinateurs des problèmes des sciences appliquées et des techniques. On estime une quantité par des évaluations répétées utilisant N valeurs et l'erreur de la méthode est approchée par la variance de l'estimateur. Le présent travail analyse des méthodes de réduction de la variance et examine leur efficacité pour l'intégration numérique et la résolution d'équations différentielles et intégrales. Nous présentons d'abord les méthodes MC stratifiées et les méthodes d'échantillonnage par hypercube latin (LHS : Latin Hypercube Sampling). Parmi les méthodes de stratification, nous privilégions la méthode simple (MCS) : l'hypercube unité I^s := [0, 1)^s est divisé en N sous-cubes d'égale mesure, et un point aléatoire est choisi dans chacun des sous-cubes. Nous analysons la variance de ces méthodes pour le problème de la quadrature numérique. Nous étudions particulièrement le cas de l'estimation de la mesure d'un sous-ensemble de I^s. La variance de la méthode MCS peut être majorée par O(1/N^(1+1/s)). Les résultats d'expériences numériques en dimensions 2, 3 et 4 montrent que les majorations obtenues sont précises. Nous proposons ensuite une méthode hybride entre MCS et LHS, qui possède les propriétés de ces deux techniques, avec un point aléatoire dans chaque sous-cube et les projections des points sur chacun des axes de coordonnées également réparties de manière régulière : une projection dans chacun des N sous-intervalles qui divisent I := [0, 1) uniformément. Cette technique est appelée Stratification Sudoku (SS). Dans le même cadre d'analyse que précédemment, nous montrons que la variance de la méthode SS est majorée par O(1/N^(1+1/s)) ; des expériences numériques en dimensions 2, 3 et 4 valident les majorations démontrées. Nous présentons ensuite une approche de la méthode de marche aléatoire utilisant les techniques de réduction de variance précédentes. Nous proposons un algorithme de résolution de l'équation de diffusion, avec un coefficient de diffusion constant ou non constant en espace. On utilise des particules échantillonnées suivant la distribution initiale, qui effectuent un déplacement gaussien à chaque pas de temps. On ordonne les particules suivant leur position à chaque étape et on remplace les nombres aléatoires qui permettent de calculer les déplacements par les points stratifiés utilisés précédemment. On évalue l'amélioration apportée par cette technique sur des exemples numériques. Nous utilisons finalement une approche analogue pour la résolution numérique de l'équation de coagulation, qui modélise l'évolution de la taille de particules pouvant s'agglomérer. Les particules sont d'abord échantillonnées suivant la distribution initiale des tailles. On choisit un pas de temps et, à chaque étape et pour chaque particule, on choisit au hasard un partenaire de coalescence et un nombre aléatoire qui décide de cette coalescence. Si l'on classe les particules suivant leur taille à chaque pas de temps et si l'on remplace les nombres aléatoires par des points stratifiés, on observe une réduction de variance par rapport à l'algorithme MC usuel.
Monte Carlo (MC) methods are numerical methods using random numbers to solve problems from applied sciences and techniques on computers. One estimates a quantity by repeated evaluations using N values; the error of the method is approximated through the variance of the estimator. In the present work, we analyze variance reduction methods and we test their efficiency for numerical integration and for solving differential or integral equations. First, we present stratified MC methods and the Latin Hypercube Sampling (LHS) technique. Among stratification strategies, we focus on the simple approach (MCS): the unit hypercube I^s := [0, 1)^s is divided into N subcubes having the same measure, and one random point is chosen in each subcube. We analyze the variance of the method for the problem of numerical quadrature. The case of the evaluation of the measure of a subset of I^s is particularly detailed. The variance of the MCS method may be bounded by O(1/N^(1+1/s)). The results of numerical experiments in dimensions 2, 3, and 4 show that the upper bounds are tight. We next propose a hybrid method between MCS and LHS, which has properties of both approaches, with one random point in each subcube and such that the projections of the points on each coordinate axis are also evenly distributed: one projection in each of the N subintervals that uniformly divide the unit interval I := [0, 1). We call this technique Sudoku Sampling (SS). Conducting the same analysis as before, we show that the variance of the SS method is bounded by O(1/N^(1+1/s)); the order of the bound is validated through the results of numerical experiments in dimensions 2, 3, and 4. Next, we present an approach to the random walk method using the variance reduction techniques previously analyzed. We propose an algorithm for solving the diffusion equation with a constant or spatially-varying diffusion coefficient. One uses particles sampled from the initial distribution; they are subject to a Gaussian move in each time step. The particles are renumbered according to their positions in every step and the random numbers which give the displacements are replaced by the stratified points used above. The improvement brought by this technique is evaluated in numerical experiments. An analogous approach is finally used for numerically solving the coagulation equation; this equation models the evolution of the sizes of particles that may agglomerate. The particles are first sampled from the initial size distribution. A time step is fixed and, in every step and for each particle, a coalescence partner is chosen and a random number decides if coalescence occurs. If the particles are ordered in every time step by increasing sizes and if the random numbers are replaced by stratified points, a variance reduction is observed when compared to the results of the usual MC algorithm.
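As an illustration of the simple stratification (MCS) described above, the following Python sketch estimates the measure of a subset of the unit square with one random point per subcube and compares it with crude Monte Carlo. The quarter-disc indicator, the sample sizes and the use of NumPy are illustrative choices, not taken from the thesis.

```python
# Minimal sketch of simple stratified Monte Carlo (MCS) versus crude MC for
# estimating the measure of a subset A of the unit hypercube [0, 1)^s.
# Illustrative only; the stratified scheme uses N = m**s points.
import numpy as np

rng = np.random.default_rng(0)

def crude_mc(indicator, s, n):
    """Plain MC estimate of the measure of A with n i.i.d. uniform points."""
    x = rng.random((n, s))
    return indicator(x).mean()

def stratified_mc(indicator, s, m):
    """MCS estimate: one uniform point in each of the m**s subcubes of side 1/m."""
    grid = np.stack(np.meshgrid(*[np.arange(m)] * s, indexing="ij"), axis=-1)
    corners = grid.reshape(-1, s) / m            # lower-left corner of each subcube
    x = corners + rng.random(corners.shape) / m  # one uniform point per subcube
    return indicator(x).mean()

# Example subset: the quarter disc {x in [0, 1)^2 : |x| < 1}, of measure pi/4.
inside_disc = lambda x: (x ** 2).sum(axis=1) < 1.0

s, m = 2, 32                  # N = m**s = 1024 points in total
print("crude MC     :", crude_mc(inside_disc, s, m ** s))
print("stratified MC:", stratified_mc(inside_disc, s, m))
print("exact        :", np.pi / 4)
```

Repeating both estimates over many seeds exhibits the variance reduction the abstract refers to: for a subset with a smooth boundary the stratified estimator behaves like O(1/N^(1+1/s)), compared with O(1/N) for crude MC.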
5. Finite Element Modeling for Assessing Flood Barrier Risks and Failures due to Storm Surges and Waves. Wood, Dylan M., January 2020.
No description available.