  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Methods for solving discontinuous-Galerkin finite element equations with application to neutron transport / Méthodes de résolution d'équations aux éléments finis Galerkin discontinus et application à la neutronique

Murphy, Steven 26 August 2015 (has links)
We consider high-order discontinuous Galerkin finite element methods for partial differential equations, with a focus on the neutron transport equation. We begin by examining a method for preprocessing the block-sparse matrices that arise from discontinuous Galerkin methods prior to factorisation by a multifrontal solver. Numerical experiments on large two- and three-dimensional matrices show that this preprocessing achieves a significant reduction in fill-in compared to methods that do not exploit block structure. We then derive a discontinuous Galerkin finite element method for the neutron transport equation that employs high-order finite elements in both space and angle. Parallel Krylov-subspace solvers are considered for both source problems and $k_{eff}$-eigenvalue problems. An a posteriori error estimator is derived and implemented as part of an h-adaptive mesh refinement algorithm for neutron transport $k_{eff}$-eigenvalue problems. This algorithm employs a projection-based error splitting in order to balance the computational requirements between the spatial and angular parts of the computational domain. An hp-adaptive algorithm is then presented, with results demonstrating greatly improved efficiency over the h-adaptive algorithm, both in reduced computational expense and in enhanced accuracy. Computed eigenvalues and effectivities are presented for a variety of challenging industrial benchmarks. Accurate error estimation (with effectivities of 1) is demonstrated for a collection of problems with inhomogeneous, irregularly shaped spatial domains as well as multiple energy groups. Numerical results show that the hp-refinement algorithm can achieve exponential convergence with respect to the number of degrees of freedom in the finite element space.
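The $k_{eff}$-eigenvalue problems discussed in this abstract are classically attacked with an outer power (fission-source) iteration wrapped around the transport solve. The following Python sketch illustrates that outer loop on hypothetical toy matrices; it is not the thesis's parallel Krylov solver, only the textbook iteration such solvers accelerate.

```python
import numpy as np

def keff_power_iteration(L, F, tol=1e-10, max_iter=500):
    """Power (fission-source) iteration for L @ phi = (1/k) * F @ phi.

    L stands in for the discretized transport (loss) operator and F for the
    fission production operator; both are hypothetical dense toys here.
    Returns the dominant multiplication factor k_eff and its mode."""
    phi = np.ones(L.shape[0])
    k = 1.0
    for _ in range(max_iter):
        # Invert the transport operator against the current fission source.
        psi = np.linalg.solve(L, F @ phi / k)
        # Update k from the ratio of total fission production.
        k_new = k * np.sum(F @ psi) / np.sum(F @ phi)
        psi /= np.linalg.norm(psi)
        if abs(k_new - k) < tol:
            return k_new, psi
        phi, k = psi, k_new
    return k, phi

# Toy 2x2 operators: the eigenvalues of inv(L) @ F are 0.5 and 0.25.
L = np.array([[2.0, 0.0], [0.0, 4.0]])
F = np.eye(2)
k_eff, mode = keff_power_iteration(L, F)
```

The iteration converges linearly at the dominance ratio of the eigenvalues, which is why deflation or Krylov acceleration matters on realistic problems.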
42

Indicadores de erros a posteriori na aproximação de funcionais de soluções de problemas elípticos no contexto do método Galerkin descontínuo hp-adaptivo / A posteriori error indicators in the approximation of functionals of elliptic problems solutions in the context of hp-adaptive discontinuous Galerkin method

Gonçalves, João Luis, 1982- 19 August 2018 (has links)
Advisors: Sônia Maria Gomes, Philippe Remy Bernard Devloo, Igor Mozolevski / Doctoral thesis (2011), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: In this work we study goal-oriented a posteriori error indicators for approximations by the discontinuous Galerkin method of the biharmonic and Poisson equations. The methodology used to obtain the indicators is based on the dual problem associated with the functional, an approach known to generate the most effective indicators. The two main dual-based error indicators previously derived for second-order problems are extended here to fourth-order problems, and we propose a third indicator for problems of both orders. We study how the different indicators locate the elements with the largest error contributions and characterise the regularity of the solutions, as well as the consequences for their efficiency. We establish an hp-adaptive strategy specific to goal-oriented error indicators. The numerical experiments performed show that the hp-adaptive strategy works properly, and that hp-adapted approximation spaces efficiently reduce the error in functionals with a lower number of degrees of freedom. Moreover, in the examples studied, the quality of the results varies among the indicators, depending on the type of singularity and the equation treated, underlining the value of having a wider range of indicators available. / Doctorate in Applied Mathematics
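The dual-problem methodology described above rests on an exact algebraic identity at the discrete level: for a linear problem, the error in the goal functional equals the residual weighted by the dual solution. A minimal Python illustration of that identity, on a random SPD system standing in for a discretized elliptic problem (all data below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)         # SPD stand-in for a discretized operator
b = rng.standard_normal(n)
g = rng.standard_normal(n)          # goal functional J(u) = g . u

u = np.linalg.solve(A, b)           # "exact" discrete solution
u_h = u + 0.01 * rng.standard_normal(n)   # an approximate solution

z = np.linalg.solve(A.T, g)         # dual (adjoint) problem driven by the goal

goal_error = g @ u - g @ u_h
dual_weighted_residual = z @ (b - A @ u_h)
# J(u) - J(u_h) = z . r(u_h) holds exactly for linear problems; splitting
# z * r elementwise yields the local indicators used to drive adaptivity.
```

In practice the exact dual solution is unavailable and must itself be approximated, typically in a richer space, which is where the different indicator constructions compared in this thesis diverge.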
43

Couplage AIG/MEG pour l'analyse de détails structuraux par une approche non intrusive et certifiée / IGA/FEM coupling for the analysis of structural details by a non-intrusive and certified approach

Tirvaudey, Marie 27 September 2019 (has links)
In the current industrial context, where numerical simulation plays a major role, a large number of tools are being developed to perform accurate and efficient simulations using as few numerical resources as possible. Among these tools, non-intrusive ones, which do not modify the existing structure of commercial software but allow the use of advanced solution methods such as isogeometric analysis or multi-scale coupling, are the most attractive to industry. The goal of this thesis is thus to couple IsoGeometric Analysis (IGA) with the Finite Element Method (FEM) for the analysis of structural details by a non-intrusive and certified approach. First, we develop an approximate global link between the Lagrange functions commonly used in the FEM and the NURBS functions on which the IGA is based. This allows isogeometric analyses to be implemented in an existing industrial finite element code treated as a black box. Through linear and nonlinear examples implemented in the industrial software Code_Aster of EDF, we demonstrate the efficiency of this IGA/FEM bridge and its possibilities for industrial application. The link also simplifies the non-intrusive coupling between a global isogeometric problem and a local finite element problem. With non-intrusive coupling between the two methods thus available, an adaptive strategy is introduced in order to certify the coupling with respect to a quantity of interest. This strategy is based on a posteriori error estimation: a global estimator and indicators of the iteration, model and discretisation error sources are computed to drive the definition of the coupled problem. Residual-based methods are used to estimate these errors in linear cases, and an extension to non-linear problems via the concept of constitutive relation error is also proposed.
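The approximate link between Lagrange and NURBS bases mentioned above is, at its core, a matrix of spline basis values at nodal points. As an illustration only (the thesis's actual construction is not reproduced here), the Cox-de Boor recursion below evaluates B-spline basis functions and assembles such an evaluation matrix; its rows sum to one by the partition-of-unity property.

```python
def bspline_basis(t, k, i, x):
    """Cox-de Boor recursion: value at x of the i-th B-spline basis
    function of degree k on the knot vector t (half-open convention)."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(t, k - 1, i, x)
    if t[i + k + 1] > t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) \
                * bspline_basis(t, k - 1, i + 1, x)
    return left + right

# Open knot vector on [0, 3], quadratic basis: 5 basis functions.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
degree = 2
n_basis = len(knots) - degree - 1

# Evaluation ("link") matrix: spline basis values at a set of nodal points,
# mapping spline coefficients to nodal values. The nodal points are an
# arbitrary illustrative choice.
nodes = [0.25 * j for j in range(12)]            # points in [0, 2.75]
C = [[bspline_basis(knots, degree, i, x) for i in range(n_basis)]
     for x in nodes]
row_sums = [sum(row) for row in C]               # partition of unity: 1.0
```

A matrix of this kind lets a finite element code receive an isogeometric field through ordinary nodal values, which is what makes a black-box, non-intrusive implementation conceivable.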
44

Algorithmes et Bornes minimales pour la Synchronisation Temporelle à Haute Performance : Application à l’internet des objets corporels / Algorithms and minimum bounds for high performance time synchronization : Application to the wearable Internet of Things

Nasr, Imen 23 January 2017 (has links)
Time synchronization is the first operation performed by the demodulator. It ensures that the samples passed to the demodulation stages allow the lowest possible bit error rate to be achieved. In this thesis we study innovative algorithms for high-performance time synchronization. First, we propose algorithms that exploit the soft information from the decoder, in addition to the received signal, to improve the blind estimation of a time delay assumed constant over the observation window. Next, we develop an original algorithm based on low-complexity smoothing synchronization. This step consists of a technique operating in an offline context that estimates a random, time-varying delay over several iterations via forward-backward loops; the performance of such estimators exceeds that of traditional algorithms. In order to assess the relevance of all the proposed estimators, for both deterministic and random delays, we evaluate and compare their performance against Cramér-Rao bounds that we develop for these frameworks. Finally, we evaluate the proposed algorithms on WBAN signals.
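The Cramér-Rao bounds used as benchmarks above can be made concrete in the simplest setting: estimating the delay of a known pulse in additive white Gaussian noise, where the bound is the noise variance divided by the energy of the pulse derivative. An illustrative Python sketch with a hypothetical Gaussian pulse (not a WBAN waveform):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
tau_true = 0.42

def pulse(tau):
    # Hypothetical known waveform: a Gaussian pulse of width 0.05 s.
    return np.exp(-0.5 * ((t - tau) / 0.05) ** 2)

s = pulse(tau_true)
s_dot = np.gradient(s, dt)          # d s / d t, computed numerically
sigma2 = 0.01                       # assumed noise variance

# CRB for a delay in additive white Gaussian noise:
# var(tau_hat) >= sigma^2 / sum(s'(t)^2).
bound = sigma2 / np.sum(s_dot ** 2)

def estimate_delay(x, grid):
    # Grid-search correlation, i.e. the ML delay estimator for this model.
    scores = [float(np.dot(x, pulse(tau))) for tau in grid]
    return grid[int(np.argmax(scores))]

grid = np.arange(0.3, 0.6, dt)
tau_hat = estimate_delay(s, grid)   # noiseless data: recovers tau_true
```

Comparing the Monte Carlo variance of such an estimator against the bound is the standard way to judge how close a synchronizer is to optimal.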
45

Application des techniques de bases réduites à la simulation des écoulements en milieux poreux / Application of reduced basis techniques to the simulation of flows in porous media

Sanchez, Mohamed, Riad 19 December 2017 (has links)
In geosciences, applications involving model calibration require a simulator to be called several times within an optimization process. However, a single simulation can take several hours, and a complete calibration loop can extend over several days. The objective of this thesis is to reduce the overall computation time using reduced basis (RB) techniques. More specifically, this work applies such techniques to incompressible two-phase water-oil flows in porous media. Despite its relative simplicity compared to other models used in the petroleum industry, this model is already a challenge from the standpoint of reduced-order modelling, owing to the coupling between its equations, the highly heterogeneous physical data, and the choice of reference numerical schemes. We first present the two-phase flow model, the finite volume (FV) scheme used for its discretization, and relevant parameterizations in reservoir simulation. Then, after recalling the RB method, we reduce the pressure equation at a fixed time step by two different approaches. In the first, we interpret the FV discretization as a Ritz-Galerkin approximation, which takes us back to the standard RB framework but is possible only under restrictive assumptions. The second approach removes these restrictions by building the reduced model directly at the discrete level. Finally, we test two strategies for reducing the collection in time of pressures parameterized by the variations of the saturation. The first simply considers time as an additional parameter; the second attempts to better capture temporal causality by introducing parameterized time-trajectories.
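The offline/online split underlying RB methods can be sketched with a plain POD-Galerkin reduction of a parameterized linear system. The diagonal toy operators below are assumptions standing in for the parameterized pressure solves; they are not the thesis's FV matrices.

```python
import numpy as np

n, n_snap, r = 200, 30, 5
# Hypothetical parameterized SPD system A(mu) u = b (toy diagonal operators).
A0 = np.diag(np.linspace(1.0, 2.0, n))
A1 = np.diag(np.linspace(0.5, 1.5, n))
b = np.ones(n)

def solve_full(mu):
    return np.linalg.solve(A0 + mu * A1, b)

# Offline stage: snapshots over the parameter range, compressed by SVD (POD).
snapshots = np.column_stack(
    [solve_full(mu) for mu in np.linspace(0, 1, n_snap)])
U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :r]                        # reduced basis of r modes

# Online stage: Galerkin projection onto the basis; only an r x r solve.
def solve_reduced(mu):
    Ar = V.T @ (A0 + mu * A1) @ V
    return V @ np.linalg.solve(Ar, V.T @ b)

mu_test = 0.37
err = np.linalg.norm(solve_full(mu_test) - solve_reduced(mu_test))
```

The expensive work (snapshot solves, SVD) happens once offline; each online query costs only a small dense solve, which is the source of the speed-up sought in calibration loops.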
46

Contrôle d’erreur pour et par les modèles réduits PGD / Error control for and with PGD reduced models

Allier, Pierre-Eric 21 November 2017 (has links)
Many problems in structural mechanics require the solution of several similar numerical problems. An iterative model reduction approach, the Proper Generalized Decomposition (PGD), makes it possible to determine the whole set of solutions at once through the introduction of additional parameters. However, a major obstacle to its use in industry is the absence of a robust error estimator with which to measure the quality of the solutions obtained. The approach adopted here is based on the concept of constitutive relation error. This method consists in constructing admissible fields, thus ensuring the conservative and guaranteed character of the error estimate while reusing as many tools of the finite element framework as possible. The ability to quantify the contributions of the different error sources (reduction and discretization) also makes it possible to drive the main PGD solution strategies. Two strategies are proposed in this work. The first is essentially limited to post-processing a PGD solution to construct an estimate of the error committed, non-intrusively with respect to existing PGD codes. The second is a new PGD strategy providing an improved approximation together with an estimate of the error committed. Comparative studies are carried out on linear thermal and elasticity problems. This work has also made it possible to optimise the construction of admissible fields by replacing the solution of many similar problems with a single PGD solution, exploited as a virtual chart.
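The PGD idea of building a separated representation by successive enrichment can be sketched on a 2D Poisson problem written in Sylvester form, with each rank-one mode obtained by an alternating-direction fixed point. This is an illustrative finite-difference toy, not the thesis's thermal or elasticity setting.

```python
import numpy as np

n = 40
h = 1.0 / (n + 1)
# 1D Dirichlet finite difference Laplacian (interior points).
A1d = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
Ax = Ay = A1d
xs = np.linspace(h, 1 - h, n)
# Separable load; the 2D Poisson problem in Sylvester form: Ax U + U Ay^T = F.
F = np.outer(np.sin(np.pi * xs), np.sin(np.pi * xs))

U = np.zeros((n, n))
for _ in range(10):                       # greedy rank-one (PGD) enrichment
    R = F - Ax @ U - U @ Ay.T             # residual of the current sum
    if np.linalg.norm(R) < 1e-10 * np.linalg.norm(F):
        break
    x = np.ones(n)
    y = np.ones(n)
    for _ in range(20):                   # alternating-direction fixed point
        y = np.linalg.solve((x @ x) * Ay + (x @ Ax @ x) * np.eye(n), R.T @ x)
        x = np.linalg.solve((y @ y) * Ax + (y @ Ay @ y) * np.eye(n), R @ y)
    U += np.outer(x, y)

# Reference: solve the full 2D system once to measure the PGD error.
U_ref = np.linalg.solve(
    np.kron(np.eye(n), Ax) + np.kron(Ay, np.eye(n)),
    F.reshape(-1, order="F")).reshape((n, n), order="F")
rel_err = np.linalg.norm(U - U_ref) / np.linalg.norm(U_ref)
```

Each mode only requires 1D-sized solves, which is what makes the separated representation attractive when extra coordinates (parameters) are added; it is precisely this surrogate that a robust estimator must certify.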
47

On Regularized Newton-type Algorithms and A Posteriori Error Estimates for Solving Ill-posed Inverse Problems

Liu, Hui 11 August 2015 (has links)
Ill-posed inverse problems have wide applications in many fields such as oceanography, signal processing, machine learning, biomedical imaging, remote sensing, geophysics, and others. In this dissertation, we address the problem of solving unstable operator equations with iteratively regularized Newton-type algorithms. Important practical questions such as selection of regularization parameters, construction of generating (filtering) functions based on a priori information available for different models, algorithms for stopping rules and error estimates are investigated with equal attention given to theoretical study and numerical experiments.
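A minimal sketch of the iteratively regularized Gauss-Newton scheme at the heart of such algorithms, with a geometrically decaying regularization parameter. The toy operator below is an assumption chosen to be well-conditioned so the demo stays tame; real applications involve genuinely ill-posed (compact) operators and noisy data, where the choice of stopping rule becomes essential.

```python
import numpy as np

def irgn(F, J, y, x0, alpha0=1.0, q=0.5, n_iter=20):
    """Iteratively regularized Gauss-Newton:
    x+ = x + (J^T J + a I)^{-1} (J^T (y - F(x)) + a (x0 - x)), a = alpha0 q^k."""
    x = x0.copy()
    for k in range(n_iter):
        a = alpha0 * q ** k
        Jk = J(x)
        lhs = Jk.T @ Jk + a * np.eye(len(x))
        rhs = Jk.T @ (y - F(x)) + a * (x0 - x)
        x = x + np.linalg.solve(lhs, rhs)
    return x

# Toy nonlinear operator F(x) = A sin(x); the small identity shift keeps the
# demo numerically tame (a genuinely ill-posed A would lack such a shift).
n = 30
idx = np.arange(n)
A = np.exp(-(idx[:, None] - idx[None, :]) ** 2 / 8.0) + 0.1 * np.eye(n)
F = lambda x: A @ np.sin(x)
J = lambda x: A @ np.diag(np.cos(x))

x_true = 0.5 * np.sin(2 * np.pi * idx / n)
y = F(x_true)                        # noise-free data for this sketch
x_rec = irgn(F, J, y, np.zeros(n))
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The generating-function viewpoint studied in the dissertation generalizes the `(J^T J + a I)^{-1}` filter above to other spectral filters tailored to the a priori information available.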
48

Convergence rates of adaptive algorithms for deterministic and stochastic differential equations

Moon, Kyoung-Sook January 2001 (has links)
No description available.
49

The epistemology of necessity

Pollock, William J. January 2001 (has links)
The thesis examines the direct reference theory of proper names and natural kind terms as expounded by Saul Kripke, Hilary Putnam and others and finds that it has not succeeded in replacing some kind of description theory of the reference of such terms - although it does concede that the traditional Fregean theory is not quite correct. It is argued that the direct reference theory is mistaken on several counts. First of all, it is question-begging. Secondly, it is guilty of a 'use/mention' confusion. And thirdly, and most importantly, it fails to deal with the notion of understanding. The notion of understanding is crucial to the present thesis - specifically, what is understood by a proper name or natural kind term. It is concluded that sense (expressed in the form of descriptions) is at least necessary for reference, which makes a significant difference to Kripke's claim that there are necessary a posteriori truths as well as contingent a priori truths. It is also argued that sense could be sufficient for reference, if it is accepted that it is speakers who effect reference. In this sense, sense determines reference. The thesis therefore not only argues against the account of reference given by the direct reference theorists, it also gives an account of how proper names and natural kind terms actually do function in natural language. As far as the epistemology of necessity is concerned, the thesis concludes that Kripke (along with many others) has not succeeded in establishing the existence of the necessary a posteriori nor the contingent a priori from the theory of direct reference. Whether such truths can be established by some other means, or in principle, is not the concern of the thesis; although the point is made that, if a certain view of sense is accepted, then questions of necessity and apriority seem inappropriate.
50

An Adaptive Mixed Finite Element Method using the Lagrange Multiplier Technique

Gagnon, Michael Anthony 04 May 2009 (has links)
Adaptive methods in finite element analysis are essential tools in the efficient computation and error control of problems that may exhibit singularities. In this paper, we consider solving a boundary value problem which exhibits a singularity at the origin due to both the structure of the domain and the regularity of the exact solution. We introduce a hybrid mixed finite element method using Lagrange multipliers to solve the partial differential equation for both the flux and the displacement. An a posteriori error estimate is then applied, both locally and globally, to approximate the error between the computed flux and the exact flux. Local estimation is the key tool in identifying where the mesh should be refined so that the error in the computed flux is controlled while maintaining efficiency in computation. Finally, we introduce a simple refinement process in order to improve the accuracy of the computed solutions. Numerical experiments are conducted to support the advantages of mesh refinement over a fixed uniform mesh.
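The estimate-mark-refine loop described in this abstract can be sketched in 1D with P1 elements, an interior-residual indicator, and Dörfler marking. All of these are illustrative simplifications; the paper's mixed Lagrange-multiplier formulation is not reproduced here.

```python
import numpy as np

def solve_p1(nodes, f):
    """P1 finite elements for -u'' = f on (0,1), u(0) = u(1) = 0."""
    n = len(nodes)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(n - 1):
        h = nodes[k + 1] - nodes[k]
        A[k:k + 2, k:k + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        b[k:k + 2] += 0.5 * h * f(0.5 * (nodes[k] + nodes[k + 1]))
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(A[1:-1, 1:-1], b[1:-1])
    return u

def estimate(nodes, f):
    """Interior-residual indicators eta_K = h_K ||f||_K (for P1 in 1D the
    element residual reduces to f; a simplified indicator for this sketch)."""
    h = np.diff(nodes)
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    return h * np.abs(f(mid)) * np.sqrt(h)

f = lambda x: np.pi ** 2 * np.sin(np.pi * x)   # exact solution: sin(pi x)
nodes = np.linspace(0.0, 1.0, 5)
for _ in range(8):                             # estimate - mark - refine loop
    eta = estimate(nodes, f)
    order = np.argsort(eta ** 2)[::-1]
    cumulative = np.cumsum(eta[order] ** 2)
    # Doerfler marking: smallest element set carrying 50% of the estimator.
    marked = order[:np.searchsorted(cumulative, 0.5 * cumulative[-1]) + 1]
    mids = 0.5 * (nodes[marked] + nodes[marked + 1])
    nodes = np.sort(np.concatenate([nodes, mids]))
u_h = solve_p1(nodes, f)                       # solution on the adapted mesh
```

The local indicators concentrate refinement where the data are largest, mirroring how the paper's local estimate steers refinement toward the singularity rather than refining uniformly.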
