  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Anisotropic mesh construction and error estimation in the finite element method

Kunert, Gerd 27 July 2000 (has links) (PDF)
In an anisotropic adaptive finite element algorithm one usually needs an error estimator that yields not only the error size but also the stretching directions and stretching ratios of the elements of a (quasi) optimal anisotropic mesh. However, these last two ingredients cannot be extracted from any of the known anisotropic a posteriori error estimators. Therefore a heuristic approach is pursued here: the desired information is provided by the so-called Hessian strategy. This strategy produces favourable anisotropic meshes which result in a small discretization error. The focus of this paper is on error estimation on anisotropic meshes. It is known that such error estimation is reliable and efficient only if the anisotropic mesh is aligned with the anisotropic solution. The main result here is that the Hessian strategy produces anisotropic meshes that show the required alignment with the anisotropic solution. The corresponding inequalities are proven, and the underlying heuristic assumptions are given in a stringent yet general form. Hence the analysis provides further insight into a particular aspect of anisotropic error estimation.
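The Hessian strategy admits a compact numerical sketch: an eigen-decomposition of (a positive-definite modification of) the solution's Hessian yields the stretching directions as eigenvectors and the stretching ratios from the eigenvalues. A minimal illustration under the usual error-equidistribution heuristic; the function name and error target are hypothetical, not from the paper:

```python
import numpy as np

def hessian_strategy(H, err_target=1e-2):
    """Sketch of the Hessian strategy: derive anisotropic mesh data
    (stretching directions and stretching ratios) from the solution's
    Hessian H, made positive definite by taking absolute eigenvalues."""
    lam, V = np.linalg.eigh(H)               # eigen-decomposition
    lam = np.maximum(np.abs(lam), 1e-12)     # |eigenvalues|, guarded
    # interpolation error ~ h_i^2 * lam_i  =>  h_i ~ sqrt(err/lam_i)
    h = np.sqrt(err_target / lam)            # mesh size per eigen-direction
    order = np.argsort(h)[::-1]              # longest direction first
    dirs = V[:, order]                       # stretching directions (columns)
    ratios = h[order] / h[order][-1]         # stretching ratios >= 1
    return dirs, ratios

# A solution varying strongly in y only (e.g. a boundary layer):
H = np.array([[0.01, 0.0], [0.0, 100.0]])
dirs, ratios = hessian_strategy(H)
print(ratios)   # elements stretched ~100:1 along the x-axis
```

The same eigen-data can then drive a metric-based mesh generator.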
32

High-dimensional data mining: subspace clustering, outlier detection and applications to classification

Foss, Andrew Unknown Date
No description available.
33

Real-Time Reliable Prediction of Linear-Elastic Mode-I Stress Intensity Factors for Failure Analysis

Huynh, Dinh Bao Phuong, Peraire, Jaime, Patera, Anthony T., Liu, Guirong 01 1900 (has links)
Modern engineering analysis requires accurate, reliable and efficient evaluation of outputs of interest. These outputs are functions of "input" parameters that serve to describe a particular configuration of the system; typical inputs are geometry, material properties, or boundary conditions and loads. In many cases, the input-output relationship is a functional of the field variable, which is the solution to an input-parametrized partial differential equation (PDE). The reduced-basis approximation, adopting off-line/on-line computational procedures, allows us to compute accurate and reliable functional outputs of PDEs with rigorous error estimation. The operation count for the on-line stage depends only on a small number N and the parametric complexity of the problem, which makes the reduced-basis approximation especially suitable for complex analyses such as optimization and design. In this work we focus on the development of finite-element and reduced-basis methodology for the accurate, fast, and reliable prediction of the stress intensity factor or strain-energy release rate of a mode-I linear elastic fracture problem. With the use of the off-line/on-line computational strategy, the stress intensity factor for a particular problem can be obtained in milliseconds. The method opens a promising new prospect: not only are the numerical results obtained in milliseconds with great savings in computational time; the results are also reliable, thanks to the rigorous and sharp a posteriori error bounds. The practical uses of our prediction capability are presented through several example problems. / Singapore-MIT Alliance (SMA)
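The off-line/on-line decomposition underlying the reduced-basis approximation can be sketched for a generic affinely parametrized system A(mu) = A0 + mu*A1; the matrices, snapshot parameters, and dimensions below are illustrative assumptions, not data from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 500, 4                      # full dimension n, reduced dimension N

# Affine parameter dependence A(mu) = A0 + mu*A1 (illustrative SPD matrices)
M0 = rng.standard_normal((n, n))
A0 = M0 @ M0.T + n * np.eye(n)
M1 = rng.standard_normal((n, n))
A1 = M1 @ M1.T
f = rng.standard_normal(n)

# --- Off-line (expensive, once): snapshots -> orthonormal reduced basis ---
snapshots = [np.linalg.solve(A0 + mu * A1, f) for mu in (0.1, 0.5, 1.0, 2.0)]
Z, _ = np.linalg.qr(np.column_stack(snapshots))        # n x N
A0N, A1N, fN = Z.T @ A0 @ Z, Z.T @ A1 @ Z, Z.T @ f     # projected operators

# --- On-line (cheap, per parameter): solve an N x N system only ----------
def online_output(mu):
    uN = np.linalg.solve(A0N + mu * A1N, fN)
    return fN @ uN                 # compliant output s(mu)

mu_test = 0.7
s_rb = online_output(mu_test)
s_truth = f @ np.linalg.solve(A0 + mu_test * A1, f)
print(abs(s_rb - s_truth) / abs(s_truth))   # small relative output error
```

The on-line cost depends only on N, not on n, which is what makes millisecond evaluation of outputs such as stress intensity factors plausible.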
34

Robust local problem error estimation for a singularly perturbed reaction-diffusion problem on anisotropic finite element meshes

Grosman, Serguei 05 April 2006 (has links) (PDF)
Singularly perturbed reaction-diffusion problems exhibit in general solutions with anisotropic features, e.g. strong boundary and/or interior layers. This anisotropy is reflected in the discretization by using meshes with anisotropic elements. The quality of the numerical solution rests on the robustness of the a posteriori error estimator with respect to both the perturbation parameters of the problem and the anisotropy of the mesh. An estimator that has been shown to be one of the most reliable for reaction-diffusion problems is the <i>equilibrated residual method</i> together with its modification by Ainsworth and Babuška for singularly perturbed problems. However, even the modified method is not robust in the case of anisotropic meshes. The present work modifies the equilibrated residual method for anisotropic meshes. The resulting error estimator is equivalent to the equilibrated residual method in the case of isotropic meshes and is proved to be robust on anisotropic meshes as well. A numerical example confirms the theory.
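The kind of robustness at stake can be illustrated with a much simpler estimator than the equilibrated residual method: a 1D residual indicator for -eps*u'' + u = f whose scaling min(h/sqrt(eps), 1) keeps it uniformly reliable in the perturbation parameter eps, in the spirit of Verfürth's robust analysis. This is a hedged sketch, not the estimator of the thesis:

```python
import numpy as np

def robust_indicator(eps, x, u, f):
    """Sketch of a residual error indicator for -eps*u'' + u = f on a 1D
    mesh x with P1 nodal values u (u'' = 0 inside elements). The scaling
    min(h/sqrt(eps), 1) is what makes such indicators robust in eps."""
    eta = np.zeros(len(x) - 1)
    for T in range(len(x) - 1):
        h = x[T + 1] - x[T]
        xm = 0.5 * (x[T] + x[T + 1])         # element midpoint
        um = 0.5 * (u[T] + u[T + 1])         # P1 value at the midpoint
        resid = float(f(xm)) - um            # interior residual (u'' = 0)
        alpha = min(h / np.sqrt(eps), 1.0)   # robust scaling factor
        eta[T] = alpha * abs(resid) * np.sqrt(h)
    return eta

# Boundary-layer problem: layers of width ~sqrt(eps) at both ends
eps = 1e-4
x = np.linspace(0.0, 1.0, 41)
u_exact = lambda s: 1 - np.exp(-s / np.sqrt(eps)) - np.exp(-(1 - s) / np.sqrt(eps))
eta = robust_indicator(eps, x, u_exact(x), f=lambda s: 1.0)
print(eta.argmax())   # the largest indicators sit in the boundary layers
```

The indicator flags the layer elements for refinement (or anisotropic stretching) regardless of how small eps is.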
35

Quantification of modelling uncertainties in turbulent flow simulations

Edeling, Wouter Nico 14 April 2015 (has links)
The goal of this thesis is to make predictive simulations with Reynolds-Averaged Navier-Stokes (RANS) turbulence models, i.e. simulations with a systematic treatment of model and data uncertainties and their propagation through a computational model, to produce predictions of quantities of interest with quantified uncertainty. To do so, we make use of the robust Bayesian statistical framework. The first step toward our goal concerned obtaining estimates of the error in RANS simulations based on the Launder-Sharma k-e turbulence closure model, for a limited class of flows. In particular we searched for estimates grounded in uncertainties in the space of model closure coefficients, for wall-bounded flows at a variety of favourable and adverse pressure gradients. In order to estimate the spread of closure coefficients which reproduces these flows accurately, we performed 13 separate Bayesian calibrations. Each calibration was at a different pressure gradient, using measured boundary-layer velocity profiles and a statistical model containing a multiplicative model-inadequacy term in the solution space. The results are 13 joint posterior distributions over coefficients and hyper-parameters. To summarize this information we compute Highest Posterior-Density (HPD) intervals, and subsequently represent the total solution uncertainty with a probability box (p-box). This p-box represents both parameter variability across flows and epistemic uncertainty within each calibration.
A prediction of a new boundary-layer flow is made with uncertainty bars generated from this uncertainty information, and the resulting error estimate is shown to be consistent with measurement data. However, although consistent with the data, the obtained error estimates were very large. This is due to the fact that a p-box constitutes an unweighted prediction. To improve upon this, we developed another approach still based on variability in model closure coefficients across multiple flow scenarios, but also across multiple closure models. The variability is again estimated using Bayesian calibration against experimental data for each scenario, but now Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors in an unmeasured (prediction) scenario. Unlike the p-boxes, this is a weighted approach involving turbulence model probabilities which are determined from the calibration data. The methodology was applied to the class of turbulent boundary layers subject to various pressure gradients. For all considered prediction scenarios the standard deviation of the stochastic estimate is consistent with the measurement ground truth. The BMSA approach results in reasonable error bars, which can also be decomposed into separate contributions. However, to apply it to more complex topologies outside the class of boundary-layer flows, surrogate modelling techniques must be applied. The Simplex-Stochastic Collocation (SSC) method is a robust surrogate modelling technique used to propagate uncertain input distributions through a computer code. However, its use of the Delaunay triangulation can become prohibitively expensive for problems with more than 5 dimensions. We therefore investigated means to improve upon this poor scalability. In order to do so, we first proposed an alternative interpolation stencil technique based upon the Set-Covering problem, which resulted in a significant speed-up when sampling the full-dimensional stochastic space. Secondly, we integrated the SSC method into the High-Dimensional Model-Reduction framework in order to avoid sampling high-dimensional spaces altogether. Finally, with the use of our efficient surrogate modelling technique, we applied the BMSA framework to the transonic flow over an airfoil. With this we are able to make predictive simulations of computationally expensive flow problems with quantified uncertainty due to various imperfections in the turbulence models.
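The BMSA combination step can be sketched as a weighted mixture of per-model posterior predictions, with the variance split into within-model and between-model contributions by the law of total variance. The numbers below are hypothetical:

```python
import numpy as np

def bmsa_moments(means, variances, weights):
    """Sketch of the BMSA combination: per-model posterior predictive
    means/variances are mixed with model probabilities (weights) learned
    from calibration data; the variance uses the law of total variance."""
    m = np.asarray(means, float)
    v = np.asarray(variances, float)
    w = np.asarray(weights, float)
    w = w / w.sum()
    mean = np.sum(w * m)                                  # mixture mean
    var = np.sum(w * v) + np.sum(w * (m - mean) ** 2)     # within + between
    return mean, var

# Hypothetical skin-friction predictions from three turbulence closures:
mean, var = bmsa_moments(means=[0.0030, 0.0034, 0.0028],
                         variances=[1e-8, 2e-8, 1e-8],
                         weights=[0.5, 0.3, 0.2])
print(mean, np.sqrt(var))   # stochastic estimate: mean and standard deviation
```

The between-model term is what distinguishes the weighted BMSA estimate from any single-model prediction interval.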
36

Cartesian grid FEM (cgFEM): High performance h-adaptive FE analysis with efficient error control. Application to structural shape optimization

Nadal Soriano, Enrique 14 February 2014 (has links)
More and more challenging designs are required every day by today's industries. The traditional trial-and-error procedure commonly used for the design of mechanical parts is no longer valid, since it slows down the design process and yields suboptimal designs. For structural components, one alternative consists in using shape optimization processes which provide optimal solutions. However, these techniques require a high computational effort and extremely efficient and robust Finite Element (FE) programs. FE software companies are aware that their current commercial products must improve in this sense and devote considerable resources to improving their codes. In this work we propose the Cartesian Grid Finite Element Method (cgFEM) as a tool for efficient and robust numerical analysis. The cgFEM methodology developed in this thesis uses the synergy of a variety of techniques to achieve this purpose, but its two main ingredients are the use of Cartesian FE grids independent of the geometry of the component to be analyzed and an efficient hierarchical data structure. These two features provide the cgFEM technology with what it needs to increase its efficiency with respect to commercial FE codes. As indicated in [1, 2], in order to guarantee the convergence of a structural shape optimization process we need to control the error of each geometry analyzed. The cgFEM code therefore also incorporates appropriate error estimators, specifically adapted to the cgFEM framework to further increase its efficiency. This work introduces a solution recovery technique, denoted SPR-CD, that in combination with the Zienkiewicz and Zhu error estimator [3] provides very accurate error measures of the FE solution. Additionally, we have developed error estimators and numerical bounds in Quantities of Interest based on the SPR-CD technique to allow for an efficient control of the quality of the numerical solution. Regarding error estimation, we also present three new upper error bounding techniques for the error in the energy norm of the FE solution, based on recovery processes. Furthermore, this work presents an error estimation procedure to control the quality of the recovered stress field provided by the SPR-CD technique. Since the recovered stress field is commonly more accurate and has a higher convergence rate than the FE solution, we propose to substitute the raw FE solution with the recovered solution to decrease the computational cost of the numerical analysis. All these improvements are reflected in the numerical examples of structural shape optimization problems presented in this thesis. These numerical analyses clearly show the improved behavior of the cgFEM technology over the classical FE implementations commonly used in industry. / Nadal Soriano, E. (2014). Cartesian grid FEM (cgFEM): High performance h-adaptive FE analysis with efficient error control. Application to structural shape optimization [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/35620 / TESIS
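The recovery-based error estimation idea used above can be sketched in 1D: compare the raw FE gradient with a smoothed, nodally averaged one, in the manner of the Zienkiewicz-Zhu estimator. Plain nodal averaging stands in for the thesis's SPR-CD recovery here, so this is only an illustrative sketch:

```python
import numpy as np

def zz_estimate(x, u):
    """Sketch of a recovery-based (Zienkiewicz-Zhu-type) error estimate in
    1D: the piecewise-constant FE gradient is compared with a recovered
    gradient obtained here by simple nodal averaging (standing in for the
    SPR-CD recovery of the thesis)."""
    h = np.diff(x)
    g = np.diff(u) / h                       # FE gradient, one value/element
    g_star = np.empty(len(x))                # recovered nodal gradient
    g_star[1:-1] = 0.5 * (g[:-1] + g[1:])
    g_star[0], g_star[-1] = g[0], g[-1]
    d_left = g_star[:-1] - g                 # recovered - FE at left node
    d_right = g_star[1:] - g                 # ... and at right node
    eta = np.sqrt(h * 0.5 * (d_left**2 + d_right**2))   # per-element estimate
    return eta

x = np.linspace(0.0, 1.0, 21)
u = x**2                                     # nodal values of u(x) = x^2
eta = zz_estimate(x, u)
print(eta.sum())                             # gradient-error estimate, O(h)
```

The element quantities eta can then drive h-adaptive refinement, exactly the role they play in the optimization loop described above.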
37

Estimation and bounding of the discretization error in crack modelling using the extended finite element method

González Estrada, Octavio Andrés 19 February 2010 (has links)
The Finite Element Method (FEM) has established itself over recent decades as one of the most widely used numerical techniques for solving a great variety of problems in different areas of engineering, such as structural analysis, thermal and fluid analyses, manufacturing processes, etc. One of the applications where the method is of greatest interest is the analysis of problems in Fracture Mechanics, facilitating the study and evaluation of the structural integrity of mechanical components, their reliability, and the detection and control of cracks. Recently, the development of new techniques such as the eXtended Finite Element Method (XFEM) has further increased the potential of the FEM. These techniques improve the description of problems with singularities, discontinuities, etc., by adding special functions that enrich the conventional finite element approximation space. However, whenever a problem is approximated by numerical techniques, the solution obtained exhibits discrepancies with respect to the system it represents. In techniques based on a discrete representation of the domain by finite elements (FEM, XFEM, ...), the quantity to control is the so-called discretization error. Numerous references can be found in the literature to techniques that quantify the error in conventional finite element formulations. However, since XFEM is a relatively recent method, error estimation techniques for enriched finite element approximations are not yet sufficiently developed. The objective of this Thesis is to quantify the discretization error when XFEM-type enriched approximations are used to represent problems of Linear Elastic Fracture Mechanics (LEFM), such as the modelling of a crack. / González Estrada, OA. (2010). Estimación y acotación del error de discretización en el modelado de grietas mediante el método extendido de los elementos finitos [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7203 / Palancia
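The enrichment functions mentioned above can be made concrete: in XFEM for LEFM, the crack-tip approximation space is typically enriched with the four branch functions sqrt(r)*sin(t/2), sqrt(r)*cos(t/2), sqrt(r)*sin(t/2)*sin(t), sqrt(r)*cos(t/2)*sin(t). A sketch of these standard functions, not specific to this thesis:

```python
import numpy as np

def crack_tip_enrichment(x, y):
    """The four standard LEFM crack-tip branch functions used in XFEM
    (crack along the negative x-axis, tip at the origin). They span the
    first-order asymptotic field, including the sqrt(r) singularity that
    polynomial shape functions cannot capture."""
    r = np.hypot(x, y)
    th = np.arctan2(y, x)
    sr = np.sqrt(r)
    return np.array([sr * np.sin(th / 2),
                     sr * np.cos(th / 2),
                     sr * np.sin(th / 2) * np.sin(th),
                     sr * np.cos(th / 2) * np.sin(th)])

# The first branch function jumps across the crack faces:
above = crack_tip_enrichment(-0.5, +1e-9)[0]   # just above the crack face
below = crack_tip_enrichment(-0.5, -1e-9)[0]   # just below
print(above, below)   # ~ +sqrt(0.5) and -sqrt(0.5)
```

It is precisely for approximations enriched with such non-polynomial functions that standard discretization error estimators need the adaptations studied in the thesis.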
38

Steering non-overlapping domain decomposition iterative solvers by objectives of accuracy on quantities of interest

Rey, Valentine 11 December 2015 (has links)
This research work aims at contributing to the development of verification tools for linear mechanical problems within the framework of non-overlapping domain decomposition methods.
* We propose to improve the quality of the statically admissible stress field required for the computation of the error estimator, thanks to a new methodology of stress reconstruction in the sequential context and to optimizations of the computation of nodal reactions in the substructured context.
* We prove guaranteed upper and lower bounds of the error that separate the algebraic error (due to the iterative solver) from the discretization error (due to the finite element method), for both global error measurement and goal-oriented error estimation. This enables the definition of a new stopping criterion for the iterative solver which avoids over-resolution.
* We exploit the information provided by the error estimator and the Krylov subspaces built during the resolution to set up an auto-adaptive strategy. This strategy consists in a sequence of resolutions and takes advantage of adaptive remeshing and recycling of search directions.
We apply the steering of the iterative solver by an objective of precision to two-dimensional mechanical examples.
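The stopping criterion derived from the error separation can be sketched as follows: iterate only until the estimated algebraic error falls below a fraction of the discretization error estimate. The sketch below uses plain conjugate gradients and the residual norm as a crude algebraic-error proxy, whereas the thesis works with guaranteed bounds and domain decomposition solvers; eta_disc and gamma are hypothetical inputs:

```python
import numpy as np

def steered_cg(A, b, eta_disc, gamma=0.1):
    """Sketch of steering an iterative solver by error separation: stop
    the conjugate gradient iteration once the estimated algebraic error
    (here crudely proxied by the residual norm) drops below gamma times
    the discretization error estimate, avoiding over-resolution."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for it in range(len(b)):
        if np.sqrt(rs) <= gamma * eta_disc:
            return x, it           # discretization error dominates: stop
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, len(b)

# Well-conditioned SPD test problem and a hypothetical eta_disc:
n = 100
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, its = steered_cg(A, b, eta_disc=1e-2)
print(its)   # stops well before n iterations
```

Iterating further than this would only reduce the algebraic error, which is already negligible against the discretization error: exactly the over-resolution the criterion avoids.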
39

Goal-Oriented Error Estimation and Adaptivity for Hierarchical Models of Thin Elastic Structures

Billade, Nilesh S. 01 July 2004 (has links)
No description available.
40

A Posteriori Error Analysis of the Discontinuous Galerkin Method for Linear Hyperbolic Systems of Conservation Laws

Weinhart, Thomas 22 April 2009 (has links)
In this dissertation we present an analysis for the discontinuous Galerkin discretization error of multi-dimensional first-order linear symmetric and symmetrizable hyperbolic systems of conservation laws. We explicitly write the leading term of the local DG error, which is spanned by Legendre polynomials of degree p and p+1 when p-th degree polynomial spaces are used for the solution. For special hyperbolic systems, where the coefficient matrices are nonsingular, we show that the leading term of the error is spanned by (p+1)-th degree Radau polynomials. We apply these asymptotic results to observe that projections of the error are pointwise O(h<sup>p+2</sup>)-superconvergent in some cases and establish superconvergence results for some integrals of the error. We develop an efficient implicit residual-based a posteriori error estimation scheme by solving local finite element problems to compute estimates of the leading term of the discretization error. For smooth solutions we obtain error estimates that converge to the true error under mesh refinement. We first show these results for linear symmetric systems that satisfy certain assumptions, then for general linear symmetric systems. We further generalize these results to linear symmetrizable systems by considering an equivalent symmetric formulation, which requires us to make small modifications in the error estimation procedure. We also investigate the behavior of the discretization error when the Lax-Friedrichs numerical flux is used, and we construct asymptotically exact a posteriori error estimates. While no superconvergence results can be obtained for this flux, the error estimation results can be recovered in most cases. These error estimates are used to drive h- and p-adaptive algorithms and assess the numerical accuracy of the solution. We present computational results for different fluxes and several linear and nonlinear hyperbolic systems in one, two and three dimensions to validate our theory. 
Examples include the wave equation, Maxwell's equations, and the acoustic equation. / Ph. D.
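The Radau polynomials in question are R_{p+1} = L_{p+1} - L_p (right Radau polynomial, with a root at the element outflow endpoint xi = 1); their roots are the candidate superconvergence points. A small sketch, assuming this standard normalization:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def radau_roots(p):
    """Roots of the right Radau polynomial R_{p+1} = L_{p+1} - L_p on
    [-1, 1]; per the error analysis, the leading DG error term is spanned
    by R_{p+1}, so these roots are candidate superconvergence points."""
    c = np.zeros(p + 2)
    c[p + 1] = 1.0     # + L_{p+1}
    c[p] = -1.0        # - L_p
    return np.sort(leg.legroots(c))

roots = radau_roots(2)   # p = 2: three roots of the cubic Radau polynomial
print(roots)             # rightmost root is the endpoint xi = 1
```

Evaluating the DG solution at these points is what yields the pointwise O(h^{p+2}) superconvergence exploited by the error estimation scheme.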
