351 |
H-adaptive formulation of the boundary element method for two-dimensional elasticity with emphasis on fracture propagation. Oscar Bayardo Ramos Lovón, 09 June 2006.
In this work, an adaptive formulation of the boundary element method (BEM) is developed for the analysis of linear elastic fracture problems. The collocation method is used to formulate the displacement and stress (traction) integral equations. The integral equations are discretized with linear elements, which allows the integrals over boundary and fracture elements to be evaluated analytically (exact integration). The algebraic system of equations is assembled using displacement equations only, traction equations only, or both written for opposite nodes of the fracture, the last choice leading to the dual boundary element method usually employed in fracture analysis. For the crack-growth process, a special procedure is developed to determine the direction of crack growth correctly. The stress intensity factors are computed by the well-known displacement correlation technique, which relates the displacements on the two crack faces. Once the stress intensity factors are known, the maximum circumferential stress theory is used to determine the propagation angle. The adaptive model employed is of the h-type, in which elements are subdivided based on estimated errors. The error is estimated from norms that consider the approximate variation of the displacements, the variation of the tractions, and the variation of the strain energy of the system, the last computed by integration over the boundary. Numerical examples are presented to demonstrate the efficiency of the proposed procedures.
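The propagation-angle step lends itself to a compact sketch. The snippet below is a minimal illustration (not the thesis code) of the maximum circumferential stress criterion: given stress intensity factors K_I and K_II obtained, for example, by displacement correlation, it returns the crack propagation angle. The function name and sign convention are assumptions.

```python
import math

def propagation_angle(k1: float, k2: float) -> float:
    """Crack propagation angle (radians) from the maximum circumferential
    stress criterion: solves K_I sin(t) + K_II (3 cos(t) - 1) = 0 and picks
    the root opposite in sign to K_II (hypothetical helper, not thesis code)."""
    if k2 == 0.0:
        return 0.0  # pure mode I: the crack grows straight ahead
    # Substituting t = tan(theta/2) gives 2 K_II t^2 - K_I t - K_II = 0.
    t = (k1 - math.sqrt(k1 * k1 + 8.0 * k2 * k2)) / (4.0 * k2)
    return 2.0 * math.atan(t)

# Example: mixed-mode loading with K_II = 0.5 K_I deflects the crack
print(math.degrees(propagation_angle(1.0, 0.5)))  # about -40 degrees
```

For pure mode II (K_I = 0) the formula recovers the classical kink angle of about 70.5 degrees, which is a quick sanity check on the sign choice.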
|
352 |
A study of a terrain-visualization algorithm. Edinalda Maria de Souza, 22 August 2003.
Algorithms for the interactive visualization of terrains are very complex and, at the same time, of great importance to many applications, such as games and activity planning over terrains. Owing to this complexity and importance, the subject has received great attention from the Computer Graphics research community over the past decade, and as a consequence a number of strategies have been developed. Among the most successful are the recent works by Lindstrom and Pascucci. The algorithm proposed by these authors has several implementations available on the Internet and deserves to be reevaluated. This dissertation performs that reevaluation by means of an independent implementation developed by the author and tested over a base of real terrains. To make the analysis more complete and to support some of the conclusions, comparative results from other algorithms in the area are also presented.
|
353 |
Surface and subsurface damage quantification using multi-device robotics-based sensor system and other non-destructive testing techniques. Rathod, Harsh, 19 September 2019.
North American civil infrastructure is aging. According to the 2016 Canadian Infrastructure Report Card, 33% of Canadian municipal infrastructure is in fair or worse condition. The current deficit for replacing fair- and poor-rated municipal bridges (26% of all bridges) is 13 billion dollars. In the latest (2017) report by the American Society of Civil Engineers, American infrastructure as a whole was given a D+ condition rating. Some structural elements of this infrastructure pose a significant risk, and there is an urgent need for frequent and effective inspection to ensure public safety.
Visual inspection is the technique commonly used to detect and identify surface defects in bridge structures, having been considered the most feasible method for decades. However, it is inadequate and unreliable, as it depends heavily on subjective human judgment. This labor-intensive approach also requires large investments in temporary scaffolding or permanent platforms, ladders, snooper trucks, and sometimes helicopters.
To address these issues with visual inspection, the completed research proposes three innovative methods: 1) combined use of fuzzy logic and an image-processing algorithm to quantify surface defects; 2) an Unmanned Aerial Vehicle (UAV)-assisted damage assessment technique based on the American Association of State Highway and Transportation Officials (AASHTO) guidelines; and 3) a patent-pending multi-device robotics-based sensor data acquisition system for mapping and assessing defects in civil structures.
To detect and quantify subsurface defects such as voids and delamination, another patent-pending UAV-based acoustic method is developed. It is a novel inspection apparatus comprising an acoustic signal generator coupled to a UAV; the generator includes a hammer that produces an acoustic signal in the structure under inspection.
An outcome of this research is a model that refines data from multiple commercially available NDT techniques to detect and quantify subsurface defects. To this end, nine 1800 mm × 460 mm reinforced concrete slabs with thicknesses of 100 mm, 150 mm, and 200 mm were prepared, designed to contain artificially simulated defects such as voids, debonding, honeycombing, and corrosion. To assess the performance of five NDT techniques, more than 300 data points were considered for each test. The experiments show that applying multiple techniques to a single structure to evaluate defects significantly lowers the error and increases accuracy compared with any standalone test. To visualize the NDT data, two-dimensional NDT data maps were developed. This work presents an innovative method for interpreting NDT data correctly, as it compares individual data points from defect-free slabs with those from slabs containing simulated damage. For the refinement of the NDT data, a significance factor and a logical sequential determination factor are proposed.
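As a generic illustration of why fusing several NDT readings beats a standalone test, consider combining independent, calibrated estimates of the same quantity. The inverse-variance weighting below is a standard statistical stand-in, not the thesis's patent-pending refinement model; the numbers are hypothetical.

```python
import numpy as np

def fuse_ndt_estimates(estimates, variances):
    """Inverse-variance weighted fusion of independent, unbiased NDT
    readings of the same quantity (e.g., defect depth in mm).
    A standard stand-in, not the thesis's refinement model."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)  # always <= the smallest input variance
    return fused, fused_var

# Hypothetical readings from three techniques for one defect
depth, var = fuse_ndt_estimates([52.0, 47.5, 50.3], [9.0, 16.0, 4.0])
print(f"fused depth = {depth:.1f} mm, variance = {var:.1f}")  # variance < 4.0
```

The fused variance is never larger than the best single technique's, which is the statistical intuition behind the observed accuracy gain from multi-technique evaluation.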
|
354 |
Incompressible Flow Simulations Using Least Squares Spectral Element Method on Adaptively Refined Triangular Grids. Akdag, Osman, 01 September 2012.
The main purpose of this study is to develop a flow solver that employs triangular grids to solve two-dimensional, viscous, laminar, steady, incompressible flows. The solver is based on the Least Squares Spectral Element Method (LSSEM); it has p-type adaptive mesh refinement/coarsening capability and supports p-type nonconforming element interfaces. To validate the solver, several benchmark problems are studied, with successful results. The performance of two triangular nodal distributions, the Lobatto and Fekete distributions, is compared in terms of accuracy and implementation complexity. The accuracy provided by triangular and quadrilateral grids of equal computational size is also compared. Adaptive mesh refinement studies are conducted with three different error indicators, including a novel one based on elemental mass loss. Finally, the effect of multiplying the continuity equation in the least-squares functional by a weight factor is investigated with regard to mass conservation.
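A mass-loss indicator of the kind mentioned admits a simple sketch: for each element, integrate the continuity residual ∂u/∂x + ∂v/∂y by quadrature and flag elements whose loss exceeds a tolerance. The code below is an assumed illustration on a single triangle with an edge-midpoint rule, not the thesis implementation; the interface `div_u(x, y)` is hypothetical.

```python
import numpy as np

def mass_loss_indicator(vertices, div_u):
    """Approximate the elemental mass loss |integral of div(u) over T|
    for triangle T using the edge-midpoint quadrature rule (exact for
    quadratics). `div_u(x, y)` returns the continuity residual at a point.
    A sketch under assumed interfaces, not the thesis implementation."""
    v = np.asarray(vertices, dtype=float)          # shape (3, 2)
    area = 0.5 * abs(np.cross(v[1] - v[0], v[2] - v[0]))
    mids = 0.5 * (v + np.roll(v, -1, axis=0))      # three edge midpoints
    vals = [div_u(x, y) for x, y in mids]
    return abs(area * np.mean(vals))               # each weight is area/3

# Flag an element for refinement if its mass loss exceeds a tolerance
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
loss = mass_loss_indicator(tri, lambda x, y: 0.02 * x - 0.01 * y)
print(loss, loss > 1e-3)   # 0.001667 True
```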
|
355 |
Receptivity and sensitivity of the boundary layer at the leading edge of a swept wing: a multigrid approach. Meneghello, Gianluca, 15 February 2013.
The aim of this study is to analyze the stability and receptivity properties of the three-dimensional flow at the leading edge of a swept wing. The project is divided into two parts: (i) the computation of the steady base flow as a steady-state solution of the Navier-Stokes equations, and (ii) the study of the direct and adjoint eigenvalue problems obtained by linearizing the Navier-Stokes equations about the base flow. A DNS code was developed on top of a multigrid framework. The solution of the steady nonlinear Navier-Stokes equations at different Reynolds numbers is obtained at a computational cost of nearly O(n), where n is the number of degrees of freedom of the problem. The stability and receptivity properties are studied by solving the eigenvalue/eigenvector problem numerically. A Krylov-Schur algorithm, coupled with a shift-invert transformation, is used to extract the most interesting part of the spectrum. Two branches can be identified, one of which is associated with eigenvectors showing a connection between leading-edge modes and crossflow-type modes. The wavemaker is located in a region near the leading edge. The numerical results are compared qualitatively with experimental observations and local stability analyses.
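The shift-invert step can be illustrated with standard sparse eigensolvers. The sketch below uses SciPy's ARPACK interface (implicitly restarted Arnoldi, a close relative of the Krylov-Schur algorithm used in the thesis) to extract the eigenvalues of an operator nearest a chosen shift; the matrix is a random stand-in for the linearized Navier-Stokes Jacobian, not the actual discretization.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Synthetic stand-in for the linearized Navier-Stokes operator; in the
# application, this sparse Jacobian comes from the discretization.
n = 2000
A = sp.random(n, n, density=5.0 / n, random_state=0, format="csc") \
    - 0.5 * sp.identity(n, format="csc")

# Shift-invert: Arnoldi applied to (A - sigma I)^{-1} converges fastest
# to the eigenvalues closest to the shift sigma, i.e. the physically
# interesting part of the spectrum.
vals, vecs = eigs(A, k=6, sigma=0.0, which="LM")
print(np.sort_complex(vals))
```

Placing the shift near the part of the spectrum that governs instability (rather than computing the whole spectrum) is what makes extracting a few modes of a very large linearized operator affordable.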
|
356 |
Policy Explanation and Model Refinement in Decision-Theoretic Planning. Khan, Omar Zia, January 2013.
Decision-theoretic systems, such as Markov Decision Processes (MDPs), are used for sequential decision-making under uncertainty. MDPs provide a generic framework that can be applied in various domains to compute optimal policies. This thesis presents techniques that offer explanations of optimal policies for MDPs and then refine decision-theoretic models (Bayesian networks and MDPs) based on feedback from experts.
Explaining policies for sequential decision-making problems is difficult due to the presence of stochastic effects, multiple (possibly competing) objectives, and long-range effects of actions. However, explanations are needed to assist experts in validating that the policy is correct and to help users develop trust in the choices it recommends. A set of domain-independent templates for justifying a policy recommendation is presented, along with a process to identify the minimum number of templates that need to be populated to completely justify the policy.
The rejection of an explanation by a domain expert indicates a deficiency in the model that led to the generation of the rejected policy. This thesis presents techniques to refine the model parameters so that the optimal policy computed with the refined parameters conforms to the expert feedback. The expert feedback is translated into constraints on the model parameters that are used during refinement. These constraints are non-convex for both Bayesian networks and MDPs. For Bayesian networks, the refinement approach is based on Gibbs sampling and stochastic hill climbing, and it learns a model that obeys the expert constraints. For MDPs, the parameter space is partitioned so that alternating linear optimization can be applied to learn model parameters that lead to a policy in accordance with the expert feedback.
In practice, the state space of an MDP can be very large, which is an issue for real-world problems. Factored MDPs are often used to deal with this: the state space is represented by state variables, and dynamic Bayesian networks model the transition functions, avoiding the exponential growth in state-space size associated with large and complex problems. The approaches for explanation and refinement presented in this thesis are also extended to the factored case to demonstrate their use in real-world applications. Empirical evaluations are presented in the domains of course advising for undergraduate students, assisted hand-washing for people with dementia, and diagnostics for manufacturing.
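To make the setting concrete, here is a minimal value-iteration sketch with a naive "explanation" that reports how much the recommended action outperforms the runner-up in each state. The thesis templates are far richer, so treat this purely as an assumed illustration with hypothetical numbers.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a][s, s'] are transition probabilities, R[s, a] rewards.
    Returns the optimal value function and Q-values (standard algorithm)."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(n_actions)],
                     axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q
        V = V_new

# Tiny 2-state, 2-action MDP (hypothetical numbers)
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.1, 0.9], [0.6, 0.4]])]   # action 1
R = np.array([[1.0, 0.0], [0.0, 2.0]])

V, Q = value_iteration(P, R)
for s in range(2):
    best, second = np.argsort(Q[s])[::-1][:2]
    # Naive advantage-based justification, not one of the thesis templates
    print(f"state {s}: take action {best} "
          f"(worth {Q[s, best] - Q[s, second]:.2f} more than action {second})")
```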
|
357 |
Tau-Equivalences and Refinement for Petri Nets Based Design. Tarasyuk, Igor V., 27 November 2012.
The paper is devoted to the investigation of behavioral equivalences of concurrent systems modeled by Petri nets with silent transitions. Basic τ-equivalences and back-forth τ-bisimulation equivalences known from the literature are supplemented by new ones, giving rise to a complete set of equivalence notions in interleaving/true-concurrency and linear/branching-time semantics. Their interrelations are examined for the general class of nets as well as for the subclasses of nets without silent transitions and sequential nets (nets without concurrent transitions). In addition, the preservation of all the equivalence notions under refinement (which allows systems to be modeled at lower abstraction levels) is investigated.
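As a point of reference for the τ-equivalences discussed, the sketch below checks weak (τ-abstracting) bisimilarity on a small labelled transition system by greatest-fixpoint refinement. It is a generic textbook construction on an LTS, not the paper's net-based definitions, and the state/transition encoding is an assumption.

```python
from itertools import product

def tau_closure(states, trans):
    """closure[p] = states reachable from p by zero or more tau-steps."""
    closure = {s: {s} for s in states}
    changed = True
    while changed:
        changed = False
        for (s, a, t) in trans:
            if a == "tau":
                for p in states:
                    if s in closure[p] and t not in closure[p]:
                        closure[p].add(t)
                        changed = True
    return closure

def weak_moves(states, trans):
    """weak[p][a] = states reachable by tau* a tau* (tau* if a == 'tau')."""
    clo = tau_closure(states, trans)
    weak = {s: {"tau": set(clo[s])} for s in states}
    for (s, a, t) in trans:
        if a == "tau":
            continue
        for p in states:
            if s in clo[p]:
                weak[p].setdefault(a, set()).update(clo[t])
    return weak

def weakly_bisimilar(states, trans, s0, t0):
    """Greatest-fixpoint check of weak bisimilarity; naive but fine for
    small systems (uses the weak-vs-weak transfer condition)."""
    w = weak_moves(states, trans)
    rel = set(product(states, states))
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            ok = all(
                any((p2, q2) in rel for q2 in w[q].get(a, set()))
                for a, succ in w[p].items() for p2 in succ
            ) and all(
                any((p2, q2) in rel for p2 in w[p].get(a, set()))
                for a, succ in w[q].items() for q2 in succ
            )
            if not ok:
                rel.discard((p, q))
                changed = True
    return (s0, t0) in rel

# a --tau--> b --x--> c   is weakly bisimilar to   d --x--> e
states = ["a", "b", "c", "d", "e"]
trans = [("a", "tau", "b"), ("b", "x", "c"), ("d", "x", "e")]
print(weakly_bisimilar(states, trans, "a", "d"))  # True
```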
|
358 |
p-Refinement Techniques for Vector Finite Elements in Electromagnetics. Park, Gi-Ho, 25 August 2005.
The vector finite element method has gained great attention since it overcomes the deficiencies incurred by scalar basis functions for the vector Helmholtz equation. Most implementations of vector FEM have been non-adaptive: a mesh of the domain is generated entirely in advance and used with a constant-degree polynomial basis to assign the degrees of freedom. To reduce the dependency on the user's expertise in analyzing problems with complicated boundary structures and material characteristics, and to speed up the FEM tool, the demand for adaptive FEM is growing.
For efficient adaptive FEM, error estimators play an important role in assigning additional degrees of freedom. In this study, hierarchical vector basis functions and four error estimators for p-refinement are investigated for electromagnetic applications.
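A typical estimator-driven p-refinement loop can be sketched as follows: elements whose estimated error exceeds a fraction of the largest error get their polynomial degree increased. This maximum-based marking is a generic pattern assumed for illustration, not tied to the four estimators studied in the thesis.

```python
import numpy as np

def p_refine(p_levels, eta, frac=0.5, p_max=4):
    """Raise the polynomial degree of every element whose error indicator
    eta[e] exceeds `frac` times the maximum indicator (generic marking
    strategy, assumed for illustration)."""
    p_levels = np.asarray(p_levels).copy()
    marked = np.asarray(eta) > frac * np.max(eta)
    p_levels[marked] = np.minimum(p_levels[marked] + 1, p_max)
    return p_levels, marked

# Hypothetical per-element error estimates on a 6-element mesh
eta = np.array([0.02, 0.31, 0.05, 0.44, 0.09, 0.27])
p, marked = p_refine(np.full(6, 1), eta)
print(p)   # [1 2 1 2 1 2]
```

In a full adaptive cycle this marking step sits between error estimation and re-solution, and the quality of the estimator determines how few degrees of freedom the final solution needs.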
|
359 |
Least-Squares Finite Element Solution of Euler Equations with Adaptive Mesh Refinement. Akargun, Yigit Hayri, 01 February 2012.
The least-squares finite element method (LSFEM) is employed to simulate 2-D and axisymmetric flows governed by the compressible Euler equations. The least-squares formulation brings many advantages over classical Galerkin finite element methods. For non-self-adjoint systems, LSFEM results in symmetric positive-definite matrices, which can be solved efficiently by iterative methods. Additionally, with a unified formulation it can work in all flight regimes, from subsonic to supersonic. Another advantage is that the method does not require artificial viscosity, since it is naturally diffusive; this diffusivity, however, makes it difficult to sharply resolve high gradients in the flow field, such as shock waves. The problem is dealt with by employing adaptive mesh refinement (AMR) on triangular meshes. LSFEM with AMR is tested numerically on various flow problems, and good agreement with the available data in the literature is observed.
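Triangle-based AMR of this kind often relies on longest-edge bisection, which keeps refined meshes well shaped near shocks. The sketch below splits one flagged triangle at the midpoint of its longest edge; it is a hedged illustration of the geometric step only, with the mesh-wide bookkeeping (hanging nodes, neighbor propagation) omitted and the data layout assumed.

```python
import numpy as np

def bisect_longest_edge(tri):
    """Split a triangle (3x2 array of vertices) at the midpoint of its
    longest edge, returning two child triangles. Hanging-node handling
    and neighbor propagation, needed on a full mesh, are omitted."""
    v = np.asarray(tri, dtype=float)
    # Edge i is opposite vertex i: it runs from v[(i+1)%3] to v[(i+2)%3]
    lengths = [np.linalg.norm(v[(i + 1) % 3] - v[(i + 2) % 3]) for i in range(3)]
    i = int(np.argmax(lengths))                      # apex of the split
    a, b = v[(i + 1) % 3], v[(i + 2) % 3]            # longest-edge endpoints
    m = 0.5 * (a + b)                                # new midpoint vertex
    return np.array([v[i], a, m]), np.array([v[i], b, m])

t1, t2 = bisect_longest_edge([(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)])
print(t1, t2, sep="\n")
```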
|
360 |
Adaptive numerical techniques for the solution of electromagnetic integral equations. Saeed, Usman, 07 July 2011.
Various error estimation and adaptive refinement techniques for the solution of electromagnetic integral equations were developed. Residual-based error estimators and h-refinement implementations were carried out for the Method of Moments (MoM) solution of electromagnetic integral equations for a number of different problems. Due to the high computational cost associated with the MoM, a cheaper solution technique known as the Locally Corrected Nyström (LCN) method was explored. Several explicit and implicit techniques for error estimation in the LCN solution of electromagnetic integral equations were proposed and implemented for different geometries, successfully identifying high-error regions. A simple p-refinement algorithm was developed and implemented for a number of prototype problems using the proposed estimators; numerical error was found to be significantly reduced in the high-error regions after refinement. A simple computational-cost analysis was also presented for the proposed error estimation schemes, and various cost-accuracy trade-offs and problem-specific limitations of the different techniques were discussed. Finally, an important problem of slope mismatch between the global error rates of the solution and the residual was identified, and a few methods to compensate for that mismatch using scale factors based on matrix norms were developed.
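The residual-based idea can be sketched compactly: solve the moment system on the working basis, then re-test the governing equation on an enriched set of test points; large local residuals flag regions for refinement. Everything below (the matrix sizes, the toy kernel, the oversampling scheme) is a synthetic stand-in, not the thesis code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a MoM system Z I = V (dense, small)
n = 40
Z = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned toy kernel
V = rng.standard_normal(n)
I = np.linalg.solve(Z, V)

# Residual-based indicator: re-test the equation on an enriched set of
# test functionals (here, a finer "measurement" operator with two test
# rows per original cell; purely illustrative)
Z_fine = np.repeat(Z, 2, axis=0) + 0.01 * rng.standard_normal((2 * n, n))
V_fine = np.repeat(V, 2)
residual = Z_fine @ I - V_fine

# Map fine residuals back to the n coarse cells and mark the worst ones
eta = np.abs(residual).reshape(n, 2).max(axis=1)
marked = eta > 0.5 * eta.max()
print(f"{marked.sum()} of {n} regions flagged for refinement")
```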
|