  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Transient performance simulation of gas turbine engine integrated with fuel and control systems

Wang, Chen January 2016 (has links)
Two new methods for the simulation of gas turbine fuel systems, one based on an inter-component volume (ICV) method and the other on an iterative Newton-Raphson (NR) method, have been developed in this study. They are able to simulate the performance behaviour of each of the hydraulic components of a fuel system, such as pumps, valves and the metering unit, using physics-based models, which potentially offer more accurate results than transfer-function models. A transient performance simulation system based on the ICV method has been set up for gas turbine engines. A proportional-integral (PI) control strategy is used for the simulation of engine control systems. An integrated model of an engine with its control and hydraulic fuel systems has been set up to investigate their coupling effect during engine transient processes. The developed simulation methods and systems have been applied to a model turbojet and a model turboshaft gas turbine engine to demonstrate the effectiveness of both methods. Comparison between engines simulated with and without ICV-based fuel system models shows that the delay in the engine transient response caused by including the fuel system components and the introduced inter-component volumes is noticeable, although relatively small. Comparison of the two methods applied to fuel system simulation demonstrates that both introduce a delay in the engine transient response, but the NR method introduces less delay than the ICV method because it omits the inter-component volumes in the fuel system simulation. The developed simulation methods are generic and can be applied to the performance simulation of other gas turbines and their control and fuel systems. A sensitivity analysis of the fuel system parameters that may affect engine transient behaviour is also presented in this thesis.
Three sets of key fuel system parameters have been investigated for their sensitivity: the volumes introduced by the ICV method applied to fuel system simulation; the time constants of the first-order lags used to simulate valve-movement delay and fuel-spray delay; and the key performance and structural parameters of the fuel system.
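The delay described above comes from the inter-component volumes themselves: each ICV adds a pressure state driven by the flow imbalance across it, so the outlet flow lags a change at the inlet. As a rough illustration (our own sketch, not taken from the thesis; all parameter values are invented), a single lumped volume with pressure dynamics dp/dt = (B/V)(Q_in - Q_out) responds to a step in inlet flow as a first-order lag:

```python
def simulate_icv(gain=100.0, k_out=0.1, q_in=1.0, dt=1e-3, steps=2000):
    """Euler integration of the ICV pressure ODE for a step in inlet flow.

    gain  : B / V, bulk modulus over volume (illustrative value)
    k_out : hypothetical linear valve law, Q_out = k_out * p
    q_in  : inlet flow, stepping from 0 to this value at t = 0
    """
    p = 0.0
    history = []
    for _ in range(steps):
        q_out = k_out * p
        p += dt * gain * (q_in - q_out)   # dp/dt = (B/V) * (Q_in - Q_out)
        history.append(p)
    return history

hist = simulate_icv()
# Pressure (and hence outlet flow) rises as a first-order lag toward the
# steady state p_ss = q_in / k_out, rather than tracking the inlet step.
```

The time constant V/(B·k_out) is exactly the kind of volume-induced delay the abstract reports as "noticeable, although relatively small".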
62

Convex Optimization and Extensions, with a View Toward Large-Scale Problems

Gao, Wenbo January 2020 (has links)
Machine learning is a major source of challenging optimization problems. These problems tend to be difficult because of their enormous scale, which makes it hard to apply traditional optimization algorithms. We explore three avenues to designing algorithms suited to these challenges, with a view toward large-scale ML tasks. The first is to develop better general methods for unconstrained minimization. The second is to tailor methods to the features of modern systems, namely the availability of distributed computing. The third is to use specialized algorithms to exploit specific problem structure. Chapters 2 and 3 focus on improving quasi-Newton methods, a mainstay of unconstrained optimization. In Chapter 2, we analyze an extension of quasi-Newton methods in which we use block updates, which add curvature information to the Hessian approximation on a higher-dimensional subspace. This defines a family of methods, Block BFGS, that forms a spectrum between the classical BFGS method and Newton's method in terms of the amount of curvature information used. We show that by adding a correction step, the Block BFGS method inherits the convergence guarantees of BFGS for deterministic problems, most notably a Q-superlinear convergence rate for strongly convex problems. To explore the tradeoff between fewer iterations and greater work per iteration in block methods, we present a set of numerical experiments. In Chapter 3, we focus on the problem of step size determination. To obviate the need for line searches and for pre-computed fixed step sizes, we derive an analytic step size, which we call curvature-adaptive, for self-concordant functions. This adaptive step size allows us to generalize the damped Newton method of Nesterov to other iterative methods, including gradient descent and quasi-Newton methods.
We provide simple proofs of convergence, including superlinear convergence for adaptive BFGS, allowing us to obtain superlinear convergence without line searches. In Chapter 4, we move from general algorithms to hardware-influenced algorithms. We consider a form of distributed stochastic gradient descent that we call Leader SGD (LSGD), inspired by the Elastic Averaging SGD (EASGD) method. These methods are intended for distributed settings where communication between machines may be expensive, making the design of their consensus mechanism important. We show that LSGD avoids an issue with spurious stationary points that affects EASGD, and we provide a convergence analysis of LSGD. In the stochastic strongly convex setting, LSGD converges at the rate O(1/k) with diminishing step sizes, matching other distributed methods. We also analyze the impact of varying communication delays and of stochasticity in the selection of leader points, and examine under what conditions LSGD may produce better search directions than the gradient alone. In Chapter 5, we switch focus to algorithms that exploit problem structure. Specifically, we consider problems whose variables satisfy multiaffine constraints, which motivates us to apply the Alternating Direction Method of Multipliers (ADMM). Problems that can be formulated with such a structure include representation learning (e.g., with dictionaries) and deep learning. We show that ADMM can be applied directly to multiaffine problems. By extending the theory of nonconvex ADMM, we prove that ADMM converges on multiaffine problems satisfying certain assumptions and, more broadly, analyze its theoretical properties for general problems, investigating the effect of different types of structure.
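For intuition on the damped Newton method of Nesterov that the chapter generalizes, here is a minimal one-dimensional sketch (our own illustration; the thesis derives a more general curvature-adaptive step). For self-concordant functions, the step length 1/(1 + λ), where λ is the Newton decrement, guarantees progress with no line search:

```python
import math

def damped_newton(fprime, fsecond, x0, iters=20):
    """Nesterov's damped Newton method in 1-D: scale the Newton step by
    1 / (1 + lambda), where lambda = sqrt(f'(x)^2 / f''(x)) is the
    Newton decrement. No line search is needed."""
    x = x0
    for _ in range(iters):
        g, h = fprime(x), fsecond(x)
        lam = math.sqrt(g * g / h)        # Newton decrement (1-D case)
        x = x - g / ((1.0 + lam) * h)     # damped Newton step
    return x

# f(x) = x - ln(x) is self-concordant on x > 0 with minimizer x* = 1.
x_star = damped_newton(lambda x: 1 - 1 / x, lambda x: 1 / x ** 2, x0=5.0)
```

Far from the minimizer the damping keeps the iterate in the domain (here, x > 0); near it, λ vanishes and the method recovers the quadratic convergence of plain Newton.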
63

Finite Element Analysis of Unreinforced Concrete Block Walls Subject to Out-of-Plane Loading

He, Zhong 12 1900 (has links)
<p>Finite element modeling of the structural response of hollow concrete block walls subject to out-of-plane loading has become more common given the availability of computers and general-purpose finite element software packages. In order to develop appropriate models of full-scale walls with and without openings, a parametric study was conducted on simple wall elements to assess different modeling techniques. Two approaches were employed in the study: homogeneous models and heterogeneous models. A linear elastic analysis was carried out to quantify the effects of the modeling techniques for hollow blocks on the structural response of the assembly, specifically for out-of-plane bending. Three structural elements with varying span/thickness ratios were considered: a horizontal spanning strip, a vertical spanning strip, and a rectangular wall panel supported on four edges. The values computed using homogeneous and heterogeneous finite element models were found to differ significantly depending on the configuration and span/thickness ratio of the wall.</p><p>Further study was carried out through a discrete modeling approach to generate a three-dimensional heterogeneous model to investigate the nonlinear behaviour of full-scale walls under out-of-plane loading. The Composite Interface Model, based on multi-surface plasticity and capable of describing both tension and shear failure mechanisms, has been incorporated into the analysis to adequately capture the inelastic behaviour of the unit-mortar interface. An effective solution procedure was achieved by implementing the Newton-Raphson method, constrained with the arc-length control method and enhanced by a line search algorithm. The proposed model was evaluated using experimental results for ten full-size walls reported in the literature. 
The comparative analysis has indicated very good agreement between the numerical and experimental results in predicting the cracking and ultimate load values as well as the corresponding crack pattern. / Thesis / Master of Applied Science (MASc)
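As a toy illustration of the solution procedure named above (our sketch, not the thesis model), Newton-Raphson enhanced with a backtracking line search can be shown on a single-degree-of-freedom nonlinear equilibrium problem, where the line search rejects overshooting full Newton steps:

```python
def newton_line_search(f_ext, k1=100.0, k3=50.0, tol=1e-10, max_iter=50):
    """Solve the equilibrium f_ext = k1*u + k3*u**3 (a hardening spring,
    illustrative constants) by Newton-Raphson with backtracking: halve
    the step until the residual norm decreases."""
    u = 0.0
    residual = f_ext - (k1 * u + k3 * u ** 3)
    for _ in range(max_iter):
        if abs(residual) < tol:
            break
        tangent = k1 + 3.0 * k3 * u ** 2       # consistent tangent stiffness
        du = residual / tangent                # full Newton step
        alpha = 1.0
        while alpha > 1e-4:                    # simple backtracking line search
            u_trial = u + alpha * du
            r_trial = f_ext - (k1 * u_trial + k3 * u_trial ** 3)
            if abs(r_trial) < abs(residual):
                break
            alpha *= 0.5
        u += alpha * du
        residual = f_ext - (k1 * u + k3 * u ** 3)
    return u

u_sol = newton_line_search(f_ext=500.0)
```

From the zero initial guess, the full first Newton step badly overshoots; the line search cuts it back and the iteration then converges quadratically. Arc-length control (needed for limit points and softening, as in the wall model) is beyond this sketch.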
64

Geometric and Material Nonlinear Analysis of Three-Dimensional Soil-Structure Interaction

Phan, Hoang Viet 22 August 2013 (has links)
A finite element procedure is developed for stress-deformation analysis of three-dimensional solid bodies, including geometric and material nonlinearities. The formulation also includes the soil-structure interaction effect by using an interface element. A scheme is formulated to allow consistent definitions of stress, stress and strain rates, and constitutive laws. The analysis adopts the original Newton-Raphson technique coupled with an incremental approach. Different elasto-plastic laws based on the von Mises, Drucker-Prager, critical state, and cap criteria are incorporated in the formulation and computer code, and they can be used depending on the geological material involved. A special cap model is also incorporated to predict the behavior of the artificial soil used in the current research. Examples are given to verify the formulation and the finite element code. Examples of soil-moving tool problems are also shown and compared to the experimental results observed in a prototype soil-bin test facility. / Ph. D.
65

Load Flow Study for Utility-Scale Wind Farm Economic  Operation and Reactive Power Grid Compliance

Moon, Christopher Michael 24 June 2024 (has links)
With environmental and policy pressure to move toward cleaner fuel sources, wind energy is a proven technology that can be successfully implemented at utility scale and provide clean energy to the grid. A wind farm consists of many distributed wind turbines that are paralleled and connected to inject power at one location on the transmission grid. There are real power losses and reactive power drops that must be taken into consideration in these projects for plant performance and compliance. The better each new and operating wind farm performs, the more efficiently the grid operates and the fewer greenhouse gases are generated. This thesis first reviews the creation of an Excel tool that performs a load flow study for a wind farm using the Newton-Raphson algorithm. Next, the results of the load flow analysis are compared against an actual operating wind farm located in Texas to assess the accuracy of the scenarios. Then, alternative designs and operating states for the wind farm are proposed and simulated to review their impact on wind farm energy generation and reactive power provided to the grid. Finally, preferred improvements for future design and operational considerations are provided, along with future areas of research and development. / Master of Science / This thesis is focused on improvements to wind farm design and operation that help wind farms deliver more clean power to the grid. It involves the creation of an Excel tool which can be used to complete required grid studies for real and reactive power flows within the wind farm up to the point of connection with the transmission system. This analysis helps inform wind farm design and operation to be more effective and efficient. An operating wind farm in Texas is explained and depicted to give an understanding of how utility-scale wind farms are set up.
Additionally, a year of data from the operating wind farm is used to compare the Excel load flow tool against actual data and confirm its accuracy. Alternative ways the plant could have been designed and operated are evaluated using the new tool, and the plant's actual operating conditions for the year under analysis are simulated to better understand and quantify possible improvements for wind farms. This thesis focuses less on the construction and operation of a single wind turbine generator (WTG) and more on the output of WTGs and their impact on an entire system containing many of these distributed generators as they operate to provide energy to the grid.
66

Circuitos divisores Newton-Raphson e Goldschmidt otimizados para filtro adaptativo NLMS aplicado no cancelamento de interferência / Newton-Raphson and Goldschmidt divider circuits optimized for an NLMS adaptive filter applied to interference cancellation

FURTADO, Vagner Guidotti 07 December 2017 (has links)
The division operation in digital systems is relevant because it is a necessary function in several applications, such as general-purpose processors, digital signal processors and microcontrollers. The digital divider circuit is of great architectural complexity and may occupy a considerable area in an integrated circuit design; as a consequence, it may have a large influence on the static and dynamic power dissipation of the circuit as a whole. Regarding the application of divider circuits in the Digital Signal Processing (DSP) area, adaptive filters have a particular appeal, especially when using algorithms that normalize the input signals. In view of the above, this work focuses on the proposition of algorithms and techniques for reducing energy consumption and logic area, and on the proposition and implementation of efficient divider circuit architectures for use in adaptive filters. The Newton-Raphson and Goldschmidt iterative divider circuits, both operating in fixed point, were specifically addressed. Synthesis results for the implemented divider architectures with the proposed algorithms and techniques showed a considerable reduction in the power and logic area of the circuits. In particular, the divider circuits were applied in adaptive filter architectures based on the NLMS (Normalized Least Mean Square) algorithm, seeking to give these filters good convergence speed combined with improved energy efficiency.
The implemented adaptive filters are used in a case study of harmonic cancellation on electrocardiogram (ECG) signals.
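The two iterative division schemes named in the abstract can be sketched in floating point as follows (our illustration; the thesis implements fixed-point hardware versions). Newton-Raphson refines a reciprocal estimate, while Goldschmidt multiplies numerator and denominator by the same correction factor; positive operands are assumed:

```python
def nr_divide(n, d, iters=4):
    """Newton-Raphson division: x <- x * (2 - d * x) drives x toward 1/d,
    squaring the error each iteration. n and d are scaled together so d
    lands in [0.5, 1), as a hardware unit would normalize them."""
    while d >= 1.0:
        n /= 2.0
        d /= 2.0
    while d < 0.5:
        n *= 2.0
        d *= 2.0
    x = 48.0 / 17.0 - (32.0 / 17.0) * d   # classical minimax initial estimate
    for _ in range(iters):
        x = x * (2.0 - d * x)             # quadratic convergence to 1/d
    return n * x

def goldschmidt_divide(n, d, iters=5):
    """Goldschmidt division: multiply numerator and denominator by the same
    factor F = 2 - D until D -> 1; N then holds the quotient n/d."""
    while d >= 1.0:
        n /= 2.0
        d /= 2.0
    while d < 0.5:
        n *= 2.0
        d *= 2.0
    for _ in range(iters):
        f = 2.0 - d
        n *= f
        d *= f
    return n
```

Both converge quadratically; the key hardware difference is that Goldschmidt's two multiplications per iteration are independent and can run in parallel, while Newton-Raphson's are serially dependent but self-correcting.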
67

Problemas inversos em engenharia financeira: regularização com critério de entropia / Inverse problems in financial engineering: regularization with entropy criteria

Raombanarivo Dina Ramilijaona 13 September 2013 (has links)
This study applies Maximum Entropy Regularization to the inverse problem of option pricing suggested by Neri and Schneider in 2012. They pointed out that the probability density that solves this problem, in the case of calls and digital options, can be written as piecewise exponentials on the positive real axis, with segments delimited by the strike prices. The maximum entropy criterion is a powerful tool for regularizing this ill-posed problem. The exponential-family solution set is calculated using a Newton-Raphson algorithm, with specific bounds for the digital options; these bounds follow from the no-arbitrage principle. The method was applied to data from the Brazilian stock index BOVESPA and its call prices at different strikes. A parametric entropy analysis for "synthetic" digital prices (constructed from the no-arbitrage bounds) showed values at which the digital prices maximize the entropy. Data extracted from the IBOVESPA on January 24th, 2013 showed slippage from the no-arbitrage principle for in-the-money calls; this principle is a necessary condition for applying maximum entropy regularization to obtain the density and model prices. When the condition is fulfilled, our results showed that it is possible to obtain a smile-like volatility curve, with prices calculated from the exponential density fitting the market data well.
From a computational perspective, this dissertation implemented a pricing method based on the maximum entropy principle. Three classical algorithms were used: first, standard bisection, and then bisection combined with Newton-Raphson, to recover the implied volatility from market data; next, one-dimensional Newton-Raphson to compute the coefficients of the exponential densities, the object of the study; finally, Simpson's rule to compute the integrals of the cumulative distributions as well as the model prices obtained from the expectation.
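The bisection-plus-Newton-Raphson recovery of implied volatility mentioned above can be sketched as follows (a generic illustration using the Black-Scholes formula, not code from the dissertation): a few bisection steps bracket the root safely, then Newton-Raphson with the analytic vega polishes it.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def implied_vol(price, s, k, t, r):
    # Bracket with bisection (the price is increasing in sigma) ...
    lo, hi = 1e-4, 3.0
    for _ in range(20):
        mid = 0.5 * (lo + hi)
        if bs_call(s, k, t, r, mid) < price:
            lo = mid
        else:
            hi = mid
    sigma = 0.5 * (lo + hi)
    # ... then polish with Newton-Raphson using the analytic vega.
    for _ in range(10):
        d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
        vega = s * math.exp(-0.5 * d1 * d1) / math.sqrt(2.0 * math.pi) * math.sqrt(t)
        sigma -= (bs_call(s, k, t, r, sigma) - price) / vega
    return sigma
```

Bisection alone is robust but slow; starting Newton-Raphson only after the bracket has shrunk avoids the flat-vega regions where a pure Newton iteration can diverge.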
68

Métodos para Encontrar Raízes Exatas e Aproximadas de Funções Polinomiais até o 4º Grau / Methods for Finding Exact and Approximate Roots of Polynomial Functions up to the 4th Degree

Nascimento, Demilson Antonio do 24 February 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / In several scientific problems, it is common to face the need for an approximate solution when finding roots of functions. This work conducts a study of some methods used to obtain approximate roots of functions. The survey was carried out by means of a literature review, focusing on the numerical methods of Bisection, False Position, Fixed Point, Newton-Raphson and Secant. To illustrate the operation and application of these methods, numerical tests on problems taken from the literature were performed by implementing them. For each test, the parameters that influence each method and the convergence toward the approximate solution of the analyzed problems were examined. Although these methods do not always provide exact roots, roots can be calculated with the precision the problem requires. This makes evident the importance of studying methods for finding roots of equations. Thus, the work is justified by the need to discuss problems in the literature concerned with finding roots of polynomial functions. In addition, this work presents a comparison between the studied methods through the application of mathematical problems. All of this material is intended to be useful to students and professionals from any area who wish to use it or draw on it to enrich other sources of study.
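Three of the root-finding methods surveyed can be compared on a classic cubic (our illustration; the test problems in the work are taken from its own literature review). Bisection trades speed for guaranteed convergence, Newton-Raphson needs the derivative, and the secant method approximates it with a finite difference:

```python
def bisection(f, a, b, tol=1e-12):
    """Halve the bracket [a, b] while it contains a sign change."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Newton-Raphson with the derivative replaced by a finite difference."""
    for _ in range(max_iter):
        denom = f(x1) - f(x0)
        if denom == 0.0:
            break
        x2 = x1 - f(x1) * (x1 - x0) / denom
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

# Classic test polynomial: x^3 - 2x - 5 = 0, with a real root near 2.0945515.
f = lambda x: x ** 3 - 2.0 * x - 5.0
```

On this cubic, bisection needs about 40 halvings to reach 1e-12 while Newton-Raphson and secant reach it in a handful of iterations, which is exactly the convergence-rate trade-off the survey examines.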
70

[en] ADVANCES IN IMPLICIT INTEGRATION ALGORITHMS FOR MULTISURFACE PLASTICITY / [pt] AVANÇOS EM ALGORITMOS DE INTEGRAÇÃO IMPLÍCITA PARA PLASTICIDADE COM MÚLTIPLAS SUPERFÍCIES

RAFAEL OTAVIO ALVES ABREU 04 December 2023 (has links)
[en] The mathematical representation of complex material behavior requires a sophisticated constitutive formulation, as is the case with multisurface plasticity. Hence, a complex elastoplastic model demands a robust integration procedure for the plastic evolution equations. Developing integration schemes for plasticity models is an important research topic because these schemes are directly related to the accuracy and efficiency of numerical simulations of materials such as metals, concrete, soils and rocks. The performance of the finite element solution is directly influenced by the convergence characteristics of the state-update procedure. Therefore, this work explores the implementation of complex constitutive models, focusing on generic multisurface plasticity models. This study formulates and evaluates state-update algorithms that form a robust framework for simulating materials governed by multisurface plasticity. Implicit integration algorithms are developed with an emphasis on achieving the robustness, comprehensiveness and flexibility to handle demanding plasticity applications effectively. The state-update algorithms, based on the backward Euler method and the Newton-Raphson and Newton-Krylov methods, are formulated using line search strategies to improve their convergence characteristics. Additionally, a substepping scheme is implemented to provide further robustness to the state-update procedure.
The flexibility of the algorithms is explored, considering various stress conditions such as plane stress and plane strain states, within a single, versatile integration scheme. In this scenario, the robustness and performance of the algorithms are assessed through classical finite element applications. Furthermore, the developed multisurface plasticity background is applied to formulate a coupled elastoplastic-damage model, which is evaluated using experimental tests in concrete structures. The achieved results highlight the effectiveness of the proposed state-update algorithms in integrating multisurface plasticity equations and their ability to handle challenging finite element problems.
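In the simplest single-surface case, the backward Euler state update named above reduces to the closed-form radial return of one-dimensional von Mises plasticity with linear isotropic hardening; no Newton iteration is needed because the consistency condition is linear in the plastic multiplier. The following sketch (our illustration; all material parameters invented) shows the elastic predictor / plastic corrector structure that the general multisurface algorithms extend:

```python
def radial_return_1d(eps_history, e_mod=200e3, sigma_y=250.0, h_mod=10e3):
    """Backward-Euler state update for 1-D elastoplasticity with linear
    isotropic hardening (illustrative MPa-scale parameters).
    eps_history: total-strain values over the load history."""
    eps_p, alpha = 0.0, 0.0                      # plastic strain, hardening var.
    stresses = []
    for eps in eps_history:
        sigma_trial = e_mod * (eps - eps_p)      # elastic predictor
        f_trial = abs(sigma_trial) - (sigma_y + h_mod * alpha)
        if f_trial > 0.0:                        # plastic corrector (return map)
            dgamma = f_trial / (e_mod + h_mod)   # consistency, closed form here
            sign = 1.0 if sigma_trial >= 0.0 else -1.0
            eps_p += dgamma * sign
            alpha += dgamma
            sigma = sigma_trial - e_mod * dgamma * sign
        else:
            sigma = sigma_trial                  # step stays elastic
        stresses.append(sigma)
    return stresses

s = radial_return_1d([0.0005, 0.001, 0.002, 0.005])
```

With multiple active yield surfaces the plastic multipliers couple nonlinearly, which is where the Newton-Raphson and Newton-Krylov solvers with line search and substepping in the thesis take over from this closed-form special case.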
