41 |
Spherically-actuated platform manipulator with passive prismatic joints
Nyzen, Ronald A. January 2002 (has links)
No description available.
|
42 |
CIRCE a new software to predict the steady state equilibrium of chemical reactions / CIRCE un nouveau logiciel pour prédire l'équilibre des réactions chimiques à l'état d'équilibre
Liu, Qi 11 December 2018 (has links)
L'objectif de cette thèse est de développer un nouveau code pour prédire l'équilibre final d'un processus chimique complexe impliquant beaucoup de produits, plusieurs phases et plusieurs processus chimiques. Des méthodes numériques ont été développées au cours des dernières décennies pour prédire les équilibres chimiques finaux en utilisant le principe de minimisation de l'enthalpie libre du système. La plupart des méthodes utilisent la méthode des « multiplicateurs de Lagrange » et résolvent les équations en employant une approximation du problème de Lagrange et en utilisant un algorithme de convergence pas à pas de type Newton-Raphson. Les équations mathématiques correspondantes restent cependant fortement non linéaires, de sorte que la résolution, notamment de systèmes multiphasiques, peut être très aléatoire. Une méthode alternative de recherche du minimum de l’énergie de Gibbs (MCGE) est développée dans ce travail, basée sur une technique de Monte-Carlo associée à une technique de Pivot de Gauss pour sélectionner des vecteurs composition satisfaisant la conservation des atomes. L'enthalpie libre est calculée pour chaque vecteur et le minimum est recherché de manière très simple. Cette méthode ne présente a priori pas de limite d’application (y compris pour las mélanges multiphasiques) et l’équation permettant de calculer l’énergie de Gibbs n’a pas à être discrétisée. Il est en outre montré que la précision des prédictions dépend assez significativement des valeurs thermodynamiques d’entrée telles l'énergie de formation des produits et les paramètres d'interaction moléculaire. La valeur absolue de ces paramètres n'a pas autant d’importance que la précision de leur évolution en fonction des paramètres du process (pression, température, ...). Ainsi, une méthode d'estimation cohérente est requise. Pour cela, la théorie de la « contribution de groupe » est utilisée (ceux de UNIFAC) et a été étendue en dehors du domaine d'interaction moléculaire traditionnel, par exemple pour prédire l'énergie de formation d’enthalpie libre, la chaleur spécifique... Enfin, l'influence du choix de la liste finale des produits est discutée. On montre que la prédictibilité dépend du choix initial de la liste de produits et notamment de son exhaustivité. Une technique basée sur le travail de Brignole et Gani est proposée pour engendrer automatiquement la liste des produits stable possibles. Ces techniques ont été programmées dans un nouveau code : CIRCE. Les travaux de Brignole et de Gani sont mis en œuvre sur la base de la composition atomique des réactifs pour prédire toutes les molécules « réalisables ». La théorie de la « contribution du groupe » est mise en œuvre pour le calcul des propriétés de paramètres thermodynamiques. La méthode MCGE est enfin utilisée pour trouver le minimum absolu de la fonction d'enthalpie libre. Le code semble plus polyvalent que les codes traditionnels (CEA, ASPEN, ...) mais il est plus coûteux en termes de temps de calcul. Il peut aussi être plus prédictif. Des exemples de génie des procédés illustrent l'étendue des applications potentielles en génie chimique. / The objective of this work is to develop a new code to predict the final equilibrium of a complex chemical process with many species/reactions and several phases. Numerical methods were developed in the last decades to predict final chemical equilibria using the principle of minimizing the Gibbs free energy of the system. 
Most of them use the “Lagrange multipliers” method and solve the resulting system of equations with an approximate step-by-step convergence technique. Notwithstanding the potential complexity of the thermodynamic formulation of the “Gibbs problem,” the resulting mathematical formulation is always strongly non-linear, so that solving multiphase systems may be very tricky and reaching the absolute minimum may be difficult. An alternative resolution method (MCGE) is developed in this work, based on a Monte Carlo technique associated with a Gaussian elimination method to map the composition domain while satisfying the atom balance. The Gibbs energy is calculated at each point of the composition domain and the absolute minimum can be deduced very simply. In theory, the technique is not limited: the Gibbs function need not be discretised and multiphase problems can be handled easily. It is further shown that the accuracy of the predictions depends to a significant extent on the “coherence” of the input thermodynamic data, such as the Gibbs energy of formation of the species and the molecular interaction parameters. The absolute value of such parameters does not matter as much as their evolution as a function of the process parameters (pressure, temperature, …). So, a self-consistent estimation method is required. To achieve this, the group contribution theory is used (UNIFAC descriptors) and extended somewhat outside the traditional molecular interaction domain, for instance to predict the Gibbs energy of formation of the species, the specific heat capacity, etc. Lastly, the influence of the choice of the final list of products is discussed. It is shown that the relevancy of the prediction depends to a large extent on this initial choice. A first technique is proposed, based on Brignole and Gani's work, to avoid omitting species, and another one to select, in this list, the products likely to appear given the process conditions. These techniques were programmed in a new code named CIRCE. Brignole and Gani's method is implemented on the basis of the atomic composition of the reactants to predict all “realisable” molecules. The extended group contribution theory is implemented to calculate the thermodynamic parameters. The MCGE method is used to find the absolute minimum of the Gibbs energy function. The code seems to be more versatile than the traditional ones (CEA, ASPEN, …) but more expensive in computation time. It can also be more predictive. Examples are shown illustrating the breadth of potential applications in chemical engineering.
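As a rough, self-contained illustration of the MCGE search described above (random compositions that respect the atom balance, Gibbs energy evaluated at each point, smallest value kept), the following Python sketch uses a toy CO/CO2/O2 system with made-up standard Gibbs energies; the species list, parameter values and sampling range are assumptions for illustration only and are not taken from CIRCE.

```python
import numpy as np

# Toy Monte Carlo Gibbs-energy search (MCGE-style):
# compositions satisfying the atom balance A @ n = b are generated from a
# particular solution plus directions in the null space of A, the Gibbs energy
# of an ideal-gas mixture is evaluated at each sample, and the minimum is kept.

R, T, P, P0 = 8.314, 1000.0, 1.0e5, 1.0e5      # J/mol/K, K, Pa, reference Pa

species = ["CO", "CO2", "O2"]
A = np.array([[1, 1, 0],                        # C atoms per molecule
              [1, 2, 2]])                       # O atoms per molecule
b = np.array([1.0, 2.0])                        # total moles of C and O
g0 = np.array([-200e3, -396e3, 0.0])            # illustrative standard Gibbs energies, J/mol

def gibbs(n):
    """Total Gibbs energy (J) of an ideal-gas mixture with mole vector n."""
    x = n / n.sum()
    mu = g0 + R * T * (np.log(np.clip(x, 1e-30, None)) + np.log(P / P0))
    return float(n @ mu)

n_part, *_ = np.linalg.lstsq(A, b, rcond=None)  # one solution of the atom balance
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[A.shape[0]:]                    # rows span the null space of A

rng = np.random.default_rng(0)
best_n, best_g = None, np.inf
for _ in range(100_000):
    # sampling range chosen wide enough to cover the feasible region of this toy case
    n = n_part + null_basis.T @ rng.uniform(-1.0, 1.0, null_basis.shape[0])
    if np.all(n > 1e-9):                        # keep physically meaningful compositions
        g = gibbs(n)
        if g < best_g:
            best_g, best_n = g, n

print(dict(zip(species, np.round(best_n, 4))), "G_min =", round(best_g), "J")
```

The code described in the abstract handles many species, several phases and non-ideal interaction models; the sketch only shows the sampling-and-minimisation idea.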
|
43 |
Počítačová simulace kolapsu budovy zplastizováním kloubů / Computer simulation of building collapse due to formation of plastic hinges
Valeš, Jan January 2012 (has links)
The aim of the thesis is to create an analytical 2D model of a multi-storey building and to load it up to the point of collapse, which occurs due to the formation of plastic hinges. The first part presents a linear analysis of the problem, focusing on the locations and load levels at which the plastic hinges form. Then a nonlinear analysis is performed in the RFEM programme using post-critical analysis and dynamic relaxation. Differences between the results of the mentioned types and methods of analysis are compared, and the impact of the variables is evaluated.
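As a small stand-in for the linear step described above (not the RFEM model from the thesis), the Python sketch below estimates the load factor at which the first plastic hinge forms by scaling elastic bending moments from a reference load against assumed plastic moment capacities; all numbers are invented.

```python
import numpy as np

# Elastic moments M_el at candidate hinge locations (from a unit reference load)
# and plastic capacities M_pl: the first hinge forms at the smallest M_pl/|M_el|.
M_el = np.array([120.0, -95.0, 60.0, -140.0])   # kNm, illustrative values
M_pl = np.array([180.0, 180.0, 150.0, 150.0])   # kNm, illustrative capacities

lam = M_pl / np.abs(M_el)                       # load factor at which each section yields
first = int(np.argmin(lam))
print(f"first plastic hinge at location {first}, load factor = {lam[first]:.2f}")
```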
|
44 |
Porovnání různých metod nelineárního výpočtu konstrukcí s hlediska rychlosti, přesnosti a robustnosti. / Comparison of various methods for nonlinear analysis of structures from the point of view of speed, accuracy and robustness.
Bravenec, Ladislav January 2013 (has links)
The aim of the thesis is to compare the iterative methods that the program RFEM 5 uses for non-linear structural calculations, namely large-deformation analysis and post-critical analysis. The comparison should serve as a basis for deciding which calculation method is the most accurate, the fastest and the most reliable in terms of obtaining results. Computation time is judged by the overall solution time and the time needed to compute one iteration. Robustness is assessed as the reliability of the methods in normal use. Accuracy is determined by comparing the maximum deformations of the structures. The comparison is made on examples from practice.
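A toy stand-in for this kind of comparison (not RFEM 5 code): the sketch below solves a single stiffening-spring equation with a full Newton-Raphson iteration and with an under-relaxed Picard (secant-stiffness) iteration, reporting iteration counts and timings; the model and all parameters are invented.

```python
import time
import numpy as np

# One nonlinear equation:  k0*u + k0*c*u**3 - F = 0  (stiffening spring under load F).
k0, c, F = 1000.0, 50.0, 800.0

def residual(u):
    return k0 * u + k0 * c * u**3 - F

def newton(u=0.0, tol=1e-10, itmax=100):
    for it in range(1, itmax + 1):
        r = residual(u)
        if abs(r) < tol:
            return u, it
        u -= r / (k0 + 3.0 * k0 * c * u**2)            # tangent stiffness update
    return u, itmax

def picard(u=0.0, tol=1e-10, itmax=10_000, omega=0.5):
    for it in range(1, itmax + 1):
        if abs(residual(u)) < tol:
            return u, it
        u = (1.0 - omega) * u + omega * F / (k0 * (1.0 + c * u**2))   # relaxed secant update
    return u, itmax

for name, solver in [("Newton-Raphson", newton), ("Picard (relaxed)", picard)]:
    t0 = time.perf_counter()
    u, its = solver()
    dt = time.perf_counter() - t0
    print(f"{name:17s} u = {u:.8f}  iterations = {its:4d}  time = {dt*1e6:7.1f} us")
```

Even on this toy problem, the quadratic convergence of Newton-Raphson versus the linear convergence of the relaxed Picard iteration is visible in the iteration counts.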
|
45 |
[en] ADVANCES IN IMPLICIT INTEGRATION ALGORITHMS FOR MULTISURFACE PLASTICITY / [pt] AVANÇOS EM ALGORITMOS DE INTEGRAÇÃO IMPLÍCITA PARA PLASTICIDADE COM MÚLTIPLAS SUPERFÍCIES
RAFAEL OTAVIO ALVES ABREU 04 December 2023 (has links)
[pt] A representação matemática de comportamentos complexos em materiais exige formulações constitutivas sofisticadas, como é o caso de modelos com múltiplas superfícies de plastificação. Assim, um modelo elastoplástico complexo demanda um procedimento robusto de integração das equações de evolução plástica. O desenvolvimento de esquemas de integração para modelos de plasticidade é um tópico de pesquisa importante, já que estes estão diretamente ligados à acurácia e eficiência de simulações numéricas de materiais como metais, concretos, solos e rochas. O desempenho da solução de elementos finitos é diretamente afetado pelas características de convergência do procedimento de atualização de estados. Dessa forma, este trabalho explora a implementação de modelos constitutivos complexos, focando em modelos genéricos com múltiplas superfícies de plastificação. Este estudo formula e avalia algoritmos de atualização de estado que formam uma estrutura robusta para a simulação de materiais regidos por múltiplas superfícies de plastificação. Algoritmos de integração implícita são desenvolvidos com ênfase na obtenção de robustez, abrangência e flexibilidade para lidar eficazmente com aplicações complexas de plasticidade. Os algoritmos de atualização de estado, baseados no método de Euler implícito e nos métodos de Newton-Raphson e Newton-Krylov, são formulados utilizando estratégias de busca unidimensional para melhorar suas características de convergência. Além disso, é implementado um esquema de subincrementação para proporcionar mais robustez ao procedimento de atualização de estado. A flexibilidade dos algoritmos é explorada, considerando várias condições de tensão, como os estados plano de tensões e plano de deformações, num esquema de integração único e versátil. Neste cenário, a robustez e o desempenho dos algoritmos são avaliados através de aplicações clássicas de elementos finitos. Além disso, o cenário desenvolvido no contexto de modelos com múltiplas superfícies de plastificação é aplicado para formular um modelo elastoplástico com dano acoplado, que é avaliado através de ensaios experimentais em estruturas de concreto. Os resultados obtidos evidenciam a eficácia dos algoritmos de atualização de estado propostos na integração de equações de modelos com múltiplas superfícies de plastificação e a sua capacidade para lidar com problemas desafiadores de elementos finitos. /
[en] The mathematical representation of complex material behavior requires a sophisticated constitutive formulation, as is the case for multisurface plasticity. Hence, a complex elastoplastic model demands a robust integration procedure for the plastic evolution equations. Developing integration schemes for plasticity models is an important research topic because these schemes are directly related to the accuracy and efficiency of numerical simulations for materials such as metals, concrete, soils and rocks. The performance of the finite element solution is directly influenced by the convergence characteristics of the state-update procedure. Therefore, this work explores the implementation of complex constitutive models, focusing on generic multisurface plasticity models. This study formulates and evaluates state-update algorithms that form a robust framework for simulating materials governed by multisurface plasticity. Implicit integration algorithms are developed with an emphasis on achieving robustness, comprehensiveness and flexibility to handle cumbersome plasticity applications effectively. The state-update algorithms, based on the backward Euler method and the Newton-Raphson and Newton-Krylov methods, are formulated using line search strategies to improve their convergence characteristics. Additionally, a substepping scheme is implemented to provide further robustness to the state-update procedure. The flexibility of the algorithms is explored, considering various stress conditions such as plane stress and plane strain states, within a single, versatile integration scheme. In this scenario, the robustness and performance of the algorithms are assessed through classical finite element applications. Furthermore, the developed multisurface plasticity background is applied to formulate a coupled elastoplastic-damage model, which is evaluated using experimental tests in concrete structures. The achieved results highlight the effectiveness of the proposed state-update algorithms in integrating multisurface plasticity equations and their ability to handle challenging finite element problems.
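As a simplified single-surface stand-in for the state-update algorithms summarized above, the Python sketch below performs a backward Euler return mapping for J2 plasticity with saturation (Voce-type) hardening, using a Newton-Raphson iteration on the plastic multiplier together with a basic backtracking line search; material parameters and the trial state are illustrative, and the thesis algorithms address general multisurface models rather than this special case.

```python
import numpy as np

G = 80_000.0                                   # shear modulus [MPa]
sy0, sy_inf, delta = 250.0, 400.0, 30.0        # saturation hardening parameters

def yield_stress(ep):
    return sy0 + (sy_inf - sy0) * (1.0 - np.exp(-delta * ep))

def residual(dg, q_trial, ep_n):
    # discrete consistency condition of the backward Euler return mapping
    return q_trial - 3.0 * G * dg - yield_stress(ep_n + dg)

def return_mapping(q_trial, ep_n, tol=1e-8, itmax=50):
    if residual(0.0, q_trial, ep_n) <= 0.0:
        return 0.0, ep_n                       # elastic step, no plastic flow
    dg = 0.0
    for _ in range(itmax):
        r = residual(dg, q_trial, ep_n)
        if abs(r) < tol:
            break
        drddg = -3.0 * G - (sy_inf - sy0) * delta * np.exp(-delta * (ep_n + dg))
        step = -r / drddg                      # Newton-Raphson step
        alpha = 1.0                            # backtracking line search on |r|
        while abs(residual(dg + alpha * step, q_trial, ep_n)) > abs(r) and alpha > 1e-4:
            alpha *= 0.5
        dg += alpha * step
    return dg, ep_n + dg

dg, ep_new = return_mapping(q_trial=500.0, ep_n=0.0)
print(f"plastic multiplier = {dg:.6e}, equivalent plastic strain = {ep_new:.6e}")
```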
|
46 |
Inference for Generalized Multivariate Analysis of Variance (GMANOVA) Models and High-dimensional Extensions
Jana, Sayantee 11 1900 (has links)
A Growth Curve Model (GCM) is a multivariate linear model used for analyzing longitudinal data with short to moderate time series. It is a special case of Generalized Multivariate Analysis of Variance (GMANOVA) models. Analysis using the GCM involves comparison of mean growths among different groups. The classical GCM, however, possesses some limitations, including distributional assumptions, the assumption of an identical degree of polynomial for all groups, and the requirement of a sample size larger than the number of time points. In this thesis, we relax some of the assumptions of the traditional GCM and develop appropriate inferential tools for its analysis, with the aim of reducing bias, improving precision, gaining power and overcoming the limitations of high dimensionality.
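For orientation, the Python sketch below simulates a small two-group growth-curve data set and computes a standard closed-form maximum-likelihood estimator of the mean parameter of the classical GCM under normality (Khatri-type form); the dimensions, design matrices and parameter values are purely illustrative and are not taken from the thesis.

```python
import numpy as np

# Classical GCM:  Y = X B Z + E, with Y (p x n), X (p x q) within-subject design,
# Z (k x n) between-subject (group) design, and columns of E ~ N_p(0, Sigma).
rng = np.random.default_rng(1)
p, n = 4, 60                                   # 4 time points, 60 subjects
t = np.linspace(0.0, 3.0, p)
X = np.vander(t, 2, increasing=True)           # intercept + slope in time (p x 2)
Z = np.zeros((2, n)); Z[0, :30] = 1; Z[1, 30:] = 1   # two groups of 30 (k x n)
B_true = np.array([[1.0, 2.0],                 # intercepts per group
                   [0.5, 1.5]])                # slopes per group
Sigma = 0.3 * np.eye(p) + 0.1                  # simple positive-definite covariance
E = rng.multivariate_normal(np.zeros(p), Sigma, size=n).T
Y = X @ B_true @ Z + E

# MLE of B under normality:
#   B_hat = (X' S^-1 X)^-1 X' S^-1 Y Z' (Z Z')^-1,  with  S = Y (I - Z'(ZZ')^-1 Z) Y'
Pz = Z.T @ np.linalg.solve(Z @ Z.T, Z)
S = Y @ (np.eye(n) - Pz) @ Y.T
Sinv = np.linalg.inv(S)
B_hat = np.linalg.solve(X.T @ Sinv @ X, X.T @ Sinv @ Y @ Z.T) @ np.linalg.inv(Z @ Z.T)

print("true B:\n", B_true, "\nestimated B:\n", np.round(B_hat, 3))
```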
Existing methods for estimating the parameters of the GCM assume that the underlying distribution for the error terms is multivariate normal. In practical problems, however, we often come across skewed data, and hence estimation techniques developed under the normality assumption may not be optimal. Simulation studies conducted in this thesis, in fact, show that existing methods are sensitive to the presence of skewness in the data: estimators are associated with increased bias and mean square error (MSE) when the normality assumption is violated. Methods appropriate for skewed distributions are, therefore, required. In this thesis, we relax the distributional assumption of the GCM and provide estimators for the mean and covariance matrices of the GCM under the multivariate skew normal (MSN) distribution. An estimator for the additional skewness parameter of the MSN distribution is also provided. The estimators are derived using the expectation maximization (EM) algorithm, and extensive simulations are performed to examine their performance. Comparisons with existing estimators show that our estimators perform better when the underlying distribution is multivariate skew normal. An illustration using a real data set is also provided, wherein triglyceride levels from the Framingham Heart Study are modelled over time.
The GCM assumes an equal degree of polynomial for each group. Therefore, when group means follow polynomials of different shapes, the GCM fails to accommodate this difference in one model. We consider an extension of the GCM, wherein mean responses from different groups can have different shapes, represented by polynomials of different degree. Such a model is referred to as the Extended Growth Curve Model (EGCM). We extend our work on the GCM to the EGCM, and develop estimators for the mean and covariance matrices under MSN errors. We adopted the Restricted Expectation Maximization (REM) algorithm, which is based on the multivariate Newton-Raphson (NR) method and Lagrangian optimization. However, the multivariate NR method, and hence the existing REM algorithm, are applicable to vector parameters, whereas the parameters of interest in this study are matrices. We therefore extended the NR approach to matrix parameters, which consequently allowed us to extend the REM algorithm to matrix parameters. The performance of the proposed estimators was examined using extensive simulations, and a motivating real data example is provided to illustrate their application.
Finally, this thesis deals with high-dimensional applications of the GCM. Existing methods for the GCM are developed under the assumption of ‘small p, large n’ (n >> p) and are not appropriate for analyzing high-dimensional longitudinal data, due to singularity of the sample covariance matrix. In a previous work, we used the Moore-Penrose generalized inverse to overcome this challenge. However, that method has some limitations near singularity, when p ~ n. In this thesis, a Bayesian framework is used to derive a test of the linear hypothesis on the mean parameter of the GCM which is applicable in high-dimensional situations. Extensive simulations are performed to investigate the performance of the test statistic and establish its optimality characteristics. Results show that this test performs well under different conditions, including the near-singularity zone. Sensitivity of the test to mis-specification of the parameters of the prior distribution is also examined empirically. A numerical example is provided to illustrate the usefulness of the proposed method in practical situations. / Thesis / Doctor of Philosophy (PhD)
|
47 |
Voltage Stability Analysis of Unbalanced Power Systems
Santosh Kumar, A January 2016 (links) (PDF)
The modern-day power system is witnessing tremendous change. There has been a rapid rise in distributed generation and, along with this, deregulation has resulted in a more complex system. Power demand is on the rise, while the generation and transmission infrastructure has not yet adapted to this growing demand. Economic and operational constraints have forced the system to be operated close to its design limits, making it vulnerable to disturbances and possible grid failure. This makes the study of the voltage stability of the system more important than ever.
Generally, voltage stability studies are carried out on a single-phase equivalent system, assuming that the system is perfectly balanced. However, the three-phase power system is not always in a balanced state; there are a number of untransposed lines, as well as single-phase and double-phase lines. This thesis deals with three-phase voltage stability analysis, in particular the voltage stability index known as the L-Index. The equivalent single-phase analysis for voltage stability fails in the case of any unbalance in the system or in the presence of an asymmetrical contingency. Moreover, as system operators are giving importance to synchrophasor measurements, PMUs are being installed throughout the system. Hence, the three-phase voltages can be obtained, making three-phase analysis easier.
To study the effect of an unbalanced system on voltage stability, a three-phase L-Index based on the traditional L-Index has been proposed. The proposed index takes into consideration the unbalance resulting from untransposed transmission lines and unbalanced loads in the system. This index can handle any unbalance in the system and is much more realistic. To obtain bus voltages during unbalanced operation of the system, a three-phase decoupled Newton-Raphson load flow was used.
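For reference, the Python sketch below computes the conventional single-phase L-Index from the partitioned bus admittance matrix, F = -Y_LL^{-1} Y_LG and L_j = |1 - sum_i F_ji V_i / V_j|; the three-bus data are made up for illustration, and the three-phase extension proposed in the thesis is not reproduced here.

```python
import numpy as np

# Converged load-flow voltages (per unit): one generator bus, two load buses.
V_gen  = np.array([1.02 + 0.00j])
V_load = np.array([0.96 - 0.05j, 0.94 - 0.08j])

# Partitioned bus admittance matrix blocks for the load buses (illustrative network:
# branches G-L1, G-L2 and L1-L2 with admittances 4-12j, 5-15j and 4-12j).
Y_LL = np.array([[ 8.0 - 24.0j, -4.0 + 12.0j],
                 [-4.0 + 12.0j,  9.0 - 27.0j]])
Y_LG = np.array([[-4.0 + 12.0j],
                 [-5.0 + 15.0j]])

F = -np.linalg.solve(Y_LL, Y_LG)               # generator participation at load buses
L = np.abs(1.0 - (F @ V_gen) / V_load)         # L-Index per load bus

print("L-Index per load bus:", np.round(L, 4))
print("system L-Index (max):", round(float(L.max()), 4))
```

Values of the index approaching 1 indicate proximity to voltage collapse, which is why the maximum over the load buses is used as the system indicator.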
Reactive power distribution in a system can be altered using generator voltage settings, transformer OLTC settings and SVC settings. All these settings are usually applied in balanced mode, i.e. all the phases have the same setting. Based on this, reactive power optimization using an LP technique on an equivalent single-phase system is proposed. This method takes into account generator voltage settings, OLTC settings of transformers and SVC settings. The optimal settings so obtained are applied to the corresponding three-phase system. The effectiveness of the optimal settings during an unbalanced scenario is studied. This method ensures better voltage profiles and a decrease in power loss.
Case studies of the proposed methods are carried out on 12-bus and 24-bus EHV systems of the southern Indian grid and on a modified IEEE 30-bus system. Both balanced and unbalanced systems are studied and the results are compared.
|
48 |
[pt] OTIMIZAÇÃO TOPOLÓGICA PARA PROBLEMAS DE ESCOAMENTO DE FLUIDOS NÃO NEWTONIANOS USANDO O MÉTODO DOS ELEMENTOS VIRTUAIS / [en] TOPOLOGY OPTIMIZATION FOR NON-NEWTONIAN FLUID-FLOW PROBLEMS USING THE VIRTUAL ELEMENT METHOD
MIGUEL ANGEL AMPUERO SUAREZ 28 August 2020 (links)
[pt] Este trabalho apresenta aplicações da técnica de otimização topológica para problemas de escoamento com fluidos não Newtonianos, usando o método dos elementos virtuais (VEM) em domínios bidimensionais arbitrários. O objetivo é projetar a trajetória ótima, a partir da minimização da energia dissipativa, de um escoamento governado pelas equações de Navier-Stokes-Brinkman e do modelo não Newtoniano de Carreau-Yasuda. A abordagem de porosidade proposta por (Borrvall e Petersson, 2003) [1] é usada na formulação do problema de otimização topológica. Para resolver este problema numericamente é usado o método VEM, recentemente proposto. A principal característica que diferencia o VEM do método dos elementos finitos (FEM) é que as funções de interpolação no interior dos elementos não precisam ser computadas explicitamente. Isso ocorre porque a integração é feita em funções polinomiais e bases de ordem inferior, permitindo assim uma grande flexibilidade no que diz respeito ao uso de elementos não convexos. Portanto, o cálculo das matrizes e vetores elementares se reduz à avaliação de grandezas geométricas nos contornos desses elementos. Finalmente, são apresentados exemplos numéricos representativos para demonstrar a eficiência do VEM em comparação com o FEM e a aplicabilidade da otimização topológica para esta classe de problemas de escoamento. / [en] This work presents selected applications of topology optimization for non-Newtonian fluid flow problems using the virtual element method (VEM) in arbitrary two-dimensional domains. The objective is to design an optimal layout in a fluid flow domain so as to minimize the dissipative energy of a flow governed by the Navier-Stokes-Brinkman equations and the non-Newtonian Carreau-Yasuda model. The porosity approach proposed by (Borrvall and Petersson, 2003) [1] is used in the topology optimization formulation. To solve this problem numerically, the recently proposed VEM is used. The key feature that distinguishes VEM from the standard finite element method (FEM) is that the interpolation functions in the interior of the elements do not need to be computed explicitly. This is because the integration is performed on lower-order polynomials and basis functions, allowing great flexibility with respect to the use of non-convex elements. Therefore, the computation of the main element matrices and vectors is reduced to the evaluation of geometric quantities on the boundary of the elements. Finally, several numerical examples are provided to demonstrate the efficiency of the VEM compared to FEM and the applicability of topology optimization to this class of fluid flow problems.
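Two ingredients of the formulation summarized above can be written out compactly: the Carreau-Yasuda viscosity law and a Borrvall-Petersson-style convex interpolation of the Brinkman inverse permeability that switches between fluid and solid regions. The Python sketch below shows both; all parameter values are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def carreau_yasuda(gamma_dot, eta0=1.0, eta_inf=0.01, lam=1.0, a=2.0, n=0.5):
    """Effective viscosity eta(gamma_dot) of the Carreau-Yasuda model."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gamma_dot) ** a) ** ((n - 1.0) / a)

def brinkman_alpha(rho, alpha_min=0.0, alpha_max=1.0e4, q=0.1):
    """Inverse permeability alpha(rho): rho = 1 -> fluid, rho = 0 -> solid."""
    return alpha_max + (alpha_min - alpha_max) * rho * (1.0 + q) / (rho + q)

shear_rates = np.logspace(-2, 3, 6)            # 1/s
densities   = np.linspace(0.0, 1.0, 6)         # design variable of the optimization

print("eta(gamma_dot):", np.round(carreau_yasuda(shear_rates), 4))
print("alpha(rho):    ", np.round(brinkman_alpha(densities), 1))
```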
|
49 |
Das neue Kontaktmodell in Mechanica WF 4.0 mit Reibung : Theoretische Grundlagen und Anwendungsbeispiele
Jakel, Roland 11 May 2009 (links, PDF)
The presentation introduces the new infinite-friction contact model of the FEM analysis software Pro/ENGINEER Mechanica in version Wildfire 4.0 from PTC. It covers the fundamentals of frictionless contact as well as the theory of the infinite-friction contact model, and explains the basics of the penalty and Newton-Raphson methods used in the software for the numerical solution.
As an application example for the frictionless contact model, a cylindrical roller bearing is analysed in full, with all rolling contacts, for different bearing and mounting clearances; the results are presented comprehensively, and an analytical cross-check based on Hertzian theory is carried out, which shows very good agreement with the numerical simulation. For the frictional contact model, a shrink-fitted shaft-hub connection under torsion is analysed as an example. It is compared with an analytical solution and with various 2D idealisations (plane stress, plane strain, 2D axisymmetry).
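The Hertzian cross-check mentioned above can be reproduced in spirit with the textbook line-contact formulas (contact half-width and peak pressure); the Python sketch below applies them to a single roller pressed onto a raceway with made-up geometry and load, not the data from the presentation.

```python
import numpy as np

E1 = E2 = 210_000.0                 # Young's modulus of steel [MPa]
nu1 = nu2 = 0.3
R_roller, R_raceway = 5.0, 25.0     # radii [mm], external (convex-convex) contact
L = 10.0                            # roller length [mm]
F = 2_000.0                         # normal force on the roller [N]

E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)   # contact modulus
R_eff  = 1.0 / (1.0 / R_roller + 1.0 / R_raceway)        # effective radius

b     = np.sqrt(4.0 * F * R_eff / (np.pi * L * E_star))  # contact half-width [mm]
p_max = 2.0 * F / (np.pi * b * L)                        # peak contact pressure [MPa]

print(f"contact half-width b = {b*1e3:.1f} um, peak pressure p_max = {p_max:.0f} MPa")
```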
|