91

Element failure probability of soil slope under consideration of random groundwater level

Li, Z., Chen, Y., Guo, Yakun, Zhang, X., Du, S. 28 April 2021 (has links)
The instability of soil slopes is directly related to both the shear parameters of the soil material and the groundwater level, both of which are usually uncertain. In this study, a novel method, the element failure probability (EFP) method, is proposed to analyse the failure of soil slopes. Based on upper bound theory, finite element discretization and stochastic programming theory, an upper bound stochastic programming model is established that simultaneously considers the randomness of the shear parameters and the groundwater level to analyse the reliability of slopes. The model is then solved by the Monte Carlo method with random shear parameters and groundwater levels. Finally, a formula is derived for the element failure probability based on the safety factors and velocity fields of the upper bound method. The probability of slope failure can be calculated from the safety factor, and the spatial distribution of failure regions can be determined from the location information of the elements. The proposed method is validated on a classic example. This study has theoretical value for further research advancing the application of plastic limit analysis to slope reliability. / National Natural Science Foundation of China (grant no. 51564026), the Research Foundation of Kunming University of Science and Technology (grant no. KKSY201904006) and the Key Laboratory of Rock Mechanics and Geohazards of Zhejiang Province (grant no. ZJRM-2018-Z-02).
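As a rough illustration of the Monte Carlo step described above, the sketch below estimates a slope failure probability P(FS &lt; 1) with random shear parameters and groundwater level. A textbook infinite-slope safety factor stands in for the paper's upper-bound finite element model, and all parameter values and distributions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical slope and soil parameters (not taken from the paper).
H, beta = 10.0, np.radians(30)      # slope thickness [m], inclination
gamma, gamma_w = 18.0, 9.81         # soil and water unit weights [kN/m^3]

def safety_factor(c, phi, h_w):
    """Infinite-slope safety factor with a groundwater table at height h_w."""
    tau = gamma * H * np.sin(beta) * np.cos(beta)              # driving shear stress
    sigma_eff = (gamma * H - gamma_w * h_w) * np.cos(beta)**2  # effective normal stress
    return (c + sigma_eff * np.tan(phi)) / tau

# Random shear parameters and groundwater level (assumed distributions).
n = 100_000
c   = rng.lognormal(np.log(10.0), 0.3, n)      # cohesion [kPa]
phi = np.radians(rng.normal(30.0, 3.0, n))     # friction angle
h_w = rng.uniform(0.0, H, n)                   # groundwater height [m]

fs = safety_factor(c, phi, h_w)
p_failure = np.mean(fs < 1.0)   # P(FS < 1), the slope failure probability
print(f"estimated failure probability: {p_failure:.3f}")
```

The element-level version in the paper applies the same counting idea per finite element, using the velocity field to locate the failing region.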
92

Development of an efficient fluid-structure interaction model for floating objects

Brutto, Cristian 18 June 2024 (has links)
This thesis gives an overview of the process that led to the development of a novel semi-implicit fluid-structure interaction model. The thesis is dedicated to the creation of a new numerical model for studying ship-generated waves and ship manoeuvres in waterways, for various vessel characteristics and speeds and for different external current situations. A model like this requires a coupling between the fluid and the solid to generate the waves and the hydrodynamic forces on the hull. Since the horizontal dimensions are significantly larger than the vertical dimension, we started by employing the shallow water equations, which are based on the assumption of hydrostatic pressure. The discretization treats only the nonlinear advective terms explicitly, while the pressure terms are discretized implicitly, which makes the CFL condition milder. The price to pay for this semi-implicit discretization is an increase in algorithmic complexity compared to a fully explicit method, but it is still much simpler than a fully implicit discretization of the governing equations. Indeed, the mass and momentum equations couple, and finding the unknowns involves solving a system of equations whose dimension equals the number of cells. The grid supporting the discretization is staggered, overlapping and Cartesian. Since the intended application domain is inland waterways, it is paramount to allow wetting and drying of the cells. This was achieved by acting on the depth function, the relationship between the free-surface elevation and the water depth in the cell. The main novelty of this research project is the two-way coupling of the PDE system for the water flow with the ODE system for the rigid body motion of the ship. The hull defines the ship region, and its shape can range from a simple box to an STL file of a real 3D ship geometry. Where the hull is in contact with the water, the cells are pressurized. 
This pressurized group of cells generates waves as it moves, and its motion is influenced by incoming external waves. This behaviour is obtained by imposing an upper bound on the depth function, so that the water depth does not increase once it reaches the hull elevation, while the pressure is allowed to increase. This upper bound increases the nonlinearity of the system, which may contain dry cells, wet free-surface cells and pressurized cells. The solution of this system is found by the nested-Newton iterative solver of Casulli and Zanolli [36], in which two nested linearizations bring the system into a sparse, symmetric, positive semi-definite form. This particular form allows us to employ a matrix-free conjugate gradient method and obtain the unknown pressure efficiently. The integral of the pressure over the hull yields the hydrodynamic force and torque acting on the ship. After adding the skin friction and other external forces from the propeller or the rudder, the total force is inserted into the equation of motion of the rigid body. The ODE system is discretized with a second-order Taylor method and solved for the six degrees of freedom (3 coordinates for the position vector of the barycenter and 3 rotation angles), providing the next position and orientation of the ship. The vertical translation of the rigid body is governed by the gravitational force and the restoring force from Archimedes' principle. As the ship oscillates up and down, the gravitational potential energy is partially transferred to the radiated free-surface water waves, damping and eventually stopping the motion. Also, the ship pushes and pulls the water around it, inducing the added mass force. All these elements constitute the ODE that was used for the verification of the vertical degree of freedom. The numerical simulation gave the expected results for the vertical motion. 
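The vertical degree of freedom described above behaves like a damped oscillator: Archimedes' restoring force pulls the hull back while wave radiation drains energy. The sketch below integrates such a heave ODE with a second-order Taylor step, in the spirit of the rigid-body integrator mentioned above; the effective mass, restoring and damping coefficients are invented placeholders, not values from the thesis.

```python
# Hypothetical heave model of a floating body:
#   m_eff z'' = -k z - c z'
# (hydrostatic restoring + radiation damping; coefficients are illustrative)
m_eff, k, c = 1.0e6, 2.0e6, 1.0e5

def accel(z, v):
    return -(k * z + c * v) / m_eff

dt, steps = 0.01, 5000
z, v = 1.0, 0.0                      # initial heave offset [m], at rest
for _ in range(steps):
    a = accel(z, v)
    adot = -(k * v + c * a) / m_eff  # time derivative of the acceleration
    # second-order Taylor update for position and velocity
    z, v = z + dt * v + 0.5 * dt**2 * a, v + dt * a + 0.5 * dt**2 * adot

print(f"heave after {steps*dt:.0f} s: {z:.4f} m")  # oscillation decays toward 0
```

In the full model the damping is not a prescribed coefficient but emerges from the radiated free-surface waves; the explicit `c z'` term here is only a stand-in for that mechanism.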
The horizontal translation, important for the manoeuvres, presented a numerical instability unseen in our previous test cases, which is connected to the relative motion between the ship and the grid. In each time step in which the ship enters a new cell, the pressure sharply increases and decreases at the ship bow. An oscillation can build up in time and create an unphysical void below the vessel. We implemented several techniques to attenuate the oscillations. At the heart of all of them is the reduction of the time derivative of the water depth, especially for those cells transitioning to a pressurized state. All these modifications were effective at controlling the oscillations, each with a different intensity, and simulations with horizontal motion are much more stable with these techniques than without them. With the collaboration of the BAW research institute, we worked on the model validation. We used data from two separate experiments to compare the measurements with the numerical results. Specifically, we focused on the ship-generated wave height and the hydrodynamic forces on the hull. The comparison is satisfactory for the wave height. The force and torque prediction is plausible but underestimated compared to the measurements. The model seems to displace the water volume correctly during the ship passage, while the force and torque response might need additional work to be trusted in applications. Even though the hydrostatic assumption is mostly correct in our range of applications, the presence and the motion of a ship can generate strong vertical accelerations of the flow, which may not be negligible. For this reason, we implemented an algorithm that corrects the velocity field, introducing dispersive effects due to a non-hydrostatic pressure. The correction consists of a higher-order Boussinesq-type term in the momentum equation and the solution of the resulting system. 
The non-hydrostatic update has a small influence on the wave generation, while it significantly alters the reaction forces. The subgrid method implementation made it possible to benefit from high-resolution bottom descriptions while keeping the grid size coarse. The same subgrid can also be used for a refined definition of the hull, which makes the volume computations more accurate. Furthermore, the subgrid introduces new possible states for the cells, as they can be partially dry or partially pressurized. These intermediate states translate into smoother transitions from one state to the other when the free surface is close to the bathymetry or to the hull. Concerning the software implementation of the developed scheme, in order to improve the execution performance of the prototype script initially formulated in Matlab, the numerical method was rewritten as a Fortran program. Also, thanks to the domain decomposition technique and the MPI standard, each simulation can run in parallel on multiple CPUs, leveraging the computational power of supercomputers. The coupling of the PDE and ODE systems, together with an appropriate redefinition of the depth function, proved to be a valuable method for studying fluid-structure interaction problems. The combination of efficient numerical techniques led to the development of a tool with the potential to be applied in practice for the simulation of floating objects in wide domains.
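The matrix-free conjugate gradient solve mentioned earlier can be sketched in a few lines: the symmetric positive (semi-)definite operator is applied through a function instead of an assembled matrix. Here a 1D Laplacian stands in for the pressure system; this is a generic illustration, not the thesis code.

```python
import numpy as np

def apply_A(x):
    """Matrix-free application of a 1D Dirichlet Laplacian (SPD)."""
    y = 2.0 * x
    y[1:]  -= x[:-1]
    y[:-1] -= x[1:]
    return y

def cg(apply_A, b, tol=1e-10, maxit=1000):
    """Conjugate gradient using only operator applications, never the matrix."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = np.ones(50)
x = cg(apply_A, b)
print("residual norm:", np.linalg.norm(apply_A(x) - b))
```

The memory and runtime advantage is that only a stencil application per iteration is needed, which is what makes the pressure solve cheap on large grids.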
93

Parametric Dynamical Systems: Transient Analysis and Data Driven Modeling

Grimm, Alexander Rudolf 02 July 2018 (has links)
Dynamical systems are a commonly used and studied tool for simulation, optimization and design. In many applications, such as inverse problems, optimal control, shape optimization and uncertainty quantification, those systems depend on a parameter. The need for high fidelity in the modeling stage leads to large-scale parametric dynamical systems. Since these models need to be simulated for a variety of parameter values, the computational burden they incur becomes increasingly prohibitive. To address these issues, parametric reduced models have gained popularity in recent years. We are interested in constructing parametric reduced models that represent the full-order system accurately over a range of parameters. First, we define a global joint error measure in the frequency and parameter domain to assess the accuracy of the reduced model. Then, by assuming a rational form for the reduced model with poles both in the frequency and parameter domain, we derive necessary conditions for an optimal parametric reduced model in this joint error measure. Similar to the nonparametric case, Hermite interpolation conditions at the reflected images of the poles characterize the optimal parametric approximant. This result extends the well-known interpolatory H2 optimality conditions of Meier and Luenberger to the parametric case. We also develop a numerical algorithm to construct locally optimal reduced models. The theory and algorithm are data-driven, in the sense that only function evaluations of the parametric transfer function are required, not access to the internal dynamics of the full model. While this first framework operates on the continuous function level, assuming repeated transfer function evaluations are available, in some cases merely frequency samples might be given without an option to re-evaluate the transfer function at desired points; in other words, the function samples in parameter and frequency are fixed. 
In this case, we construct a parametric reduced model that minimizes a discretized least-squares error over the finite set of measurements. Towards this goal, we extend Vector Fitting (VF) to the parametric case, solving a global least-squares problem in both frequency and parameter. The output of this approach might be a reduced model of moderate size. In this case, we perform a post-processing step that reduces the output of the parametric VF approach using H2 optimal model reduction for a special parametrization. The final model inherits the parametric dependence of the intermediate model, but is of smaller order. A special case of a parameter in a dynamical system is a delay in the model equation, e.g., arising from a feedback loop, reaction time, delayed response and various other physical phenomena. Modeling such a delay comes with several challenges for the mathematical formulation, analysis, and solution. We address the issue of transient behavior for scalar delay equations. Besides the choice of an appropriate measure, we analyze the impact of the coefficients of the delay equation on the finite-time growth, which can be arbitrarily large purely through the influence of the delay. / Ph. D. / Mathematical models play an increasingly important role in the sciences for experimental design, optimization and control. These high-fidelity models are often computationally expensive and may require large resources, especially for repeated evaluation. Parametric model reduction offers a remedy by constructing models that are accurate over a range of parameters, yet much cheaper to evaluate. An appropriate choice of quality measure and form of the reduced model enables us to characterize these high-quality reduced models. Our first contribution is a characterization of optimal parametric reduced models and an efficient implementation to construct them. 
While this first framework assumes we have access to repeated evaluations of the full model, in some cases merely measurement data might be available. In this case, we construct a parametric model that fits the measurements in a least-squares sense. The output of this approach might be a reduced model of moderate size, which we address with a post-processing step that reduces the model size while maintaining important properties. A special case of a parameter is a delay in the model equation, e.g., arising from a feedback loop, reaction time, delayed response and various other physical phenomena. While asymptotically stable solutions eventually vanish, they might grow large before the asymptotic behavior takes over; this leads to the notion of transient behavior, which is our main focus for a simple class of delay equations. Besides the choice of an appropriate measure, we analyze the impact of the structure of the delay equation on the transient growth, which can be arbitrarily large purely through the influence of the delay.
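The transient behavior discussed above can be reproduced with a simple method-of-steps Euler simulation of a scalar delay equation. The coefficients and the discontinuous initial history below are illustrative choices only, picked to lie in the stable region: the solution first grows roughly like e^(a*tau) before the delayed feedback arrives, so the transient peak exceeds the initial value even though the solution eventually decays.

```python
import numpy as np

# Method-of-steps Euler simulation of the scalar delay equation
#   x'(t) = a x(t) + b x(t - tau)
# Coefficients and history are illustrative, not taken from the thesis.
a, b, tau = 0.3, -0.32, 3.0
dt = 0.001
n_tau = int(tau / dt)
T = 60.0
n = int(T / dt)

x = np.zeros(n + n_tau + 1)
x[:n_tau] = 0.0          # zero pre-history on [-tau, 0)
x[n_tau] = 1.0           # x(0) = 1

for i in range(n_tau, n_tau + n):
    x[i + 1] = x[i] + dt * (a * x[i] + b * x[i - n_tau])

peak = np.max(np.abs(x[n_tau:]))
print(f"transient peak |x| = {peak:.3f}, final |x| = {abs(x[-1]):.3f}")
```

For t in [0, tau] the delayed term vanishes, so x grows like e^(a*t) up to about e^0.9, roughly 2.5 times the initial value, before the stabilizing delayed feedback takes over; this finite-time growth is driven purely by the delay.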
94

On Viscous Flux Discretization Procedures For Finite Volume And Meshless Solvers

Munikrishna, N 06 1900 (has links)
This work deals with discretizing viscous fluxes in the context of unstructured data based finite volume and meshless solvers, two competing methodologies for simulating viscous flows past complex industrial geometries. The two important requirements of a viscous discretization procedure are consistency and positivity. While consistency is a fundamental requirement, positivity is linked to the robustness of the solution methodology. The following advancements are made through this work within the finite volume and meshless frameworks. Finite Volume Method: Several viscous discretization procedures available in the literature are reviewed for: (1) ability to handle general grid elements; (2) efficiency, particularly for 3D computations; (3) consistency; (4) positivity as applied to a model equation; and (5) global error behavior as applied to a model equation. While some of the popular procedures result in an inconsistent formulation, the consistent procedures are observed to be computationally expensive and also have problems associated with robustness. From a systematic global error study, we have observed that even a formally inconsistent scheme exhibits consistency in terms of global error, i.e., the global error decreases with grid refinement. This observation is important and also encouraging from the viewpoint of devising a suitable discretization scheme for viscous fluxes. This study suggests that one can relax the consistency requirement in order to gain in terms of robustness and computational cost, two key ingredients for any industrial flow solver. Some of the procedures are analysed for positivity as applied to a Laplacian, and it is found that the two requirements of a viscous discretization procedure, consistency (accuracy) and positivity, are essentially conflicting. 
Based on the review, four representative schemes are selected and used in HIFUN-2D (High resolution Flow Solver on UNstructured Meshes), an unstructured data based cell-center finite volume flow solver, to simulate standard laminar and turbulent flow test cases. From the analysis, we can advocate the use of the Green-Gauss theorem based diamond path procedure, which can render a high level of robustness to the flow solver for industrial computations. Meshless Method: An Upwind Least Squares Finite Difference (LSFD-U) meshless solver is developed for simulating viscous flows. Different viscous discretization procedures are proposed and analysed for positivity, and the procedure found to be more positive is employed. Obtaining a suitable point distribution, particularly for viscous flow computations, is one of the important components for the success of meshless solvers. In principle, meshless solvers can operate on any point distribution obtained using structured, unstructured and Cartesian meshes, but Cartesian meshing is the most natural candidate for obtaining the point distribution. Therefore, the performance of LSFD-U for simulating viscous flows using point distributions obtained from Cartesian-like grids is evaluated. While we have successfully computed laminar viscous flows, there are difficulties in solving turbulent flows. In this context, we have evolved a strategy to generate a suitable point distribution for simulating turbulent flows with a meshless solver. The strategy involves a hybrid Cartesian point distribution wherein the boundary layer region is filled with a high aspect ratio body-fitted structured mesh and the potential flow region with a unit aspect ratio Cartesian mesh. The main advantage of our solver is in terms of handling the structured and Cartesian grid interface. 
The interface algorithm is considerably simplified compared to the hybrid Cartesian mesh based finite volume methodology, by exploiting the advantages accruing from the use of a meshless solver. Cheap, simple and robust discretization procedures are evolved for both inviscid and viscous fluxes, exploiting the basic features exhibited by the hybrid point distribution. These procedures are also subjected to a positivity analysis and a systematic global error study. It should be remarked that the viscous discretization procedure employed in the structured grid block is positive and, in fact, this feature imparts the required robustness to the solver for computing turbulent flows. We have demonstrated the capability of the meshless solver LSFD-U to solve turbulent flow past complex aerodynamic configurations by solving the flow past a multi-element airfoil configuration. In our view, the success shown by this work in computing turbulent flows can be considered a landmark development in the area of meshless solvers and has great potential for industrial applications.
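As a small illustration of the positivity analysis applied to a Laplacian, the sketch below assembles a standard five-point Laplacian on a uniform grid and checks the sign structure (positive diagonal, non-positive off-diagonals) that underlies a discrete maximum principle. This is a generic model problem, not the general-grid viscous discretizations analysed in the thesis.

```python
import numpy as np

def five_point_laplacian(nx, ny, h=1.0):
    """Dense five-point Laplacian on an nx-by-ny uniform grid (Dirichlet)."""
    N = nx * ny
    A = np.zeros((N, N))
    for j in range(ny):
        for i in range(nx):
            k = j * nx + i
            A[k, k] = 4.0 / h**2
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < nx and 0 <= jj < ny:
                    A[k, jj * nx + ii] = -1.0 / h**2
    return A

A = five_point_laplacian(8, 8)
diag = np.diag(A)
off = A - np.diag(diag)
# A positive scheme has a positive diagonal and non-positive off-diagonals
# (an M-matrix structure, implying a discrete maximum principle).
print("positive diagonal:", np.all(diag > 0))
print("non-positive off-diagonals:", np.all(off <= 0))
```

On distorted or high aspect ratio elements, consistent viscous discretizations can produce positive off-diagonal entries and lose this property, which is the accuracy-versus-positivity conflict the thesis describes.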
95

Approaches to accommodate remeshing in shape optimization

Wilke, Daniel Nicolas 20 January 2011 (has links)
This study proposes novel optimization methodologies for problems that reveal non-physical step discontinuities. More specifically, it is proposed to use gradient-only techniques that do not use any zeroth-order information at all for step-discontinuous problems. A notable step-discontinuous problem is the shape optimization problem in the presence of remeshing strategies, since changes in mesh topology may - and normally do - introduce non-physical step discontinuities. These discontinuities may in turn manifest themselves as non-physical local minima in which optimization algorithms may become trapped. Conventional optimization approaches for step-discontinuous problems include evolutionary strategies and design of experiments (DoE) techniques. These conventional approaches typically rely on the exclusive use of zeroth-order information to overcome the discontinuities, but are characterized by two important shortcomings. Firstly, the computational demands of zeroth-order methods may be very high, since many function values are in general required. Secondly, the use of zeroth-order information alone does not guarantee that the algorithms will not terminate in highly unfit local minima. In contrast, the methodologies proposed herein use only first-order information. The motivation for this approach is that the associated gradient information remains accurately and uniquely computable in the presence of remeshing, notwithstanding the discontinuities. From a computational effort point of view, a gradient-only approach is comparable to conventional gradient-based techniques. In addition, the step discontinuities do not manifest themselves as local minima. / Thesis (PhD)--University of Pretoria, 2010. / Mechanical and Aeronautical Engineering / unrestricted
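A toy sketch of the gradient-only idea: the objective below carries artificial step discontinuities (standing in for remeshing jumps), while the gradient of the underlying smooth problem remains exactly computable. A plain iteration driven only by that gradient walks straight through the spurious plateau minima that could trap a function-value-based method. Everything here is an invented illustration, not the thesis' algorithms.

```python
import numpy as np

def f(x):
    """Smooth quadratic polluted by non-physical steps of height 0.3."""
    return x**2 + 0.3 * np.floor(4.0 * x)

def grad(x):
    """Gradient of the underlying smooth problem; unaffected by the steps."""
    return 2.0 * x

x = 3.0
for _ in range(200):
    x -= 0.1 * grad(x)   # uses first-order information only, never f(x)

print(f"gradient-only minimiser: x = {x:.6f}, f(x) = {f(x):.4f}")
```

Each plateau edge of the floor term creates a local minimum in f, so a method that compares function values can stall on any plateau; the gradient-only iteration converges to the minimiser of the smooth problem regardless.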
96

Adaptive modeling of plate structures / Modélisation adaptive des structures

Bohinc, Uroš 05 May 2011 (has links)
L’objectif principal de la thèse est de répondre à des questions liées aux étapes clés d’un processus de l’adaptation de modèles de plaques. Comme l’adaptativité dépend des estimateurs d’erreurs fiables, une part importante du rapport est dédiée au développement des méthodes numériques pour les estimateurs d’erreurs aussi bien dues à la discrétisation qu’au choix du modèle. Une comparaison des estimateurs d’erreurs de discrétisation d’un point de vue pratique est présentée. Une attention particulière est prêtée à la méthode des résiduels équilibrés (en anglais, "equilibrated residual method"), laquelle est potentiellement applicable aux estimations des deux types d’erreurs, de discrétisation et de modèle. Il faut souligner que, contrairement aux estimateurs d’erreurs de discrétisation, les estimateurs d’erreur de modèle sont plus difficiles à élaborer. Le concept de l’adaptativité de modèles pour les plaques est implémenté sur la base de la méthode des résiduels équilibrés et de la famille hiérarchique des éléments finis de plaques. Les éléments finis dérivés dans le cadre de la thèse comprennent aussi bien les éléments de plaques minces que les éléments de plaques épaisses. Ces derniers sont formulés en s’appuyant sur une théorie nouvelle de plaque, intégrant aussi les effets d’étirement le long de l’épaisseur. Les erreurs de modèle sont estimées via des calculs élément par élément. Les erreurs de discrétisation et de modèle sont estimées d’une manière indépendante, ce qui rend l’approche très robuste et facile à utiliser. Les méthodes développées sont appliquées sur plusieurs exemples numériques. Les travaux réalisés dans le cadre de la thèse représentent plusieurs contributions qui visent l’objectif final de la modélisation adaptative, où une procédure complètement automatique permettrait de faire un choix optimal du modèle de plaques pour chaque élément de la structure. 
/ The primary goal of the thesis is to provide some answers to the questions related to the key steps in the process of adaptive modeling of plates. Since the adaptivity depends on reliable error estimates, a large part of the thesis is devoted to the derivation of computational procedures for discretization error estimates as well as model error estimates. A practical comparison of some of the established discretization error estimates is made. Special attention is paid to the equilibrated residual method, which has the potential to be used both for discretization error and model error estimates. It should be emphasized that model error estimates are quite hard to obtain, in contrast to discretization error estimates. The concept of model adaptivity for plates is in this work implemented on the basis of the equilibrated residual method and a hierarchic family of plate finite element models. The finite elements used in the thesis range from thin plate finite elements to thick plate finite elements. The latter are based on a newly derived higher-order plate theory, which includes through-the-thickness stretching. The model error is estimated by local element-wise computations. As all the finite elements representing the chosen plate mathematical models are re-derived in order to share the same interpolation bases, the difference between the local computations can be attributed mainly to the model error. This choice of finite elements enables effective computation of the model error estimate and improves the robustness of the adaptive modeling. The discretization error can thus be computed by an independent procedure. Many numerical examples are provided as an illustration of the performance of the derived plate elements, the derived discretization error procedures and the derived modeling error procedure. 
Since the basic goal of modeling in engineering is to produce an effective model, which will produce the most accurate results with the minimum input data, the need for the adaptive modeling will always be present. In this view, the present work is a contribution to the final goal of the finite element modeling of plate structures: a fully automatic adaptive procedure for the construction of an optimal computational model (an optimal finite element mesh and an optimal choice of a plate model for each element of the mesh) for a given plate structure.
97

Optimizing the Number of Time-steps Used in Option Pricing / Optimering av Antal Tidssteg inom Optionsprissättning

Lewenhaupt, Hugo January 2019 (has links)
Calculating the price of an option commonly uses numerical methods and can be computationally heavy. In general, longer computations yield more precise results. As such, improving existing models or creating new models has been the focus of the research field. More recently, the focus has instead shifted toward creating neural networks that can predict the price of a given option directly. This thesis instead studied how the number-of-time-steps parameter can be optimized, with regard to the precision of the resulting price, and how to then predict the optimal number of time-steps for other options. The number of time-steps determines the computation time of one of the most common models in option pricing, the Cox-Ross-Rubinstein model (CRR). Two different methods for determining the optimal number of time-steps were created and tested. Both methods use neural networks to learn the relationship between the input variables and the output. The first method tried to predict the optimal number of time-steps directly. The other method instead tried to predict the parameters of an envelope around the oscillations of the option pricing method. It was discovered that the second method improved the performance of the neural networks tasked with predicting the optimal number of time-steps. It was further discovered that even though the best neural network found significantly outperformed the benchmark method, there was no significant difference in calculation times, most likely because of the range of log moneyness and prices that was used. It was also noted that the neural network tended to underestimate the parameter, which might not be a desirable property of a system in charge of estimating a price in the financial sector.
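For reference, the CRR price referred to above can be sketched directly. The parameters are illustrative; the point is that the price oscillates around the converged value (the Black-Scholes price, here about 10.45) as the number of time-steps grows, which is what motivates fitting an envelope around the oscillations.

```python
import math

# Cox-Ross-Rubinstein binomial price of a European call.
def crr_call(S0, K, r, sigma, T, n):
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * T)
    price = 0.0
    for k in range(n + 1):
        prob = math.comb(n, k) * p**k * (1 - p)**(n - k)
        price += prob * max(S0 * u**k * d**(n - k) - K, 0.0)
    return disc * price

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
for n in (10, 50, 100, 500):
    print(n, round(crr_call(S0, K, r, sigma, T, n), 4))
```

Increasing n shrinks the oscillation amplitude but increases the cost, hence the trade-off behind optimizing the number of time-steps.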
98

Geração, contração e polarização de bases gaussianas para cálculos quânticos de átomos e moléculas / Generation, contraction and polarization for gaussian basis set for quantum calculations of atoms and molecules

Guimarães, Amanda Ribeiro 10 September 2013 (has links)
Muitos grupos de pesquisa já trabalharam com o desenvolvimento de conjuntos de bases, no intuito de obter melhores resultados em tempo e custo de cálculo computacional reduzidos. Para tal finalidade, o tamanho e a precisão são fatores a ser considerados, para que o número de funções do conjunto gerado proporcione uma boa descrição do sistema em estudo, num tempo de convergência reduzido. Esta dissertação tem como objetivo apresentar os conjuntos de bases obtidos pelo Método da Coordenada Geradora, para os átomos Na, Mg, Al, Si, P, S e Cl, e avaliar a qualidade de tais conjuntos pela comparação da energia eletrônica total, em nível atômico e molecular. Foi realizada uma busca para a obtenção do melhor conjunto contraído e do melhor conjunto de funções de polarização. A qualidade do conjunto gerado foi avaliada pelo cálculo DFT-B3LYP, cujos resultados foram comparados aos valores obtidos por cálculos que utilizam funções de bases conhecidas na literatura, tais como: cc-pVXZ do Dunning e pc-n do Jensen. Pelos resultados obtidos, pode-se notar que os conjuntos de bases gerados neste trabalho, denominados MCG-3d2f, podem representar sistemas atômicos ou moleculares. Tanto os valores de energia quanto os de tempo computacional são equivalentes e, em alguns casos, melhores que os obtidos aqui com os conjuntos de bases escolhidos como referência (conjuntos de Dunning e Jensen). / Many research groups have worked on the development of basis sets in order to obtain better results at reduced computational time and cost. For this purpose, size and accuracy are the primary factors to be considered, so that the number of functions in the generated set provides a good description of the system under study within a short convergence time. 
This dissertation aims to present the basis sets obtained by the Generator Coordinate Method for the atoms Na, Mg, Al, Si, P, S and Cl, as well as to evaluate the quality of such sets by comparing the total electronic energy at the atomic and molecular levels. A search was also performed to obtain the best contracted set as well as the best set of polarization functions. The quality of the generated set was evaluated by DFT-B3LYP calculations, whose results were compared to values obtained using basis functions known in the literature, such as Dunning's cc-pVXZ and Jensen's pc-n. It can be noted from the results obtained that the basis sets generated in this study, named MCG-3d2f, can represent atomic and molecular systems well. The energy values and the computational times are equivalent and, in some cases, even better than those obtained with the basis sets chosen here as references (the Dunning and Jensen sets).
99

Calor específico do modelo de Anderson de uma impureza por grupo de renormalização numérico / Numerical Renormalization-group Computation of Specific Heats.

Costa, Sandra Cristina 24 March 1995 (has links)
Neste trabalho, calculam-se o calor específico e a entropia do Modelo de Anderson simétrico de uma impureza usando o Grupo de Renormalização Numérico (GRN). O método é baseado na discretização logarítmica da banda de condução do metal hospedeiro a qual a impureza está acoplada. Porém, esta discretização introduz oscilações nas propriedades termodinâmicas. Esta inconveniência, inerente ao método, é contornável para a suscetibilidade magnética, mas é crítica para o calor específico, restringindo o alcance do GRN. Para sobrepor essa dificuldade, é usado o novo procedimento denominado intercalado que foi desenvolvido para o cálculo da suscetibilidade magnética de modelos de duas impurezas. Para reduzir as matrizes e o tempo computacional, é usado, também, o operador carga axial, recentemente definido no contexto do Modelo de Kondo de duas impurezas, e que é conservado pelo Hamiltoniano de Anderson simétrico. As curvas obtidas são comparadas com resultados exatos obtidos por ansatz de Bethe e pelo Modelo de Nível Ressonante. / The specific heat and the entropy of the one-impurity symmetric Anderson Model are calculated using the Numerical Renormalization Group (NRG). The heart of the method is the logarithmic discretization of the conduction band of the host metal to which the impurity is coupled. However, this discretization, inherent in the method, introduces oscillations in the thermodynamic properties. For the susceptibility this is not so critical, but for the specific heat the usual calculation is prohibitive, restricting the reach of the NRG. To overcome this difficulty, we use the new procedure, called interleaved, that was developed to calculate the susceptibility of two-impurity models. In order to reduce the matrices and the computation time, use is also made of the axial charge operator, recently defined in the two-impurity Kondo Model context, which is conserved by the symmetric Anderson Hamiltonian. The curves obtained are compared with exact results from the Bethe ansatz and from the Resonant Level Model.
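The logarithmic discretization at the heart of the method can be illustrated in a few lines: with a parameter Λ > 1, the (half) band is split into intervals [Λ^-(n+1), Λ^-n], exponentially finer near the Fermi level. The sketch below only shows this interval structure; Λ = 2 is a typical textbook choice, not necessarily the one used in the thesis.

```python
import numpy as np

# Logarithmic discretization of a flat conduction band (positive half shown):
# interval n is [Lambda**-(n+1), Lambda**-n], accumulating at the Fermi level.
L = 2.0                               # discretization parameter Lambda
edges = L ** -np.arange(8)            # 1, 1/2, 1/4, ...
intervals = list(zip(edges[1:], edges[:-1]))
for lo, hi in intervals:
    print(f"[{lo:.4f}, {hi:.4f}]  width = {hi - lo:.4f}")
```

The exponentially shrinking intervals are what let the NRG resolve all energy scales down to the Kondo scale; the price is precisely the discretization-induced oscillations in the thermodynamic quantities discussed above.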
100

Multiobjective security constrained optimal power flow with discrete variables

Ferreira, Ellen Cristina 11 May 2018 (has links)
This work investigates and develops continuous and discrete optimization strategies for Multiobjective Security Constrained Optimal Power Flow (SCOPF) problems, incorporating control variables associated with in-phase transformer taps and with the switching of capacitor banks and shunt reactors. A multiobjective optimization model is formulated as a weighted sum whose objectives are the minimization of active power losses in the transmission lines and of an additional term that provides a greater reactive power margin to the system. Controls associated with taps and shunts are modeled either as fixed quantities or as continuous and discrete variables; in the latter case, auxiliary functions of polynomial and sinusoidal types are applied for discretization purposes. The complete problem is solved via the Evolutionary Particle Swarm Optimization (EPSO) and Differential Evolutionary Particle Swarm Optimization (DEEPSO) metaheuristics. The algorithms, implemented in MatLab R2013a, were applied to the IEEE 14-, 30-, 57-, 118- and 300-bus test systems and validated in terms of the diversity and quality of the generated solutions and of computational complexity. The results demonstrate the potential of the proposed model and solution strategies as support tools for the decision-making process in Power System Security Analysis, maximizing the possibilities for preventive action so as to reduce post-contingency emergencies.
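The weighted-sum scalarization described above combines the two objectives into a single scalar to be minimized. A minimal sketch, in which the loss values, the reactive-margin term, and the weight `w` are illustrative placeholders rather than the thesis's actual formulation:

```python
def weighted_sum_objective(losses, reactive_term, w=0.7):
    """Scalarize two objectives via a convex combination.
    `losses` and `reactive_term` are the two objective values for one
    candidate operating point; w in [0, 1] trades one off against the
    other (w = 1 minimizes losses only)."""
    return w * losses + (1.0 - w) * reactive_term

# Hypothetical (losses, reactive term) pairs for three candidate
# operating points; the metaheuristic would generate these.
candidates = [(3.2, 0.8), (2.9, 1.1), (3.5, 0.5)]
best = min(candidates, key=lambda c: weighted_sum_objective(*c, w=0.7))
# Sweeping w over [0, 1] and re-minimizing traces an approximation of
# the Pareto front between the two objectives.
```

In the thesis this scalar objective is what the EPSO/DEEPSO swarms evaluate for each particle; the sketch above only shows the scalarization step, not the swarm update rules.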
