51. Hybrid Numerical Integration Scheme for Highly Oscillatory Dynamical Systems. Gil, Gibin (January 2013)
Computational efficiency in solving the dynamics of highly oscillatory systems is an important issue, because explicit numerical integration algorithms require a small step size. A system is considered highly oscillatory if it contains a fast solution that varies regularly about a slow solution. In multibody systems, stiff force elements and contacts between bodies can make a system highly oscillatory. Standard explicit numerical integration methods must take a very small step size to satisfy the absolute stability condition for all eigenvalues of the system, so the computational cost is dictated by the fast solution. In this research, a new hybrid integration scheme is proposed in which the local linearization method is combined with a conventional integration method such as fourth-order Runge-Kutta. In this approach, the system is partitioned into fast and slow subsystems. The two subsystems are then transformed into a reduced system and a boundary-layer system using singular perturbation theory. The reduced system is solved by the fourth-order Runge-Kutta method, while the boundary-layer system is solved by the local linearization method. This hybrid scheme handles the coupling between the fast and slow subsystems efficiently: unlike other multi-rate or multi-method schemes, no extrapolation or interpolation is required to deal with the coupling. Most of the coupling effect is accounted for by the reduced (quasi-steady-state) system, while the minor transient effect is taken into account by averaging. The absolute stability region of this hybrid scheme is derived and shown to be almost independent of the fast variables. Thus, the step size is not dictated by the fast solution when a highly oscillatory system is solved, and computational efficiency improves in turn. The advantage of the proposed hybrid scheme is validated through several dynamic simulations of a vehicle system including a flexible tire model. The results reveal that the hybrid scheme can reduce the computation time of the vehicle dynamic simulation significantly while attaining comparable accuracy.
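The idea can be illustrated on a toy fast/slow system. The sketch below is not the author's implementation; the test problem, partitioning, and step sizes are assumptions for illustration. It advances a slow variable with classical RK4 while treating a stiff fast variable with a local-linearization (exponential) step, which remains stable at step sizes far above the explicit stability limit of the fast dynamics.

```python
import numpy as np

lam = 1.0e4          # fast rate: explicit RK4 would need h on the order of 1/lam

def f_slow(t, x, y):
    # slow dynamics, weakly coupled to the fast variable
    return -x + 0.1 * y + np.sin(t)

def g(x):
    # quasi-steady state that the fast variable relaxes toward
    return np.cos(x)

def rk4_step(t, x, y, h):
    # classical RK4 on the slow variable, fast variable frozen over the step
    k1 = f_slow(t, x, y)
    k2 = f_slow(t + h/2, x + h/2 * k1, y)
    k3 = f_slow(t + h/2, x + h/2 * k2, y)
    k4 = f_slow(t + h, x + h * k3, y)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def local_linearization_step(x, y, h):
    # fast subsystem y' = -lam*(y - g(x)); with x frozen this is linear,
    # so the exponential update is exact and unconditionally stable
    return g(x) + (y - g(x)) * np.exp(-lam * h)

t, x, y, h = 0.0, 1.0, 0.0, 0.01   # h chosen for the slow dynamics only
for _ in range(1000):
    x_new = rk4_step(t, x, y, h)
    y = local_linearization_step(x, y, h)
    x, t = x_new, t + h
print(x, y)   # y tracks g(x): the boundary layer has collapsed
```

Note how the step size is set by the slow subsystem alone, which is the efficiency gain the abstract describes.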
52. Numerical Simulations of Giant Planetary Core Formation. Ngo, Henry (28 August 2012)
In the widely accepted core accretion model of planet formation, small rocky and/or icy bodies (planetesimals) accrete to form protoplanetary cores. Gas giant planets are believed to have solid cores that must reach a critical mass, ∼10 Earth masses (ME), after which there is rapid inflow of gas from the gas disk. In order to accrete the gas giants' massive atmospheres, this step must occur within the gas disk's lifetime (1–10 million years).
Numerical simulations of solid body accretion in the outer Solar System are performed using two integrators. The goal of these simulations is to investigate the effects of important dynamical processes instead of specifically recreating the formation of the Solar System’s giant planets.
The first integrator uses the Symplectic Massive Body Algorithm (SyMBA) with a modification to allow for planetesimal fragmentation. Due to computational constraints, this code has some physical limitations, specifically that the planetesimals themselves cannot grow, so protoplanets must be seeded in the simulations. The second integrator, the Lagrangian Integrator for Planetary Accretion and Dynamics (LIPAD), is more computationally expensive. However, its treatment of planetesimals allows for growth of potential giant planetary cores from a disk consisting only of planetesimals. Thus, this thesis’ preliminary simulations use the first integrator to explore a wider range of parameters while the main simulations use LIPAD to further investigate some specific processes.
These simulations are the first use of LIPAD to study giant planet formation, and they identify a few important dynamical processes affecting core formation. Without any fragmentation, cores tend to grow to ∼2ME. When planetesimal fragmentation is included, the resulting fragments are easier to accrete and larger cores are formed (∼4ME). However, in half of the runs, the fragments force the entire system to migrate towards the Sun. In the other half, outward migration via scattering off a large number of planetesimals helps the protoplanets grow and survive. Finally, in a preliminary set of simulations including protoplanetary fragmentation, very few collisions are found to result in accretion, so it is difficult for any cores to form. / Thesis (Master, Physics, Engineering Physics and Astronomy) -- Queen's University, 2012.
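SyMBA and LIPAD are specialised symplectic N-body codes. As a minimal illustration of the underlying numerical idea (a generic kick-drift-kick leapfrog on a two-body problem, not either code's algorithm; the orbit and step size are assumptions), the sketch below shows the long-term energy behaviour that makes symplectic integrators the standard choice for planetary accretion simulations:

```python
import numpy as np

GM = 1.0  # gravitational parameter of the central body (code units)

def accel(r):
    # point-mass gravity on a test particle
    return -GM * r / np.linalg.norm(r)**3

def leapfrog_step(r, v, dt):
    # kick-drift-kick: second order and symplectic
    v = v + 0.5 * dt * accel(r)
    r = r + dt * v
    v = v + 0.5 * dt * accel(r)
    return r, v

r = np.array([1.0, 0.0])          # circular-orbit initial conditions
v = np.array([0.0, 1.0])
dt = 0.01
E0 = 0.5 * v @ v - GM / np.linalg.norm(r)
for _ in range(100_000):          # many orbital periods
    r, v = leapfrog_step(r, v, dt)
E = 0.5 * v @ v - GM / np.linalg.norm(r)
print(f"relative energy drift: {abs(E - E0) / abs(E0):.2e}")  # stays bounded
```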
53. New formulae for higher order derivatives and a new algorithm for numerical integration. Slevinsky, Richard (date unknown)
No description available.
54. Advanced numerical integration in three-dimensional boundary element analysis. Souza, Calebe Paiva Gomes de (6 June 2007)
One of the main problems with the Boundary Element Method (BEM) is the evaluation of singular and quasi-singular integrals arising from Kelvin's fundamental solutions in displacement and traction. A growing body of research focuses on the numerical evaluation of BEM integrals, since this is a crucial issue for achieving highly accurate results. This work presents an innovative numerical integration procedure for three-dimensional analysis with the BEM. The proposed technique has three important features. First, it gives an accurate representation of the effective term of singularity in the radius function, which measures the distance between the source point and a two-dimensional boundary element. The correct evaluation of this term captures, without approximation, the actual singular behavior of the radius function, which is the true source of singularity and quasi-singularity in the fundamental solutions. Second, the technique is based on a semi-analytical procedure: for each effective singularity term, it uses a quadrature scheme whose specific weights are evaluated analytically. Last, the technique provides a unified treatment of singular, quasi-singular and regular integrals.
The technique was implemented in the computational platform developed by the GoBEM group, using object-oriented programming in the Java programming language. Its implementation in this platform opens new possibilities for future research on three-dimensional BEM, e.g. visualization of isosurfaces in three-dimensional analysis without any domain discretization, automatic elasto-plastic analysis, and accurate modeling of geomechanical problems. Validation of the proposed technique was carried out using three procedures: efficiency analysis of the singularity effective term, convergence tests for the specific numerical integration, and numerical examples using the BEM in engineering problems.
55. Time course analysis of complex enzyme systems. Rentergent, Julius (January 2015)
In studies of enzyme kinetics, reaction time courses are often condensed into a single set of initial rates describing the rate at the start of the reaction. This set is then analysed with the Henri-Michaelis-Menten equation. However, this process necessarily removes information from the experimental data and diminishes its statistical significance by reducing the number of available data points. Further, if the examined system does not approach steady state rapidly, applying the steady-state assumption can lead to flawed conclusions. Here, the analysis of two complex enzyme systems by numerical integration of kinetic rate equations is demonstrated.
DNA polymerase catalyses the synthesis of DNA in a reaction that involves two substrates, DNA template and dNTP, both of which are highly heterogeneous in nature. The tight binding of DNA to DNA polymerase and its polymer properties prohibit the application of the initial-rate approach. By combining an explicit DNA binding step with steady-state dNTP incorporation on a template of finite length, the DNA binding parameters and the steady-state dNTP incorporation parameters were estimated from processive polymerisation data in a global regression analysis. This approach is described in Chapter 2, and the results are in good agreement with previously published values. Further properties were investigated in studies of the temperature dependence and solvent isotope dependence of the kinetics. The processive polymerisation of DNA template was monitored using the fluorophore PicoGreen in a simple and inexpensive method described in Chapter 3.
The catalytic cycle of ethanolamine ammonia lyase (EAL) involves the homolysis of the Co-C bond within the intrinsic B12 cofactor. This homolysis results in the formation of a Co(II)-adenosyl radical intermediate, which can be monitored using stopped-flow spectroscopy. The stopped-flow transients observed for EAL and related enzymes have long been difficult to analyse and interpret, possibly due to rapid methyl group rotation on the substrate. In Chapter 4 of this thesis we rationalise this behaviour using numerical integration of the rate equations of a branched 16-state kinetic model to fit the stopped-flow transients in a global regression analysis. We determine some intrinsic rate constants and show that the initial hydrogen atom transfer step is unlikely to have an inflated primary kinetic isotope effect, despite previous claims. More generally, this study demonstrates that the numerical integration analysis used here is likely to be applicable to a broad range of enzyme reaction kinetics.
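A minimal sketch of the approach, with a generic Michaelis-Menten mechanism and illustrative rate constants (the mechanism and parameters are assumptions, far simpler than the branched 16-state model in the thesis): the full mass-action system is integrated numerically, so the entire time course is available for regression rather than only the initial rate.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Full mass-action scheme E + S <-> ES -> E + P, integrated numerically
# instead of reducing the time course to an initial rate.
k1, k_1, k2 = 1.0e6, 50.0, 10.0      # illustrative rate constants

def rates(t, y):
    e, s, es, p = y
    v_bind = k1 * e * s - k_1 * es   # reversible substrate binding
    v_cat = k2 * es                  # irreversible catalysis
    return [-v_bind + v_cat, -v_bind, v_bind - v_cat, v_cat]

y0 = [1e-7, 1e-4, 0.0, 0.0]          # [E]0, [S]0, [ES]0, [P]0 in molar
sol = solve_ivp(rates, (0.0, 200.0), y0, method="LSODA",
                rtol=1e-8, atol=1e-12, dense_output=True)

# The whole product trace sol.sol(t)[3] can be fitted in a global
# regression; the same idea extends to branched multi-state models.
print(sol.y[3, -1])                  # [P] at t = 200 s
```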
56. Mixed, Nonsplit, Extended Stability, Stiff Integration of Reaction Diffusion Equations. Alzahrani, Hasnaa H. (26 July 2016)
A tailored integration scheme is developed to treat stiff reaction-diffusion problems. The construction adapts a stiff solver, namely VODE, to treat reaction implicitly, together with explicit treatment of diffusion. The second-order Runge-Kutta-Chebyshev (RKC) scheme is adjusted to integrate diffusion. The spatial operator is discretised by second-order finite differences on a uniform grid. The overall solution is advanced over S fractional stiff integrations, where S corresponds to the number of RKC stages. The behavior of the scheme is analyzed by applying it to three simple problems. The results show that it achieves second-order accuracy, thus preserving the formal accuracy of the original RKC. The presented development sets the stage for future extensions, particularly to multidimensional reacting flows with detailed chemistry.
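A minimal sketch of the implicit-reaction/explicit-diffusion idea, not the thesis's scheme: pointwise backward Euler stands in for VODE, a single forward-Euler stage stands in for the RKC stages, and the 1D test problem and rates are assumptions.

```python
import numpy as np

# 1D reaction-diffusion u_t = D*u_xx + R(u), split within each step:
# reaction handled implicitly (stand-in for VODE), diffusion explicitly
# (stand-in for the RKC stages).
D, nx, L = 1e-3, 101, 1.0
dx = L / (nx - 1)
u = np.exp(-100 * (np.linspace(0, L, nx) - 0.5) ** 2)  # initial bump

def reaction(u):
    # logistic source term (kept only mildly stiff so plain Newton converges)
    return 10.0 * u * (1.0 - u)

def reaction_step(u, dt, iters=20):
    # backward Euler solved pointwise by Newton iteration
    v = u.copy()
    for _ in range(iters):
        f = v - u - dt * reaction(v)
        df = 1.0 - dt * 10.0 * (1.0 - 2.0 * v)
        v -= f / df
    return v

def diffusion_step(u, dt):
    # second-order central differences; endpoint values stay at zero
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return u + dt * D * lap

dt = 0.4 * dx**2 / D      # inside the explicit diffusion stability limit
for _ in range(500):
    u = reaction_step(u, dt)
    u = diffusion_step(u, dt)
print(u.max())
```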
57. High Order Implementation in Integral Equations. Marshall, Joshua P. (9 August 2019)
The present work makes a number of contributions to the areas of numerical integration, singular integrals, and boundary element methods. The first contribution is an elemental distortion technique, based on the Duffy transformation, used to improve efficiency in the numerical integration of nearly hypersingular integrals. Results show that this method can reduce quadrature expense by up to 75 percent relative to the standard Duffy transformation. The second contribution is an improvement to the integration of weakly singular integrals, using regularization to smooth the integrand; the method may reduce errors by several orders of magnitude for the same quadrature order. The final contribution investigates regularization applied to hypersingular integrals in the context of the three-dimensional boundary element method (BEM). Using the simple-solutions technique, the BEM is reduced to a weakly singular form that directly supports numerical integration, and the results indicate that the method is more efficient than the state of the art.
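The standard Duffy transformation that the first contribution builds on can be shown in a few lines. In the sketch below (a toy 1/r kernel on a reference triangle, not the thesis's distorted variant), the mapping's Jacobian cancels the vertex singularity exactly, so low-order Gauss quadrature converges rapidly:

```python
import numpy as np

# Duffy transformation: map the unit square onto the triangle with
# vertices (0,0), (1,0), (1,1) via x = u, y = u*v. The Jacobian u
# cancels the 1/r singularity at the origin exactly.
def duffy_integrate(kernel, n):
    x, w = np.polynomial.legendre.leggauss(n)
    u = 0.5 * (x + 1.0)           # shift Gauss nodes to [0, 1]
    wu = 0.5 * w
    total = 0.0
    for ui, wi in zip(u, wu):
        for vj, wj in zip(u, wu):
            total += wi * wj * ui * kernel(ui, ui * vj)
    return total

kernel = lambda x, y: 1.0 / np.hypot(x, y)   # toy 1/r kernel
exact = np.arcsinh(1.0)                      # closed form for this triangle
for n in (2, 4, 8):
    approx = duffy_integrate(kernel, n)
    print(f"n = {n}:  rel. error {abs(approx - exact) / exact:.1e}")
# After the mapping the integrand reduces to 1/sqrt(1 + v**2), which is
# smooth, so even n = 2 is already accurate.
```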
58. Using Symmetry to Accelerate Materials Discovery. Morgan, Wiley Spencer (1 April 2019)
Computational methods are commonly used by materials scientists to make predictions about materials. These methods can achieve in hours what would take days or weeks to accomplish in a lab. However, there are limits to what computational methods can do and how accurate their predictions are.
A limiting factor for computational materials science is the size of the search space: the space of potential materials is infinite. Selecting specific systems of elements on a fixed lattice reduces the number of possible arrangements of atoms in the lattice to a finite number, but this number can still be very large. Additionally, the list of arrangements will contain duplicates, i.e., two different atomic arrangements can be equivalent under a rotation or translation of the lattice. Using symmetry to eliminate the duplicates saves time and resources. To ensure that the final list of unique structures will fit into computer memory, it is also useful to know how many unique arrangements there are before actually finding them; for this reason, the Pólya enumeration algorithm was created to count the unique arrangements before enumerating them. A new atomic enumeration algorithm has also been implemented in the enumlib package, optimized to find the symmetrically unique arrangements of systems with large amounts of configurational freedom, such as high-entropy alloys, which have been too computationally expensive for other algorithms.
A popular computational method in materials science is Density Functional Theory (DFT). DFT codes perform first-principles calculations, computing the electronic energy by numerical integration. It is well known that the accuracy of these integrals depends heavily on the number of sample points (k-points) used. We have conducted a detailed study of how k-point sampling methods affect the accuracy of DFT calculations. The study shows that the most efficient k-point grids are those with the fewest symmetrically distinct k-points; we call these general regular (GR) grids. GR grids are, however, difficult to generate, requiring a search across many possible grids. To make GR grids more accessible to the DFT community, we have implemented an algorithm that finds the grid with the fewest symmetrically distinct k-points in a matter of seconds.
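The counting-before-enumerating idea rests on Burnside's lemma, the core of Pólya enumeration: the number of symmetrically distinct arrangements equals the average number of arrangements fixed by each symmetry operation. The sketch below applies it to a toy model (k species on an N-site ring under rotations, an assumption much simpler than a crystal lattice) and checks the count against brute-force orbit enumeration:

```python
from itertools import product

# Burnside count: average, over the symmetry group, of the number of
# arrangements each operation leaves fixed.
def distinct_arrangements(n_sites, n_species):
    total = 0
    for shift in range(n_sites):                 # each rotation of the ring
        for arr in product(range(n_species), repeat=n_sites):
            total += (arr[shift:] + arr[:shift] == arr)
    return total // n_sites

# Brute-force check: keep one canonical representative per orbit.
def brute_force(n_sites, n_species):
    seen = set()
    for arr in product(range(n_species), repeat=n_sites):
        seen.add(min(arr[s:] + arr[:s] for s in range(n_sites)))
    return len(seen)

print(distinct_arrangements(6, 2), brute_force(6, 2))  # both print 14
```

Knowing the count in advance (14 here, versus 64 raw arrangements) is what lets an enumeration code preallocate memory and decide whether the full list is feasible, as the abstract describes.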
59. The development of a PC-based software to solve M/M/1 and M/M/S queueing systems by using a numerical integration technique. Ho, Jinchun (January 1994)
No description available.
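No abstract is available, but the title points at the standard approach of obtaining transient queueing behaviour by numerically integrating the Kolmogorov forward equations. A generic sketch of that technique (the truncation, rates, and solver choice are assumptions, not details from the thesis):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Transient M/M/1 queue: integrate the Kolmogorov forward equations
# for the state probabilities on a truncated state space 0..N-1.
lam_arr, mu, N = 0.8, 1.0, 50        # arrival rate, service rate, truncation

def forward_eqs(t, p):
    dp = np.zeros_like(p)
    for n in range(N):
        out = (lam_arr if n < N - 1 else 0.0) + (mu if n > 0 else 0.0)
        dp[n] -= out * p[n]
        if n > 0:
            dp[n] += lam_arr * p[n - 1]   # arrival brings the system to n
        if n < N - 1:
            dp[n] += mu * p[n + 1]        # service completion brings it to n
    return dp

p0 = np.zeros(N)
p0[0] = 1.0                              # start with an empty system
sol = solve_ivp(forward_eqs, (0.0, 50.0), p0, method="BDF", rtol=1e-8)
p = sol.y[:, -1]
print("mean number in system at t = 50:", (np.arange(N) * p).sum())
# With lam/mu = 0.8 this approaches the steady-state mean rho/(1-rho) = 4.
```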
60. Acceleration Methods of Discontinuous Galerkin Integral Equation for Maxwell's Equations. Lee, Chung Hyun (15 September 2022)
No description available.