81

A Theoretical and Experimental Investigation of Nonlinear Vibrations of Buckled Beams

Lacarbonara, Walter 27 February 1997 (has links)
There is a need for reliable methods to determine approximate solutions of nonlinear continuous systems. Recently, it has been proved that finite-degree-of-freedom Galerkin-type discretization procedures applied to some distributed-parameter systems may fail to predict the correct dynamics. By contrast, direct procedures yield reliable approximate solutions. Starting from these results and extending some of these concepts and procedures, we compare the outcomes of these two approaches (the Galerkin discretization and the direct application of a reduction method to the original governing equations) with experimental results. The nonlinear planar vibrations of a buckled beam around its first buckling mode shape are investigated. Frequency-response curves characterizing single-mode responses of the beam under a primary resonance are generated using both approaches and contrasted with experimentally obtained frequency-response curves. It is shown that discretization leads to erroneous quantitative as well as qualitative results in certain ranges of the buckling level, whereas the direct approach predicts the correct dynamics of the system. / Master of Science
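The single-mode reduction discussed above ultimately produces a forced oscillator with quadratic and cubic nonlinearities whose frequency-response curve is swept around the primary resonance. A minimal sketch of that idea, assuming a hypothetical reduced-model form and coefficients rather than the discretized or directly reduced equations derived in the thesis, is shown below.

```python
# Hypothetical single-mode (Galerkin-type) reduced model of a buckled beam:
#   q'' + 2*zeta*w0*q' + w0^2*q + a2*q^2 + a3*q^3 = f*cos(Omega*t)
# Sweep Omega near w0 (primary resonance) and record steady-state amplitudes.
import numpy as np
from scipy.integrate import solve_ivp

zeta, w0, a2, a3, f = 0.01, 1.0, 0.5, 1.0, 0.02   # illustrative coefficients only

def rhs(t, y, Omega):
    q, qd = y
    return [qd, -2*zeta*w0*qd - w0**2*q - a2*q**2 - a3*q**3 + f*np.cos(Omega*t)]

freqs = np.linspace(0.8, 1.2, 21)
amplitudes = []
for Omega in freqs:
    T = 2*np.pi / Omega
    sol = solve_ivp(rhs, (0.0, 200*T), [0.0, 0.0], args=(Omega,),
                    t_eval=np.linspace(150*T, 200*T, 2000), rtol=1e-6, atol=1e-9)
    amplitudes.append(0.5*(sol.y[0].max() - sol.y[0].min()))  # steady-state amplitude

for Om, A in zip(freqs, amplitudes):
    print(f"Omega = {Om:.3f}   amplitude ~ {A:.4f}")
```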
82

Element failure probability of soil slope under consideration of random groundwater level

Li, Z., Chen, Y., Guo, Yakun, Zhang, X., Du, S. 28 April 2021 (has links)
Yes / The instability of soil slopes is directly related to both the shear parameters of the soil material and the groundwater level, both of which usually involve some uncertainty. In this study, a novel method, the element failure probability method (EFP), is proposed to analyse the failure of soil slopes. Based on the upper bound theory, finite element discretization, and the stochastic programming theory, an upper bound stochastic programming model is established by simultaneously considering the randomness of shear parameters and groundwater level to analyse the reliability of slopes. The model is then solved by using the Monte Carlo method based on the random shear parameters and groundwater levels. Finally, a formula is derived for the element failure probability (EFP) based on the safety factors and velocity fields of the upper bound method. The probability of a slope failure can be calculated by using the safety factor, and the distribution of failure regions in space can be determined by using the location information of the element. The proposed method is validated by using a classic example. This study has theoretical value for further research attempting to advance the application of plastic limit analysis to analyse slope reliability. / National Natural Science Foundation of China (grant no. 51564026), the Research Foundation of Kunming University of Science and Technology (grant no. KKSY201904006) and the Key Laboratory of Rock Mechanics and Geohazards of Zhejiang Province (grant no. ZJRM-2018-Z-02).
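As a rough illustration of the Monte Carlo step described above, the sketch below samples random shear parameters and a random groundwater level and estimates a failure probability. It uses the classical infinite-slope safety factor as a stand-in for the paper's upper-bound finite-element model, and all parameter distributions are hypothetical.

```python
# Simplified Monte Carlo illustration of slope failure probability with random
# shear strength and groundwater level, using the infinite-slope safety factor.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = np.radians(30.0)               # slope angle
z, gamma, gamma_w = 5.0, 19.0, 9.81   # sliding depth [m], unit weights [kN/m^3]

c = rng.normal(10.0, 2.0, n)                 # cohesion [kPa], hypothetical distribution
phi = np.radians(rng.normal(30.0, 3.0, n))   # friction angle, hypothetical distribution
m = rng.uniform(0.0, 1.0, n)                 # saturated fraction (random water table)

tau = gamma * z * np.sin(beta) * np.cos(beta)                            # driving shear stress
resist = c + (gamma * z - gamma_w * m * z) * np.cos(beta)**2 * np.tan(phi)
fs = resist / tau                                                        # safety factor samples
print("P(failure) = P(FS < 1) ~", np.mean(fs < 1.0))
```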
83

Effects of Design Space Discretization on Constraint Based Design Space Exploration / Effekter av designrymdsdiskretisering på villkorsbaserad designrymdsutforskning

Karlsson, Ludwig January 2023 (has links)
Design Space Exploration (DSE) is the exploration of a space of possible designs with the goal of finding some optimal design according to some constraints and criteria. Within embedded systems design, automated DSE in particular can allow the system designer to efficiently find good solutions in highly complex design spaces. One particular tool for performing automated DSE is IDeSyDe, which uses Constraint Programming (CP) and constraint optimization for modelling and optimization. The constraint models of DSE often include some real-valued parameters, but optimized CP solvers typically require integer arguments. This makes it necessary to discretize the problem in order to make the approach useful in practice, effectively limiting the size of the search space significantly. The effects of this discretization procedure on the quality of the solutions have not previously been well studied. An investigation into how this kind of discretization affects the approximate solutions could make the approach more rigorous, and possibly also uncover exploitable details that could facilitate the development of even more efficient algorithms. This project presents a convergence proof based in CP and multiresolution analysis (MRA), including a practically useful error bound for solutions obtained with different discretizations. In particular, the mapping and scheduling of Synchronous Data Flow (SDF) models for streaming applications onto tile-based multiple processor system-on-chip platforms with a common time-division multiplexing bus interconnect is studied. The theoretical results are also verified using IDeSyDe for a few different configurations of applications and platforms. It can be seen that the experiments behave as predicted, with first-order convergence in total error and adherence to the bound. / Design space exploration is the systematic exploration of a space of possible designs with the aim of finding good or optimal solutions that optimize some objective while satisfying requirements and constraints. Automated design space exploration has in particular seen development for applications in embedded systems design, where the ever-increasing complexity of modern platforms has motivated the development of new methods. Two major components are needed to apply design space exploration to embedded systems design: a model of the system and an optimization process. Depending on the situation, system models range from detailed transistor-level simulations to high-level analytical models at the application level or above. Detailed simulations make it possible to evaluate a given solution very accurately, but at a high computational cost. With analytical models it is instead cheap to evaluate individual solutions, but at the expense of accuracy. Likewise, different optimization processes can be used: faster approximate algorithms can find solutions relatively quickly but without guarantees of optimality, whereas more exhaustive algorithms typically require substantial computational power. One tool for automated design space exploration is IDeSyDe. IDeSyDe uses constraint-based models and exhaustive search through Branch and Bound. Optimized algorithmic solvers for constraint programming problems often require integer parameters, whereas models for design space exploration often contain continuous parameters. Because of this, it is often necessary to discretize the problem in order to find solutions efficiently.
Since a discretization restricts the set of solutions in the search space, such a reformulation risks removing even optimal solutions. A design space exploration algorithm that relies on discretization of the design space must therefore generally be regarded as an approximate algorithm. How such a discretization affects the solutions -- that is, how close the approximate solutions can be expected to come to the optimal solution without discretization -- has, however, not been studied in closer detail. A better understanding of how discrete, approximate problems and solutions relate to their exact counterparts can lend the approach more rigour. An investigation of the underlying mathematics also has the potential to reveal other relationships and structures that could be used to develop better or more efficient algorithms. This report presents a convergence proof based on constraint programming and multiresolution analysis, with an error bound expressed in terms of problem-instance-specific parameters and a discretization parameter. The proof is developed for use with IDeSyDe and is therefore limited to a combination of models currently supported by the tool, namely streaming data flow applications described as Synchronous Data Flow (SDF) models together with a tile-based model for multiple processor system-on-chip (MPSoC) platforms with a shared time-division multiplexed bus for communication between the processor tiles. The theoretical results are verified and applied to several example cases computed with IDeSyDe, where the convergence is studied experimentally.
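A toy sketch of the central issue, restricting a continuous design parameter to a grid and watching how the achievable optimum degrades with the grid spacing, is given below; the objective and feasible interval are made up and unrelated to IDeSyDe's constraint models.

```python
# Toy illustration: restrict a continuous design parameter to a grid of spacing h
# and compare the best achievable objective with the continuous optimum. For a
# Lipschitz-continuous objective the gap is bounded by a constant times h, i.e. it
# shrinks at first order, mirroring the first-order behaviour reported above.
import numpy as np

x_star = np.sqrt(2.0) / 2.0                  # continuous optimum (made up)
objective = lambda x: np.abs(x - x_star) + 1.0
f_star = objective(x_star)

for k in range(2, 9):
    h = 2.0 ** (-k)                          # discretization step
    grid = np.arange(0.0, 2.0 + h, h)        # designs reachable after discretization
    gap = objective(grid).min() - f_star     # discretization-induced loss
    print(f"h = {h:.6f}   gap = {gap:.6f}   first-order bound h/2 = {h/2:.6f}")
```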
84

Uncertainties in the Solutions to Boundary Element Method: An Interval Approach

Zalewski, Bartlomiej Franciszek 04 June 2008 (has links)
No description available.
85

Modeling of turbulent mixing in combustion LES

Jain, Abhishek January 2017 (has links)
No description available.
86

Code Verification and Numerical Accuracy Assessment for Finite Volume CFD Codes

Veluri, Subrahmanya Pavan Kumar 30 August 2010 (has links)
A detailed code verification study of an unstructured finite volume Computational Fluid Dynamics (CFD) code is performed. The Method of Manufactured Solutions is used to generate exact solutions for the Euler and Navier-Stokes equations to verify the correctness of the code through order of accuracy testing. The verification testing is performed on different mesh types which include triangular and quadrilateral elements in 2D and tetrahedral, prismatic, and hexahedral elements in 3D. The requirements of systematic mesh refinement are discussed, particularly with regard to unstructured meshes. Different code options verified include the baseline steady-state governing equations, transport models, turbulence models, boundary conditions and unsteady flows. Coding mistakes, algorithm inconsistencies, and mesh quality sensitivities uncovered during the code verification are presented. In recent years, there has been significant work on the development of algorithms for the compressible Navier-Stokes equations on unstructured grids. One of the challenging tasks during the development of these algorithms is the formulation of consistent and accurate diffusion operators. The robustness and accuracy of diffusion operators depend on mesh quality. A survey of diffusion operators for compressible CFD solvers is conducted to understand different formulation procedures for diffusion fluxes. A patch-wise version of the Method of Manufactured Solutions is used to test the accuracy of selected diffusion operators. This testing of diffusion operators is limited to cell-centered finite volume methods which are formally second order accurate. These diffusion operators are tested and compared on different 2D mesh topologies to study the effect of mesh quality (stretching, aspect ratio, skewness, and curvature) on their numerical accuracy. Quantities examined include the numerical approximation errors and order of accuracy associated with face gradient reconstruction. From the analysis, defects in some of the numerical formulations are identified along with some robust and accurate diffusion operators. / Ph. D.
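A minimal sketch of the order-of-accuracy testing workflow described above is given below, using a 1D Poisson problem with a second-order central-difference scheme as a stand-in for the finite-volume CFD code; the manufactured solution and boundary data are chosen purely for illustration.

```python
# Method of Manufactured Solutions: pick u, derive its source term analytically,
# solve the discrete problem on successively refined grids, and check that the
# observed order of accuracy matches the formal order of the scheme (here 2).
import numpy as np

def u_exact(x):                 # manufactured solution (chosen, not derived)
    return np.sin(2.0 * x) + 0.5 * x

def source(x):                  # s = -u_exact'' obtained analytically
    return 4.0 * np.sin(2.0 * x)

def solve(n):
    x = np.linspace(0.0, np.pi, n + 1)
    h = x[1] - x[0]
    A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    b = source(x[1:-1])
    b[0]  += u_exact(x[0])  / h**2      # Dirichlet data taken from the manufactured solution
    b[-1] += u_exact(x[-1]) / h**2
    u = np.linalg.solve(A, b)
    return np.max(np.abs(u - u_exact(x[1:-1])))   # discretization error (infinity norm)

errors = [solve(n) for n in (16, 32, 64, 128)]
for e_coarse, e_fine in zip(errors, errors[1:]):
    print("observed order ~", np.log2(e_coarse / e_fine))   # should approach 2
```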
87

CPU/GPU Code Acceleration on Heterogeneous Systems and Code Verification for CFD Applications

Xue, Weicheng 25 January 2021 (has links)
Computational Fluid Dynamics (CFD) applications usually involve intensive computations, which can be accelerated by using hardware accelerators, especially GPUs, given their common use in the scientific computing community. In addition to code acceleration, it is important to ensure that the code and algorithm are implemented numerically correctly, which is called code verification. This dissertation focuses on accelerating research CFD codes on multi-CPUs/GPUs using MPI and OpenACC, as well as the code verification for turbulence model implementation using the method of manufactured solutions and code-to-code comparisons. First, a variety of performance optimizations both agnostic and specific to applications and platforms are developed in order to 1) improve the heterogeneous CPU/GPU compute utilization; 2) improve the memory bandwidth to the main memory; 3) reduce communication overhead between the CPU host and the GPU accelerator; and 4) reduce the tedious manual tuning work for GPU scheduling. Both finite difference and finite volume CFD codes and multiple platforms with different architectures are utilized to evaluate the performance optimizations used. A maximum speedup of over 70 is achieved on 16 V100 GPUs over 16 Xeon E5-2680v4 CPUs for multi-block test cases. In addition, systematic studies of code verification are performed for a second-order accurate finite volume research CFD code. Cross-term sinusoidal manufactured solutions are applied to verify the Spalart-Allmaras and k-omega SST model implementation, both in 2D and 3D. This dissertation shows that the spatial and temporal schemes are implemented numerically correctly. / Doctor of Philosophy / Computational Fluid Dynamics (CFD) is a numerical method to solve fluid problems, which usually requires a large amount of computations. A large CFD problem can be decomposed into smaller sub-problems which are stored in discrete memory locations and accelerated by a large number of compute units. In addition to code acceleration, it is important to ensure that the code and algorithm are implemented correctly, which is called code verification. This dissertation focuses on the CFD code acceleration as well as the code verification for turbulence model implementation. In this dissertation, multiple Graphics Processing Units (GPUs) are utilized to accelerate two CFD codes, considering that the GPU has high computational power and high memory bandwidth. A variety of optimizations are developed and applied to improve the performance of CFD codes on different parallel computing systems. The program execution time can be reduced significantly especially when multiple GPUs are used. In addition, code-to-code comparisons with some NASA CFD codes and the method of manufactured solutions are utilized to verify the correctness of a research CFD code.
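The sketch below illustrates, in a hedged way, how a cross-term sinusoidal manufactured solution can be turned into a source term symbolically. A 2D advection-diffusion equation stands in for the Spalart-Allmaras and k-omega SST model equations verified in the dissertation, and all constants are illustrative.

```python
import sympy as sp

x, y = sp.symbols("x y")
a, b, nu = sp.symbols("a b nu", positive=True)   # advection speeds and diffusivity

# Cross-term sinusoidal manufactured solution (note the cos(pi*x*y) cross term).
u = 1 + sp.Rational(3, 10)*sp.sin(sp.pi*x) \
      + sp.Rational(2, 10)*sp.cos(sp.pi*y) \
      + sp.Rational(1, 10)*sp.cos(sp.pi*x*y)

# Plug u into the model PDE  a*u_x + b*u_y - nu*(u_xx + u_yy) = S  and solve for S.
S = sp.simplify(a*sp.diff(u, x) + b*sp.diff(u, y)
                - nu*(sp.diff(u, x, 2) + sp.diff(u, y, 2)))
print(S)   # add S as a source term in the code so that u becomes an exact solution
```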
88

Numerical Simulation of Viscous Flow: A Study of Molecular Dynamics and Computational Fluid Dynamics

Fried, Jeremy 14 September 2007 (has links)
Molecular dynamics (MD) and computational fluid dynamics (CFD) allow researchers to study fluid dynamics from two very different standpoints. From a microscopic standpoint, molecular dynamics uses Newton's second law of motion to simulate the interatomic behavior of individual atoms, using statistical mechanics as a tool for analysis. In contrast, CFD describes the motion of a fluid from a macroscopic level using the transport of mass, momentum, and energy of a system as a model. This thesis investigates both MD and CFD as viable means of studying viscous flow on a nanometer scale. Specifically, we investigate a pressure-driven Poiseuille flow. The results of the MD simulations are processed using software we created to measure velocity, density, and pressure. The CFD simulations are run on numerical software that implements the MacCormack method for the Navier-Stokes equations. Additionally, the CFD simulations incorporate a local definition of viscosity, which is uncharacteristic of this simulation method. Based on the results of the simulations, we point out similarities and differences in the obtained steady-state solutions. / Master of Science
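The post-processing step mentioned above, extracting a velocity profile from particle data, can be sketched as below. The particle positions and velocities here are synthetic samples scattered around the analytic Poiseuille parabola, since no MD output accompanies the abstract; only the binning procedure is the point.

```python
# Bin particle streamwise velocities by cross-channel position and compare the
# binned profile with the analytic Poiseuille parabola.
import numpy as np

rng = np.random.default_rng(1)
H, n_part, n_bins = 1.0, 200_000, 20
u_max = 1.5

y = rng.uniform(0.0, H, n_part)                                       # cross-channel positions
u = 4.0*u_max*y*(H - y)/H**2 + rng.normal(0.0, 0.2, n_part)           # noisy streamwise velocities

edges = np.linspace(0.0, H, n_bins + 1)
which = np.digitize(y, edges) - 1
profile = np.array([u[which == i].mean() for i in range(n_bins)])     # bin-averaged velocities
centers = 0.5*(edges[:-1] + edges[1:])
analytic = 4.0*u_max*centers*(H - centers)/H**2

print("max deviation from Poiseuille profile:", np.max(np.abs(profile - analytic)))
```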
89

Residual-based Discretization Error Estimation for Computational Fluid Dynamics

Phillips, Tyrone 30 October 2014 (has links)
The largest and most difficult numerical approximation error to estimate is discretization error. Residual-based discretization error estimation methods are a category of error estimators that use an estimate of the source of discretization error and information about the specific application to estimate the discretization error using only one grid level. The higher-order terms are truncated from the discretized equations and are the local source of discretization error. The accuracy of the resulting discretization error estimate depends solely on the accuracy of the estimated truncation error. Residual-based methods require only one grid level, compared to the more commonly used Richardson extrapolation, which requires at least two. Reducing the required number of grid levels reduces computational expense and, since only one grid level is required, the methods can be applied to unstructured grids where multiple quality grid levels are difficult to produce. The two residual-based discretization error estimators of interest are defect correction and error transport equations (ETEs). The focus of this work is the development, improvement, and evaluation of various truncation error estimation methods, considering the accuracy of the truncation error estimate and the resulting discretization error estimates. The minimum requirements for accurate truncation error estimation are specified along with proper treatment for several boundary conditions. The methods are evaluated using various Euler and Navier-Stokes applications. The discretization error estimates are compared to Richardson extrapolation. The most accurate truncation error estimation method was found to be the k-exact method, where the fine grid with a correction factor was considerably reliable. The single grid methods, including the k-exact method, require that the continuous operator be modified at the boundary to be consistent with the implemented boundary conditions. Defect correction was shown to be more accurate in areas of larger discretization error; however, its cost was substantial (although cheaper than the primal problem) compared to the cost of solving the ETEs, which was essentially free due to the linearization. Both methods showed significantly more accurate estimates compared to Richardson extrapolation, especially for smooth problems. Reduced accuracy was apparent in the presence of stronger shocks, and some possible modifications to adapt to singularities are proposed for future work. / Ph. D.
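A simplified sketch of the error transport equation idea on a 1D Poisson model problem is shown below: the truncation error is estimated from the numerical solution itself and then propagated through the same discrete operator. The Euler and Navier-Stokes cases of the dissertation are far more involved, and the crude boundary fill used here is exactly the kind of issue the work addresses.

```python
# Residual-based (error transport equation) discretization error estimate for
# -u'' = f on (0,1) with homogeneous Dirichlet boundaries, second-order scheme.
import numpy as np

n = 64
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
xi = x[1:-1]

u_exact = np.sin(np.pi * x)
f = np.pi**2 * np.sin(np.pi * xi)

# Discrete operator L_h for -u'' with homogeneous Dirichlet boundaries
L = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h**2
u_h = np.linalg.solve(L, f)

# Estimate the truncation error tau ~ -(h^2/12) u'''' using a 4th difference of u_h
u_pad = np.concatenate(([0.0], u_h, [0.0]))            # known boundary values
d4 = (u_pad[:-4] - 4*u_pad[1:-3] + 6*u_pad[2:-2] - 4*u_pad[3:-1] + u_pad[4:]) / h**4
d4 = np.concatenate(([d4[0]], d4, [d4[-1]]))           # crude fill at the two end nodes
tau_est = -(h**2 / 12.0) * d4

# Error transport equation: L_h e = -tau  (same linear operator, cheap to reuse)
e_est = np.linalg.solve(L, -tau_est)
e_true = u_h - u_exact[1:-1]
print("true error norm     :", np.max(np.abs(e_true)))
print("estimated error norm:", np.max(np.abs(e_est)))
```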
90

Parametric Dynamical Systems: Transient Analysis and Data Driven Modeling

Grimm, Alexander Rudolf 02 July 2018 (has links)
Dynamical systems are a commonly used and studied tool for simulation, optimization and design. In many applications such as inverse problems, optimal control, shape optimization and uncertainty quantification, these systems typically depend on a parameter. The need for high fidelity in the modeling stage leads to large-scale parametric dynamical systems. Since these models need to be simulated for a variety of parameter values, the computational burden they incur becomes increasingly severe. To address these issues, parametric reduced models have gained increased popularity in recent years. We are interested in constructing parametric reduced models that represent the full-order system accurately over a range of parameters. First, we define a global joint error measure in the frequency and parameter domain to assess the accuracy of the reduced model. Then, by assuming a rational form for the reduced model with poles both in the frequency and parameter domain, we derive necessary conditions for an optimal parametric reduced model in this joint error measure. Similar to the nonparametric case, Hermite interpolation conditions at the reflected images of the poles characterize the optimal parametric approximant. This result extends the well-known interpolatory H2 optimality conditions by Meier and Luenberger to the parametric case. We also develop a numerical algorithm to construct locally optimal reduced models. The theory and algorithm are data-driven, in the sense that only function evaluations of the parametric transfer function are required, not access to the internal dynamics of the full model. While this first framework operates on the continuous function level, assuming repeated transfer function evaluations are available, in some cases merely frequency samples might be given without an option to re-evaluate the transfer function at desired points; in other words, the function samples in parameter and frequency are fixed. In this case, we construct a parametric reduced model that minimizes a discretized least-squares error in the finite set of measurements. Towards this goal, we extend Vector Fitting (VF) to the parametric case, solving a global least-squares problem in both frequency and parameter. The output of this approach might be a reduced model of moderate size. In this case, we perform a post-processing step to reduce the output of the parametric VF approach using H2 optimal model reduction for a special parametrization. The final model inherits the parametric dependence of the intermediate model, but is of smaller order. A special case of a parameter in a dynamical system is a delay in the model equation, e.g., arising from a feedback loop, reaction time, delayed response, and various other physical phenomena. Modeling such a delay comes with several challenges for the mathematical formulation, analysis, and solution. We address the issue of transient behavior for scalar delay equations. Besides the choice of an appropriate measure, we analyze the impact of the coefficients of the delay equation on the finite time growth, which can be arbitrarily large purely by the influence of the delay. / Ph. D.
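A minimal data-driven sketch in the spirit of the least-squares framework above: given frequency samples of a transfer function, fit a low-order rational model with a fixed set of poles by linear least squares. Actual Vector Fitting also relocates the poles iteratively and, in this thesis, carries a parameter dependence; the full-order system, the pole set, and the orders below are all illustrative.

```python
# Fit H_r(s) = sum_k c_k/(s - p_k) + d to frequency samples of a transfer function
# by linear least squares with a fixed (hypothetical) set of real poles p_k.
import numpy as np

rng = np.random.default_rng(0)
n = 30                                            # full-order model size (made up)
A = -np.diag(rng.uniform(0.5, 50.0, n)) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

def H(s):                                         # transfer function sample C (sI - A)^-1 B
    return (C @ np.linalg.solve(s * np.eye(n) - A, B)).item()

omegas = np.logspace(-1, 2, 200)
samples = np.array([H(1j * w) for w in omegas])

poles = -np.logspace(-1, 2, 8)                    # fixed real poles (illustrative)
# The model is linear in the residues c_k and the constant d, so a single
# complex least-squares solve suffices; a real implementation would also
# enforce conjugate symmetry so that the fitted model is real.
M = np.column_stack([1.0 / (1j * omegas[:, None] - poles), np.ones_like(omegas)])
coeffs, *_ = np.linalg.lstsq(M, samples, rcond=None)

fit = M @ coeffs
print("relative L2 fit error:", np.linalg.norm(fit - samples) / np.linalg.norm(samples))
```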
