  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Application of Marine Magnetometer for Underwater Object Exploration: Assessment of Depth and Structural Index

Chang, En-Hsin 31 July 2012 (has links)
Magnetic surveying is a common geophysical exploration technique. By measuring the magnetic field strength over a specific area, the characteristics and physical meaning of a target can be obtained through analysis of the Earth's magnetic field anomalies within a stratigraphic zone or at archaeological sites. In recent years, the marine magnetometer has been employed in underwater archaeological expeditions in the waters surrounding Taiwan to search for ancient shipwrecks. The purpose of this study is to understand the relationship between magnetic anomalies and the magnetic object via various signal processing methods, including the calculation of horizontal and vertical derivatives using the fast Fourier transform (FFT) to eliminate the regional magnetic influence and extract the anomaly characteristics of the target itself, as well as highlighting the location and boundaries of the magnetic source through the analytic signal. In addition, Euler deconvolution is implemented as a tool for magnetic source inversion. Euler deconvolution was first proposed by Thompson (1982); the method detects a magnetic source and estimates its location by choosing a suitable structural index.
Hsu (2002) proposed the enhanced Euler deconvolution, a combined inversion for the structural index and source location that uses the vertical derivative of the measured data. In this study, we first generate various anomalies as test models corresponding to different geometric shapes of the magnetic source; the position and structural index of each model are inverted by enhanced Euler deconvolution in both 2D and 3D. Moreover, a field experiment was conducted offshore of Dalinpu in Kaohsiung, taking CPC's pipelines buried under the seabed as the investigation objects, and the results were compared with sub-bottom profiler data to assess the feasibility of this method for underwater exploration. Most of the estimated results in 2D agree with theory, but the 3D results are not significant owing to the lack of observed data over the whole surface. In general, the method is concise and fast, and it is well suited to interpreting magnetic data for underwater object exploration.
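Once a structural index N is fixed, Euler deconvolution reduces to a linear least-squares problem over a data window. The following is a minimal sketch of the 2D case (illustrative function name and interface, not the thesis's implementation), solving (x − x0)∂T/∂x + (z − z0)∂T/∂z = N(B − T) for the source position (x0, z0) and background field B:

```python
import numpy as np

def euler_deconv_2d(x, z, T, dT_dx, dT_dz, N):
    """Estimate a 2-D magnetic source position (x0, z0) and background B
    from Euler's homogeneity equation with structural index N:
        (x - x0) dT/dx + (z - z0) dT/dz = N (B - T)
    Rearranged into the linear system
        x0 dT/dx + z0 dT/dz + N B = x dT/dx + z dT/dz + N T
    and solved by least squares over the whole data window."""
    A = np.column_stack([dT_dx, dT_dz, N * np.ones_like(T)])
    b = x * dT_dx + z * dT_dz + N * T
    (x0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, z0, B
```

A field homogeneous of degree −N about the source (e.g. T = 1/r² for N = 2) satisfies the equation exactly, so the inversion recovers the source position from profile data alone.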
252

Nonlinear Analysis of Beams Using Least-Squares Finite Element Models Based on the Euler-Bernoulli and Timoshenko Beam Theories

Raut, Ameeta A. 2009 December 1900 (has links)
The conventional finite element models (FEM) of problems in structural mechanics are based on the principles of virtual work and the total potential energy. In these models, the secondary variables, such as the bending moment and shear force, are post-computed and do not yield good accuracy. In addition, in the case of the Timoshenko beam theory, the element with lower-order equal interpolation of the variables suffers from shear locking. In both the Euler-Bernoulli and Timoshenko beam theories, elements based on the weak-form Galerkin formulation also suffer from membrane locking when applied to geometrically nonlinear problems. To alleviate these types of locking, reduced integration techniques are often employed. However, this technique has other disadvantages, such as hour-glass modes or spurious rigid body modes. Hence, it is desirable to develop alternative finite element models that overcome the locking problems. Least-squares finite element models are considered to be better alternatives to the weak-form Galerkin finite element models and, therefore, are investigated in this study. The basic idea behind the least-squares finite element model is to compute the residuals due to the approximation of the variables of each equation being modeled, construct an integral statement of the sum of the squares of the residuals (called the least-squares functional), and minimize the integral with respect to the unknown parameters (i.e., nodal values) of the approximations. The least-squares formulation helps to retain the generalized displacements and forces (or stress resultants) as independent variables, and also allows the use of equal-order interpolation functions for all variables. In this thesis, a comparison is made between the solution accuracy of finite element models of the Euler-Bernoulli and Timoshenko beam theories based on two different least-squares models and that of the conventional weak-form Galerkin finite element models.
The developed models were applied to beam problems with different boundary conditions. The solutions obtained by the least-squares finite element models were found to be very accurate for generalized displacements and forces when compared with the exact solutions, and they are more accurate in predicting the forces than the conventional finite element models.
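The core idea, retaining the force-like variable as an independent, equally interpolated unknown and minimizing a sum-of-squares residual, can be illustrated on a much simpler model problem than the thesis's beam theories: −u″ = f recast as the first-order system u′ = q, q′ = −f. The names and discretization choices below are assumptions for illustration, not the thesis's formulation:

```python
import numpy as np

def ls_fem_first_order(n, f):
    """Least-squares-style solution of -u'' = f on [0,1], u(0)=u(1)=0,
    recast as the first-order system u' = q, q' = -f so the flux q is
    retained as an independent, equally interpolated variable (the key
    idea behind least-squares beam models). Element-midpoint residuals
    of both equations are driven to zero via np.linalg.lstsq."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    rows, rhs = [], []
    for j in range(n - 1):
        # residual of u' - q = 0 at the element midpoint
        r = np.zeros(2 * n)
        r[j], r[j + 1] = -1.0 / h, 1.0 / h        # u terms
        r[n + j], r[n + j + 1] = -0.5, -0.5       # q terms (midpoint average)
        rows.append(r); rhs.append(0.0)
        # residual of q' + f = 0 at the element midpoint
        r = np.zeros(2 * n)
        r[n + j], r[n + j + 1] = -1.0 / h, 1.0 / h
        rows.append(r); rhs.append(-f(0.5 * (x[j] + x[j + 1])))
    for idx in (0, n - 1):                        # boundary conditions on u
        r = np.zeros(2 * n)
        r[idx] = 1.0
        rows.append(r); rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x, sol[:n], sol[n:]
```

Note that both u and its "force" q use the same linear interpolation, and q is recovered directly rather than post-computed, mirroring the advantage the abstract claims for the beam models.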
253

Analytical Study on Adhesively Bonded Joints Using Peeling Test and Symmetric Composite Models Based on Bernoulli-Euler and Timoshenko Beam Theories for Elastic and Viscoelastic Materials

Su, Ying-Yu 2010 December 1900 (has links)
Adhesively bonded joints have been investigated for several decades. In most analytical studies, the Bernoulli-Euler beam theory is employed to describe the behaviour of the adherends. In the current work, three analytical models are developed for adhesively bonded joints using the Timoshenko beam theory for elastic materials and a Bernoulli-Euler beam model for viscoelastic materials. One model is for the peeling test of an adhesively bonded joint, which is described using a Timoshenko beam on an elastic foundation. The adherend is considered as a Timoshenko beam, while the adhesive is taken to be a linearly elastic foundation. Three cases are considered: (1) only the normal stress is acting (mode I); (2) only the transverse shear stress is present (mode II); and (3) the normal and shear stresses co-exist (mode III) in the adhesive. The governing equations are derived in terms of the displacement and rotational angle of the adherend in each case. Analytical solutions are obtained for the displacements, rotational angle, and stresses. Numerical results are presented to show the trends of the displacements and rotational angle changing with geometrical and loading conditions. In the second model, the peeling test of an adhesively bonded joint is represented using a viscoelastic Bernoulli-Euler beam on an elastic foundation. The adherend is considered as a viscoelastic Bernoulli-Euler beam, while the adhesive is taken to be a linearly elastic foundation. Two cases under different stress histories are considered: (1) only the normal stress is acting (mode I); and (2) only the transverse shear stress is present (mode II). The governing equations are derived in terms of the displacements. Analytical solutions are obtained for the displacements. The numerical results show that the deflection increases as time and temperature increase. The third model is developed using a symmetric composite adhesively bonded joint.
The constitutive and kinematic relations of the adherends are derived based on the Timoshenko beam theory, and the governing equations are obtained for the normal and shear stresses in the adhesive layer. The numerical results are presented to reveal the normal and shear stresses in the adhesive.
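The beam-on-elastic-foundation idealization these models build on has a classical closed form. As a hedged illustration (the standard Hetényi result for a semi-infinite Bernoulli-Euler beam on a Winkler foundation with a concentrated end load, not the thesis's Timoshenko or viscoelastic models; names are assumed):

```python
import numpy as np

def winkler_beam_end_load(x, P, EI, k):
    """Deflection of a semi-infinite Euler-Bernoulli beam on a Winkler
    (linearly elastic) foundation under a concentrated end load P:
        EI w'''' + k w = 0  for x > 0,  with zero end moment.
    Classical solution: w(x) = (2 P beta / k) exp(-beta x) cos(beta x),
    where beta = (k / (4 EI))**0.25 is the characteristic wavenumber."""
    beta = (k / (4.0 * EI)) ** 0.25
    return (2.0 * P * beta / k) * np.exp(-beta * x) * np.cos(beta * x)
```

The solution decays over a length scale 1/beta set by the stiffness ratio of beam to foundation, which is why peel stresses concentrate near the loaded end of the joint.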
254

Implementation Of Different Flux Evaluation Schemes Into A Two-dimensional Euler Solver

Eraslan, Elvan 01 September 2006 (has links) (PDF)
This study investigates the accuracy and efficiency of several flux splitting methods for the compressible, two-dimensional Euler equations. The Steger-Warming flux vector splitting method, the Van Leer flux vector splitting method, the Advection Upstream Splitting Method (AUSM), the Artificially Upstream Flux Vector Splitting scheme (AUFS), and Roe's flux difference splitting scheme were implemented using first- and second-order reconstruction methods. Limiter functions were embedded in the second-order reconstruction methods. The flux splitting methods are applied to subsonic, transonic, and supersonic flows over the NACA0012 airfoil, as well as subsonic, transonic, and supersonic flows in a channel. The obtained results are compared with each other and with those in the literature. The advantages and disadvantages of each scheme relative to the others are identified.
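To sketch what flux vector splitting involves, the following implements the Steger-Warming splitting for the one-dimensional Euler equations (the thesis treats the 2D case; the function name and interface are illustrative). The eigenvalues are split by sign and the flux is reassembled so that F⁺ + F⁻ recovers the exact flux:

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats (assumed: air)

def steger_warming_split(rho, u, p):
    """Steger-Warming flux vector splitting for the 1-D Euler equations.
    Splits the eigenvalues lam = (u, u+a, u-a) into lam+ = (lam+|lam|)/2
    and lam- = (lam-|lam|)/2 and assembles F+ and F- so that F+ + F- = F."""
    a = np.sqrt(GAMMA * p / rho)                  # speed of sound
    lam = np.array([u, u + a, u - a])
    fluxes = []
    for lam_s in ((lam + np.abs(lam)) / 2.0, (lam - np.abs(lam)) / 2.0):
        l1, l2, l3 = lam_s
        f1 = 2.0 * (GAMMA - 1.0) * l1 + l2 + l3
        f2 = 2.0 * (GAMMA - 1.0) * l1 * u + l2 * (u + a) + l3 * (u - a)
        f3 = ((GAMMA - 1.0) * l1 * u ** 2
              + 0.5 * l2 * (u + a) ** 2 + 0.5 * l3 * (u - a) ** 2
              + (3.0 - GAMMA) * (l2 + l3) * a ** 2 / (2.0 * (GAMMA - 1.0)))
        fluxes.append(rho / (2.0 * GAMMA) * np.array([f1, f2, f3]))
    return fluxes[0], fluxes[1]
```

In supersonic flow all three waves run the same way, so one of the split fluxes vanishes and the scheme becomes fully one-sided, which is the upwinding the splitting is designed to provide.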
255

Numerical Solution Of One Dimensional Detonation Tube With Reactive Euler Equations Using High Resolution Method

Ungun, Yigit 01 February 2012 (has links) (PDF)
In this thesis, the one-dimensional detonation tube problem is solved numerically with finite-rate chemistry. The Euler equations are used for the numerical simulation. Since detonation tube phenomena occur in high-speed flows, viscosity effects and gravity forces are negligible. Godunov-type methods are studied, and a high-resolution method is then used for the numerical solution of the detonation tube problem. The chemistry aspect of the problem is handled with ZND theory. A FORTRAN code was written for the numerical solution, and the computed solutions are compared with the exact ZND solutions.
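The "high-resolution" building block of Godunov-type methods can be illustrated on scalar linear advection, stripped of the reactive Euler physics the thesis actually solves (names and the scalar test problem are assumptions; the mechanism shown is the standard MUSCL reconstruction with a minmod limiter):

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: takes the smaller-magnitude slope when both
    arguments share a sign, zero otherwise (this keeps the scheme TVD)."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_advect(u, c, steps):
    """High-resolution (MUSCL + minmod) Godunov-type update for linear
    advection u_t + a u_x = 0 with a > 0, periodic boundaries;
    c = a*dt/dx is the CFL number (stable for 0 < c <= 1)."""
    for _ in range(steps):
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
        flux = u + 0.5 * (1.0 - c) * s                     # upwind-biased face state
        u = u - c * (flux - np.roll(flux, 1))              # conservative update
    return u
```

The limiter reduces the scheme to first-order upwinding at discontinuities (no spurious oscillations) while retaining second-order accuracy in smooth regions, which is exactly the property needed to resolve detonation fronts sharply.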
256

none

Li, Chin-Yu 02 August 2001 (has links)
none
257

Development and Application of Kinetic Meshless Methods for Euler Equations

C, Praveen 07 1900 (has links)
Meshless methods are a relatively new class of schemes for the numerical solution of partial differential equations. Their special characteristic is that they do not require a mesh but only need a distribution of points in the computational domain. The approximation at any point of the spatial derivatives appearing in the partial differential equations is performed using a local cloud of points called the "connectivity" (or stencil). A point distribution can be generated more easily than a grid since there are fewer constraints to satisfy. The present work uses two meshless methods: an existing scheme called the Least Squares Kinetic Upwind Method (LSKUM) and a new scheme called the Kinetic Meshless Method (KMM). LSKUM is a "kinetic" scheme which uses a "least squares" approximation for discretizing the derivatives occurring in the partial differential equations. The first part of the thesis is concerned with some theoretical properties and the application of LSKUM to 3-D point distributions. Using previously established results we show that first-order LSKUM in 1-D is positivity preserving under a CFL-like condition. The 3-D LSKUM is applied to point distributions obtained from a FAME mesh. FAME, which stands for Feature Associated Mesh Embedding, is a composite overlapping grid system developed at QinetiQ (formerly DERA), UK, for store separation problems. The FAME mesh has a cell-based data structure, and this is first converted to a node-based data structure which leads to a point distribution. For each point in this distribution we find a set of nearby nodes which forms the connectivity. The connectivity at each point (which is also the "full stencil" for that point) is split along each of the three coordinate directions, so that we need six split (or half or one-sided) stencils at each point. The split stencils are used in LSKUM to calculate the split-flux derivatives arising in kinetic schemes, which gives the upwind character to LSKUM.
The "quality" of each of these stencils affects the accuracy and stability of the numerical scheme. In this work we focus on developing some numerical criteria to quantify the quality of a stencil for meshless methods like LSKUM. The first test is based on singular value decomposition of the over-determined problem, and the singular values are used to measure the ill-conditioning (generally caused by a flat stencil). If any of the split stencils are found to be ill-conditioned then we use the full stencil for calculating the corresponding split-flux derivative. A second test is based on an accuracy measurement. The idea of this test is that a "good" stencil must give accurate estimates of derivatives and vice versa. If the error in the computed derivatives is above some specified tolerance the stencil is classified as unacceptable. In this case we either enhance the stencil (to remove a disc-type degenerate structure) or switch to the full stencil. It is found that the full stencil almost always behaves well in terms of both tests. The use of these two tests and the associated modification of defective stencils in an automatic manner allows the solver to converge without any blow-up. The results obtained for a 3-D configuration compare favorably with wind tunnel measurements, and the framework developed here provides a rational basis for approaching the connectivity selection problem. The second part of the thesis deals with a new scheme called the Kinetic Meshless Method (KMM), which was developed as a consequence of the experience obtained with LSKUM and the FAME mesh. As mentioned before, the full stencil is generally better behaved than the split stencils. Hence the new scheme is constructed so that it does not require split stencils but operates on a full stencil (which is like a centered stencil).
In order to obtain an upwind bias we introduce mid-point states (between a point and its neighbour) and the least squares fitting is performed using these mid-point states. The mid-point states are defined in an upwind-biased manner at the kinetic/Boltzmann level and moment-method strategy leads to an upwind scheme at the Euler level. On a standard 4-point Cartesian stencil this scheme reduces to finite volume method with KFVS fluxes. We can also show the rotational invariance of the scheme which is an important property of the governing equations themselves. The KMM is extended to higher order accuracy using a reconstruction procedure similar to finite volume schemes even though we do not have (or need) any cells in the present case. Numerical studies on a model 2-D problem show second order accuracy. Some theoretical and practical advantages of using a kinetic formulation for deriving the scheme are recognized. Several 2-D inviscid flows are solved which also demonstrate many important characteristics. The subsonic test cases show that the scheme produces less numerical entropy compared to LSKUM, and is also better in preserving the symmetry of the flow. The test cases involving discontinuous flows show that the new scheme is capable of resolving shocks very sharply especially with adaptation. The robustness of the scheme is also very good as shown in the supersonic test cases.
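The least-squares derivative estimate and the SVD-based stencil-quality test described above can be sketched as follows (illustrative names and a 2-D gradient only, not the full LSKUM solver):

```python
import numpy as np

def ls_derivatives(p0, cloud, f0, f_cloud, cond_tol=1e3):
    """Meshless least-squares estimate of (df/dx, df/dy) at point p0 from
    a connectivity 'cloud' of nearby points, as in LSKUM-type schemes.
    The overdetermined system  dx_i*fx + dy_i*fy = f_i - f0  is solved
    via SVD; the singular-value ratio doubles as the stencil-quality
    test described in the text (a flat stencil is ill-conditioned)."""
    A = cloud - p0                      # rows: offsets (dx_i, dy_i)
    b = f_cloud - f0
    U, sv, Vt = np.linalg.svd(A, full_matrices=False)
    cond = sv[0] / sv[-1]
    if cond > cond_tol:
        raise ValueError(f"ill-conditioned stencil (cond={cond:.1e})")
    grad = Vt.T @ ((U.T @ b) / sv)      # pseudo-inverse solution
    return grad, cond
```

For a linear field the least-squares gradient is exact regardless of how the cloud is arranged, while a degenerate (collinear) cloud is caught by the condition-number check rather than silently producing a wrong derivative.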
258

Regularization of Parameter Problems for Dynamic Beam Models

Rydström, Sara January 2010 (has links)
The field of inverse problems is an area of applied mathematics that is of great importance in several scientific and industrial applications. Since an inverse problem is typically founded on non-linear and ill-posed models, it is very difficult to solve. To find a regularized solution it is crucial to have a priori information about the solution; therefore, general theories are not sufficient when considering new applications.

In this thesis we consider the inverse problem of determining the beam bending stiffness from measurements of the transverse dynamic displacement. Of special interest is localizing parts with reduced bending stiffness. Driven by requirements in the wood industry, it is not enough to consider time-efficient algorithms; the models must also be adapted to manage extremely short calculation times.

For the development of efficient methods, inverse problems based on the fourth-order Euler-Bernoulli beam equation and the second-order string equation are studied. Important results are the transformation of a non-linear regularization problem into a linear one and a convex procedure for finding parts with reduced bending stiffness.
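The regularization idea central to such inverse problems can be sketched generically with Tikhonov's method (this is not the thesis's beam-stiffness model; the smoothing operator and names below are assumptions for illustration). Because the forward operator damps fine-scale components, naive inversion amplifies measurement noise, while the penalty term stabilizes the solution:

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov-regularized solution of the ill-posed system A x ~ b:
    minimize ||A x - b||^2 + alpha * ||x||^2, solved via the normal
    equations (A^T A + alpha I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

The regularization parameter alpha trades data fidelity against stability; choosing it well is exactly where the a priori information mentioned in the abstract enters.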
259

Optimal transportation and action-minimizing measures

Figalli, Alessio. January 1900 (has links)
Revised version of: Doctoral thesis: Mathematics: Lyon, École normale supérieure (sciences): 2007. / Bibliography pp. [243]-251.
260

A unified framework for spline estimators

Schwarz, Katsiaryna 24 January 2013 (has links)
No description available.
