About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Criteria for Numerical Stability of Explicit Time-Stepping Elastic-Viscoplasticity

Higgins, Jerry 06 1900 (has links)
A simple yet effective technique is used to obtain numerical stability criteria for explicit time-marching algorithms in elastic-viscoplasticity. The resulting stability criteria are capable of accounting for non-associative and work-hardening viscoplasticity for a wide variety of Perzyna-type constitutive laws. Conservative estimates of the maximum permissible time step are obtained. This thesis investigates the conservatism of these estimates by considering different problems exhibiting various levels of constraint. Using the proposed stability criterion, and assuming a linear flow function with non-hardening, uniform material properties, it is shown that the initial strain algorithm for plasticity and the initial strain algorithm for viscoplasticity are numerically equivalent. The intuitive approach used to obtain an estimate of the maximum permissible time step was also used to develop an unconditionally stable implicit time-marching scheme that avoids expensive matrix inversions. / Thesis / Master of Engineering (ME)
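For orientation, the following is a minimal sketch of an explicit (forward-Euler) Perzyna-type viscoplastic strain update for a single uniaxial material point, with the time step capped so that the viscoplastic strain increment stays a small fraction of the current elastic strain. The linear flow function, the material constants and the safety factor beta are illustrative assumptions; this is not the stability criterion derived in the thesis.

    import numpy as np

    def explicit_viscoplastic_update(eps_total, eps_vp, E, sigma_y, gamma,
                                     beta=0.1, dt_max=1e-3):
        """One explicit step of a Perzyna-type model, uniaxial.
        Illustrative only: linear flow function phi(F) = F, no hardening."""
        sigma = E * (eps_total - eps_vp)          # elastic predictor stress
        F = (abs(sigma) - sigma_y) / sigma_y      # normalized overstress
        if F <= 0.0:
            return eps_vp, dt_max                 # elastic: no viscoplastic flow
        eps_vp_rate = gamma * F * np.sign(sigma)  # Perzyna flow rule, linear phi
        eps_el = sigma / E                        # current elastic strain
        # conservative step: viscoplastic increment limited to beta * elastic strain
        dt = min(dt_max, beta * abs(eps_el) / abs(eps_vp_rate))
        return eps_vp + dt * eps_vp_rate, dt

    eps_vp, dt = explicit_viscoplastic_update(eps_total=0.004, eps_vp=0.0,
                                              E=200e9, sigma_y=250e6, gamma=1e-3)
    print(eps_vp, dt)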
2

Numerical stability of barycentric formulae for interpolation

Camargo, André Pierro de 15 December 2015 (has links)
The problem of reconstructing a function f from a finite set of known values f(x0), f(x1), ..., f(xn) appears frequently in mathematical modeling. It is not possible, in general, to determine f completely from f(x0), f(x1), ..., f(xn), but in several cases of interest reasonable approximations for f can be found by interpolation, which consists in finding a suitable function g (a polynomial, a rational or trigonometric function, etc.) such that g(xi) = f(xi), i = 0, 1, ..., n. In practice, the interpolating function g is evaluated in finite precision, and the final computed value of g(x) may differ from the exact value g(x) due to rounding. Such a difference can even exceed the interpolation error E(x) = f(x) - g(x) by several orders of magnitude, compromising the entire approximation process. The numerical stability of an algorithm reflects its sensitivity to rounding errors. In this work we present a detailed analysis of the numerical stability of some algorithms used to evaluate polynomial or rational interpolants that can be put in barycentric form. The main results of this work are also available in the following papers: Mascarenhas, W. and Camargo, A. P., On the backward stability of the second barycentric formula for interpolation, Dolomites Research Notes on Approximation, v. 7 (2014), pp. 1-12; Camargo, A. P., On the numerical stability of Floater-Hormann's rational interpolant, Numerical Algorithms, DOI 10.1007/s11075-015-0037-z; Camargo, A. P., Erratum: On the numerical stability of Floater-Hormann's rational interpolant, Numerical Algorithms, DOI 10.1007/s11075-015-0071-x; Camargo, A. P. and Mascarenhas, W., The stability of extended Floater-Hormann interpolants, Numerische Mathematik, submitted, arXiv:1409.2808v5.
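As an illustration of the kind of formula analyzed here, a minimal sketch of the second (true) barycentric formula for polynomial interpolation follows. The Chebyshev points of the second kind and their barycentric weights used in the example are a standard choice assumed for this sketch, not taken from the thesis.

    import numpy as np

    def barycentric_eval(x, nodes, fvals, weights):
        """Second barycentric formula:
        p(x) = sum_j(w_j f_j / (x - x_j)) / sum_j(w_j / (x - x_j))."""
        x = np.asarray(x, dtype=float)
        diff = x[..., None] - nodes
        exact = np.isclose(diff, 0.0)
        diff = np.where(exact, 1.0, diff)          # avoid division by zero at nodes
        terms = weights / diff
        p = (terms * fvals).sum(-1) / terms.sum(-1)
        # at a node, return the data value exactly
        return np.where(exact.any(-1), fvals[np.argmax(exact, -1)], p)

    # Chebyshev points of the second kind and their barycentric weights (assumed setup)
    n = 20
    j = np.arange(n + 1)
    nodes = np.cos(j * np.pi / n)
    weights = (-1.0) ** j
    weights[0] *= 0.5
    weights[-1] *= 0.5
    f = lambda t: 1.0 / (1.0 + 25.0 * t**2)        # Runge function
    xs = np.linspace(-1, 1, 7)
    print(barycentric_eval(xs, nodes, f(nodes), weights))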
3

Numerical stabilization for multidimensional coupled convection-diffusion-reaction equations: Applications to continuum dislocation transport

Hernandez Velazquez, Hector Alonso 13 September 2017 (has links) (PDF)
Partial differential equations having diffusive, convective and reactive terms appear naturally in the modeling of a large variety of processes of practical interest in several branches of science, such as biology, chemistry, economics, physics, physiology and materials science. Moreover, in some instances several species or components interact with each other, requiring the solution of strongly coupled systems of convection-diffusion-reaction equations. Of special interest for us is the numerical treatment of the advection-dominated continuum dislocation transport equations used to describe the plastic behavior of crystalline materials. Analytical solutions for such equations are extremely scarce and practically limited to linear equations with homogeneous coefficients and simple initial and boundary conditions. Therefore, resorting to numerical approximations is the most affordable and often the only viable strategy to deal with such models. However, when classical numerical methods are used to approximate the solutions of such equations, even in the simplest one-dimensional steady-state case for a single equation, instabilities in the form of node-to-node spurious oscillations appear when the convective or reactive terms dominate over the diffusive term. To address such issues, stabilization techniques have been developed over the years in order to handle such transport equations by numerical means, overcoming the stability difficulties. However, such stabilization techniques are most often suited to particular problems. For instance, the Streamline Upwind Petrov-Galerkin method, to name only one of the best known, successfully eliminates spurious oscillations for single advection-diffusion equations when the advective form is discretized, but has been shown to be ineffective if the divergence form is used instead. Additionally, no extensive work has been carried out for systems of coupled equations. The reason for this immaturity is the lack of a maximum principle when going from a single transport equation to systems of coupled equations. The main aim of this work is to present a stabilization technique for systems of coupled multidimensional convection-diffusion-reaction equations based on coefficient perturbations. These perturbations are chosen optimally so that certain compatibility conditions analogous to a maximum principle are satisfied. Once the computed perturbations are injected into the classical Bubnov-Galerkin finite element method, they provide smooth and stable numerical approximations. Such a stabilization technique is first developed for the single one-dimensional convection-diffusion-reaction equation. A rigorous proof of its effectiveness in rendering numerical approximations that are unconditionally stable with respect to the spatial discretization is provided for the convection-diffusion case via fulfillment of the discrete maximum principle. It is also demonstrated, and confirmed by numerical assessments, that the stabilized solution is consistent with the discretized partial differential equation, since it converges to the classical Bubnov-Galerkin solution when the mesh Peclet number is small enough. The corresponding proofs for the diffusion-reaction and the general convection-diffusion-reaction cases can be obtained in a similar manner. Furthermore, it is demonstrated that this stabilization technique is applicable irrespective of whether the advective or the divergence form is used for the spatial discretization, making it highly flexible and general.
Subsequently, the stabilization technique is extended to the one-dimensional multiple-equation case by using the superposition principle, a well-known strategy for solving non-homogeneous second-order ordinary differential equations. Finally, the stabilization technique is applied along mutually perpendicular spatial directions in order to deal with multidimensional problems. Applications to several prototypical linear coupled systems of partial differential equations, of interest in several scientific disciplines, are presented. The stabilization technique is then applied to the continuum dislocation transport equations, involving their non-linearity, their strongly coupled character and the special boundary conditions used in this context; a combination of additional difficulties that most traditional stabilization techniques are unable to handle. The proposed stabilization scheme has been successfully applied to these equations. Its effectiveness in stabilizing the classical Bubnov-Galerkin scheme and its consistency with the discretized partial differential equation are both demonstrated in the numerical simulations performed. This effectiveness remains unaffected when different types of dislocation transport models with constant or variable length scales are used. These results allow envisioning the use of the developed technique for simulating systems of strongly coupled convection-diffusion-reaction equations with an affordable computational effort. In particular, the above-mentioned crystal plasticity models can now be handled with reasonable computation times without the use of extraordinary computational power, while still rendering accurate and physically meaningful numerical approximations. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
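For orientation, a minimal sketch follows of the classical 1D steady convection-diffusion problem on a uniform mesh, contrasting the plain central (Galerkin-equivalent) discretization, which oscillates when the mesh Peclet number exceeds 1, with a simple artificial-diffusion stabilization. This generic upwind-style illustration is an assumption of the sketch and is not the coefficient-perturbation method developed in the thesis.

    import numpy as np

    def solve_cd_1d(a, eps, n, stabilize=False):
        """Steady 1D convection-diffusion  -eps*u'' + a*u' = 0,  u(0)=0, u(1)=1,
        central differences on a uniform mesh with n interior nodes."""
        h = 1.0 / (n + 1)
        pe = a * h / (2.0 * eps)                               # mesh Peclet number
        eps_eff = eps + 0.5 * a * h if stabilize else eps      # artificial diffusion
        lower = -eps_eff / h**2 - a / (2.0 * h)
        diag = 2.0 * eps_eff / h**2
        upper = -eps_eff / h**2 + a / (2.0 * h)
        A = (np.diag(np.full(n, diag))
             + np.diag(np.full(n - 1, lower), -1)
             + np.diag(np.full(n - 1, upper), 1))
        b = np.zeros(n)
        b[-1] = -upper * 1.0                                   # boundary value u(1) = 1
        return pe, np.linalg.solve(A, b)

    pe, u_plain = solve_cd_1d(a=1.0, eps=0.005, n=20)                 # Pe > 1: oscillatory
    _, u_stab = solve_cd_1d(a=1.0, eps=0.005, n=20, stabilize=True)   # smooth solution
    print(f"mesh Peclet = {pe:.2f}")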
4

Three material decomposition in dual energy CT for brachytherapy using the iterative image reconstruction algorithm DIRA : Performance of the method for an anthropomorphic phantom

Westin, Robin January 2013 (has links)
Brachytherapy is radiation therapy performed by placing a radiation source near or inside a tumor. Doses calculated with the current water-based brachytherapy dose formalism (TG-43) and with new model-based dose calculation algorithms (MBDCAs) can differ by more than a factor of 10. There is a need for voxel-by-voxel cross-section assignment; ideally, both the tissue composition and the mass density of every voxel should be known for individual patients. A method for determining tissue composition via three-material decomposition (3MD) from dual energy CT scans was developed at Linköping University. The method (named DIRA) is a model-based iterative reconstruction algorithm that utilizes two photon energies for image reconstruction and 3MD for quantitative tissue classification of the reconstructed volumetric dataset. This thesis has investigated the accuracy of the 3MD method applied to prostate tissue in an anthropomorphic phantom when using two different approximations of soft tissues in DIRA. The distributions of CT numbers for soft tissues in a contemporary dual energy CT scanner were also determined, and an investigation of whether these distributions can be used for tissue classification of soft tissues via thresholding was conducted. It was found that the relative errors of the mass energy absorption coefficient (MEAC) and the linear attenuation coefficient (LAC) of the approximated mixture, as functions of photon energy, were less than 6% in the energy region from 1 keV to 1 MeV. This showed that DIRA performed well for the selected anthropomorphic phantom and that it was relatively insensitive to the choice of base materials for the approximation of soft tissues. The distributions of CT numbers of liver, muscle and kidney tissues overlapped; for example, a voxel containing muscle could be misclassified as liver in 42 cases out of 100. This suggests that pure thresholding is insufficient as a method for tissue classification of soft tissues and that more advanced methods should be used.
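A minimal sketch of the generic three-material decomposition idea follows: linear attenuation coefficients measured at two photon energies, together with the constraint that the three volume fractions sum to one, give a 3x3 linear system for the fractions. The base-material attenuation values below are made-up illustrative numbers, and this generic formulation is an assumption of the sketch, not the DIRA algorithm itself.

    import numpy as np

    def three_material_decomposition(mu_meas, mu_base):
        """Solve for volume fractions (w1, w2, w3) from LACs measured at two energies.
        mu_meas: [mu(E1), mu(E2)] of the voxel
        mu_base: 2x3 array, column j = [mu_j(E1), mu_j(E2)] of base material j
        Constraint: w1 + w2 + w3 = 1 (volume conservation)."""
        A = np.vstack([mu_base, np.ones(3)])   # 3x3 system matrix
        b = np.array([mu_meas[0], mu_meas[1], 1.0])
        return np.linalg.solve(A, b)

    # illustrative (made-up) LAC values in 1/cm at two effective energies
    mu_base = np.array([[0.28, 0.21, 0.35],    # E1: water-like, adipose-like, bone-like
                        [0.20, 0.16, 0.24]])   # E2
    print(three_material_decomposition([0.27, 0.195], mu_base))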
5

Numerical Stability in Linear Programming and Semidefinite Programming

Wei, Hua January 2006 (has links)
We study numerical stability for interior-point methods applied to Linear Programming (LP) and Semidefinite Programming (SDP). We analyze the difficulties inherent in current methods and present robust algorithms. We start with the error bound analysis of the search directions for the normal equation approach for LP. Our error analysis explains the surprising fact that ill-conditioning is not a significant problem for the normal equation system. We also explain why most of the popular LP solvers have a default stop tolerance of only 10^-8 when the machine precision on a 32-bit computer is approximately 10^-16. We then propose a simple alternative approach for the normal equation based interior-point method. This approach has better numerical stability than the normal equation based method. Although our approach is not competitive in terms of CPU time for the NETLIB problem set, we do obtain higher accuracy. In addition, we obtain significantly smaller CPU times compared to the normal equation based direct solver when we solve well-conditioned, huge, and sparse problems using our iterative linear solver. Additional techniques discussed are: crossover, a purification step, and no backtracking. Finally, we present an algorithm to construct SDP problem instances with prescribed strict complementarity gaps. We then introduce two measures of strict complementarity gaps. We empirically show that: (i) these measures can be evaluated accurately; (ii) the size of the strict complementarity gap correlates well with the number of iterations for the SDPT3 solver, as well as with the local asymptotic convergence rate; and (iii) large strict complementarity gaps, coupled with the failure of Slater's condition, correlate well with loss of accuracy in the solutions. In addition, the numerical tests show that there is no correlation between the strict complementarity gaps and the geometrical measure used in [31], or with Renegar's condition number.
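As background, a minimal sketch follows of forming the normal-equation matrix A D^2 A^T (with D^2 = diag(x/s)) that arises in a primal-dual interior-point step for LP, and of how its condition number grows as the iterates approach complementarity. The tiny random problem and the split of the iterate into large and small components are illustrative assumptions; the sketch is not the alternative approach proposed in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 3, 6
    A = rng.standard_normal((m, n))

    def normal_equation_matrix(A, x, s):
        """M = A * diag(x/s) * A^T, the normal-equation matrix of a primal-dual IPM step."""
        return (A * (x / s)) @ A.T

    # As the duality gap shrinks, some x_i/s_i -> 0 and others -> infinity,
    # so M becomes increasingly ill-conditioned.
    for mu in (1e-1, 1e-4, 1e-8):
        x = np.where(np.arange(n) % 2 == 0, 1.0, mu)   # illustrative split of the iterate
        s = np.where(np.arange(n) % 2 == 0, mu, 1.0)
        M = normal_equation_matrix(A, x, s)
        print(f"mu = {mu:.0e}, cond(M) = {np.linalg.cond(M):.2e}")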
6

A numerically stable model for simulating high frequency conduction block in nerve fiber

Kieselbach, Rebecca 26 July 2011 (has links)
Previous studies performed on myelinated nerve fibers have shown that a high-frequency alternating current stimulus can block impulse conduction. The current threshold at which block occurs increases as the blocking frequency increases. Cable models based on the Hodgkin-Huxley model are consistent with these results. Recent experimental studies on unmyelinated nerve have shown that at higher frequencies the block threshold decreases; when the block threshold is plotted as a function of frequency, the resulting graph is distinctly non-monotonic. No published model currently explains this behavior, and the physiological mechanisms that create it are unknown. This difference between myelinated and unmyelinated block thresholds at high frequencies could have numerous clinical applications, such as chronic pain management. A large body of literature has shown that the specific capacitance of biological tissue decreases at frequencies in the kHz range or higher. Prior research has shown that introducing a frequency-dependent capacitance (FDC) into the Hodgkin-Huxley model attenuates the block threshold at higher frequencies, but not to the extent seen in the experiments. That model was limited by the methods used to solve its higher-order partial differential equation. The purpose of this thesis project is to develop a numerically stable method of incorporating the FDC into the model and to examine its effect on block threshold. The final, modified model will also be compared to the original model to ensure that the fundamental characteristics of action potential propagation remain unchanged.
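To make the frequency-dependent-capacitance idea concrete, a minimal sketch follows that evaluates a single-pole Debye-type relaxation model of membrane specific capacitance, in which the capacitance falls off above a corner frequency in the kHz range. The functional form and all parameter values are illustrative assumptions and are not the model developed in this thesis.

    import numpy as np

    def debye_capacitance(f_hz, c_inf=0.5e-6, delta_c=0.5e-6, tau=1.0e-4):
        """Illustrative single-pole (Debye) dispersion of specific capacitance [F/cm^2]:
        C(f) = c_inf + delta_c / (1 + (2*pi*f*tau)^2).  All parameter values are assumed."""
        w = 2.0 * np.pi * np.asarray(f_hz, dtype=float)
        return c_inf + delta_c / (1.0 + (w * tau) ** 2)

    for f in (1e2, 1e3, 1e4, 1e5):             # 0.1 kHz to 100 kHz
        print(f"{f:8.0f} Hz -> {debye_capacitance(f) * 1e6:.3f} uF/cm^2")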
7

A discontinuous Petrov-Galerkin method for seismic tomography problems

Bramwell, Jamie Ann 06 November 2013 (has links)
The imaging of the interior of the Earth using ground motion data, or seismic tomography, has been a subject of great interest for over a century. The full elastic wave equations are not typically used in standard tomography codes. Instead, the elastic waves are idealized as rays, and only phase velocities and travel times are considered as input data. This results in the inability to resolve features that are on the order of one wavelength in scale. To overcome this problem, models that use the full elastic wave equation and consider complete seismograms as input data have recently been developed. Unfortunately, those methods are much more computationally expensive and are only in their infancy. While the finite element method is very popular in many applications in solid mechanics, it is still not the method of choice in many seismic applications due to high pollution error. The pollution effect creates an increasing ratio of discretization error to best-approximation error as the wave number increases, and it has been shown that standard finite element methods cannot overcome this issue. To compensate, the meshes for solving high-wave-number problems in seismology must be increasingly refined, which is computationally infeasible at the required scales. A new generalized least squares method was recently introduced. The main idea is to select test spaces such that the discrete problem inherits the stability of the continuous problem. In this dissertation, a discontinuous Petrov-Galerkin (DPG) method with optimal test functions for 2D time-harmonic seismic tomography problems is developed. First, the abstract DPG framework and key results are reviewed. 2D DPG methods for both static and time-harmonic elasticity problems are then introduced, and results indicating the low-pollution property are shown. Finally, a matrix-free inexact-Newton method for the seismic inverse problem is developed. To conclude, results obtained from both DPG and standard continuous Galerkin discretization schemes are compared, and the potential effectiveness of DPG as a practical seismic inversion tool is discussed. / text
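To illustrate the pollution effect mentioned above, a minimal sketch follows that evaluates the standard discrete dispersion relation of linear finite elements (with consistent mass) for the 1D Helmholtz equation: the phase error between the exact and discrete wavenumbers grows with k even when the resolution kh is held fixed. The 1D setting and the specific formula are textbook assumptions, not results from this dissertation.

    import numpy as np

    def discrete_wavenumber(k, h):
        """Discrete wavenumber k_h of standard linear FEM (consistent mass) for the
        1D Helmholtz equation u'' + k^2 u = 0 on a uniform mesh of size h:
        cos(k_h * h) = (6 - 2 k^2 h^2) / (6 + k^2 h^2)."""
        kh2 = (k * h) ** 2
        return np.arccos((6.0 - 2.0 * kh2) / (6.0 + kh2)) / h

    # fix the resolution (kh = 0.5, about 12 elements per wavelength) and increase k
    for k in (10.0, 100.0, 1000.0):
        h = 0.5 / k
        err = abs(k - discrete_wavenumber(k, h))
        print(f"k = {k:7.1f}, phase error |k - k_h| = {err:.4e}")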
8

Finite element analysis and experimental study of metal powder compaction

KASHANI ZADEH, HOSSEIN 23 September 2010 (has links)
In metal powder compaction, density non-uniformity due to friction can be a source of flaws. Currently in industry, a uniform density distribution is achieved by optimizing punch motions through trial and error. This method is both costly and time consuming. Over the last decade, the finite element (FE) method has received significant attention as an alternative to the trial-and-error method; however, there is still a lack of an accurate and robust material model for the simulation of metal powder compaction. In this study, the Cam-clay and Drucker-Prager cap (DPC) material models were implemented in the commercial FE software ABAQUS/Explicit using the user subroutine VUMAT. The Cam-clay model was shown to be appropriate for simple geometries. The DPC model is a pressure-dependent, non-smooth, multi-yield-surface material model with a high curvature in the cap yield surface. This high curvature tends to result in instability issues; a sub-increment technique was implemented to address this instability problem. The DPC model also shows instability problems at the intersection of the yield surfaces; this problem was solved using the corner region employed in DPC material models for soils. The computational efficiency of the DPC material model was improved using a novel technique to solve the constitutive equations. In a case study it was shown that the numerical technique leads to a 30% decrease in computational cost, while degrading the accuracy of the analysis by only 0.4%. The forward Euler method was shown to be accurate in the integration of the constitutive equations using an error control scheme. Experimental tests were conducted in which cylindrical parts were compacted from Distaloy AE iron-based powder to a final density of 7.0 g/cm3. To measure local density, metallography and image processing were used. The FE results were compared to experimental results, and it was shown that the FE analysis predicted local relative density within 2% of the actual experimental density. / Thesis (Ph.D, Mechanical and Materials Engineering) -- Queen's University, 2010-09-23 12:15:27.371
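A minimal sketch of the generic sub-increment idea mentioned above follows: a strain increment is split into m equal sub-steps, and an explicit (forward-Euler) constitutive update is applied per sub-step, with m chosen from the size of the increment. The simple 1D elastic-plastic law with linear hardening and the step-selection rule below are illustrative assumptions, not the DPC implementation of the thesis.

    import numpy as np

    def subincremented_update(sigma, deps, E, H, sigma_y, deps_max=1e-4):
        """Integrate a 1D elastic-plastic law (linear isotropic hardening) over a
        strain increment deps using m explicit sub-increments. Illustrative only."""
        m = max(1, int(np.ceil(abs(deps) / deps_max)))   # number of sub-increments
        d = deps / m
        for _ in range(m):
            trial = sigma + E * d                        # elastic trial stress
            f = abs(trial) - sigma_y                     # yield function
            if f > 0.0:                                  # plastic correction
                dgamma = f / (E + H)
                sigma = trial - E * dgamma * np.sign(trial)
                sigma_y += H * dgamma                    # hardening
            else:
                sigma = trial
        return sigma, sigma_y

    sigma, sigma_y = 0.0, 250.0e6
    for step in range(5):
        sigma, sigma_y = subincremented_update(sigma, 5e-4, E=200e9, H=2e9, sigma_y=sigma_y)
    print(sigma / 1e6, "MPa")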
