51. Analysis of Thermal Conductivity in Composite Adhesives. Bihari, Kathleen L. 08 August 2001
(Under the direction of H. Thomas Banks.) Thermally conductive composite adhesives are desirable in many industrial applications, including computers, microelectronics, machinery and appliances. These composite adhesives are formed when a filler particle of high conductivity is added to a base adhesive. Typically, adhesives are poor thermal conductors. Experimentally, only small improvements in the thermal properties of the composite adhesives over the base adhesives have been observed. A thorough understanding of heat transfer through a composite adhesive would aid in the design of a thermally conductive composite adhesive that has the desired thermal properties.

In this work, we study design methodologies for thermally conductive composite adhesives. We present a three-dimensional model for heat transfer through a composite adhesive based on its composition and on the experimental method for measuring its thermal properties. For proof of concept, we reduce our model to a two-dimensional model. We present numerical solutions to our two-dimensional model based on a composite silicone and investigate the effect of the particle geometry on the heat flow through this composite. We also present homogenization theory as a tool for computing the "effective thermal conductivity" of a composite material.

We prove existence, uniqueness and continuous dependence theorems for our two-dimensional model. We formulate a parameter estimation problem for the two-dimensional model and present numerical results. We first estimate the thermal conductivity parameters as constants, and then use a probability-based approach to estimate the parameters as realizations of random variables. A theoretical framework for the probability-based approach is outlined.

Based on the results of the parameter estimation problem, we are led to formally derive sensitivity equations for our system. We investigate the sensitivity of our composite silicone with respect to the thermal conductivity of both the base silicone polymer and the filler particles. Numerical results of this investigation are also presented.
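
To illustrate the kind of quantity homogenization targets, the sketch below computes the classical series and parallel (Wiener) bounds and the Maxwell-Garnett effective-medium estimate for the effective thermal conductivity of a particle-filled adhesive. These are standard textbook effective-medium formulas, not the homogenized coefficients derived in the thesis, and the parameter values are illustrative.

```python
def effective_conductivity_estimates(k_matrix, k_filler, volume_fraction):
    """Classical effective-medium estimates for a particle-filled adhesive.

    Returns the Wiener (series/parallel) bounds and the Maxwell-Garnett
    estimate for spherical filler particles.  Illustrative only; the thesis
    computes the effective conductivity via homogenization of its 2D model.
    """
    f, km, kf = volume_fraction, k_matrix, k_filler
    parallel = (1 - f) * km + f * kf                 # upper (arithmetic) bound
    series = 1.0 / ((1 - f) / km + f / kf)           # lower (harmonic) bound
    maxwell_garnett = km * (kf + 2 * km + 2 * f * (kf - km)) / (kf + 2 * km - f * (kf - km))
    return series, maxwell_garnett, parallel

# Example: a silicone base (~0.2 W/m K) filled with alumina (~30 W/m K) at 30% volume.
print(effective_conductivity_estimates(0.2, 30.0, 0.3))
```

At this loading the Maxwell-Garnett estimate stays close to the lower bound, which is consistent with the small experimental improvements the abstract mentions.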

52. Early Termination Strategies in Sparse Interpolation Algorithms. Lee, Wen-shin. 04 December 2001
A black box polynomial is an object that takes as input a value for each variable and evaluates the polynomial at the given input. The process of determining the coefficients and terms of a black box polynomial is the problem of black box polynomial interpolation. Two major approaches address this problem: dense algorithms, whose computational complexities are sensitive to the degree of the target polynomial, and sparse algorithms, which take advantage of the situation when the number of non-zero terms in a designated basis is small. In this dissertation we cover power, Chebyshev, and Pochhammer term bases. However, a sparse algorithm is less efficient when the target polynomial is dense, and both approaches require as input an upper bound on either the degree or the number of non-zero terms. By introducing randomization into existing algorithms, we demonstrate and develop a probabilistic approach which we call "early termination." In particular, we prove that, with high probability of correctness, the early termination strategy makes different polynomial interpolation algorithms "smart" by adapting to the degree or to the number of non-zero terms during the process when either is not supplied as an input. Based on the early termination strategy, we describe new efficient univariate algorithms that race a dense against a sparse interpolation algorithm in order to exploit the superiority of one of them. We apply these racing algorithms as the univariate interpolation procedure needed in Zippel's multivariate sparse interpolation method. We enhance the early termination approach with thresholds, and present insights into other such heuristic improvements. Some potential of the early termination strategy is observed for computing a sparse shift, where a polynomial becomes sparse through shifting the variables by a constant.
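
A minimal sketch of the early-termination idea for dense Newton interpolation follows. Evaluation points are chosen at random; once a few consecutive leading divided differences vanish, the interpolant is, with high probability, complete, so no degree bound is needed. The function name and the default threshold are illustrative, and the sketch works over the rationals rather than the finite fields a real implementation would use.

```python
import random
from fractions import Fraction

def newton_early_termination(black_box, threshold=2, max_points=100):
    """Dense Newton interpolation of a univariate black-box polynomial.

    Points are chosen at random; the loop stops once `threshold` consecutive
    leading divided differences are zero, which at random points happens with
    high probability only after the degree has been reached.  Returns the
    evaluation points and the Newton coefficients f[x0], f[x0,x1], ...
    """
    points, coeffs = [], []
    row = []                            # last anti-diagonal of the divided-difference table
    zero_run = 0
    for _ in range(max_points):
        x = Fraction(random.randint(1, 10**9))
        while x in points:              # avoid a repeated point (division by zero)
            x = Fraction(random.randint(1, 10**9))
        new_row = [Fraction(black_box(x))]
        for j in range(len(points)):
            new_row.append((new_row[j] - row[j]) / (x - points[len(points) - 1 - j]))
        points.append(x)
        row = new_row
        coeffs.append(row[-1])          # leading coefficient of the current interpolant
        zero_run = zero_run + 1 if row[-1] == 0 else 0
        if zero_run >= threshold:
            return points, coeffs
    raise RuntimeError("no early termination within max_points evaluations")

# Example: the black box hides f(x) = 3x^5 - x + 7; its degree is discovered on the fly.
pts, cs = newton_early_termination(lambda x: 3 * x**5 - x + 7)
print(len(cs) - 1 - 2)   # recovered degree: coefficient count, minus the two trailing zeros, minus one
```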

53. A Distributed Parameter Liver Model of Benzene Transport and Metabolism in Humans and Mice - Developmental, Theoretical, and Numerical Considerations. Gray, Scott Thomas. 03 December 2001
(Under the direction of Hien T. Tran.) In the Clean Air Act of 1970, the U.S. Congress names benzene a hazardous air pollutant and directs certain government agencies to regulate public exposure. Court battles over subsequent regulations have led to the need for quantitative risk assessment techniques. Models for human exposure to various chemicals exist, but most current models assume the liver is well-mixed. This assumption does not recognize (most significantly) the spatial distribution of enzymes involved in benzene metabolism.

The development of a distributed parameter liver model that accounts for benzene transport and metabolism is presented. The mathematical model consists of a parabolic system of nonlinear partial differential equations and enables the modeling of convection, diffusion, and reaction within the liver. Unlike the commonly used well-mixed model, this distributed parameter model has the capacity to accommodate spatial variations in enzyme distribution.

The system of partial differential equations is formulated in a weak or variational setting that provides natural means for the mathematical and numerical analysis. In particular, general well-posedness results of Banks and Musante for a class of abstract nonlinear parabolic systems are applied to establish well-posedness for the benzene distributed liver model. Banks and Musante also presented theoretical results for a general least squares parameter estimation problem. They included a convergence result for the Galerkin approximation scheme used in our numerical simulations as a special case.

Preliminary investigations on the qualitative behavior of the distributed liver model have included simulations with orthograde and retrograde blood flow through mouse liver tissue. Simulations of human exposure with the partial differential equation model and the existing ordinary differential equation model are presented and compared. Finally, the dependence of the solution on model parameters is explored.

Full text: http://www.lib.ncsu.edu/etd/public/etd-1742831110113360/etd.pdf
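
The sketch below sets up the kind of equation the distributed model is built from: a single one-dimensional convection-diffusion-reaction equation for benzene concentration along a flow path, with Michaelis-Menten metabolism, discretized by the method of lines. It is not the thesis's full nonlinear parabolic system or its Galerkin scheme; the geometry, boundary conditions, and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: velocity v, diffusivity D, Michaelis-Menten Vmax and Km,
# inflow concentration c_in, domain length L.
v, D, Vmax, Km, c_in, L = 1.0, 0.01, 0.5, 0.2, 1.0, 1.0
N = 100
dx = L / N

def rhs(t, c):
    """Method-of-lines right-hand side: upwind convection, central diffusion, MM reaction."""
    c_left = np.concatenate(([c_in], c[:-1]))    # inflow Dirichlet value at x = 0
    c_right = np.concatenate((c[1:], [c[-1]]))   # zero-gradient outflow at x = L
    convection = -v * (c - c_left) / dx
    diffusion = D * (c_right - 2 * c + c_left) / dx**2
    reaction = -Vmax * c / (Km + c)
    return convection + diffusion + reaction

sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(N), method="BDF", rtol=1e-6)
print("outflow concentration at t = 2:", sol.y[-1, -1])
```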

54. Preconditioning KKT Systems. Haws, John Courtney. 25 March 2002
This research presents new preconditioners for linear systems. We proceed from the most general case to the very specific problem area of sparse optimal control.

In the first, most general approach, we assume only that the coefficient matrix is nonsingular. We target highly indefinite, nonsymmetric problems that cause difficulties for preconditioned iterative solvers, and where standard preconditioners, like incomplete factorizations, often fail. We experiment with nonsymmetric permutations and scalings aimed at placing large entries on the diagonal in the context of preconditioning for general sparse matrices. Our numerical experiments indicate that the reliability and performance of preconditioned iterative solvers are greatly enhanced by such preprocessing.

Secondly, we present two new preconditioners for KKT systems. KKT systems arise in areas such as quadratic programming, sparse optimal control, and mixed finite element formulations. Our preconditioners approximate a constraint preconditioner with incomplete factorizations for the normal equations. Numerical experiments compare these two preconditioners with exact constraint preconditioning and the approach described above of permuting large entries to the diagonal.

Finally, we turn to a specific problem area: sparse optimal control. Many optimal control problems are broken into several phases, and within a phase, most variables and constraints depend only on nearby variables and constraints. However, free initial and final times and time-independent parameters impact variables and constraints throughout a phase, resulting in dense factored blocks in the KKT matrix. We drop fill due to these variables to reduce density within each phase. The resulting preconditioner is tightly banded and nearly block tridiagonal. Numerical experiments demonstrate that the preconditioners are effective, with very little fill in the factorization.
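
As a point of reference, here is a minimal sketch of exact constraint preconditioning for a KKT system: the (1,1) block is replaced by its diagonal while the constraint blocks are kept, and the resulting preconditioner is applied inside GMRES. The thesis's preconditioners instead use incomplete factorizations for the normal equations; the matrices and sizes below are synthetic.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 200, 80                               # variables and constraints (synthetic sizes)

# Synthetic KKT system  K = [[H, A^T], [A, 0]]  with H symmetric and A of full row rank.
H = sp.random(n, n, density=0.02, random_state=1)
H = H + H.T + sp.diags(rng.uniform(1.0, 2.0, n))
A = sp.hstack([sp.identity(m), sp.random(m, n - m, density=0.05, random_state=2)], format="csr")
K = sp.bmat([[H, A.T], [A, None]], format="csc")

# Constraint preconditioner: keep the constraints exactly, approximate H by its diagonal.
G = sp.diags(H.diagonal())
P = sp.bmat([[G, A.T], [A, None]], format="csc")
P_factor = spla.splu(P)                      # exact factorization here; the thesis approximates
                                             # this with incomplete factors of the normal equations
M = spla.LinearOperator(K.shape, P_factor.solve)

b = rng.standard_normal(n + m)
x, info = spla.gmres(K, b, M=M)
print("GMRES converged" if info == 0 else f"GMRES info = {info}")
```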

55. On 4-Regular Planar Hamiltonian Graphs. High, David. 01 May 2006
In order to research knots with large crossing numbers, one would like to be able to select a random knot from the set of all knots with n crossings with as close to uniform probability as possible. The underlying graph of a knot diagram can be viewed as a 4-regular planar graph. The existence of a Hamiltonian cycle in such a graph is necessary in order to use the graph to compute an upper bound on rope length for a given knot. The algorithm to generate such graphs is discussed and an exact count of the number of graphs is obtained. In order to allow for such a count, a somewhat technical definition of graph equivalence is used. The main result of the thesis is an asymptotic estimate of how fast the number of graphs with n vertices (crossings) grows with n.
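
A small sketch of the two checks involved, using the octahedron (which is 4-regular, planar, and Hamiltonian) as a stand-in for the underlying graph of a knot diagram; the backtracking search is illustrative and only practical for small graphs.

```python
import networkx as nx

def hamiltonian_cycle(G):
    """Backtracking search for a Hamiltonian cycle; returns a vertex list or None."""
    nodes = list(G.nodes)
    start, n = nodes[0], len(nodes)

    def extend(path, visited):
        if len(path) == n:
            return path if G.has_edge(path[-1], start) else None
        for v in G.neighbors(path[-1]):
            if v not in visited:
                result = extend(path + [v], visited | {v})
                if result:
                    return result
        return None

    return extend([start], {start})

G = nx.octahedral_graph()                         # 4-regular, planar, Hamiltonian
assert all(d == 4 for _, d in G.degree())         # 4-regularity
assert nx.check_planarity(G)[0]                   # planarity
print(hamiltonian_cycle(G))
```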

56. Hedging Contingent Claims in Markets with Jumps. Kennedy, J. Shannon. 20 September 2007
Contrary to the Black-Scholes paradigm, an option-pricing model which incorporates the possibility of jumps more accurately reflects the evolution of stocks in the real world. However, hedging a contingent claim in such a model is a non-trivial issue: in many cases, an infinite number of hedging instruments are required to eliminate the risk of an option position. This thesis develops practical techniques for hedging contingent claims in markets with jumps. Both regime-switching and jump-diffusion models are considered.
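
For context, the sketch below simulates terminal stock prices under Merton's jump-diffusion model (lognormal jumps arriving at Poisson times) and prices a European call by Monte Carlo. It is the simulation engine a hedging experiment would sit on top of, not one of the thesis's hedging techniques, and every parameter value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative risk-neutral parameters for Merton's jump-diffusion model.
S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
lam, mu_J, sigma_J = 0.75, -0.1, 0.15           # jump intensity and lognormal jump parameters
kappa = np.exp(mu_J + 0.5 * sigma_J**2) - 1.0   # expected relative jump size E[e^J - 1]

n_paths = 200_000
N = rng.poisson(lam * T, n_paths)                                    # number of jumps per path
Z = rng.standard_normal(n_paths)                                     # diffusion shock
J = mu_J * N + sigma_J * np.sqrt(N) * rng.standard_normal(n_paths)   # summed jump sizes
log_ST = (np.log(S0) + (r - lam * kappa - 0.5 * sigma**2) * T
          + sigma * np.sqrt(T) * Z + J)
ST = np.exp(log_ST)

call_price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))
print(f"Monte Carlo call price: {call_price:.3f}")
```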

57. Comparison of Approximation Schemes in Stochastic Simulation Methods for Stiff Chemical Systems. Wells, Chad. January 2009
Interest in stochastic simulations of chemical systems is growing. One aspect of the simulation of chemical systems that has been a prime focus over the past few years is accelerated simulation methods applicable when there is a separation of time scales. With so many new methods being developed, we have decided to look at four methods that we consider to be the main foundation of this research area.
The four methods that will be the focus of this thesis are: the slow scale stochastic
simulation algorithm, the quasi steady state assumption applied to the stochastic
simulation algorithm, the nested stochastic simulation algorithm and the implicit
tau leaping method. These four methods are designed to deal with stiff chemical
systems so that the computational time is decreased from that of the "gold
standard" Gillespie algorithm, the stochastic simulation algorithm.
These approximation methods will be tested against a variety of stiff examples, such as a fast reversible dimerization, a network of isomerizations, a fast species acting as a catalyst, an oscillatory system and a bistable system. These methods will also be tested against examples that are only marginally stiff, where the time-scale separation is not as distinct.
From the results of testing stiff examples, the slow scale SSA was typically the
best approximation method to use. The slow scale SSA was highly accurate and
extremely fast in comparison with the other methods. We also found for certain
cases, where the time scale separation was not as distinct, that the nested SSA was
the best approximation method to use.
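
For reference, here is a minimal sketch of the exact "gold standard" method that these four schemes approximate: Gillespie's direct-method stochastic simulation algorithm, applied to a reversible dimerization 2S <-> D. The rate constants, initial counts, and stopping time are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reversible dimerization  2S -> D (rate c1),  D -> 2S (rate c2); counts x = [S, D].
c1, c2 = 0.01, 1.0
x = np.array([1000, 0])
stoich = np.array([[-2, +1],       # 2S -> D
                   [+2, -1]])      # D -> 2S

def propensities(x):
    S, D = x
    return np.array([c1 * S * (S - 1) / 2.0, c2 * D])

t, t_end = 0.0, 1.0
while t < t_end:
    a = propensities(x)
    a0 = a.sum()
    if a0 == 0.0:
        break                                   # no reaction can fire
    t += rng.exponential(1.0 / a0)              # time to the next reaction
    reaction = rng.choice(len(a), p=a / a0)     # which reaction fires
    x = x + stoich[reaction]

print("final counts [S, D]:", x)
```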

58. A Multilevel Method for Image Segmentation. Au, Adley. January 2010
Image segmentation is a branch of computer vision that has received a considerable
amount of interest in recent years. Segmentation describes a process that divides or partitions the pixels of a digital image into groups that correspond to the entities represented in the image. One such segmentation method is the Segmentation by Weighted Aggregation algorithm (SWA). Inspired by Algebraic Multigrid (AMG), the SWA algorithm provides a fast multilevel method for image segmentation.
The SWA algorithm takes a graph-based approach to the segmentation problem. Given
an image Ω, the weighted undirected graph A = (N,E) is constructed with each pixel corresponding to a node in N and each weighted edge connecting neighbouring nodes in E. The edge weight between nodes is calculated as a function of the difference in intensity between connected pixels.
To determine whether a group of pixels should be declared as a segment in the SWA
algorithm, a new scale-invariant measure of the saliency of the group of pixels is introduced. This measure determines the saliency of a potential segment as the ratio between the segment's average similarity to its neighbours and its internal similarity. For complex images, intensity alone is not sufficient to provide a suitable segmentation. The SWA algorithm provides a way to improve the segmentation by incorporating other vision cues
such as texture, shape and colour.
The SWA algorithm with the new scale-invariant saliency measure was implemented
and its performance was tested on simple test images and more complex aerial-view images.
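
A minimal sketch of the two graph-construction ingredients described above: intensity-based affinities between 4-connected pixels, and a saliency score for a candidate segment taken as the ratio of its average boundary affinity to its average internal affinity. The exponential weight, the parameter alpha, and this particular normalization are illustrative rather than the exact SWA definitions.

```python
import numpy as np

def neighbor_pairs(shape):
    """Index pairs of 4-connected neighbours in a (rows, cols) image, flattened."""
    rows, cols = shape
    idx = np.arange(rows * cols).reshape(rows, cols)
    horizontal = np.stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()], axis=1)
    vertical = np.stack([idx[:-1, :].ravel(), idx[1:, :].ravel()], axis=1)
    return np.vstack([horizontal, vertical])

def saliency(image, mask, alpha=10.0):
    """Boundary-to-internal affinity ratio of the candidate segment given by `mask`.

    Low values mean the segment is weakly coupled to its surroundings relative
    to its own coherence, i.e. it is a salient segment.
    """
    I = image.ravel().astype(float)
    m = mask.ravel().astype(bool)
    pairs = neighbor_pairs(image.shape)
    w = np.exp(-alpha * np.abs(I[pairs[:, 0]] - I[pairs[:, 1]]))   # affinity per edge
    inside = m[pairs[:, 0]] & m[pairs[:, 1]]
    boundary = m[pairs[:, 0]] ^ m[pairs[:, 1]]
    return w[boundary].mean() / w[inside].mean()

# Example: a bright square on a dark background is a highly salient segment.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
seg = np.zeros((32, 32), dtype=bool)
seg[8:24, 8:24] = True
print(saliency(img, seg))        # close to 0: strong internal, weak boundary affinity
```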

59. Adaptive finite element methods for linear-quadratic convection dominated elliptic optimal control problems. January 2010
The numerical solution of linear-quadratic elliptic optimal control problems requires the solution of a coupled system of elliptic partial differential equations (PDEs), consisting of the so-called state PDE, the adjoint PDE and an algebraic equation. Adaptive finite element methods (AFEMs) attempt to locally refine a base mesh in such a way that the solution error is minimized for a given discretization size. This is particularly important for the solution of convection dominated problems where inner and boundary layers in the solutions to the PDEs need to be sufficiently resolved to ensure that the solution of the discretized optimal control problem is a good approximation of the true solution.
This thesis reviews several AFEMs based on energy-norm error estimates for single convection dominated PDEs and extends them to the solution of the coupled system of convection dominated PDEs arising from the optimality conditions for optimal control problems.
Keywords: Adaptive finite element methods, optimal control problems, convection-diffusion equations, local refinement, error estimation.
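
As an illustration of the adaptive loop's marking step, the sketch below implements Dörfler (bulk) marking: given elementwise error indicators, it selects a minimal set of elements carrying a fixed fraction of the total estimated error, and those elements are then refined. The indicator values and the bulk parameter theta are illustrative; the thesis's contribution lies in the energy-norm indicators for the coupled state and adjoint system, not in this generic marking rule.

```python
import numpy as np

def doerfler_marking(eta, theta=0.5):
    """Return indices of a minimal element set whose squared indicators
    carry at least a fraction `theta` of the total squared error estimate."""
    order = np.argsort(eta)[::-1]                  # largest indicators first
    eta2_sorted = eta[order] ** 2
    cumulative = np.cumsum(eta2_sorted)
    n_marked = np.searchsorted(cumulative, theta * eta2_sorted.sum()) + 1
    return order[:n_marked]

# Example: indicators concentrated in a boundary layer lead to local refinement there.
eta = np.array([0.01, 0.02, 0.5, 0.45, 0.03, 0.02, 0.4, 0.01])
print(doerfler_marking(eta, theta=0.6))            # marks the two largest-error elements
```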

60. Implicitly Restarted DEIM_Arnoldi: An inner product free Krylov method for eigenproblems. January 2010
This thesis proposes an inner product free Krylov method called Implicitly Restarted DEIM_Arnoldi (IRD) to solve large-scale eigenvalue problems. The algorithm is based on the Implicitly Restarted Arnoldi (IRA) scheme, which is very efficient for solving eigenproblems. IRA uses the Arnoldi factorization, which requires inner products. In contrast, IRD employs the Discrete Empirical Interpolation Method (DEIM) and the DEIM_Arnoldi algorithm to avoid inner products, thereby resulting in faster running times for large eigenproblems. Furthermore, IRD may be able to greatly reduce the latency caused by inner products in parallel computation. This work conducts many numerical experiments to compare the performance of IRD and IRA in serial computation, and discusses possible ways to avoid the need for communication in parallel computation.
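
For contrast, here is a minimal sketch of the standard Arnoldi factorization that IRA builds on; the inner products in the Gram-Schmidt step (the `V[:, i] @ w` products and the norms) are exactly the operations an inner product free variant such as DEIM_Arnoldi seeks to avoid. The matrix and the subspace dimension are illustrative.

```python
import numpy as np

def arnoldi(A, v0, k):
    """Arnoldi factorization  A V_k = V_k H_k + h_{k+1,k} v_{k+1} e_k^T  via
    modified Gram-Schmidt.  Each orthogonalization step below costs inner products."""
    n = len(v0)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w           # inner product (the serial/parallel bottleneck)
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)     # another inner product, via the norm
        if H[j + 1, j] == 0:
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Ritz values from a 30-dimensional Krylov subspace approximate dominant eigenvalues.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 500))
V, H = arnoldi(A, rng.standard_normal(500), 30)
ritz = np.linalg.eigvals(H[:30, :30])
print(ritz[np.argsort(np.abs(ritz))][-3:])  # a few Ritz values of largest magnitude
```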