81. Resultants of Composed Polynomials. Minimair, Manfred. 15 March 2001.
MINIMAIR, MANFRED. Resultants of Composed Polynomials. (Under the direction of Hoon Hong.)

The objective of this research has been to develop methods for computing resultants of composed polynomials efficiently by exploiting their composition structure. By the resultant of several polynomials in several variables (one fewer variable than polynomials) we mean an irreducible polynomial in the coefficients of the polynomials that vanishes if they have a common zero. By a composed polynomial we mean the polynomial obtained from a given polynomial by replacing each variable with a polynomial.

The main motivation for this research comes from the following observations. Resultants of polynomials are computed frequently in many areas of science and in applications because they are fundamentally used in solving systems of polynomial equations. Further, polynomials arising in science and applications are often composed, because humans tend to structure knowledge modularly and hierarchically.

However, most existing mathematical theories do not adequately support composed polynomials, and most algorithms and software libraries ignore the composition structure, suffering enormous blow-up in space and time. Thus, it is important to develop theories and software libraries for efficiently computing resultants of composed polynomials.

The main finding of this research is that resultants of composed polynomials can be nicely factorized: they factor into products of powers of the resultants of the component polynomials and of some of their parts. These factorizations can be used to compute resultants of composed polynomials with dramatically improved efficiency.
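As a concrete illustration of such a factorization, the following SymPy sketch checks the classical univariate special case, in which the resultant of two polynomials composed with the same inner polynomial g factors into a power of the component resultant times a power of the leading coefficient of g. The polynomials f, h, g below are arbitrary examples chosen for illustration; this is not the dissertation's general multivariate result.

```python
# Univariate sanity check (illustrative only): for F = f(g(x)), H = h(g(x)),
#   res(F, H) = lc(g)^(deg f * deg h * deg g) * res(f, h)^(deg g).
import sympy as sp

x, a, b = sp.symbols('x a b')
f = x**2 - a            # component polynomial
h = x**2 - b            # component polynomial
g = 3*x**3 + x + 1      # inner polynomial used for composition

lhs = sp.resultant(f.subs(x, g), h.subs(x, g), x)
rhs = (sp.LC(g, x) ** (sp.degree(f, x) * sp.degree(h, x) * sp.degree(g, x))
       * sp.resultant(f, h, x) ** sp.degree(g, x))

print(sp.expand(lhs - rhs))   # prints 0: the composed resultant factors as claimed
```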

82. Physiologically Based Pharmacokinetic Models for the Systemic Transport of Trichloroethylene. Potter, Laura Kay. 17 May 2001.
Three physiologically based pharmacokinetic (PBPK) models for the systemic transport of inhaled trichloroethylene (TCE) are presented. The major focus of these modeling efforts is the disposition of TCE in adipose tissue, where TCE is known to accumulate. Adipose tissue is highly heterogeneous, with wide variations in fat cell size, lipid composition, blood flow rates and cell permeability. Since TCE is highly lipophilic, the uneven distribution of lipids in the adipose tissue may lead to an uneven distribution of TCE within the fat. These physiological characteristics suggest that the dynamics of TCE in the adipose tissue may depend on spatial variations within the tissue itself.

The first PBPK model for inhaled TCE presented here is a system of ordinary differential equations which includes the standard perfusion-limited compartmental model for each of the adipose, brain, kidney, liver, muscle and remaining tissue compartments. Model simulations predict relatively rapid decreases in TCE fat concentrations following exposure, which may not reflect the accumulation and relative persistence of TCE inside the fat tissue. The second PBPK model is identical to the first except for the adipose tissue compartment, which is modeled as a diffusion-limited compartment. Although this model yields various concentration profiles for TCE in the adipose tissue depending on the value of the permeability coefficient, it may not be physically appropriate for TCE, which is highly lipophilic and has a low molecular weight. Moreover, neither of these two PBPK models is able to capture the spatial variation of TCE concentrations in adipose tissue suggested by the physiology.

The third model we present is a hybrid PBPK model with a dispersion-type model for the transport of TCE in the adipose tissue. The dispersion model is designed to account for the heterogeneities within fat tissue, as well as the corresponding spatial variation of TCE concentrations that may occur. This partial differential equation model is based on the dispersion model of Roberts and Rowland for hepatic uptake and elimination, adapted here for the specific physiology of adipose tissue.

Theoretical results are given for the well-posedness of a general class of abstract nonlinear parabolic systems which includes the TCE PBPK-hybrid model as a special case. Moreover, theoretical issues related to associated general least squares estimation problems are addressed, including the standard deterministic problem and a probability-based identification problem that incorporates variability in parameters across a population. We also establish the theoretical convergence of the Galerkin approximations used in our numerical schemes.

The qualitative behavior of the TCE PBPK-hybrid model is studied using model simulations and parameter estimation techniques. In general, the TCE PBPK-hybrid model can generate various predictions of the dynamics of TCE in adipose tissue by varying the adipose model parameters. These predictions include simulations that are similar to the expected behavior of TCE in the adipose tissue, which involves a rapid increase of TCE adipocyte concentrations during the exposure period, followed by a slow decay of TCE levels over several hours.

Results are presented for several types of parameter estimation problems associated with the TCE PBPK-hybrid model. We test these estimation strategies using two types of simulated data: observations representing TCE concentrations from a single individual, and observations that simulate inter-individual variability. The latter type of data, which is commonly found in toxicokinetic experiments, assumes variability in the parameters across a population and may include observations from multiple individuals. Using both deterministic and probability-based estimation techniques, we demonstrate that the probability-based estimation strategies that incorporate parameter variability may be best suited for estimating adipose model parameters that vary across the population.
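For readers unfamiliar with the compartment structure described above, the following sketch shows the form of a single standard perfusion-limited compartment, here a fat compartment driven by a prescribed arterial concentration. All parameter values and the exposure scenario are assumptions made for illustration, not the calibrated TCE model of the dissertation.

```python
# A minimal perfusion-limited tissue compartment (illustrative only):
#   V_fat * dC_fat/dt = Q_fat * (C_art - C_fat / P_fat),
# with a hypothetical constant arterial concentration during a 4-hour exposure.
import numpy as np
from scipy.integrate import solve_ivp

Q_fat = 0.05   # blood flow to fat (L/min), assumed
V_fat = 3.0    # fat volume (L), assumed
P_fat = 60.0   # fat:blood partition coefficient, assumed (large: lipophilic)

def c_arterial(t):
    """Hypothetical arterial TCE concentration (mg/L): exposure for 240 min."""
    return 1.0 if t < 240.0 else 0.0

def rhs(t, y):
    c_fat = y[0]
    return [Q_fat / V_fat * (c_arterial(t) - c_fat / P_fat)]

sol = solve_ivp(rhs, (0.0, 1440.0), [0.0], max_step=1.0)
print(f"fat concentration at end of exposure: {np.interp(240, sol.t, sol.y[0]):.3f} mg/L")
print(f"fat concentration after 24 h:         {sol.y[0][-1]:.3f} mg/L")
```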

83. Modifications of the DIRECT Algorithm. Gablonsky, Jörg M. 22 May 2001.
GABLONSKY, JÖRG MAXIMILIAN XAVER. Modifications of the DIRECT Algorithm. (Under the direction of Carl Timothy Kelley.)

This work presents improvements of a global optimization method for bound constrained problems, along with theoretical results. These improvements are strongly biased towards local search. The global optimization method known as DIRECT was modified specifically for small-dimensional problems with few global minima. The motivation for our work comes from our theoretical results regarding the behavior of DIRECT. Specifically, we explain how DIRECT clusters its search near a global minimizer. An additional influence is our explanation of DIRECT's behavior for both constant and linear functions. We further improved the effectiveness of both DIRECT and our modification by combining them with another global optimization method known as implicit filtering. In addition to these improvements, the methods were also extended to handle problems where the objective function is defined only on an unknown subset of the bounding box. We demonstrate the increased effectiveness and robustness of our modifications using optimization problems from the natural gas transmission industry, as well as commonly used test problems from the literature.
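SciPy now ships an implementation of DIRECT whose locally_biased option appears to correspond to the locally biased modification (DIRECT-L) discussed here. The sketch below, on a standard bound-constrained test function, is only meant to show how the two variants are invoked; it assumes SciPy 1.9 or later and is not the author's original code.

```python
from scipy.optimize import direct, Bounds
import numpy as np

def styblinski_tang(x):
    """Standard bound-constrained test function with several local minima."""
    x = np.asarray(x)
    return 0.5 * np.sum(x**4 - 16 * x**2 + 5 * x)

bounds = Bounds([-5.0, -5.0], [5.0, 5.0])

res_global = direct(styblinski_tang, bounds, locally_biased=False, maxfun=2000)
res_local  = direct(styblinski_tang, bounds, locally_biased=True,  maxfun=2000)

print("original DIRECT:", res_global.x, res_global.fun)
print("locally biased :", res_local.x, res_local.fun)
```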

84. Benzene and Its Effect on Erythropoiesis: Models, Optimal Controls, and Analyses. Cole, Cammey Elizabeth. 11 June 2001.
Benzene (C6H6) is a highly flammable, colorless liquid. Ubiquitous exposures result from its presence in gasoline vapors, cigarette smoke, and industrial processes. Benzene increases the incidence of leukemia in humans when they are exposed to high doses for extended periods; however, leukemia risks in humans subjected to low exposures are uncertain. The exposure-dose-response relationship of benzene in humans is expected to be nonlinear because benzene undergoes a series of metabolic transformations, detoxifying and activating, that result in multiple metabolites exerting toxic effects on the bone marrow.

We developed a physiologically based pharmacokinetic model for the uptake and elimination of benzene in mice to relate the concentration of inhaled and orally administered benzene to the tissue doses of benzene and its key metabolites: benzene oxide, phenol, and hydroquinone. Analysis was done to examine the existence and uniqueness of solutions of the system. We then formulated an inverse problem to obtain estimates for the unknown parameters, using data from multiple independent laboratories and experiments.

The model was then revised to take into account the zonal distribution of enzymes and metabolism in the liver, rather than assuming the liver is one homogeneous compartment, and extrahepatic metabolism was also considered. Despite the sources of variability, the multicompartment metabolism model simulations matched the data reasonably well in most cases and improved on the original model's results for benzene metabolism and dosimetry.

Since benzene is a known human leukemogen, its toxicity in the bone marrow is of central importance. Because blood cells are produced in the bone marrow, we investigated the effects of benzene on hematopoiesis (blood cell production and development). An age-structured model was used to examine the process of erythropoiesis, the development of red blood cells. Again, the existence and uniqueness of the solution of the system of coupled partial and ordinary differential equations was proven. The system was then written in weak form and reduced from an infinite-dimensional system to a system of ordinary differential equations by employing a finite element formulation. The control of this process is governed by the hormone erythropoietin. The form of the feedback of this hormone is unknown. Although a Hill function has previously been used to represent this feedback, it has no physiological basis, so an optimal control problem was formulated to find the optimal form of the feedback function and to track the total number of mature cells. Numerical results for both forms of the feedback function are presented.
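The Hill-type erythropoietin feedback mentioned above can be written as a decreasing function of the total mature cell population M; the constants in the sketch below are placeholders for illustration only, not values used or estimated in the dissertation.

```python
def hill_feedback(M, a=1.0, k=0.1, r=7.0):
    """Illustrative Hill-type feedback: erythropoietin production falls as the
    total number of mature red blood cells M rises (a, k, r are placeholders)."""
    return a / (1.0 + k * M**r)
```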

85. An Application of a Reduced Order Computational Methodology for Eddy Current Based Nondestructive Evaluation Techniques. Joyner, Michele Lynn. 11 June 2001.
In the field of nondestructive evaluation, new and improved techniques are constantly being sought to facilitate the detection of hidden corrosion and flaws in structures such as airplanes and pipelines. In this dissertation, we explore the feasibility of detecting such damage through an eddy current based technique and reduced order modeling.

We begin by developing a model for a specific eddy current method in which we make simplifying assumptions that reduce the three-dimensional problem to a two-dimensional problem (for proof of concept). Theoretical results are then presented which establish the existence and uniqueness of solutions, as well as continuous dependence of the solution on the parameters that represent the damage. We further discuss theoretical issues concerning the least squares parameter estimation problem used in identifying the geometry of the damage.

To solve the identification problem, an optimization algorithm is employed which requires solving the forward problem numerous times. To implement these methods in a practical setting, the forward algorithm must be solved with extremely fast and accurate solution methods. Therefore, in constructing these computational methods we employ reduced order proper orthogonal decomposition (POD) techniques, which allow one to create a set of basis elements spanning a data set consisting of either numerical simulations or experimental data.

We investigate two different approaches to forming the POD approximation: a POD/Galerkin technique and a POD/interpolation technique. We examine the error in the approximation using one approach versus the other, and present results of the parameter estimation problem for both techniques.

Finally, results of the parameter estimation problem are given using both simulated data with relative noise added and experimental data obtained using a giant magnetoresistive (GMR) sensor. The experimental results are based on successfully using actual experimental data to form the POD basis elements (instead of numerical simulations), illustrating the effectiveness of this method on a wide range of applications. In both instances the methods are found to be efficient and robust. Furthermore, the methods were fast; our findings suggest a significant reduction in computational time.
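As a rough illustration of the reduced order ingredient mentioned above, a POD basis is commonly extracted from a snapshot matrix through the singular value decomposition. The snapshot data below are random stand-ins for the simulated or measured eddy current fields, and the energy threshold is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((500, 40))   # 40 snapshots of a 500-node field (stand-in data)

# POD modes are the left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # keep 99.9% of the snapshot energy
pod_basis = U[:, :r]                          # reduced basis of r POD modes

print(f"retained {r} of {snapshots.shape[1]} modes")
```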

86. Patterns of air flow and particle deposition in the diseased human lung. Segal, Rebecca Anne. 05 July 2001.
SEGAL, REBECCA ANNE. Patterns of air flow and particle deposition in the diseased human lung. (Under the direction of Michael Shearer.)

In this work, we investigate particle deposition and air flow in the human lung. In particular, we are interested in how the motion of particulate matter and air is affected by the presence of lung disease. Patients with compromised lung function are more sensitive to air pollution; understanding the extent of that sensitivity can lead to more effective air quality standards. In addition, understanding of air flow and particle trajectories could lead to the development of better aerosol drugs to treat lung diseases. We focus our efforts on two diseases: chronic obstructive pulmonary disease (COPD) and bronchial tumors.

Because COPD affects the majority of airways in a patient with the disease, we are able to take a more global approach to analyzing its effects. Starting from a FORTRAN code which computes total deposition in the lung over the course of one breath, we modified the pre-existing code to account for the difference between healthy subjects and patients with COPD. Using the model, it was possible to isolate the different disease components of COPD and simulate their effects separately. It was determined that the chronic bronchitis component of COPD was responsible for the increased deposition seen in COPD patients.

While COPD affects the whole lung, tumors tend to be localized to one or several airways. This led us to investigate the effects of bronchial tumors in detail within these individual airways. Using a computational fluid dynamics package, FIDAP, we defined a Weibel-type branching network of airways; in particular, we modeled the airways of a four-year-old child. In the work with the tumors, we ran numerous simulations with various initial velocities and tumor locations. It was determined that tumors located on the carinal ridge had the dominant effect on the flow. At higher initial velocities, areas of circulation developed downstream from the tumors. Extensive simulations were run with a 2-D model, and the results were then compared with some initial 3-D simulations.

In developing the FIDAP model, we avoided the complications of flow past the larynx by limiting the model to generations 2-5 of the Weibel lung. We developed a realistic inlet velocity profile to be used as the input into the model. The skewed nature of this inlet profile led to the question of boundary layer development and the determination of the entrance length needed to achieve fully developed parabolic flow. Simple scale analysis of the Navier-Stokes equations did not capture the results we were seeing with the CFD simulations, so we turned to a more quantitative, energy correction analysis to determine the theoretical entrance length.

In conclusion, the presence of disease in the lung has a large effect both on global deposition patterns and on localized airflow patterns. This indicates the need for protocols regarding susceptibility to airborne pollutants that take lung disease into account. It also suggests that treatment should account for changes in airflow in the diseased lung.
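For context on the entrance-length question raised above, the sketch below evaluates the textbook laminar scaling L_e ≈ 0.06 Re D (the simple estimate, not the energy correction analysis of the dissertation) for airway dimensions that are assumed here purely for illustration.

```python
def entrance_length(velocity_m_s, diameter_m, nu_air=1.5e-5):
    """Classical laminar entrance-length estimate L_e ~ 0.06 * Re * D."""
    reynolds = velocity_m_s * diameter_m / nu_air
    return 0.06 * reynolds * diameter_m, reynolds

# Assumed airway dimensions for a small child, chosen only for illustration
L_e, Re = entrance_length(velocity_m_s=1.0, diameter_m=0.004)
print(f"Re = {Re:.0f}, entrance length = {100 * L_e:.1f} cm")
```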

87. Analysis of Thermal Conductivity in Composite Adhesives. Bihari, Kathleen L. 08 August 2001.
BIHARI, KATHLEEN LOUISE. Analysis of Thermal Conductivity in Composite Adhesives. (Under the direction of H. Thomas Banks.)

Thermally conductive composite adhesives are desirable in many industrial applications, including computers, microelectronics, machinery and appliances. These composite adhesives are formed when a filler particle of high conductivity is added to a base adhesive. Typically, adhesives are poor thermal conductors, and experimentally only small improvements in the thermal properties of composite adhesives over the base adhesives have been observed. A thorough understanding of heat transfer through a composite adhesive would aid in the design of a thermally conductive composite adhesive that has the desired thermal properties.

In this work, we study design methodologies for thermally conductive composite adhesives. We present a three-dimensional model for heat transfer through a composite adhesive based on its composition and on the experimental method for measuring its thermal properties. For proof of concept, we reduce our model to a two-dimensional model. We present numerical solutions to our two-dimensional model based on a composite silicone and investigate the effect of the particle geometry on the heat flow through this composite. We also present homogenization theory as a tool for computing the "effective thermal conductivity" of a composite material.

We prove existence, uniqueness and continuous dependence theorems for our two-dimensional model. We formulate a parameter estimation problem for the two-dimensional model and present numerical results. We first estimate the thermal conductivity parameters as constants, and then use a probability-based approach to estimate the parameters as realizations of random variables. A theoretical framework for the probability-based approach is outlined.

Based on the results of the parameter estimation problem, we are led to formally derive sensitivity equations for our system. We investigate the sensitivity of our composite silicone with respect to the thermal conductivity of both the base silicone polymer and the filler particles. Numerical results of this investigation are also presented.
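As a point of reference for the "effective thermal conductivity" mentioned above, the elementary series and parallel (Wiener) bounds for a two-phase composite can be computed directly. These classical bounds are not the homogenization computation of the dissertation, and the conductivity values below are illustrative.

```python
def wiener_bounds(k_matrix, k_filler, volume_fraction):
    """Lower (series) and upper (parallel) bounds on effective conductivity."""
    phi = volume_fraction
    k_parallel = phi * k_filler + (1.0 - phi) * k_matrix
    k_series = 1.0 / (phi / k_filler + (1.0 - phi) / k_matrix)
    return k_series, k_parallel

# Illustrative values: a poorly conducting adhesive with a conductive filler
low, high = wiener_bounds(k_matrix=0.2, k_filler=40.0, volume_fraction=0.3)
print(f"effective conductivity lies between {low:.2f} and {high:.2f} W/(m K)")
```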

88. Early Termination Strategies in Sparse Interpolation Algorithms. Lee, Wen-shin. 04 December 2001.
A black box polynomial is an object that takes as input a value for each variable and evaluates the polynomial at the given input. The process of determining the coefficients and terms of a black box polynomial is the problem of black box polynomial interpolation. Two major approaches address this problem: dense algorithms, whose computational complexities are sensitive to the degree of the target polynomial, and sparse algorithms, which take advantage of the situation when the number of non-zero terms in a designated basis is small. In this dissertation we cover power, Chebyshev, and Pochhammer term bases. However, a sparse algorithm is less efficient when the target polynomial is dense, and both approaches require as input an upper bound on either the degree or the number of non-zero terms. By introducing randomization into existing algorithms, we demonstrate and develop a probabilistic approach that we call "early termination." In particular, we prove that with high probability of correctness the early termination strategy makes different polynomial interpolation algorithms "smart" by adapting to the degree or to the number of non-zero terms during the process when either is not supplied as an input. Based on the early termination strategy, we describe new efficient univariate algorithms that race a dense against a sparse interpolation algorithm in order to exploit the superiority of one of them. We apply these racing algorithms as the univariate interpolation procedure needed in Zippel's multivariate sparse interpolation method. We enhance the early termination approach with thresholds, and present insights into other such heuristic improvements. Some potential of the early termination strategy is also observed for computing a sparse shift, where a polynomial becomes sparse through shifting the variables by a constant.
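A minimal sketch of the early termination idea, for dense Newton interpolation of a univariate black box with exact rational arithmetic: evaluate at random points and stop as soon as the newest Newton coefficient vanishes, which with high probability means the degree has been reached. This simplified version (threshold 1, no racing against a sparse algorithm) is only meant to convey the flavor of the strategy, not to reproduce the dissertation's algorithms.

```python
from fractions import Fraction
import random

def black_box(x):
    """The 'unknown' polynomial, accessible only through evaluations."""
    return 3 * x**5 - x + 7

def newton_coefficient(f_xk, xk, points, coeffs):
    """Next Newton coefficient from f(x_k) and the previously computed ones."""
    value, basis = Fraction(f_xk), Fraction(1)
    for p, c in zip(points, coeffs):
        value -= c * basis
        basis *= (xk - p)
    return value / basis

def interpolate_with_early_termination(f, max_points=50, seed=1):
    rng = random.Random(seed)
    points, coeffs = [], []
    for _ in range(max_points):
        xk = Fraction(rng.randint(-10**6, 10**6))
        ck = newton_coefficient(f(xk), xk, points, coeffs)
        if ck == 0:                 # early termination: degree reached (w.h.p.)
            break
        points.append(xk)
        coeffs.append(ck)
    return points, coeffs

points, coeffs = interpolate_with_early_termination(black_box)
print("recovered degree:", len(coeffs) - 1)   # 5, without any degree bound as input
```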

89. A Distributed Parameter Liver Model of Benzene Transport and Metabolism in Humans and Mice - Developmental, Theoretical, and Numerical Considerations. Gray, Scott Thomas. 03 December 2001.
GRAY, SCOTT THOMAS. A Distributed Parameter Liver Model of Benzene Transport and Metabolism in Humans and Mice - Developmental, Theoretical, and Numerical Considerations. (Under the direction of Hien T. Tran.)

In the Clean Air Act of 1970, the U.S. Congress named benzene a hazardous air pollutant and directed certain government agencies to regulate public exposure. Court battles over subsequent regulations have led to the need for quantitative risk assessment techniques. Models for human exposure to various chemicals exist, but most current models assume the liver is well mixed. This assumption does not recognize, most significantly, the spatial distribution of the enzymes involved in benzene metabolism.

The development of a distributed parameter liver model that accounts for benzene transport and metabolism is presented. The mathematical model consists of a parabolic system of nonlinear partial differential equations and enables the modeling of convection, diffusion, and reaction within the liver. Unlike the commonly used well-mixed model, this distributed parameter model has the capacity to accommodate spatial variations in enzyme distribution.

The system of partial differential equations is formulated in a weak or variational setting that provides natural means for the mathematical and numerical analysis. In particular, general well-posedness results of Banks and Musante for a class of abstract nonlinear parabolic systems are applied to establish well-posedness for the benzene distributed liver model. Banks and Musante also presented theoretical results for a general least squares parameter estimation problem, which include, as a special case, a convergence result for the Galerkin approximation scheme used in our numerical simulations.

Preliminary investigations of the qualitative behavior of the distributed liver model have included simulations with orthograde and retrograde blood flow through mouse liver tissue. Simulations of human exposure with the partial differential equation model and the existing ordinary differential equation model are presented and compared. Finally, the dependence of the solution on model parameters is explored.

Full text: http://www.lib.ncsu.edu/etd/public/etd-1742831110113360/etd.pdf
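To make the transport terms concrete, the following sketch integrates a one-dimensional convection-diffusion-reaction equation of the general kind a distributed parameter (dispersion-type) liver model builds on, discretized by finite differences in space. The parameter values and the Michaelis-Menten sink are illustrative assumptions, and the dissertation itself works in a Galerkin, weak-form setting rather than with this finite difference scheme.

```python
# Illustrative 1-D convection-diffusion-reaction, method of lines:
#   dc/dt = D d2c/dx2 - v dc/dx - vmax * c / (km + c)
import numpy as np
from scipy.integrate import solve_ivp

n, length = 100, 1.0
dx = length / (n - 1)
D, v = 1e-3, 0.05          # dispersion coefficient and convective velocity (assumed)
vmax, km = 0.5, 0.1        # Michaelis-Menten metabolism parameters (assumed)
c_in = 1.0                 # inlet concentration (assumed)

def rhs(t, c):
    dc = np.zeros_like(c)
    # interior nodes: central difference for diffusion, upwind for convection
    dc[1:-1] = (D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
                - v * (c[1:-1] - c[:-2]) / dx
                - vmax * c[1:-1] / (km + c[1:-1]))
    dc[0] = 0.0            # inlet held at c_in (Dirichlet)
    dc[-1] = dc[-2]        # crude outflow condition
    return dc

c0 = np.zeros(n); c0[0] = c_in
sol = solve_ivp(rhs, (0.0, 20.0), c0, method="BDF")
print("outlet concentration at t = 20:", float(sol.y[-1, -1]))
```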

90. Preconditioning KKT Systems. Haws, John Courtney. 25 March 2002.
This research presents new preconditioners for linear systems. We proceed from the most general case to the very specific problem area of sparse optimal control.

In the first, most general approach, we assume only that the coefficient matrix is nonsingular. We target highly indefinite, nonsymmetric problems that cause difficulties for preconditioned iterative solvers, and where standard preconditioners, like incomplete factorizations, often fail. We experiment with nonsymmetric permutations and scalings aimed at placing large entries on the diagonal in the context of preconditioning for general sparse matrices. Our numerical experiments indicate that the reliability and performance of preconditioned iterative solvers are greatly enhanced by such preprocessing.

Secondly, we present two new preconditioners for KKT systems. KKT systems arise in areas such as quadratic programming, sparse optimal control, and mixed finite element formulations. Our preconditioners approximate a constraint preconditioner with incomplete factorizations for the normal equations. Numerical experiments compare these two preconditioners with exact constraint preconditioning and with the approach described above of permuting large entries to the diagonal.

Finally, we turn to a specific problem area: sparse optimal control. Many optimal control problems are broken into several phases, and within a phase most variables and constraints depend only on nearby variables and constraints. However, free initial and final times and time-independent parameters impact variables and constraints throughout a phase, resulting in dense factored blocks in the KKT matrix. We drop fill due to these variables to reduce density within each phase. The resulting preconditioner is tightly banded and nearly block tridiagonal. Numerical experiments demonstrate that the preconditioners are effective, with very little fill in the factorization.
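To illustrate the constraint preconditioning idea referred to in the second part, the sketch below assembles a random saddle-point (KKT) system and applies a constraint preconditioner whose (1,1) block is simply the identity, factored exactly with sparse LU inside GMRES. This is only a toy version of the general idea; the dissertation's preconditioners use incomplete factorizations of the normal equations, which are not reproduced here.

```python
import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import splu, gmres, LinearOperator

rng = np.random.default_rng(0)
n, m = 200, 60
H = sparse.random(n, n, density=0.02, random_state=0)
H = (H + H.T) + 10 * sparse.identity(n)                      # symmetric (1,1) block
A = sparse.random(m, n, density=0.05, random_state=1) + sparse.eye(m, n)  # full-rank constraints

K = sparse.bmat([[H, A.T], [A, None]], format="csc")         # KKT (saddle-point) matrix
b = rng.standard_normal(n + m)

# Constraint preconditioner: replace H by the identity, keep the constraint blocks
P = sparse.bmat([[sparse.identity(n), A.T], [A, None]], format="csc")
P_lu = splu(P)
M = LinearOperator(K.shape, matvec=P_lu.solve)

x, info = gmres(K, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(K @ x - b))
```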