501 |
Investigations of stream-aquifer interactions using a coupled surface-water and ground-water flow model
Vionnet, Leticia Beatriz; Maddock, Thomas, III; Goodrich, David C.
A finite element numerical model is developed for the modeling of coupled
surface-water flow and ground-water flow. The mathematical treatment of subsurface
flows follows the confined aquifer theory or the classical Dupuit approximation for
unconfined aquifers whereas surface-water flows are treated with the kinematic wave
approximation for open channel flow. A detailed discussion of the standard approaches to
represent the coupling term is provided. In this work, a mathematical expression similar
to Ohm's law is used to simulate the interacting term between the two major hydrological
components. Contrary to the standard approach, the coupling term is incorporated
through a boundary flux integral that arises naturally in the weak form of the governing
equations rather than through a source term. It is found that in some cases, a branch cut
needs to be introduced along the internal boundary representing the stream in order to
define a simply connected domain, which is an essential requirement in the derivation of
the weak form of the ground-water flow equation. The fast time scale characteristic of
surface-water flows and the slow time scale characteristic of ground-water flows are
clearly established, leading to the definition of three dimensionless parameters, namely, a
Peclet number that inherits the disparity between both time scales, a flow number that
relates the pumping rate and the streamflow, and a Biot number that relates the
conductance at the river-aquifer interface to the aquifer conductance.
The model, implemented in the Bill Williams River Basin, reproduces the observed
streamflow patterns and the ground-water flow patterns. Fairly good results are obtained
using multiple time steps in the simulation process.
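The Ohm's-law-style coupling described above can be sketched as a linear exchange flux driven by the head difference across the streambed. The conductance value and heads below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def stream_aquifer_flux(h_stream, h_aquifer, conductance):
    """Linear (Ohm's-law-like) exchange flux per unit stream length.

    Positive flux means water moves from the stream into the aquifer.
    """
    return conductance * (h_stream - h_aquifer)

# Illustrative values (not from the thesis): heads in metres,
# conductance in m^2/day per metre of stream.
h_stream = np.array([102.0, 101.5, 101.0])
h_aquifer = np.array([100.0, 101.8, 100.5])
C = 0.8

q = stream_aquifer_flux(h_stream, h_aquifer, C)
# Losing reaches have h_stream > h_aquifer (q > 0); gaining reaches
# have q < 0.
```

In the model itself this flux enters through a boundary integral along the internal stream boundary rather than as a source term.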
|
502 |
Physically motivated registration of diagnostic CT and PET/CT of lung volumes
Baluwala, Habib. January 2013.
Lung cancer is a disease affecting millions of people every year and poses a serious threat to global public health. Accurate lung cancer staging is crucial to choose an appropriate treatment protocol and to determine prognosis; this requires the acquisition of a contrast-enhanced diagnostic CT (d-CT) that is usually followed by a PET/CT scan. Information from both the d-CT and PET scan is used by the clinician in the staging process; however, these images are not intrinsically aligned because they are acquired on different days and on different scanners. Establishing anatomical correspondence, i.e., aligning the d-CT and the PET images, is an inherently difficult task due to the absence of a direct relationship between the intensities of the images. The CT acquired during the PET/CT scan is used for attenuation correction (AC-CT) and is implicitly aligned with the PET image, as they are acquired at the same time using a hybrid scanner. Patients are required to maintain shallow breathing for both scans. In contrast, the d-CT image is acquired after the injection of a contrast agent, and patients are required to inhale maximally for a better view of the lungs. Differences in the AC-CT and d-CT image volumes are thus due to differences in breath-hold positions and image contrast. Nonetheless, both images are from the same modality. In this thesis, we present a new approach that aligns the d-CT with the PET image through an indirect registration process that uses the AC-CT. The deformation field obtained after the registration of the AC-CT to the d-CT is used to align the PET image to the d-CT. Conventional image registration techniques deform the entire image using homogeneous regularization without taking into consideration the physical properties of the various anatomical structures. This homogeneous regularization may lead to physiologically and physically implausible deformations.
To register the d-CT and AC-CT images, we developed a 3D registration framework based on a fluid transformation model including three physically motivated properties: (i) sliding motion of the lungs against the pleura; (ii) preservation of rigid structures; and (iii) preservation of topology. The sliding motion is modeled using a direction-dependent regularization that decouples the tangential and the normal components of the external force term. The rigid shape of the bones is preserved using a spatially varying filter for the deformations. Finally, the topology is maintained using the concept of log-unbiased deformations. To solve the multi-modal registration problem caused by the contrast agent present in the d-CT but absent in the AC-CT, we use local cross correlation (LCC) as the similarity measure. To illustrate and validate the proposed registration framework, different intra-patient CT datasets are used, including the NCAT phantom, EMPIRE10 and POPI datasets. Results show that our proposed registration framework provides improved alignment and physically motivated deformations when compared to the classic elastic and fluid registration techniques. The final goal of our work was to demonstrate the clinical utility of our new approach that aligns d-CT and PET/AC-CT images for fusion. We apply our method to ten real patients. Our results show that the PET images have much improved alignment with the d-CT images using our proposed registration technique. Our method was successful in providing a good overlap of the lungs, improved alignment of the tumours and a lower target registration error for landmarks in comparison to the classic fluid registration. The main contribution of this thesis is the development of a comprehensive registration framework that integrates important physical properties into a state-of-the-art transformation model, with application to lung imaging in cancer.
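The direction-dependent decoupling used for the sliding-motion term can be sketched as splitting a vector field at the lung boundary into normal and tangential components, which can then be regularized differently. This is a minimal illustration of the decomposition only, not the thesis's full regularizer:

```python
import numpy as np

def decompose_along_normal(force, normal):
    """Split a vector field into components normal and tangential to a surface.

    force:  (N, 3) array of force/displacement vectors at boundary voxels
    normal: (N, 3) array of outward normals at the same voxels
    """
    n = normal / np.linalg.norm(normal, axis=1, keepdims=True)
    f_normal = np.sum(force * n, axis=1, keepdims=True) * n
    f_tangential = force - f_normal
    return f_normal, f_tangential

# Toy example: one voxel with its normal along z.
f = np.array([[1.0, 2.0, 3.0]])
n = np.array([[0.0, 0.0, 1.0]])
fn, ft = decompose_along_normal(f, n)
# fn = [0, 0, 3]; ft = [1, 2, 0]. A sliding model can smooth ft only
# along the interface while keeping fn coupled (illustrative choice).
```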
|
503 |
Multilevel Monte Carlo for jump processes
Xia, Yuan. January 2013.
This thesis consists of two parts. The first part (Chapters 2-4) considers multilevel Monte Carlo for option pricing in finite activity jump-diffusion models. We use a jump-adapted Milstein discretisation for constant-rate cases, combined with the thinning method for bounded state-dependent rate cases. Multilevel Monte Carlo estimators are constructed for Asian, lookback, barrier and digital options. The computational efficiency is numerically demonstrated and analytically justified. The second part (Chapter 5) deals with option pricing problems in exponential Lévy models where the increments of the underlying process can be directly simulated. We discuss several examples: Variance Gamma, Normal Inverse Gaussian and alpha-stable processes, and present numerical experiments of multilevel Monte Carlo for Asian, lookback and barrier options, where the running maximum of the Lévy process involved in lookback and barrier payoffs is approximated by a discretely monitored maximum. To analytically verify the computational complexity of the multilevel method, we also prove some upper bounds on the L<sup>p</sup> convergence rate of the discrete monitoring error for a broad class of Lévy processes.
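The multilevel Monte Carlo idea can be sketched for a European call under geometric Brownian motion, a simple stand-in for the jump-diffusion and Lévy models above: fine and coarse Euler paths share the same Brownian increments, so the level corrections have small variance. The payoff, parameters and level counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def level_estimator(level, n_samples, T=1.0, s0=1.0, r=0.05, sigma=0.2, K=1.0):
    """Estimate E[P_l - P_{l-1}] for a European call under GBM,
    coupling fine and coarse Euler paths via shared Brownian increments."""
    nf = 2 ** level                  # fine time steps
    dt_f = T / nf
    dW = rng.normal(0.0, np.sqrt(dt_f), size=(n_samples, nf))
    s_f = np.full(n_samples, s0)
    for i in range(nf):
        s_f = s_f * (1 + r * dt_f + sigma * dW[:, i])
    payoff_f = np.exp(-r * T) * np.maximum(s_f - K, 0.0)
    if level == 0:
        return payoff_f.mean()
    nc = nf // 2                     # coarse time steps
    dt_c = T / nc
    dWc = dW[:, 0::2] + dW[:, 1::2]  # sum pairs of fine increments
    s_c = np.full(n_samples, s0)
    for i in range(nc):
        s_c = s_c * (1 + r * dt_c + sigma * dWc[:, i])
    payoff_c = np.exp(-r * T) * np.maximum(s_c - K, 0.0)
    return (payoff_f - payoff_c).mean()

# MLMC estimate: telescoping sum of level corrections over levels 0..4.
estimate = sum(level_estimator(l, 20000) for l in range(5))
```

In practice the number of samples per level is chosen from estimated level variances to balance bias and statistical error; fixed counts are used here for brevity.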
|
504 |
Efficient numerical methods for the solution of coupled multiphysics problems
Asner, Liya. January 2014.
Multiphysics systems with interface coupling are used to model a variety of physical phenomena, such as arterial blood flow, air flow around aeroplane wings, or interactions between surface and ground water flows. Numerical methods enable the practical application of these models through computer simulations. Specifically, a high level of detail and accuracy is achieved in finite element methods by discretisations that use extremely large numbers of degrees of freedom, rendering the solution process challenging from the computational perspective. In this thesis we address this challenge by developing a twofold strategy for improving the efficiency of standard finite element coupled solvers. First, we propose to solve a monolithic coupled problem using block-preconditioned GMRES with a new Schur complement approximation. This results in a modular and robust method which significantly reduces the computational cost of solving the system. In particular, numerical tests show mesh-independent convergence of the solver for all the considered problems, suggesting that the method is well-suited to solving large-scale coupled systems. Second, we derive an adjoint-based formula for goal-oriented a posteriori error estimation, which leads to a time-space mesh refinement strategy. The strategy produces a mesh tailored to a given problem and quantity of interest. The monolithic formulation of the coupled problem allows us to obtain expressions for the error in the Lagrange multiplier, which often represents a physically relevant quantity, such as the normal stress on the interface between the problem components. This adaptive refinement technique provides an effective tool for controlling the error in the quantity of interest and/or the size of the discrete system, which may be limited by the available computational resources.
The solver and the mesh refinement strategy are both successfully employed to solve a coupled Stokes-Darcy-Stokes problem modelling flow through a cartridge filter.
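Block-preconditioned GMRES for a monolithic saddle-point system can be sketched as below. The toy 2x2 block system and the simple diagonal-based Schur complement approximation are illustrative choices, not the thesis's preconditioner:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy saddle-point-like system  [A  B^T; B  -C] [u; p] = [f; g].
n, m = 50, 20
rng = np.random.default_rng(1)
A = sp.diags(2.0 + rng.random(n)) + sp.eye(n)       # SPD (1,1) block
B = sp.random(m, n, density=0.2, random_state=1)
C = sp.eye(m) * 0.1
K = sp.bmat([[A, B.T], [B, -C]], format="csr")
rhs = rng.random(n + m)

# Block triangular preconditioner with a cheap Schur approximation
# S ~ -(C + B diag(A)^{-1} B^T)  (illustrative, not the thesis's).
Ainv_diag = sp.diags(1.0 / A.diagonal())
S = -(C + B @ Ainv_diag @ B.T)
A_solve = spla.factorized(sp.csc_matrix(A))
S_solve = spla.factorized(sp.csc_matrix(S))

def apply_prec(x):
    """Apply the inverse of the block lower-triangular preconditioner."""
    u, p = x[:n], x[n:]
    pu = A_solve(u)
    pp = S_solve(p - B @ pu)
    return np.concatenate([pu, pp])

M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = spla.gmres(K, rhs, M=M, atol=1e-10)
```

A good Schur approximation is what makes the iteration count mesh-independent; here A is diagonal, so the approximation is in fact exact and GMRES converges in very few iterations.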
|
505 |
Computing with functions in two dimensions
Townsend, Alex. January 2014.
New numerical methods are proposed for computing with smooth scalar and vector valued functions of two variables defined on rectangular domains. Functions are approximated to essentially machine precision by an iterative variant of Gaussian elimination that constructs near-optimal low rank approximations. Operations such as integration, differentiation, and function evaluation are particularly efficient. Explicit convergence rates are shown for the singular values of differentiable and separately analytic functions, and examples are given to demonstrate some paradoxical features of low rank approximation theory. Analogues of QR, LU, and Cholesky factorizations are introduced for matrices that are continuous in one or both directions, deriving a continuous linear algebra. New notions of triangular structures are proposed and the convergence of the infinite series associated with these factorizations is proved under certain smoothness assumptions. A robust numerical bivariate rootfinder is developed for computing the common zeros of two smooth functions via a resultant method. Using several specialized techniques, the algorithm can accurately find the simple common zeros of two functions with polynomial approximants of high degree (≥ 1,000). Lastly, low rank ideas are extended to linear partial differential equations (PDEs) with variable coefficients defined on rectangles. When these ideas are used in conjunction with a new one-dimensional spectral method, the resulting solver is spectrally accurate and efficient, requiring O(n<sup>2</sup>) operations for rank 1 partial differential operators, O(n<sup>3</sup>) for rank 2, and O(n<sup>4</sup>) for rank ≥ 3 to compute an n × n matrix of bivariate Chebyshev expansion coefficients for the PDE solution. The algorithms in this thesis are realized in a software package called Chebfun2, which is an integrated two-dimensional component of Chebfun.
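The Gaussian-elimination-based low rank construction can be sketched on a sampled function: repeatedly subtract the rank-1 term defined by the largest remaining entry. The pivoting strategy and test function below are illustrative simplifications of what Chebfun2 actually does (which works with function evaluations rather than a fixed grid):

```python
import numpy as np

def low_rank_ge(F, rank, tol=1e-13):
    """Greedy Gaussian elimination: approximate matrix F (samples of f(x,y))
    by a sum of rank-1 terms, pivoting on the largest remaining entry."""
    R = F.copy()
    cols, rows = [], []
    for _ in range(rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if abs(R[i, j]) < tol:
            break                     # residual is negligible; stop early
        cols.append(R[:, j] / R[i, j])
        rows.append(R[i, :].copy())
        R = R - np.outer(cols[-1], rows[-1])
    return np.array(cols).T, np.array(rows)

# Sample a smooth function on a grid; it is numerically of low rank
# (this one is exactly rank 3 as a sum of separable terms).
x = np.linspace(-1, 1, 200)
X, Y = np.meshgrid(x, x)
F = np.cos(X + Y**2) + np.exp(-X**2 - Y**2)

C_, R_ = low_rank_ge(F, rank=20)
err = np.max(np.abs(F - C_ @ R_))
# err is at rounding level because f is a short sum of separable terms.
```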
|
506 |
Relativistic eikonal formalism applied to inclusive quasielastic proton-induced nuclear reactions
Titus, Nortin P-D.
Thesis (PhD (Physics))--University of Stellenbosch, 2011.
ENGLISH ABSTRACT: In this dissertation we present, for the first time, a relativistic distorted wave impulse approximation
formalism to describe quasielastic proton-nucleus scattering. We start from a full many-body description
of the transition matrix element and show systematically how to derive the equivalent two-body
form. This procedure allows for a clear and unambiguous method to introduce relativistic distorted
waves. It is shown that the polarized double differential cross section may be written as the contraction
of two tensors, namely the hadronic tensor (describing the projectile and ejectile) and the polarization
tensor describing the target nucleus. The basic nucleon-nucleon (NN) interaction is described by the
SPVAT or IA1 representation of the NN scattering matrix. Analytical expressions are derived for the
polarization tensor using a Fermi gas model for the target nucleus. The nuclear distortion effects on
the projectile and ejectile are described using the relativistic eikonal formalism. The expression for the
double differential cross section is a nine-dimensional oscillatory integral, and an efficient procedure is
developed to calculate this quantity. A comparison of Gaussian, Monte Carlo and quasi-Monte Carlo
numerical integration schemes reveals that Gaussian quadrature is best suited to this
problem. Traditional Gaussian quadrature is used to generate single-variable functions, which are then
combined with modern software such as MATLAB to complete the computation
of the full multidimensional integral in a reasonable amount of time. Even though the calculation
of the cross section for a single value of the energy transfer is still time consuming, the computational
time can be decreased by spreading the calculational burden across a number of nodes in a cluster
computing system. A test calculation is performed whereby a proton with incident laboratory energy
of 400 MeV is scattered off a 40Ca target nucleus at θcm = 40◦. For this reaction we calculate the
unpolarized double differential cross section, as well as a complete set of spin observables namely Ay,
Dℓ′,ℓ, Ds′s, Dnn,Ds′ℓ and Dℓ′s. We find that the distortions lead to a reduction of the unpolarized
double differential cross section. On the other hand, the spin observables are complex quantities that
show no uniformity in behaviour. However, the differences between the distorted wave spin observables
and the plane wave observables are minor, and we conclude that distortions have little effect on
spin observables.
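The tensor-product Gaussian quadrature strategy described above can be sketched as follows. The integrand, dimension count and node counts are illustrative stand-ins for the nine-dimensional cross-section integral:

```python
import numpy as np
from itertools import product

def tensor_gauss(f, dims, n_nodes, a=-1.0, b=1.0):
    """Integrate f over [a, b]^dims using a tensor product of
    n_nodes-point Gauss-Legendre rules in each dimension."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    # Map nodes/weights from [-1, 1] to [a, b].
    x = 0.5 * (b - a) * x + 0.5 * (b + a)
    w = 0.5 * (b - a) * w
    total = 0.0
    for idx in product(range(n_nodes), repeat=dims):
        pt = np.array([x[i] for i in idx])
        weight = np.prod([w[i] for i in idx])
        total += weight * f(pt)
    return total

# 3-D oscillatory test integrand (a toy stand-in for the 9-D integral).
f = lambda p: np.cos(p.sum())
approx = tensor_gauss(f, dims=3, n_nodes=8, a=0.0, b=1.0)
# The exact value of the triple integral of cos(x+y+z) over [0,1]^3
# is 3 sin(2) - 3 sin(1) - sin(3), matched here to high accuracy.
```

The cost grows as n_nodes**dims, which is why the dissertation generates precomputed single-variable functions and distributes the work across cluster nodes for the nine-dimensional case.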
|
507 |
Numerical Approximation of Reaction and Diffusion Systems in Complex Cell Geometry
Chaudry, Qasim Ali. January 2010.
The mathematical modelling of the reaction and diffusion mechanisms of lipophilic toxic compounds in the mammalian cell is a challenging task because of its considerable complexity and variation in the architecture of the cell. The heterogeneity of the cell regarding the distribution of the enzymes participating in the biotransformation makes the modelling even more difficult. In order to reduce the complexity of the model, and to make it less computationally expensive and numerically treatable, homogenization techniques have been used. The resulting complex system of partial differential equations (PDEs), generated from the model in a 2-dimensional axisymmetric setting, is implemented in Comsol Multiphysics. The numerical results obtained from the model show good agreement with the in vitro cell experimental results. The model can be extended to more complex reaction systems and also to 3-dimensional space. For the reduction of complexity and computational cost, we have implemented a model of mixed PDEs and ordinary differential equations (ODEs), which we refer to as a non-standard compartment model. The model is then further reduced to a system of ODEs only, a standard compartment model. The numerical results of the PDE model have been qualitatively verified using the compartment modelling approach. The quantitative analysis of the results of the compartment model shows that it cannot fully capture the features of the metabolic system considered. Hence a more sophisticated model using PDEs is needed for our homogenized cell model. / Computational Modelling of the Mammalian Cell and Membrane Protein Enzymology
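The compartment-model reduction can be sketched as a small ODE system in which concentrations in a few well-mixed compartments exchange mass at linear rates. The two-compartment layout and rate constants below are illustrative, not the thesis's calibrated model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two well-mixed compartments (e.g. cytosol and organelle) with linear
# exchange and first-order metabolism in the organelle. Rates are
# illustrative assumptions, in 1/time units.
k_in, k_out, k_met = 1.0, 0.5, 0.3

def rhs(t, c):
    c_cyt, c_org = c
    exchange = k_in * c_cyt - k_out * c_org
    return [-exchange, exchange - k_met * c_org]

# Start with all compound in the cytosol and integrate to t = 20.
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], dense_output=True)
c_final = sol.y[:, -1]
# Mass is removed only by metabolism, so both concentrations decay
# towards zero at the slow eigenvalue of the rate matrix.
```

A PDE model resolves spatial gradients inside each region instead of assuming them away, which is exactly the feature the thesis finds the compartment model cannot fully capture.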
|
508 |
Simulation and parameter estimation of spectrophotometric instruments / Simulering och parameterestimering av spektrofotometriska instrument
Avramidis, Stefanos. January 2009.
The paper and the graphics industries use two instruments with different optical geometry (d/0 and 45/0) to measure the quality of paper prints. The instruments have been reported to yield incompatible measurements and even rank samples differently in some cases, causing communication problems between these sectors of industry. A preliminary investigation concluded that the inter-instrument difference could be significantly influenced by external factors (background, calibration, heterogeneity of the medium). A simple methodology for eliminating these external factors and thereby minimizing the instrument differences has been derived. The measurements showed that, when the external factors are eliminated, and there is no fluorescence or gloss influence, the inter-instrument difference becomes small, depends on the instrument geometry, and varies systematically with the scattering, absorption, and transmittance properties of the sample. A detailed description of the impact of the geometry on the results has been presented regarding a large sample range. Simulations with the radiative transfer model DORT2002 showed that the instruments' measurements follow the physical radiative transfer model except in cases of samples with extreme properties. The conclusion is that the physical explanation of the geometrical inter-instrument differences is based on the different degree of light permeation from the two geometries, which eventually results in a different degree of influence from near-surface bulk scattering. It was also shown that the d/0 instrument fulfils the assumptions of a diffuse field of reflected light from the medium only for samples that resemble the perfect diffuser, but it yields an anisotropic field of reflected light when there is significant absorption or transmittance. In the latter case, the 45/0 proves to be less anisotropic than the d/0. In the process, the computational performance of the DORT2002 has been significantly improved.
After the modification of the DORT2002 in order to include the 45/0 geometry, the Gauss-Newton optimization algorithm for the solution of the inverse problem was qualified as the most appropriate one, after testing different optimization methods for performance, stability and accuracy. Finally, a new homotopic initial-value algorithm for routine tasks (spectral calculations) was introduced, which resulted in a further three-fold speedup of the whole algorithm. / PaperOpt, Paper Optics and Colour
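The Gauss-Newton step for the inverse problem can be sketched as follows. The forward model (a simple exponential curve), its parameters and the data are illustrative stand-ins for the DORT2002 setting:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=50):
    """Minimise ||residual(p)||^2 by damped Gauss-Newton iterations."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        t = 1.0
        # Simple step halving for robustness far from the solution.
        while np.linalg.norm(residual(p + t * step)) > np.linalg.norm(r) and t > 1e-6:
            t *= 0.5
        p = p + t * step
    return p

# Toy forward model: y = a * exp(-b * x), a stand-in for a reflectance
# model; (a_true, b_true) and the grid are illustrative.
x = np.linspace(0.0, 2.0, 30)
a_true, b_true = 0.9, 1.5
y_obs = a_true * np.exp(-b_true * x)

residual = lambda p: p[0] * np.exp(-p[1] * x) - y_obs

def jacobian(p):
    e = np.exp(-p[1] * x)
    return np.column_stack([e, -p[0] * x * e])

p_est = gauss_newton(residual, jacobian, p0=[1.0, 1.0])
# p_est recovers (a_true, b_true) for this noise-free toy problem.
```

For spectral calculations, solving many such problems at neighbouring wavelengths is where a homotopic initial value (warm-starting each solve from the previous wavelength's solution) pays off.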
|
509 |
Coarse Graining Monte Carlo Methods for Wireless Channels and Stochastic Differential Equations
Hoel, Håkon. January 2010.
This thesis consists of two papers considering different aspects of stochastic process modelling and the minimisation of computational cost.
In the first paper, we analyse statistical signal properties and develop a Gaussian process model for scenarios with a moving receiver in a scattering environment, as in Clarke's model, with the generalisation that noise is introduced through scatterers randomly flipping on and off as a function of time. The Gaussian process model is developed by extracting mean and covariance properties from the Multipath Fading Channel model (MFC) through coarse graining. That is, we verify that under certain assumptions, signal realisations of the MFC model converge to a Gaussian process, and thereafter compute the Gaussian process' covariance matrix, which is needed to construct Gaussian process signal realisations. The obtained Gaussian process model is, under certain assumptions, less computationally costly, contains more channel information, and has very similar signal properties to its corresponding MFC model. We also study the problem of fitting our model's flip rate and scatterer density to measured signal data.
The second paper generalises a multilevel Forward Euler Monte Carlo method introduced by Giles [1] for the approximation of expected values depending on the solution to an Ito stochastic differential equation. Giles' work [1] proposed and analysed a Forward Euler multilevel Monte Carlo method based on realisations on a hierarchy of uniform time discretisations and a coarse-graining-based control variates idea to reduce the computational effort required by a standard single-level Forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretisations generated by adaptive algorithms developed by Moon et al. [3, 2]. These adaptive algorithms apply either deterministic time steps or stochastic time steps and are based on a posteriori error expansions first developed by Szepessy et al. [4]. Under sufficient regularity conditions, our numerical results, which include one case with singular drift and one with stopped diffusion, exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL⁻³) to O((log(TOL)/TOL)²). We also include an analysis of a simplified version of the adaptive algorithm, for which we prove similar accuracy and computational cost results.
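The scattering-environment signal model can be sketched in the spirit of Clarke's model: a sum of plane waves with random arrival angles produces a Rayleigh-fading complex envelope. The flip-rate noise generalisation of the first paper is omitted, and the scatterer count and Doppler parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def clarke_fading(n_samples, n_scatterers=64, f_d=50.0, f_s=1000.0):
    """Complex fading envelope as a sum of plane waves with random
    arrival angles and phases (Clarke's scattering model).

    f_d: maximum Doppler frequency (Hz); f_s: sampling rate (Hz).
    """
    t = np.arange(n_samples) / f_s
    theta = rng.uniform(0, 2 * np.pi, n_scatterers)   # arrival angles
    phi = rng.uniform(0, 2 * np.pi, n_scatterers)     # initial phases
    doppler = 2 * np.pi * f_d * np.cos(theta)         # per-path Doppler shift
    signal = np.exp(1j * (np.outer(t, doppler) + phi)).sum(axis=1)
    return signal / np.sqrt(n_scatterers)             # unit average power

s = clarke_fading(20000)
mean_power = np.mean(np.abs(s) ** 2)
# For many scatterers the envelope is approximately Rayleigh, with
# time-averaged power close to 1.
```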
|
510 |
Contributions à la modélisation et à la commande des systèmes mécaniques de corps rigides avec contraintes unilatérales [Contributions to the modelling and control of rigid-body mechanical systems with unilateral constraints]
Génot, Frank. 30 January 1998.
This thesis deals with the modelling of systems of rigid bodies subject to a finite number of unilateral constraints. Like any model, this is at best an approximation of reality. For heavy architectures such as walking robots, the large masses of the various segments may call the rigid-body assumption into question, at least at the instants of impact. Note, however, that vibratory phenomena at the moments of impact do not invalidate the rigid-body approach, thanks to a restitution coefficient that explicitly accounts for the energy dissipated by the vibrations. Another potential criticism of the approach we have chosen is that ground contacts may be made through soft structures, for example by attaching rubber soles, for which the Amontons-Coulomb dry friction law would be only a poor approximation (a model including viscous friction would then be necessary). Note also that recent experimental results on impacts of bars, which can serve as a first approximation of a biped's leg striking a rigid ground, show that the very simple Amontons-Coulomb model applied at the impulsive level yields acceptable results.
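The impulsive-level Amontons-Coulomb model mentioned above can be sketched for a single point mass hitting a rigid plane: a Newton restitution coefficient scales the normal velocity, and the tangential impulse is capped by the friction cone. This assumes a fixed slip direction during the impact and is an illustration, not the thesis's full multibody formulation:

```python
import numpy as np

def impact_with_friction(v_minus, e, mu):
    """Post-impact velocity of a point mass hitting the plane z = 0,
    with Newton restitution e on the normal component and an
    Amontons-Coulomb friction impulse on the tangential one."""
    vt, vn = v_minus[:2], v_minus[2]
    assert vn < 0, "the mass must be approaching the plane"
    # Normal impulse (per unit mass) reverses and scales normal velocity.
    p_n = -(1 + e) * vn
    # Coulomb: tangential impulse magnitude is bounded by mu * p_n,
    # directed against the slip.
    speed_t = np.linalg.norm(vt)
    if speed_t <= mu * p_n:
        vt_plus = np.zeros(2)            # slip stops during the impact
    else:
        vt_plus = vt * (1 - mu * p_n / speed_t)
    return np.array([vt_plus[0], vt_plus[1], -e * vn])

v_plus = impact_with_friction(np.array([1.0, 0.0, -2.0]), e=0.5, mu=0.2)
# Normal velocity flips from -2.0 to +1.0; the tangential speed is
# reduced by mu * p_n = 0.6, from 1.0 to 0.4.
```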
|