81

Permeability Upscaling Using the DUNE-Framework: Building Open-Source Software for the Oil Industry

Rekdal, Arne January 2009 (has links)
In this thesis, open-source software for permeability upscaling is developed. The software is based on DUNE, an open-source C++ framework for the numerical solution of partial differential equations (PDEs) that provides functionality used in finite element, finite volume, and finite difference methods. Permeability is a measure of the ability of a material to transmit fluids, and determines the flow characteristics in reservoir models. Permeability upscaling is a technique for including fine-scale variations of the permeability field in a coarse-scale reservoir model. The upscaling technique used in this thesis involves solving an elliptic partial differential equation, which is done with mixed and hybrid finite element methods. The mixed method transforms the original second-order PDE into a system of two linear equations. The great advantage of these methods over standard finite element methods is continuity of the variable of interest in the upscaling problem. The hybrid method was introduced to make larger problems tractable: the resulting system of equations can be transformed into a symmetric positive definite system, which in turn can be solved with efficient iterative methods. Efficiency of the implementation is important, and as for most PDE solvers, the computational time is dominated by solving a system of linear equations. This implementation uses an algebraic multigrid (AMG) preconditioner provided with DUNE, which is known to be efficient on systems arising from elliptic PDEs. The efficiency of the AMG preconditioner is compared with other alternatives and is superior to all of them; on the largest problem investigated, the AMG-based solver is almost three times faster than the next best alternative. The performance of the DUNE-based implementation is also compared with an existing implementation by Sintef. Sintef's implementation is based on a mimetic finite difference method, but on the grid type investigated in this thesis the two methods are equivalent. Sintef's implementation uses the proprietary SAMG solver developed by Fraunhofer SCAI to solve the linear system of equations. SAMG is 58% faster than DUNE's solver on a test case consisting of 322,200 unknowns, and its scalability seems to be better than that of DUNE's AMG as the problem size increases. However, a great advantage of DUNE's solver is 50% lower memory usage, measured on a problem consisting of approximately 3 million unknowns. Another advantage is the licensing: both DUNE and the upscaling software developed in this thesis are GPL licensed, which means that anyone is free to improve or adapt the software.
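To make the upscaling procedure concrete, here is a minimal sketch in Python of flow-based permeability upscaling on a 2D block of fine-scale cells: a unit pressure drop is imposed in the x-direction, the elliptic pressure equation is solved with a five-point finite-volume scheme (a stand-in for the mixed/hybrid FEM used in the thesis), and the effective permeability is read off Darcy's law. The grid, boundary conditions, and plain CG solver are illustrative assumptions; the thesis uses DUNE's AMG-preconditioned solvers.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def upscale_kx(perm, dp=1.0):
    """Effective x-permeability of a 2D block of fine-scale cells (unit cells).

    Solves div(k grad p) = 0 with pressure dp on the left face, 0 on the
    right face, and no-flow boundaries in y, then applies Darcy's law.
    """
    ny, nx = perm.shape
    N = nx * ny
    idx = lambda i, j: j * nx + i
    A = sp.lil_matrix((N, N))
    b = np.zeros(N)
    harm = lambda a, c: 2.0 * a * c / (a + c)   # harmonic face transmissibility
    for j in range(ny):
        for i in range(nx):
            row = idx(i, j)
            if i > 0:
                t = harm(perm[j, i], perm[j, i - 1])
                A[row, row] += t; A[row, idx(i - 1, j)] -= t
            else:
                t = 2.0 * perm[j, i]            # half-cell to the left face
                A[row, row] += t; b[row] += t * dp
            if i < nx - 1:
                t = harm(perm[j, i], perm[j, i + 1])
                A[row, row] += t; A[row, idx(i + 1, j)] -= t
            else:
                t = 2.0 * perm[j, i]            # p = 0 on the right face
                A[row, row] += t
            # y-direction: no-flow boundaries, so edge faces are skipped
            if j > 0:
                t = harm(perm[j, i], perm[j - 1, i])
                A[row, row] += t; A[row, idx(i, j - 1)] -= t
            if j < ny - 1:
                t = harm(perm[j, i], perm[j + 1, i])
                A[row, row] += t; A[row, idx(i, j + 1)] -= t
    p, info = spla.cg(A.tocsr(), b)             # SPD system; AMG would fit here
    assert info == 0
    # total flux through the right face, divided by the driving gradient
    flux = sum(2.0 * perm[j, nx - 1] * p[idx(nx - 1, j)] for j in range(ny))
    return flux / (ny * dp / nx)

rng = np.random.default_rng(0)
k = np.exp(rng.normal(size=(16, 16)))           # lognormal fine-scale field
print("effective kx:", upscale_kx(k))
```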
82

Numerical Methods for Nonholonomic Mechanics

Hilden, Sindre Kristensen January 2009 (has links)
We discuss nonholonomic systems in general and numerical methods for solving them. Two different approaches for obtaining numerical methods are considered: discretizing the Lagrange-d'Alembert equations on the one hand, and using the discrete Lagrange-d'Alembert principle to obtain nonholonomic integrators on the other. Among methods using the first approach, we focus on the super partitioned additive Runge-Kutta (SPARK) methods. Among nonholonomic integrators, we focus on a reversible second-order method by McLachlan and Perlmutter. Through several numerical experiments the methods are compared by considering error growth, conservation of energy, geometric properties of the solution, and how well the constraints are satisfied. Of special interest is the comparison of the 2-stage SPARK Lobatto IIIA-B method and the nonholonomic integrator by McLachlan and Perlmutter, both of which are reversible and of second order. We observe a clear connection between energy conservation and the geometric properties of the numerical solution: preserving energy in long-time integrations is important in order to obtain solutions with the correct qualitative properties. Our results indicate that the nonholonomic integrator by McLachlan and Perlmutter sometimes conserves energy better than the 2-stage SPARK Lobatto IIIA-B method. In a recent work by Jay, however, the same two methods are compared and found to conserve energy equally well in long-time integrations.
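As an illustration of the first approach, discretizing the Lagrange-d'Alembert equations, here is a sketch for the classical "nonholonomic particle" (Lagrangian L = (xd^2 + yd^2 + zd^2)/2 with constraint zd = y*xd), where the Lagrange multiplier can be eliminated analytically and the reduced ODE integrated with a standard Runge-Kutta method. This is not the SPARK or McLachlan-Perlmutter integrator, just a baseline showing the quantities (constraint residual, energy drift) on which the thesis compares methods.

```python
import numpy as np

def rhs(s):
    """Lagrange-d'Alembert equations for the nonholonomic particle.

    Constraint force is lam * a(q) with a = (-y, 0, 1); differentiating
    the constraint zd = y*xd gives lam = xd*yd / (1 + y^2)."""
    x, y, z, xd, yd, zd = s
    lam = xd * yd / (1.0 + y * y)
    return np.array([xd, yd, zd, -lam * y, 0.0, lam])

def rk4_step(s, h):
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * h * k1)
    k3 = rhs(s + 0.5 * h * k2)
    k4 = rhs(s + h * k3)
    return s + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# initial state chosen to satisfy the constraint zd = y*xd
s = np.array([0.0, 1.0, 0.0, 1.0, 0.5, 1.0])
h, steps = 0.01, 1000
E0 = 0.5 * np.sum(s[3:] ** 2)    # constraint forces do no work, so E is conserved
for _ in range(steps):
    s = rk4_step(s, h)
x, y, z, xd, yd, zd = s
print("constraint residual:", zd - y * xd)   # only approximately zero for RK4
print("energy drift:", 0.5 * np.sum(s[3:] ** 2) - E0)
```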
83

Numerical solution of buoyancy-driven flow problems

Christensen, Einar Rossebø January 2009 (has links)
The numerical solution of buoyancy-driven flow problems in two spatial dimensions is presented. A high-order spectral method is applied for the spatial discretization, while the temporal discretization is done by operator splitting methods. By solving the convection-diffusion equation, which governs the temperature distribution, a thorough description of both the spatial and the temporal discretization methods is given. A fast direct solver for the arising system of algebraic equations is presented, and the expected convergence rates of both the spatial and the temporal discretizations are verified. As a step towards the Navier-Stokes equations, a solution of the Stokes problem is given, where a splitting-scheme technique is introduced. An extension of this framework is used to solve the incompressible Navier-Stokes equations, which govern the fluid flow. By solving the Navier-Stokes equations and the convection-diffusion equation as a coupled system, two different buoyancy-driven flow problems in two-dimensional enclosures are studied numerically. In the first problem, emphasis is put on the arising fluid flow and the corresponding thermal distribution, while the second problem mainly consists of determining critical parameters for the onset of convection rolls.
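A minimal sketch of the two ingredients named above, Fourier-spectral differentiation in space and operator splitting in time, applied to a 1D periodic convection-diffusion equation. The parameters and the first-order splitting are illustrative; the thesis works in two dimensions with higher-order schemes.

```python
import numpy as np

# Convection-diffusion u_t + c u_x = nu u_xx on a periodic domain, advanced
# by operator splitting: an explicit convection substep with a spectral
# derivative, followed by a diffusion substep solved exactly in Fourier space.
N, c, nu = 128, 1.0, 0.01
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi   # integer wavenumbers
u = np.exp(-10 * (x - np.pi) ** 2)                   # initial temperature bump
dt, steps = 1e-3, 2000

for _ in range(steps):
    # convection substep: u_t = -c u_x, forward Euler with spectral derivative
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    u = u - dt * c * ux
    # diffusion substep: exact damping of each Fourier mode
    u = np.real(np.fft.ifft(np.exp(-nu * k ** 2 * dt) * np.fft.fft(u)))

print("total heat:", u.sum() * 2 * np.pi / N)   # conserved on a periodic domain
```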
84

Precipitation forecasting using Radar Data

Botnen, Tore January 2009 (has links)
The main task of this work is to filter out noise from a series of radar images and to produce short-term precipitation forecasts. It is important that the final routine runs online, yielding new forecasts as radar images arrive. The available data is a time series, arriving at one-hour intervals, from the Rissa radar located in Sør-Trøndelag. Gaussian radial basis functions are introduced to represent the precipitation field, whose movement is governed solely by its velocity field, called advection. By discretizing forward in time, starting from the differential advection equation, prior distributions can be obtained for both the basis functions and the advection. Assuming normally distributed radar errors, the basis functions and advection are conditioned on the associated radar images, which in turn can be fed into the prior distributions, yielding new forecasts. A modification to the model, labeling the basis functions either active or inactive, enables the birth and death of rain showers. The preferred filtering technique is a joint MCMC sampler, but we make some approximations, sampling from a single MCMC sampler, in order to implement a workable online routine. The model yields good results on synthetic data. In the real-data situation the filtered images are satisfying, and the forecast images approximately predict the forthcoming precipitation. The model removes statistical noise efficiently and obtains satisfying predictions. However, due to the approximation in the MCMC algorithm used, the variance is somewhat underestimated. With some further work on the MCMC update scheme, and given a higher frequency of incoming data, it is the author's belief that the model can be a very useful tool in short-term precipitation forecasting. Using gauge data to estimate the radar errors, and merging online gauge data with incoming radar images using block kriging, would further improve the estimates.
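A toy, noise-free version of the field model: precipitation as a sum of Gaussian radial basis functions whose centers are advected by a velocity field. All numbers are invented, and the MCMC conditioning on radar images is omitted; this only illustrates how the forecast step moves the showers.

```python
import numpy as np

def rbf_field(x, y, centers, weights, scale):
    """Precipitation field as a sum of Gaussian radial basis functions."""
    f = np.zeros_like(x, dtype=float)
    for (cx, cy), w in zip(centers, weights):
        f += w * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * scale ** 2))
    return f

centers = np.array([[10.0, 20.0], [30.0, 25.0]])   # shower positions
weights = np.array([5.0, 3.0])                     # shower intensities (mm/h)
advection = np.array([2.0, -1.0])                  # velocity (cells per hour)

xg, yg = np.meshgrid(np.arange(64.0), np.arange(64.0))
for hour in range(3):
    rain = rbf_field(xg, yg, centers, weights, scale=4.0)
    peak = np.unravel_index(rain.argmax(), rain.shape)
    print(f"t+{hour}h: max rain {rain.max():.2f} mm/h at cell {peak}")
    centers = centers + advection                  # forecast step: move showers
```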
85

A comparison of accuracy and computational efficiency between the Finite Element Method and the Isogeometric Analysis for two-dimensional Poisson Problems

Larsen, Per Ståle January 2009 (has links)
For small errors, isogeometric analysis is more efficient than the finite element method. The condition number is lower in isogeometric analysis than in the finite element method. Isogeometric analysis admits general and robust implementation methods. The isogeometric basis has higher continuity than the finite element basis, and is better suited to representing different geometries.
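The higher continuity of the spline basis can be illustrated with the Cox-de Boor recursion: a quadratic B-spline basis on an open knot vector is C^1 across interior knots, whereas a quadratic Lagrange FEM basis is only C^0 between elements. A small sketch, with a knot vector chosen purely for illustration:

```python
import numpy as np

def bspline_basis(i, p, knots, t):
    """Cox-de Boor recursion: i-th B-spline basis function of degree p at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, t))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, t))
    return left + right

# quadratic basis on a uniform open knot vector: C^1 at the interior knots
knots = [0, 0, 0, 1, 2, 3, 4, 4, 4]
ts = np.linspace(0, 4 - 1e-9, 9)           # avoid the half-open right endpoint
for i in range(len(knots) - 3):            # n = len(knots) - p - 1 functions
    print([round(bspline_basis(i, 2, knots, t), 3) for t in ts])
```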
86

Noncommutative Gröbner bases in Polly Cracker cryptosystems

Helde, Andreas January 2009 (has links)
We present the noncommutative version of the Polly Cracker cryptosystem, which is more promising than the commutative version. This is partly because many ideals in a free (noncommutative) algebra have an infinite Gröbner basis, which can be used as the public key in the cryptosystem. We start with a brief review of the commutative case, which ends with the conclusion that the existence of "intelligent" linear algebra attacks leaves such cryptosystems insecure. Further, we see that it is hard to prove that noncommutative ideals have an infinite reduced Gröbner basis for all admissible orders. Nevertheless, in chapter 4 we consider some ideals for which it seems infeasible to compute a finite Gröbner basis. These are considered further in a cryptographic setting, and it is shown that one class of ideals seems more promising than the others with respect to withstanding attacks on the cryptosystem. In fact, at the end of this thesis we propose a way of constructing a cryptosystem based on this class of ideals such that any linear algebra attack will not succeed. However, many of the results are at an experimental level, so a substantial amount of research remains before we can conclude that we have found a secure cryptosystem.
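For background, a toy of the commutative Polly Cracker scheme that the abstract deems insecure, using sympy: the public key is a set of polynomials vanishing at a secret point, encryption hides the message in the ideal they generate, and decryption reduces modulo the secret Gröbner basis, which for a point ideal amounts to evaluating at the point. The polynomials, point, and message are invented for illustration; the noncommutative version replaces the polynomial ring with a free algebra.

```python
from sympy import symbols, expand

x, y = symbols("x y")
# secret point s; the secret Groebner basis of its ideal is {x - 3, y - 5}
s = {x: 3, y: 5}
# public key: polynomials that vanish at s
f1 = expand((x - 3) * y + (y - 5))   # f1(s) = 0
f2 = expand((y - 5) * x ** 2)        # f2(s) = 0

def encrypt(m, h1, h2):
    """Ciphertext c = h1*f1 + h2*f2 + m for a constant message m."""
    return expand(h1 * f1 + h2 * f2 + m)

def decrypt(c):
    # reduction modulo {x - 3, y - 5} is just evaluation at the secret point
    return c.subs(s)

c = encrypt(7, x + y, y - 1)
print(decrypt(c))   # recovers 7
```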
87

Minimal Surfaces in Sub-Riemannian Geometries with Applications to Perceptual Completion

Viddal, Per Martin January 2009 (has links)
A preliminary study of the papers "A Cortical Based Model of Perceptual Completion in the Roto-Translation Space" and "Minimal Surfaces in the Roto-Translation Group with Applications to a Neuro-Biological Image Completion Model" is carried out. The first, written by Citti and Sarti, describes a perceptual completion model in which a part of the visual cortex is modelled using a sub-Riemannian geometry on the Lie group SE(2). The second, written by Hladky and Pauls, describes a model which completes the interior of a circular hole by spanning the lifted boundary with a minimal surface, presuming such a surface exists. These surfaces are solutions for occluded visual data as described by Citti and Sarti. Based on the models above, we propose a new model: the lifted boundary of an arbitrary hole is spanned by a surface consisting of geodesics between points with matching Dirichlet boundary values. All three models are based on the sub-Riemannian geometry of the roto-translation space introduced by Citti and Sarti. The basic theory of sub-Riemannian geometries, including the derivation of some flows and operators in this degenerate space, is described. The models are implemented, and numerical results are presented.
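A sketch of the lift underlying all three models: each image contour point (x, y) is lifted to (x, y, theta) in the roto-translation space, with theta the local level-line orientation (perpendicular to the intensity gradient). The gradient threshold and test image below are arbitrary choices for illustration.

```python
import numpy as np

def lift_to_se2(image):
    """Lift image level lines into the roto-translation space SE(2).

    Each salient pixel (x, y) maps to (x, y, theta), where theta is the
    level-line orientation, i.e. the intensity gradient rotated by 90 deg."""
    gy, gx = np.gradient(image.astype(float))
    theta = np.arctan2(gy, gx) + np.pi / 2
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > 0.1 * mag.max())   # keep salient contour pixels
    return np.column_stack([xs, ys, theta[ys, xs] % np.pi])  # orientation mod pi

# a disc: its boundary lifts to a helix-like curve in (x, y, theta)
yy, xx = np.mgrid[0:64, 0:64]
disc = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
print(lift_to_se2(disc)[:5])
```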
88

Numerical solution of non-local PDEs arising in Finance.

Johnsen, Håkon Berg January 2009 (has links)
It is a well-known fact that the value of an option on an asset following a Lévy jump process can be found by solving a partial integro-differential equation (PIDE). In this project, two new schemes are presented for solving these kinds of PIDEs when the underlying Lévy process is of infinite activity. The infinite-activity jump process leads to a singular Lévy measure, which has important numerical ramifications and needs to be handled with care. The schemes presented calculate the non-local integral operator via a fast Fourier transform (FFT), and an explicit/implicit operator splitting of the local/global operators is performed. Both schemes are of 2nd order on a regular Lévy measure, but the singularity degrades convergence to lie between 1st and 2nd order, depending on the strength of the singularity. On the logarithmically transformed PIDE, the schemes are proven to be consistent, monotone, and stable in $L^\infty$, hence convergent by the Barles-Perthame-Souganidis framework.
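A sketch of the FFT evaluation of the non-local term, here with a finite-activity (Merton-type) Gaussian jump density so that no singularity arises; the infinite-activity measures treated in the thesis require an additional truncation of the small jumps before this step. The grid, payoff, and parameters are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

# Jump term I(x) = int (u(x+y) - u(x)) k(y) dy on a log-price grid,
# evaluated as a cross-correlation via FFT convolution.
L, N = 10.0, 1024
dx = 2 * L / N
x = np.linspace(-L, L, N, endpoint=False)
u = np.maximum(np.exp(x) - 1.0, 0.0)               # call payoff in log-price
lam, delta = 0.5, 0.3                              # jump intensity and width
k = lam * np.exp(-x ** 2 / (2 * delta ** 2)) / (delta * np.sqrt(2 * np.pi))

# int u(x+y) k(y) dy = (u conv reversed-k)(x); boundary cells are truncated
conv = fftconvolve(u, k[::-1], mode="same") * dx
jump_term = conv - u * (k.sum() * dx)              # subtract u(x) * int k dy
print(jump_term[N // 2 - 2:N // 2 + 3])
```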
89

An adaptive isogeometric finite element analysis

Johannessen, Kjetil André January 2009 (has links)
In this thesis we explore the possibility of building a finite element solver for partial differential equations using the isogeometric framework established by Hughes et al. Whereas general B-splines and NURBS only allow for tensor-product refinement, a newer technology called T-splines opens for true local refinement. We give an introduction to T-splines, along with the B-splines and NURBS on which they are built, and present a refinement algorithm which preserves the exact geometry of the T-spline while allowing for more control points in the mesh. For the solver we apply a residual-based a posteriori error estimator to identify the elements which contribute the most to the error, which in turn allows for a fully automatic adaptive refinement scheme. The performance of T-splines is shown to be superior on problems which contain singularities when compared with more traditional splines. Moreover, T-splines combined with a posteriori error estimators are shown to have a very positive effect on badly parametrized models, as they seem to make the solution grid independent of the original parametrization.
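The adaptive loop itself is independent of the spline technology. Here is a minimal 1D stand-in: linear finite elements for -u'' = f with a residual-based element indicator and a simple maximum-strategy marking. The estimator, load, and marking threshold are illustrative assumptions, not those of the thesis.

```python
import numpy as np

def solve_p1(nodes, f):
    """Linear FEM for -u'' = f, u(0) = u(1) = 0, on a nonuniform mesh."""
    n = len(nodes)
    h = np.diff(nodes)
    A = np.zeros((n, n)); b = np.zeros(n)
    for e in range(n - 1):
        A[e:e+2, e:e+2] += (1.0 / h[e]) * np.array([[1, -1], [-1, 1]])
        mid = 0.5 * (nodes[e] + nodes[e + 1])
        b[e:e+2] += f(mid) * h[e] / 2          # midpoint-rule load vector
    A[0, :] = A[-1, :] = 0; A[0, 0] = A[-1, -1] = 1; b[0] = b[-1] = 0
    return np.linalg.solve(A, b)

def estimate(nodes, f):
    """Interior-residual indicator eta_K ~ h_K * ||f||_{L2(K)} per element
    (for P1 elements, u_h'' vanishes inside each element)."""
    h = np.diff(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    return h * np.abs(f(mids)) * np.sqrt(h)

f = lambda x: 1.0 / np.sqrt(np.abs(x - 0.3) + 1e-3)   # near-singular load
nodes = np.linspace(0, 1, 9)
for it in range(6):
    u = solve_p1(nodes, f)
    eta = estimate(nodes, f)
    marked = np.nonzero(eta > 0.5 * eta.max())[0]     # maximum-strategy marking
    mids = 0.5 * (nodes[marked] + nodes[marked + 1])  # bisect marked elements
    nodes = np.sort(np.concatenate([nodes, mids]))
    print(f"iter {it}: {len(nodes)-1} elements, "
          f"max|u_h| {u.max():.4f}, max eta {eta.max():.2e}")
```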
90

Analysis of Longitudinal Data with Missing Values: Methods and Applications in Medical Statistics

Dragset, Ingrid Garli January 2009 (has links)
Missing data is a term used to describe values that are, for some reason, not observed in a dataset. Most standard analysis methods are not feasible for datasets with missing values, and methods that do handle missing data may produce biased and/or imprecise estimates if they are not appropriate. It is therefore important to employ suitable methods when analyzing such data. Cardiac surgery is a procedure suitable for patients suffering from different types of heart disease. It is a physically and psychologically demanding operation for the patients, although the mortality rate is low. Health-related quality of life (HRQOL) is a popular and widespread measurement tool for monitoring the overall situation of patients undergoing cardiac surgery, especially elderly patients with naturally limited life expectancies [Gjeilo, 2009]. There has been growing attention to possible differences between men and women with respect to HRQOL after cardiac surgery, and the literature is not consistent on this topic. Gjeilo et al. [2008] studied HRQOL in patients before and after cardiac surgery with emphasis on differences between men and women. In the period from September 2004 to September 2005, 534 patients undergoing cardiac surgery at St Olavs Hospital were included in the study. HRQOL was measured by the self-reported questionnaires Short-Form 36 (SF-36) and the Brief Pain Inventory (BPI) before surgery and at six and twelve months follow-up. The SF-36 reflects health-related quality of life by measuring eight conceptual domains of health [Loge and Kaasa, 1998]. Some of the patients did not respond to all questions, and there are missing values in the records for about 41% of the patients; women have more missing values than men at all time points. The statistical analyses performed in Gjeilo et al. [2008] employ the complete-case method, which was the most common method of handling missing data until recent years. The complete-case method discards all subjects with unobserved data prior to the analyses. It makes standard statistical analyses accessible and is the default method for handling missing data in several statistical software packages. However, the complete-case method gives correct estimates only if data are missing completely at random, without any relation to other observed or unobserved measurements. This assumption is seldom met, and violations can result in incorrect estimates and decreased efficiency. The focus of this paper is on improved methods for handling missing values in longitudinal data, that is, observations of the same subjects on multiple occasions. Multiple imputation and imputation by expectation maximization are general methods that can be applied with many standard analysis methods and in several missing-data situations. Regression models can also give correct estimates and are available for longitudinal data. In this paper we present the theory of these approaches and their application to the dataset introduced above. The results are compared to the complete-case analyses published in Gjeilo et al. [2008], and the methods are discussed with respect to their ability to handle missing values in this setting. The data of patients undergoing cardiac surgery are analyzed in Gjeilo et al. [2008] with respect to gender differences at each of the measurement occasions: presurgery, six months, and twelve months after the operation. This is done by a two-sample Student's t-test assuming unequal variances, and all patients observed at the relevant occasion are included in the analyses.
Repeated measures ANOVA is used to determine gender differences in the evolution of the HRQOL variables; only patients with fully observed measurements at all three occasions are included in the ANOVA. The methods of expectation maximization (EM) and multiple imputation (MI) are used to obtain plausible complete datasets including all patients. EM gives a single imputed dataset that can be analyzed like the complete-case data. MI gives multiple imputed datasets, where each dataset must be analyzed separately and the estimates combined according to a technique called Rubin's rules, illustrated below. Both Student's t-tests and repeated measures ANOVA can be carried out with these imputation methods. The repeated measures ANOVA can be expressed as a regression equation that describes the improvement of the HRQOL score over time and the variation between subjects. Mixed regression models (MRM) are known to handle longitudinal data with non-responses, and can be extended beyond the repeated measures ANOVA to fit the data more adequately. Several MRM are fitted to the data of the cardiac surgery patients to display their properties and advantages over ANOVA. These models are alternatives to the imputation analyses when the aim is to determine gender differences in the improvement of HRQOL after surgery. The imputation methods and mixed regression models are assumed to handle missing data adequately, and all of these methods give similar results. For some of the HRQOL variables, these results differ from those of the complete-case method when examining gender differences in the improvement of HRQOL after surgery.
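As a concrete piece of the MI workflow, a sketch of Rubin's rules for pooling the m complete-data analyses; the estimates and variances below are invented for illustration.

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool m complete-data analyses from multiply imputed datasets.

    Returns the combined estimate and its total variance
    T = W + (1 + 1/m) * B, where W is the within-imputation variance
    and B the between-imputation variance."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()
    W = u.mean()                  # average within-imputation variance
    B = q.var(ddof=1)             # between-imputation variance
    T = W + (1.0 + 1.0 / m) * B
    return qbar, T

# e.g. a gender difference in an SF-36 score estimated on m = 5 imputed sets
est, var = rubins_rules([2.1, 1.8, 2.4, 2.0, 1.9],
                        [0.36, 0.40, 0.38, 0.35, 0.41])
print(f"pooled estimate {est:.2f}, std.err. {np.sqrt(var):.2f}")
```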
