221

A comparison of accuracy and computational efficiency between the Finite Element Method and the Isogeometric Analysis for two-dimensional Poisson Problems

Larsen, Per Ståle, January 2009
For small errors, isogeometric analysis is more efficient than the finite element method. The condition number is lower in isogeometric analysis than in the finite element method. Isogeometric analysis yields general and robust implementation methods. The isogeometric basis has higher continuity than the finite element basis and is better suited to representing different geometries.
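The higher continuity mentioned above comes from the B-spline basis used in isogeometric analysis. The sketch below evaluates such a basis with the Cox-de Boor recursion on a hypothetical knot vector; it is only meant to illustrate the kind of basis involved, not the implementation compared in the thesis.

```python
def bspline_basis(i, p, knots, t):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p on the given
    knot vector, evaluated at t.  With interior knots of multiplicity one the basis is
    C^{p-1} across element boundaries, whereas standard Lagrange finite element bases are
    only C^0."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    value = 0.0
    if knots[i + p] > knots[i]:
        value += (t - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, t)
    if knots[i + p + 1] > knots[i + 1]:
        value += (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, knots, t)
    return value

# Quadratic basis on an open knot vector; the five functions are C^1 at the interior knots.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
print([round(bspline_basis(i, 2, knots, 1.5), 3) for i in range(5)])
```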
222

Noncommutative Gröbner bases in Polly Cracker cryptosystems

Helde, Andreas, January 2009
We present the noncommutative version of the Polly Cracker cryptosystem, which is more promising than the commutative version. This is partly because many ideals in a free (noncommutative) algebra have an infinite Gröbner basis, which can be used as the public key in the cryptosystem. We start with a brief review of the commutative case, which ends with the conclusion that the existence of "intelligent" linear algebra attacks leaves such cryptosystems insecure. Further, we see that it is hard to prove that noncommutative ideals have an infinite reduced Gröbner basis for all admissible orders. Nevertheless, in chapter 4 we consider some ideals for which it seems infeasible to realize a finite Gröbner basis. These are considered further in a cryptographic setting, and it is shown that one class of ideals seems more promising than the others with respect to countering attacks on the cryptosystem. In fact, at the end of this thesis we propose a way of constructing a cryptosystem based on this class of ideals such that no linear algebra attack will be successful. However, many of the results are at an experimental level, so a substantial amount of research remains before we can conclude that we have found a secure cryptosystem.
223

Minimal Surfaces in Sub-Riemannian Geometries with Applications to Perceptual Completion

Viddal, Per Martin, January 2009
A preliminary study of the papers ``A Cortical Based Model of Perceptual Completion in the Roto-Translation Space'' and ``Minimal Surfaces in the Roto-Translation Group with Applications to a Neuro-Biological Image Completion Model'' is carried out. The first, written by Citti and Sarti, describes a perceptual completion model where a part of the visual cortex is modelled using a sub-Riemannian geometry on the Lie group SE(2). The second, written by Hladky and Pauls, describes a model which completes the interior of a circular hole by spanning the lifted boundary with a minimal surface, presuming such a surface exists. These surfaces are solutions of occluded visual data as described by Citti and Sarti. Based on the models above, we propose a new model: the lifted boundary of an arbitrary hole is spanned by a surface consisting of geodesics between points with matching Dirichlet boundary values. All three models are based on the sub-Riemannian geometry for the roto-translation space introduced by Citti and Sarti. The basic theory of sub-Riemannian geometries, including the derivation of some flows and operators in this degenerate space, is described. The models are implemented, and numerical results are presented.
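For orientation, the roto-translation geometry referred to in these models is usually written as follows (a standard formulation recalled here, not an excerpt from the thesis):

```latex
% Sub-Riemannian structure on the roto-translation group SE(2) = R^2 x S^1,
% as it is commonly stated in the Citti--Sarti setting.
\[
  X_1 = \cos\theta\,\partial_x + \sin\theta\,\partial_y, \qquad
  X_2 = \partial_\theta,
\]
\[
  [X_2, X_1] = -\sin\theta\,\partial_x + \cos\theta\,\partial_y =: X_3,
\]
% The horizontal distribution spanned by X_1 and X_2 is bracket-generating, and minimal
% surfaces are taken with respect to the metric that makes X_1, X_2 orthonormal.
```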
224

Numerical solution of non-local PDEs arising in Finance.

Johnsen, Håkon Berg, January 2009
It is a well-known fact that the value of an option on an asset following a Lévy jump process can be found by solving a Partial Integro-Differential Equation (PIDE). In this project, two new schemes are presented to solve these kinds of PIDEs when the underlying Lévy process is of infinite activity. The infinite-activity jump process leads to a singular Lévy measure, which has important numerical ramifications and needs to be handled with care. The schemes presented calculate the non-local integral operator via a fast Fourier transform (FFT), and an explicit/implicit operator splitting of the local/non-local operators is performed. Both schemes are of second order for a regular Lévy measure, but the singularity degrades convergence to between first and second order, depending on the strength of the singularity. On the logarithmically transformed PIDE, the schemes are proven to be consistent, monotone and stable in $L^\infty$, hence convergent by the Barles-Perthame-Souganidis framework.
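As an illustration of the FFT idea mentioned above, the following sketch evaluates a discretized non-local jump term as a linear convolution; the grid, the quadrature weights and the zero extension of the solution are simplifying assumptions, and the thesis's treatment of the singular part of the Lévy measure is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def levy_jump_term(u, w):
    """Approximate  I[u](x_i) = sum_j w_j * ( u(x_i + j*dx) - u(x_i) )  on a uniform
    log-price grid, where w[0..2M] holds quadrature weights of a truncated/regularized
    Levy density on the symmetric stencil j = -M..M.  Values of u outside the grid are
    taken as zero for simplicity.  The correlation sum is a linear convolution, so the
    FFT brings the cost down to O(n log n) from O(n*M)."""
    u = np.asarray(u, dtype=float)
    w = np.asarray(w, dtype=float)
    corr = fftconvolve(u, w[::-1], mode="same")   # sum_j w_j * u(x_i + j*dx)
    return corr - u * w.sum()

# Example: a smooth payoff-like profile and an (arbitrary) exponentially decaying kernel.
x = np.linspace(-2, 2, 401)
u = np.maximum(np.exp(x) - 1.0, 0.0)
w = np.exp(-np.abs(np.arange(-50, 51))) * (x[1] - x[0])
print(levy_jump_term(u, w)[:5])
```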
225

An adaptive isogeometric finite element analysis

Johannessen, Kjetil André, January 2009
In this thesis we explore the possibility of building a finite element solver for partial differential equations using the isogeometric framework established by Hughes et al. Whereas general B-splines and NURBS only allow for tensor-product refinement, a newer technology called T-splines opens the door to true local refinement. We give an introduction to T-splines, along with the B-splines and NURBS on which they are built, and present a refinement algorithm which preserves the exact geometry of the T-spline while allowing for more control points in the mesh. For the solver we apply a residual-based a posteriori error estimator to identify the elements which contribute the most to the error, which in turn allows for a fully automatic adaptive refinement scheme. The performance of the T-splines is shown to be superior on problems which contain singularities when compared with more traditional splines. Moreover, T-splines together with a posteriori error estimators are shown to have a very positive effect on badly parametrized models, as they seem to make the solution grid independent of the original parametrization.
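The adaptive scheme described above alternates solve, estimate, mark and refine steps. The marking step is sketched below with a bulk (Dörfler) criterion, a common choice that is not necessarily the one used in the thesis; the T-spline refinement step itself is not reproduced.

```python
import numpy as np

def dorfler_mark(eta, theta=0.3):
    """Bulk (Dorfler) marking: return indices of the smallest set of elements whose
    squared error indicators account for a fraction theta of the total estimated error.
    eta is an array of per-element indicators from a residual-based a posteriori estimator."""
    eta2 = np.asarray(eta, dtype=float) ** 2
    order = np.argsort(eta2)[::-1]            # largest contributions first
    cumsum = np.cumsum(eta2[order])
    k = np.searchsorted(cumsum, theta * cumsum[-1]) + 1
    return order[:k]

# Example: mark the elements carrying ~30% of the squared error among 10 indicators.
print(dorfler_mark(np.random.rand(10)))
```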
226

Analysis of Longitudinal Data with Missing Values: Methods and Applications in Medical Statistics

Dragset, Ingrid Garli, January 2009
Missing data is a concept used to describe values that are, for some reason, not observed in a dataset. Most standard analysis methods are not feasible for datasets with missing values, and methods for handling missing data may result in biased and/or imprecise estimates if they are not appropriate. It is therefore important to employ suitable methods when analyzing such data.

Cardiac surgery is a procedure suitable for patients suffering from different types of heart disease. It is a physically and psychologically demanding operation for the patients, although the mortality rate is low. Health-related quality of life (HRQOL) is a popular and widespread measurement tool for monitoring the overall situation of patients undergoing cardiac surgery, especially in elderly patients with naturally limited life expectancies [Gjeilo, 2009]. There has been growing attention to possible differences between men and women with respect to HRQOL after cardiac surgery, but the literature is not consistent regarding this topic. Gjeilo et al. [2008] studied HRQOL in patients before and after cardiac surgery with emphasis on differences between men and women. In the period from September 2004 to September 2005, 534 patients undergoing cardiac surgery at St Olavs Hospital were included in the study. HRQOL was measured by the self-reported questionnaires Short-Form 36 (SF-36) and the Brief Pain Inventory (BPI) before surgery and at six and twelve months follow-up. The SF-36 reflects health-related quality of life by measuring eight conceptual domains of health [Loge and Kaasa, 1998]. Some of the patients have not responded to all questions, and there are missing values in the records for about 41% of the patients. Women have more missing values than men at all time points.

The statistical analyses performed in Gjeilo et al. [2008] employ the complete-case method, which until recent years was the most common way of handling missing data. The complete-case method discards all subjects with unobserved data prior to the analyses. It makes standard statistical analyses accessible and is the default method for handling missing data in several statistical software packages. However, the complete-case method gives correct estimates only if data are missing completely at random, without any relation to other observed or unobserved measurements. This assumption is seldom met, and violations can result in incorrect estimates and decreased efficiency.

The focus of this paper is on improved methods for handling missing values in longitudinal data, that is, observations of the same subjects on multiple occasions. Multiple imputation and imputation by expectation maximization are general methods that can be applied with many standard analysis methods and in several missing-data situations. Regression models can also give correct estimates and are available for longitudinal data. In this paper we present the theory of these approaches and their application to the dataset introduced above. The results are compared to the complete-case analyses published in Gjeilo et al. [2008], and the methods are discussed with respect to their ability to handle missing values in this setting.

The data of patients undergoing cardiac surgery are analyzed in Gjeilo et al. [2008] with respect to gender differences at each of the measurement occasions: before surgery, and six and twelve months after the operation. This is done with a two-sample Student's t-test assuming unequal variances. All patients observed at the relevant occasion are included in these analyses.
Repeated measures ANOVA is used to determine gender differences in the evolution of the HRQOL variables; only patients with fully observed measurements at all three occasions are included in the ANOVA. The methods of expectation maximization (EM) and multiple imputation (MI) are used to obtain plausible complete datasets including all patients. EM gives a single imputed dataset that can be analyzed in the same way as in the complete-case analysis. MI gives multiple imputed datasets, where each dataset must be analyzed separately and the estimates combined according to a technique called Rubin's rules. Both the Student's t-tests and the repeated measures ANOVA can be carried out with these imputation methods.

The repeated measures ANOVA can be expressed as a regression equation that describes the improvement of the HRQOL score over time and the variation between subjects. Mixed regression models (MRM) are known to handle longitudinal data with non-responses, and can be extended beyond the repeated measures ANOVA to fit the data more adequately. Several MRMs are fitted to the data of the cardiac surgery patients to display their properties and their advantages over ANOVA. These models are alternatives to the imputation analyses when the aim is to determine gender differences in the improvement of HRQOL after surgery. The imputation methods and the mixed regression models are assumed to handle missing data in an adequate way, and they give similar results. These results differ from the complete-case results for some of the HRQOL variables when examining gender differences in the improvement of HRQOL after surgery.
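As a small illustration of Rubin's rules mentioned above, the following sketch pools a scalar estimate across imputed datasets; the numbers in the example are made up and do not come from the study.

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Combine per-imputation results by Rubin's rules: for m imputed datasets with point
    estimates q_i and within-imputation variances u_i, the pooled estimate is the mean of
    the q_i and the total variance is the within variance plus (1 + 1/m) times the
    between-imputation variance."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    Q = q.mean()                      # pooled point estimate
    U = u.mean()                      # within-imputation variance
    B = q.var(ddof=1)                 # between-imputation variance
    T = U + (1 + 1 / m) * B           # total variance of the pooled estimate
    return Q, T

# Hypothetical example: five imputations of a mean HRQOL difference between groups.
print(rubins_rules([2.1, 1.8, 2.4, 2.0, 1.9], [0.30, 0.28, 0.33, 0.29, 0.31]))
```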
227

Parametrization of multi-dimensional Markov chains for rock type modeling

Nerhus, Steinar, January 2009
A parametrization of a multidimensional Markov chain model (MDMC) is studied with the goal of capturing texture in training images. The conditional distribution of each row in the image, given the previous rows, is described as a one-dimensional Markov random field (MRF) that depends only on information in the immediately preceding rows. Each of these conditional distributions is then an element of a Markov chain that is used to describe the entire image. The parametrization is based on the cliques in the MRF, using different parameters for different clique types with different colors, and for how many rows backward we can trace the same clique type with the same color. One of the advantages of the MDMC model is that we are able to calculate the normalizing constant very efficiently thanks to the forward-backward algorithm. When the normalizing constant can be calculated, we are able to use a numerical optimization routine from R to estimate the model parameters through maximum likelihood, and we can use the backward iterations of the forward-backward algorithm to draw realizations from the model. The method is tested on three different training images, and the results show that it is able to capture some of the texture in all images, but that there is room for improvement. It is reasonable to believe that we can get better results if we change the parametrization. We also see that the result changes if we use the columns, instead of the rows, as the one-dimensional MRF. The method was only tested on images with two colors, and we suspect that it will not work for images with more colors, unless there is no correlation between the colors, due to the choice of parametrization.
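The efficiency gain from the forward-backward algorithm comes from summing out one row at a time instead of enumerating all configurations. The generic forward pass below computes the log normalizing constant of a discrete chain; it is a schematic example, not the MDMC implementation from the thesis.

```python
import numpy as np

def log_normalizing_constant(potentials):
    """Forward pass for a chain of discrete variables: potentials is a list of (K x K)
    arrays of positive pairwise potentials between consecutive variables.  Returns the
    log of the sum, over all state sequences, of the product of potentials, computed in
    O(T * K^2) instead of O(K^T)."""
    K = potentials[0].shape[0]
    log_alpha = np.zeros(K)                          # log of uniform start weights
    for psi in potentials:
        m = log_alpha[:, None] + np.log(psi)         # add previous forward weights
        log_alpha = np.logaddexp.reduce(m, axis=0)   # sum out the previous state, stably
    return np.logaddexp.reduce(log_alpha)

# Example: a short two-state chain with arbitrary positive potentials.
rng = np.random.default_rng(0)
print(log_normalizing_constant([rng.random((2, 2)) + 0.1 for _ in range(4)]))
```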
228

An empirical study of the maximum pseudo-likelihood for discrete Markov random fields.

Fauske, Johannes, January 2009
In this text we look at two parameter estimation methods for Markov random fields on a lattice: maximum pseudo-likelihood estimation and maximum general pseudo-likelihood estimation, which we abbreviate MPLE and MGPLE. The idea behind them is that by maximizing an approximation of the likelihood function, we avoid computing cumbersome normalising constants. In MPLE we maximize the product of the conditional distributions of each variable given all the other variables. In MGPLE we use a compromise between the pseudo-likelihood and the likelihood function as the approximation. We evaluate and compare the performance of MPLE and MGPLE on three different spatial models, for which we have generated observations. We are especially interested in what happens to the quality of the estimates when the number of observations increases. The models we use are the Ising model, the extended Ising model and the Sisim model. All the random variables in the models have two possible states, black or white. The Ising and extended Ising models have one and three parameters, respectively, while Sisim has 13 parameters. The quality of both methods improves as the number of observations grows, and MGPLE gives better results than MPLE. However, certain parameter combinations of the extended Ising model give worse results.
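To make the pseudo-likelihood idea concrete, the sketch below writes down the negative log pseudo-likelihood for a plain Ising model with a single interaction parameter; the extended Ising and Sisim parametrizations from the thesis are not reproduced.

```python
import numpy as np

def neg_log_pseudolikelihood(beta, image):
    """Negative log pseudo-likelihood of a first-order Ising model on a binary image with
    spins in {-1, +1}.  For each pixel the conditional distribution given its four
    neighbours is  P(x_i = +1 | rest) = sigmoid(2 * beta * s_i), with s_i the sum of the
    neighbouring spins; the product of these conditionals avoids the intractable
    normalising constant of the full likelihood."""
    x = np.where(np.asarray(image) > 0, 1.0, -1.0)
    s = np.zeros_like(x)
    s[1:, :] += x[:-1, :]; s[:-1, :] += x[1:, :]     # vertical neighbours
    s[:, 1:] += x[:, :-1]; s[:, :-1] += x[:, 1:]     # horizontal neighbours
    # -log P(x_i | rest) = log(1 + exp(-2 * beta * x_i * s_i))
    return np.sum(np.logaddexp(0.0, -2.0 * beta * x * s))

# MPLE for beta: minimise this function over the observed image with a one-dimensional
# optimiser, e.g. scipy.optimize.minimize_scalar.
```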
229

Numerical Methods for Optical Interference Filters

Marthinsen, Håkon, January 2009
We present the physics behind general optical interference filters and the design of dielectric anti-reflective filters. These can be anti-reflective at a single wavelength or in an interval. We solve the first case exactly for single and multiple layers and then present how the second case can be solved through the minimisation of an objective function. Next, we present several optimisation methods that are later used to solve the design problem. Finally, we test the different optimisation methods on a test problem and then compare the results with those obtained by the OpenFilters computer programme.
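To make the design problem concrete, the sketch below computes the reflectance of a layered coating at normal incidence with the standard characteristic-matrix method; the layer data in the example (an MgF2-like quarter-wave layer on glass) are illustrative assumptions, not filters from the thesis, and dispersion and absorption are ignored.

```python
import numpy as np

def reflectance(n_layers, d_layers, wavelength, n_incident=1.0, n_substrate=1.52):
    """Reflectance of a thin-film stack at normal incidence.  Each layer contributes a
    characteristic matrix built from its phase thickness; the product of these matrices
    gives the input optical admittance Y, from which the amplitude reflection follows."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / wavelength                  # phase thickness
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    B, C = M @ np.array([1.0, n_substrate])
    Y = C / B                                                     # input admittance
    r = (n_incident - Y) / (n_incident + Y)
    return abs(r) ** 2

# Single quarter-wave layer (n = 1.38) on glass at 550 nm: roughly 1.3% reflectance.
print(reflectance([1.38], [550.0 / (4 * 1.38)], 550.0))
```

An anti-reflective design over an interval would then minimise, over the layer thicknesses (and possibly indices), an objective such as the mean reflectance across the target wavelengths.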
230

Identity Protection, Secrecy and Authentication in Protocols with compromised Agents

Båtstrand, Anders Lindholm, January 2009
The design of security protocols is receiving an increasing level of academic interest, as more and more important tasks are done over the Internet. Among the fields being researched are formal methods for modeling and verification of security protocols. One such method is developed by Cremers and Mauw, and it is the method we have chosen to focus on in this paper. The model by Cremers and Mauw specifies a mathematical way to represent security protocols and their execution. It then defines conditions the protocols can fulfill, called security requirements. These typically state that in all possible executions, given a session in which all parties are honest, certain mathematical statements hold. Our aim is to extend the security requirements already defined in the model to allow some parties in the session to be under the control of an attacker, and to add a new definition of identity protection. We have done this by slightly extending the model and stating a new set of security requirements.
