
Statistical Methods for Multiple Testing in Genome-Wide Association Studies

Halle, Kari Krizak January 2012
In Genome-Wide Association Studies (GWAS) the aim is to look for association between genetic markers and phenotype (disease). For each genetic marker we perform a hypothesis test. Since the number of markers is high (on the order of hundreds of thousands), this is a large-scale multiple testing problem. One popular strategy in multiple testing is to estimate an effective number of independent tests, and then use methods based on independent tests to control the total type I error. The focus of this thesis has been to study different methods for estimating the effective number of independent tests. The methods are applied to a large data set on bipolar disorder and schizophrenia in Norwegian individuals from the TOP study at the University of Oslo and Oslo University Hospital (OUS). A key feature of these methods is the correlation between the genetic markers. The methods considered in this thesis are based on either haplotype or genotype correlation, and one focus of this thesis has been to study the difference between haplotype and genotype correlation.
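
For a flavour of how an eigenvalue-based estimator of this kind works, here is a minimal sketch in the style of Li and Ji's estimator, applied to a toy genotype matrix (the toy data, and the choice of this particular estimator over the ones compared in the thesis, are illustrative assumptions):

```python
import numpy as np

def effective_number_of_tests(genotypes):
    """Estimate an effective number of independent tests (Li & Ji style)
    from the correlation between markers.

    genotypes: (n_individuals, n_markers) array of genotype codes (0/1/2).
    """
    corr = np.corrcoef(genotypes, rowvar=False)        # marker-by-marker correlation
    eigvals = np.abs(np.linalg.eigvalsh(corr))         # eigenvalues of the correlation matrix
    # Each eigenvalue contributes an indicator of being >= 1 plus its
    # fractional part, following Li & Ji (2005).
    return np.sum((eigvals >= 1).astype(float) + (eigvals - np.floor(eigvals)))

# Bonferroni-type control of the familywise error rate at level alpha,
# using M_eff instead of the raw number of markers.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 50)).astype(float)   # toy genotype matrix
m_eff = effective_number_of_tests(X)
alpha = 0.05
print(f"M_eff = {m_eff:.1f}, per-marker threshold = {alpha / m_eff:.2e}")
```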

Prediction of Lithology/Fluid Classes from Petrophysical and Elastic Observations

Straume, Elisabeth January 2012
The objective of this study is to classify lithology/fluid (LF) variables along depth profiles. The classification is done by a Bayesian inversion method to obtain the posterior probability density functions (PDFs) for the LF classes at every depth, given data in the form of petrophysical variables or elastic properties. In this way we determine the most probable lithology/fluid profile. A stationary Markov chain prior model is used to model the continuity of the LF classes a priori. The likelihood relates the LF classes to the data. A statistical rock-physics forward model is used to relate the petrophysical variables to elastic attributes. This is done for synthetic test data inspired by a North Sea sandstone reservoir and for real test data in the form of a well log from the North Sea. For the synthetic case, the data are either the petrophysical variables or the elastic properties. For the real data, only the elastic properties are considered.
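
To make the role of the Markov chain prior concrete, here is a minimal forward-backward sketch that turns pointwise likelihoods p(data | class) into posterior class probabilities along depth. The three class labels, the transition matrix and the random likelihoods are illustrative placeholders, not the thesis' calibrated rock-physics model:

```python
import numpy as np

def posterior_marginals(prior0, trans, likelihood):
    """Forward-backward recursion for a stationary Markov chain prior.

    prior0:     (K,) initial class probabilities
    trans:      (K, K) transition matrix, trans[i, j] = P(class_t = j | class_{t-1} = i)
    likelihood: (T, K) pointwise likelihoods p(data_t | class = k)
    Returns (T, K) posterior marginals P(class_t = k | all data).
    """
    T, K = likelihood.shape
    alpha = np.zeros((T, K))            # forward messages (normalized)
    beta = np.ones((T, K))              # backward messages

    alpha[0] = prior0 * likelihood[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = likelihood[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (likelihood[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()

    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Toy example with three hypothetical LF classes: shale, brine sand, gas sand.
trans = np.array([[0.90, 0.08, 0.02],
                  [0.10, 0.85, 0.05],
                  [0.05, 0.10, 0.85]])
prior0 = np.array([0.5, 0.3, 0.2])
rng = np.random.default_rng(1)
lik = rng.random((100, 3))              # stand-in for rock-physics likelihoods
post = posterior_marginals(prior0, trans, lik)
print(post.argmax(axis=1)[:20])         # most probable class at the first 20 depths
```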

Gryphon - a Module for Time Integration of Partial Differential Equations in FEniCS

Skare, Knut Erik January 2012
This thesis aims to implement time integrators in the FEniCS framework. More specifically, the thesis focuses on selecting suitable time integrators, implementing these, and verifying that the implementation works by applying them to various relevant test problems. This work resulted in a module for FEniCS, named Gryphon. The thesis is divided into four parts. The first part builds a theoretical framework which motivates why singly diagonally implicit Runge-Kutta methods with an explicit first stage (ESDIRKs) should be considered for solving stiff ordinary differential equations (ODEs). It is also shown how an ESDIRK method can be utilized to solve time-dependent partial differential equations (PDEs) by solving the semidiscretized system arising from first applying a finite element method. We restrict our attention to PDEs which either give rise to a pure ODE system or a DAE (differential-algebraic equation) system of index 1. The second part discusses the implementation of Gryphon, focusing on why such a module is useful and how the source code is structured. The third part is devoted to numerical experiments on the ESDIRK solvers implemented in Gryphon. The experiments establish convergence and give some run-time statistics for various ESDIRK schemes. We also see that L-stability is a favorable trait when working with stiff equations, by comparing an ESDIRK method to the trapezoidal rule. It is also verified that the step-size selectors implemented in Gryphon behave as expected. As test problems we consider the heat equation, the Fisher-Kolmogorov equation, the Gray-Scott equations, the Fitzhugh-Nagumo equations and the Cahn-Hilliard equations. The fourth part is a user manual for Gryphon. All the parameters which can be changed by the user are explained. The manual also includes example code for solving the heat equation, the Gray-Scott equations and the Cahn-Hilliard equation, to get the reader started on solving their own problems.
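
As a rough illustration of what a single ESDIRK step involves (a generic NumPy/SciPy sketch using the TR-BDF2 tableau, an L-stable, stiffly accurate ESDIRK of order two, applied to a stiff scalar ODE; this is not Gryphon's API):

```python
import numpy as np
from scipy.optimize import fsolve

# TR-BDF2 written as an ESDIRK scheme: explicit first stage, constant diagonal gamma,
# stiffly accurate (b equals the last row of A), L-stable.
g = 1.0 - np.sqrt(2.0) / 2.0
A = np.array([[0.0,              0.0,              0.0],
              [g,                g,                0.0],
              [np.sqrt(2) / 4.0, np.sqrt(2) / 4.0, g  ]])
b = A[-1]                         # stiff accuracy: y_{n+1} equals the last stage
c = A.sum(axis=1)

def esdirk_step(f, t, y, h):
    """One ESDIRK step for y' = f(t, y) with y a flat array."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    k = np.zeros((len(b), y.size))
    k[0] = f(t, y)                                   # explicit first stage
    for i in range(1, len(b)):
        def stage_residual(ki):
            Yi = y + h * (A[i, :i] @ k[:i] + A[i, i] * ki)
            return ki - f(t + c[i] * h, Yi)
        k[i] = fsolve(stage_residual, k[i - 1])      # implicit stage solve
    return y + h * (b @ k)

# Stiff test problem y' = -1000*(y - cos(t)), integrated with a fairly large step.
f = lambda t, y: -1000.0 * (y - np.cos(t))
t, y, h = 0.0, np.array([1.0]), 0.05
for _ in range(20):
    y = esdirk_step(f, t, y, h)
    t += h
print(t, y)
```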

Anonymity in Network Connections for Mobile Communication

Henriksen, Ragne Elisabeth January 2012
This thesis summarizes an existing protocol, which we have chosen to call the Token Key Agreement protocol. It then introduces two new protocols, which we have chosen to name the Symmetric Key Agreement protocol and the Asymmetric Key Agreement protocol. We work within the UC (Universal Composability) framework and, as such, introduce ideal functionalities and protocol descriptions for the protocols. For the first protocol we also introduce a simulated adversary. Further, the thesis includes an overview of the security offered by the three protocols.

The Smart-Vercauteren Fully Homomorphic Encryption Scheme

Klungre, Vidar January 2012
We give a review of the Smart-Vercauteren fully homomorphic encryption scheme presented in 2010. The scheme follows Craig Gentry's blueprint of first defining a somewhat homomorphic encryption scheme and proving that it is bootstrappable. This is then used to create the fully homomorphic scheme. Compared to the original paper by Smart and Vercauteren, we give a more comprehensive background and explain the concepts of the scheme in more detail. This text is therefore well suited for readers who find Smart and Vercauteren's paper too brief.
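
To give a flavour of the "somewhat homomorphic" part of Gentry's blueprint, here is a toy integer-based scheme in the style of van Dijk et al., not the Smart-Vercauteren construction itself. It shows how homomorphic addition and multiplication work on encrypted bits, and why noise growth eventually forces either bootstrapping or decryption failure:

```python
import secrets

# Toy somewhat homomorphic encryption over the integers (DGHV-style).
# Secret key: a large odd integer p. A bit m is encrypted as c = q*p + 2*r + m,
# where q is large and r is small noise. Decryption: (c mod p) mod 2.
P_BITS, Q_BITS, NOISE_BITS = 256, 512, 16

def keygen():
    return secrets.randbits(P_BITS) | 1        # large odd secret

def encrypt(p, m):
    q = secrets.randbits(Q_BITS)
    r = secrets.randbits(NOISE_BITS)
    return q * p + 2 * r + (m & 1)

def decrypt(p, c):
    return (c % p) % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)

# Homomorphic XOR and AND: addition and multiplication of ciphertexts.
print(decrypt(p, c0 + c1))   # 1  (0 XOR 1); noise roughly doubles
print(decrypt(p, c0 * c1))   # 0  (0 AND 1); noise roughly squares
# Repeated multiplications eventually push the noise past p/2 and decryption
# breaks down -- the scheme is only "somewhat" homomorphic, hence bootstrapping.
```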

Numerical modelling of marine icing on offshore structures and vessels

Hansen, Eirik Schrøder January 2012
A numerical model for predicting icing on offshore structures and vessels has been developed and implemented. The model calculates the icing caused by freezing sea spray, and focuses on two distinct sources of spray - spray from droplets blowing off whitecaps on the sea surface, and spray from waves colliding with the vessel. The implementations of both wind-induced and wave-induced sea spray are based on existing theoretical models, and are combined with a thermodynamic model for the icing process. The model may be used to calculate icing on reference objects or structures on the vessel. In addition, algorithms have been developed so that the model can be applied to polygon-based vessel geometries, calculating the icing distribution over the entire vessel. The model has been applied to meteorological observations and hindcast data for locations in the Norwegian Sea and Barents Sea, in particular at the locations of the Norne field and the undeveloped Skrugard and Shtokman fields. The results indicate that the icing will be comparable at the locations of Skrugard and Shtokman, and that both the frequency and severity of icing events will be far greater for these two locations than for Norne. Although conditions at Shtokman are colder than at Skrugard, the higher winds and waves near Skrugard will increase the available sea spray in the model, thus making the number of severe icing events more similar for the two locations.
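
A deliberately simplified sketch of the kind of heat balance a thermodynamic icing model rests on (the coefficients and the lumped evaporative term below are illustrative assumptions, not the thesis model): the accretion rate is limited both by the incoming spray flux and by how fast convective and evaporative heat losses can remove the latent heat of fusion.

```python
# Simplified spray-icing rate estimate: the accretion rate is the smaller of
# the incoming spray mass flux and the heat-balance-limited freezing rate.
# All coefficients below are rough, illustrative values.

L_F = 3.34e5            # latent heat of fusion of ice [J/kg]
T_FREEZE = -2.0         # approximate freezing point of sea water [degC]

def icing_rate(spray_flux, wind_speed, air_temp):
    """Icing rate [kg/(m^2 s)] on a surface element.

    spray_flux: incoming liquid water flux [kg/(m^2 s)]
    wind_speed: wind speed at the surface [m/s]
    air_temp:   air temperature [degC]
    """
    if air_temp >= T_FREEZE:
        return 0.0                               # no freezing above the freezing point
    h_c = 5.0 + 4.0 * wind_speed                 # crude convective coefficient [W/(m^2 K)]
    q_conv = h_c * (T_FREEZE - air_temp)         # convective heat loss [W/m^2]
    q_evap = 0.3 * q_conv                        # evaporative loss, lumped as a fraction
    heat_limited = (q_conv + q_evap) / L_F       # freezing rate the heat loss can sustain
    return min(spray_flux, heat_limited)

# Example: moderate spray in strong wind at -10 degC.
print(icing_rate(spray_flux=2e-3, wind_speed=20.0, air_temp=-10.0))
```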

Applying hybrid methods to reduce nonphysical cycles in the flux field

Haugland, Christine Marie Øvrebø January 2012
In this thesis we present the theoretical background for the two-point flux approximation (TPFA) method, mimetic discretisation methods, and the multipoint flux approximation (MPFA) method. Theoretical arguments concerning monotonicity, and the fact that loss of monotonicity may lead to oscillations and nonphysical cycles in the flux field, are also discussed. TPFA is only consistent for $\mathbf{K}$-orthogonal grids. Multipoint flux approximation methods and mimetic discretisation methods are consistent, even for grids that are not $\mathbf{K}$-orthogonal, but sometimes they lead to solutions containing cycles in the flux field. These cycles may cause problems for some transport solvers and diminish the efficiency of others, and to try to cure this problem, we present two hybrid methods. The first is a hybrid mimetic method applying TPFA in the vertical direction and mimetic discretisation in the plane. The second is a hybrid MPFA method applying TPFA in the vertical direction and MPFA in the plane. We present results comparing the accuracy of the methods and the number of cycles obtained by the different methods. The results obtained show that the hybrid methods are more accurate than TPFA, and for specific cases they have fewer cycles than the original full methods.
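
As a minimal illustration of the two-point flux approximation itself (a generic sketch for a 1D column of cells, not the thesis' reservoir simulator): the flux across a face is a transmissibility, built from harmonically averaged half-transmissibilities, times the pressure drop between the two neighbouring cells.

```python
import numpy as np

def tpfa_transmissibility(perm_left, perm_right, dx_left, dx_right, area):
    """Two-point transmissibility across a face between two cells.

    Half-transmissibilities T_i = area * K_i / (dx_i / 2) are combined harmonically.
    """
    t_left = area * perm_left / (0.5 * dx_left)
    t_right = area * perm_right / (0.5 * dx_right)
    return 1.0 / (1.0 / t_left + 1.0 / t_right)

def tpfa_fluxes(perm, pressure, dx, area=1.0):
    """Face fluxes for a 1D column of cells: q_ij = T_ij * (p_i - p_j)."""
    perm, pressure, dx = map(np.asarray, (perm, pressure, dx))
    fluxes = []
    for i in range(len(perm) - 1):
        t = tpfa_transmissibility(perm[i], perm[i + 1], dx[i], dx[i + 1], area)
        fluxes.append(t * (pressure[i] - pressure[i + 1]))
    return np.array(fluxes)

# Toy column with a low-permeability layer in the middle.
perm = [1.0, 1.0, 0.01, 1.0]          # cell permeabilities
pressure = [4.0, 3.0, 2.0, 1.0]       # cell pressures
dx = [1.0, 1.0, 1.0, 1.0]
print(tpfa_fluxes(perm, pressure, dx))
```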

Coherent Plane-Wave Compounding in Medical Ultrasound Imaging: Quality Investigation of 2D B-mode Images of Stationary and Moving Objects

Øvland, Ragnhild January 2012
Coherent plane-wave compounding is the coherent summation of several successive plane waves incident at different angles. This thesis presents results from simulations and in vitro and in vivo measurements of stationary and moving objects, with focus on loss of resolution and contrast due to object motion. Resolution and contrast results for several angle selections, angle sequences and object velocities, with and without motion correction, have been compared. It is shown that using a subset of plane-wave tilt angles by decimating the optimal selection introduces grating lobes which degrade the image contrast, while imaging with a lower maximum tilt angle degrades the lateral resolution. The contrast loss for decimation factor 2 was more significant for simulations than for in vitro measurements. While the contrast went from -40 to -30 dB for the simulations, a decimation factor of 4 was needed to degrade the contrast significantly for the measurements. Decimating the angle selection by a factor of 2 doubles the achievable frame rate. A reduction in maximum angle from 13.7 to 8.2 deg., which corresponds to an increase in transmit F-number from 2.1 to 3.5, gives less than 0.3 mm degradation of lateral resolution. The lateral resolution is of the order of 1 mm. This reduction in maximum angle increases the frame rate by a factor of 1.2. Axial point-scatterer velocity leads to considerably worse image quality than for stationary scatterers, while the effect of lateral scatterer velocities is limited. The degree of contrast and resolution loss due to object motion depends on the selection of plane waves which constitute a frame, and the sequence in which the plane waves are transmitted. Using a subset of the optimal angle selection leads to improvement in image quality for an axial velocity of 10.0 cm/s for decimation factor 4, but not for decimation factor 2, even though the total scatterer movement per frame is reduced by the reduction in transmitted plane waves. The loss of quality due to motion was less for fewer tilt angles, but the total image quality was still worse for many of these sets of angles due to grating lobes. The unwanted effects of motion for in vivo measurements were not seen to the same extent as for simulated point scatterers, and working with coherent plane-wave compounding seems promising for moving objects.
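
A minimal sketch of the compounding step itself, assuming per-angle beamformed complex IQ images are already available (the array shapes, the toy data and the decimation example are illustrative, not the thesis' processing chain):

```python
import numpy as np

def compound(iq_images, coherent=True):
    """Combine per-angle beamformed images into one compounded frame.

    iq_images: (n_angles, nz, nx) complex IQ images, one per plane-wave tilt angle.
    Coherent compounding sums the complex data before taking the envelope,
    which preserves phase and sharpens the point-spread function; incoherent
    compounding averages envelopes and only reduces speckle.
    """
    iq_images = np.asarray(iq_images)
    if coherent:
        return np.abs(iq_images.sum(axis=0))
    return np.abs(iq_images).sum(axis=0)

# Toy data: 9 angles, 64 x 64 pixels of complex noise plus a common "target" phase.
rng = np.random.default_rng(0)
n_angles, nz, nx = 9, 64, 64
target = np.exp(1j * rng.uniform(0, 2 * np.pi, (nz, nx)))
frames = target[None] + 0.5 * (rng.standard_normal((n_angles, nz, nx))
                               + 1j * rng.standard_normal((n_angles, nz, nx)))
img_coh = compound(frames, coherent=True)
img_inc = compound(frames, coherent=False)
# Decimation factor 2: half the tilt angles, roughly double the frame rate.
img_decimated = compound(frames[::2], coherent=True)
print(img_coh.mean(), img_inc.mean(), img_decimated.mean())
```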

Realized GARCH: Evidence in ICE Brent Crude Oil Futures Front Month Contracts

Solibakke, Sindre January 2012
This thesis extends standard GARCH models of volatility with realized measures, following the realized GARCH framework. A key feature of the realized GARCH framework is the measurement equation that relates the observed realized measure to latent volatility. We pay special attention to linear and log-linear realized GARCH models. Moreover, the framework facilitates the joint modeling of returns and realized measures of volatility. An empirical application to ICE Brent Crude Oil futures front month contracts shows that a realized GARCH specification improves the empirical fit substantially relative to a standard GARCH model. The estimates give weak evidence for a skewed Student's t distribution for the standardized error term, and the leverage function shows a clear negative asymmetry between today's return and tomorrow's volatility.
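
For reference, a sketch of the log-linear realized GARCH(1,1) recursions in the spirit of Hansen, Huang and Shek; the parameter values are arbitrary placeholders, not estimates from the Brent data. The return and GARCH equations are coupled to a measurement equation that ties the realized measure x_t to the latent variance h_t through a leverage function tau(z):

```python
import numpy as np

def simulate_log_linear_realized_garch(T, omega=-0.2, beta=0.55, gamma=0.4,
                                       xi=-0.3, phi=1.0, tau1=-0.07, tau2=0.07,
                                       sigma_u=0.4, seed=0):
    """Simulate returns r_t and a realized measure x_t from a log-linear
    realized GARCH(1,1):
        r_t     = sqrt(h_t) * z_t,                 z_t ~ N(0, 1)
        log h_t = omega + beta * log h_{t-1} + gamma * log x_{t-1}
        log x_t = xi + phi * log h_t + tau1*z_t + tau2*(z_t**2 - 1) + u_t
    """
    rng = np.random.default_rng(seed)
    r, x, log_h = np.zeros(T), np.zeros(T), np.zeros(T)
    log_h[0], x[0] = 0.0, 1.0
    for t in range(T):
        if t > 0:
            log_h[t] = omega + beta * log_h[t - 1] + gamma * np.log(x[t - 1])
        z = rng.standard_normal()
        u = sigma_u * rng.standard_normal()
        r[t] = np.exp(0.5 * log_h[t]) * z
        # Measurement equation with leverage function tau(z) = tau1*z + tau2*(z^2 - 1)
        x[t] = np.exp(xi + phi * log_h[t] + tau1 * z + tau2 * (z * z - 1.0) + u)
    return r, x, np.exp(log_h)

r, x, h = simulate_log_linear_realized_garch(1000)
print(np.corrcoef(np.log(x), np.log(h))[0, 1])   # realized measure tracks latent variance
```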

Positive Partial Transpose States in Multipartite Quantum Systems

Garberg, Øyvind Steensgaard January 2012
In this master thesis I study the extremal positive partial transpose (PPT) states of the three-qubit $(2\times2\times2)$ system using numerical methods. Using two algorithms which locate PPT states of a specified rank and extremal PPT states respectively, I have located numerical examples of extremal PPT states with a variety of ranks. These numerical results confirm the analytical result that all PPT states of rank less than four are separable. I also derive an upper limit on the ranks of extremal PPT states. The extremal PPT states of lowest rank, the rank four states, were studied in more detail. These states were confirmed to be biseparable in accordance with both previous analytical and numerical results. The range and kernel of these states were examined for product vectors, but none were found. In an attempt to parametrize the SL$\otimes$SL equivalence classes of these extremal rank four states, I have studied an analytical method to construct such states based on unextendible product bases (UPBs). This method can be used to create PPT states from a single equivalence class where, by design, the kernel of every state contains a UPB and the range contains no product vectors. All states whose range is not spanned by a basis of product vectors are necessarily entangled. I also present a numerical method for creating extremal rank four states that are symmetric under various combinations of partial transposes. Numerical examination of these states reveals no product vectors in either the range or the kernel. The existence of rank four states with and without product vectors in their kernel implies the existence of at least two equivalence classes. To get a better impression of these equivalence classes, I construct quantities that are invariant under SL$\otimes$SL transformations and must therefore have the same value for all states in the same equivalence class. Calculating the values of these invariants for all the rank four extremal states I have generated gives a seemingly continuous range of values. This indicates that there is an infinite number of equivalence classes, likely described by one or more continuous variables. The invariants also revealed an interesting set of states that may belong to a single equivalence class, where one invariant is zero and the others have identical values. This was the only equivalence class containing more than one of my states. There is obviously something special about this class, but I do not know what it is.
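
A small sketch of the basic numerical ingredient, the partial transpose test: generic NumPy code that checks whether a three-qubit density matrix remains positive semidefinite under the partial transpose on a chosen qubit (the random full-rank test state is illustrative):

```python
import numpy as np

def partial_transpose(rho, dims, subsystem):
    """Partial transpose of a density matrix on one subsystem.

    rho:       (D, D) density matrix with D = prod(dims)
    dims:      tuple of local dimensions, e.g. (2, 2, 2) for three qubits
    subsystem: index of the subsystem to transpose
    """
    n = len(dims)
    t = rho.reshape(dims + dims)                 # one row and one column index per subsystem
    t = t.swapaxes(subsystem, subsystem + n)     # transpose only that subsystem's indices
    D = int(np.prod(dims))
    return t.reshape(D, D)

def is_ppt(rho, dims, subsystem, tol=1e-10):
    """True if rho has positive partial transpose with respect to `subsystem`."""
    eigvals = np.linalg.eigvalsh(partial_transpose(rho, dims, subsystem))
    return bool(eigvals.min() > -tol)

# Random full-rank three-qubit state (Wishart construction) as a test input.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
rho = A @ A.conj().T
rho /= np.trace(rho)

for k in range(3):
    print(f"PPT w.r.t. qubit {k}: {is_ppt(rho, (2, 2, 2), k)}")
```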
