331

The Smart-Vercauteren Fully Homomorphic Encryption Scheme

Klungre, Vidar January 2012 (has links)
We give a review of the Smart-Vercauteren fully homomorphic encryption scheme presented in 2010. The scheme follows Craig Gentry's blueprint of first defining a somewhat homomorphic encryption scheme and proving that it is bootstrappable. This is then used to create the fully homomorphic scheme. Compared to the original paper by Smart and Vercauteren, we give a more comprehensive background and explain the concepts of the scheme in more detail. This text is therefore well suited for readers who find Smart and Vercauteren's paper too brief.
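The "somewhat homomorphic, then bootstrap" idea in Gentry's blueprint can be illustrated with a deliberately toy integer scheme (DGHV-style, not the Smart-Vercauteren construction itself, which works over principal ideal lattices). A ciphertext hides a bit inside small even noise plus a multiple of the secret key, so integer addition and multiplication act as XOR and AND until the noise grows too large; all parameter sizes below are illustrative and insecure.

```python
# Toy somewhat-homomorphic encryption over the integers (DGHV-style sketch).
# NOT the Smart-Vercauteren scheme; only illustrates the noise-based idea
# behind Gentry's blueprint. Parameters are tiny and insecure on purpose.
import random

p = 10007  # secret key: an odd integer (toy-sized)

def encrypt(bit):
    r = random.randint(1, 10)      # small "noise" term
    q = random.randint(1, 1000)    # large random multiple of the key
    return bit + 2 * r + p * q

def decrypt(c):
    # Reducing mod p removes p*q; reducing mod 2 removes the even noise 2*r.
    return (c % p) % 2

# Integer + acts as XOR and integer * as AND, as long as the accumulated
# noise stays below p (this is what makes the scheme only "somewhat"
# homomorphic and why bootstrapping is needed for unlimited depth).
a, b = encrypt(1), encrypt(0)
assert decrypt(a + b) == 1 ^ 0
assert decrypt(a * b) == 1 & 0
```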
332

Applying hybrid methods to reduce nonphysical cycles in the flux field

Haugland, Christine Marie Øvrebø January 2012 (has links)
In this thesis we present the theoretical background for the two-point flux approximation method (TPFA), mimetic discretisation methods, and the multipoint flux approximation method (MPFA). We also discuss theoretical arguments concerning monotonicity, and the fact that loss of monotonicity may lead to oscillations and nonphysical cycles in the flux field. TPFA is only consistent for $\mathbf{K}$-orthogonal grids. Multipoint flux approximation methods and mimetic discretisation methods are consistent even for grids that are not $\mathbf{K}$-orthogonal, but they sometimes lead to solutions containing cycles in the flux field. These cycles may cause problems for some transport solvers and diminish the efficiency of others. To try to cure this problem, we present two hybrid methods. The first is a hybrid mimetic method applying TPFA in the vertical direction and mimetic discretisation in the plane. The second is a hybrid MPFA method applying TPFA in the vertical direction and MPFA in the plane. We present results comparing the accuracy of the methods and the number of cycles produced by each method. The results show that the hybrid methods are more accurate than TPFA, and for specific cases they have fewer cycles than the original full methods.
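The two-point idea can be sketched in a few lines: the flux across an interface depends only on the two adjacent cell pressures, scaled by a transmissibility built from the harmonic average of the half-cell contributions. This is a minimal 1D illustration with made-up cell sizes and permeabilities, not code from the thesis.

```python
# Minimal sketch of the two-point flux approximation (TPFA) in 1D:
# flux across an interface = transmissibility * pressure difference,
# with the transmissibility from harmonically averaged half-cells.
def harmonic_transmissibility(k1, k2, dx1, dx2, area=1.0):
    # half-transmissibilities k_i * A / (dx_i / 2), combined harmonically
    t1 = k1 * area / (dx1 / 2.0)
    t2 = k2 * area / (dx2 / 2.0)
    return 1.0 / (1.0 / t1 + 1.0 / t2)

def tpfa_flux(p1, p2, k1, k2, dx1, dx2):
    # positive flux flows from cell 1 (pressure p1) toward cell 2
    return harmonic_transmissibility(k1, k2, dx1, dx2) * (p1 - p2)

# Unit permeabilities and cell sizes: t1 = t2 = 2, T = 1, flux = 1.0
f = tpfa_flux(2.0, 1.0, k1=1.0, k2=1.0, dx1=1.0, dx2=1.0)
```

The harmonic average is what makes the flux continuous across an interface between cells of different permeability; the consistency limitation discussed above arises because only these two pressure values enter the flux.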
333

Realized GARCH: Evidence in ICE Brent Crude Oil Futures Front Month Contracts

Solibakke, Sindre January 2012 (has links)
This paper extends standard GARCH models of volatility with realized measures, within the realized GARCH framework. A key feature of the realized GARCH framework is the measurement equation that relates the observed realized measure to latent volatility. We pay special attention to linear and log-linear realized GARCH models. Moreover, the framework enables joint modeling of returns and realized measures of volatility. An empirical application with ICE Brent Crude Oil futures front month contracts shows that a realized GARCH specification improves the empirical fit substantially relative to a standard GARCH model. The estimates give weak evidence for a skewed Student's t distribution for the standardized error term, and the leverage function shows a clear negative asymmetry between today's return and tomorrow's volatility.
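For reference, the log-linear realized GARCH model referred to above is usually written as a return equation, a volatility recursion, and the measurement equation linking the realized measure $x_t$ to latent volatility $h_t$ (notation as in the standard realized GARCH literature; the symbols are not taken from this abstract):

$$
\begin{aligned}
r_t &= \sqrt{h_t}\, z_t, \qquad z_t \sim \text{i.i.d.}(0,1),\\
\log h_t &= \omega + \beta \log h_{t-1} + \gamma \log x_{t-1},\\
\log x_t &= \xi + \varphi \log h_t + \tau(z_t) + u_t,
\end{aligned}
$$

where $\tau(z) = \tau_1 z + \tau_2(z^2 - 1)$ is the leverage function; a negative $\tau_1$ produces exactly the asymmetry between today's return and tomorrow's volatility that the abstract reports.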
334

Efficient Calculation of Optimal Decisions in Graphical Models

Lilleborge, Marie January 2012 (has links)
We present a method for finding the optimal decision on random variables in a graphical model. Upper and lower bounds on the exact value of each decision are used to reduce the complexity of the algorithm, while we still ensure that the decision chosen actually represents the exact optimal choice. Since the highest lower bound is also a lower bound on the value of the optimal decision, we rule out any candidate whose upper bound is lower than the highest lower bound. By this strategy, we try to reduce the number of candidates to a number we can afford to do exact calculations on. We generate five Bayesian networks with corresponding value functions, and apply our strategy to these. The bounds on the values are obtained with an available computer program, where the complexity is controlled by an input constant. We study the number of decisions accepted for different values of this input constant. From the first network, we learn that the bounds do not work well unless we split the calculations into parts for different groups of the nodes. We observe that this splitting works well on the next three networks, while the last network illustrates how the method fails when we add more edges to the graph. We conclude that our improved strategy is successful on sparse graphs, while it is unsuccessful when we increase the density of edges among the nodes.
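The pruning rule described above is simple to state in code: any candidate whose upper bound falls below the best lower bound cannot be optimal and is discarded; only the survivors need exact evaluation. The bound values here are illustrative numbers, not output from a Bayesian network.

```python
# Sketch of bound-based pruning: keep only decisions whose upper bound
# reaches the highest lower bound, since the optimal decision's exact
# value is at least that lower bound. Bounds are made-up numbers.
def prune_by_bounds(candidates):
    # candidates: dict mapping decision name -> (lower_bound, upper_bound)
    best_lower = max(lo for lo, hi in candidates.values())
    return {name for name, (lo, hi) in candidates.items() if hi >= best_lower}

survivors = prune_by_bounds({
    "d1": (0.4, 0.9),   # kept: upper bound 0.9 >= best lower bound 0.7
    "d2": (0.7, 0.8),   # kept: supplies the best lower bound
    "d3": (0.1, 0.6),   # ruled out: 0.6 < 0.7, cannot be optimal
})
# survivors == {"d1", "d2"}
```

The guarantee of exactness comes from the fact that pruning never removes a decision that could still beat the best lower bound, matching the strategy in the abstract.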
335

Wavelets and irregular time series

Andreassen, Børge Solli January 2012 (has links)
In this thesis we study time series containing pressure measurements from a three-phase flow pipeline at the Ekofisk oil field. The pipeline transports a mixture of oil, water and gas from 15 wells for approximately 2.5 km to a production facility. Our aim is to develop techniques that allow the detection and (to some extent) prediction of "non-standard" behavior in the system (sharp pressure changes and other types of instabilities). To achieve this aim we perform a scalewise decomposition of the input signal/time series and investigate the behavior of each scale separately. We introduce the Sliding Window Wavelet Transform (SWWT) method. The method evaluates the variability on different scales within a time interval of characteristic length (a window), and then traces these characteristics as the window slides in time. We use the discrete wavelet transform (DWT) to obtain the scalewise decomposition within the window. Using orthonormal discrete wavelets, we show that the variability of such sequences can be decomposed into their corresponding scales. Based on this, a thresholding algorithm is applied, characterizing the state of the system at any given time. The results are promising, and we show that different parameters in the thresholding algorithm extract different types of special events. We also show that in some cases this approach allows us to predict special events before they actually occur. While we investigate one particular system in this thesis, the procedures developed can be applied to other complicated systems where instability in system parameters is important.
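The scalewise decomposition of variability inside one window can be sketched with the simplest orthonormal wavelet, the Haar wavelet. Because the transform is orthonormal, the energy of the window splits exactly into per-scale detail energies plus the final approximation, which is the property the SWWT method exploits; this sketch is not the thesis's implementation, and the input values are made up.

```python
# Scalewise variability via an orthonormal Haar DWT inside one window
# (the SWWT method slides such a window along the series).
# Window length must be a power of two.
def haar_dwt(x):
    """One level of the orthonormal Haar transform: (approximation, detail)."""
    s = 0.5 ** 0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def scale_energies(x):
    """Energy of the detail coefficients at each scale, finest first."""
    energies = []
    while len(x) > 1:
        x, d = haar_dwt(x)
        energies.append(sum(c * c for c in d))
    return energies

# Orthonormality preserves total energy: detail energies + final
# approximation energy equal the energy of the input window.
x = [1.0, 2.0, 4.0, 3.0]
e = scale_energies(x)  # finest scale first
```

A thresholding rule can then flag a window as "non-standard" whenever the energy at some scale exceeds a chosen level, mirroring the state characterization described above.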
336

On the Hunter-Saxton equation

Nordli, Anders Samuelsen January 2012 (has links)
The Cauchy problem for a two-component Hunter-Saxton equation,
\begin{align*}
(u_t+uu_x)_x &= \frac{1}{2}u_x^2+\frac{1}{2}\rho^2,\\
\rho_t+(u\rho)_x &= 0,
\end{align*}
on $\mathbb{R}\times[0,\infty)$ is studied. Conservative and dissipative weak solutions are defined and shown to exist globally. This is done by explicitly solving systems of ordinary differential equations in Lagrangian coordinates, and using these solutions to construct semigroups of conservative and dissipative solutions.
337

Statistical Methods for Calculating the Risk of Collision Between Petroleum Wells

Loeng, Bjørn Erik January 2012 (has links)
In this thesis we explore several statistical methods for addressing the risk of collision between two petroleum wells. Such a collision is a potentially dangerous but rare event that can occur in situations with directional drilling. In order to extend the usual approach of only considering the two closest points of the two wells in the collision risk calculations, we obtain a joint statistical distribution for the position coordinates of all the survey points in two neighboring wells. The common practice in the petroleum industry today is to use the two closest points in a hypothesis test, in order to conclude whether we should drill as planned given the collision risk. We suggest a more accurate version of the hypothesis test, which turns out to be more conservative than the original test. As an alternative measure of the collision risk, we estimate the probability of collision. This is done in two different ways, namely by considering only the two closest points and by considering the whole wells. In the latter case, we use the joint distribution for all the survey points. For some well pairs, the collision probability is much larger when we consider all the survey points in the two wells than when we only consider the two closest points. We estimate the probability values using Monte Carlo simulation methods. Since a well collision is considered a rare event, we introduce two methods to increase the accuracy in situations where the original Monte Carlo method needs an inconveniently large number of samples. These methods give accurate results even when the collision probability is very small.
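A plain Monte Carlo estimate of the closest-point collision probability can be sketched as follows: sample the separation under positional uncertainty and count how often it falls below a collision radius. The distribution, separation, and radius here are illustrative assumptions, not the thesis's well data; the abstract's point is precisely that plain sampling like this becomes inefficient when the probability is very small, motivating the two accuracy-improving methods.

```python
# Plain Monte Carlo sketch of a closest-point collision probability:
# perturb the nominal separation by Gaussian positional error and count
# samples closer than the collision radius. All numbers are illustrative.
import random

def collision_probability(mean_separation, sigma, radius, n=100_000, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(n):
        # separation perturbed by the combined positional uncertainty
        d = mean_separation + random.gauss(0.0, sigma)
        if abs(d) < radius:
            hits += 1
    return hits / n

# ~3 standard deviations of clearance, so the estimate should be tiny
p = collision_probability(mean_separation=10.0, sigma=3.0, radius=0.5)
```

For rarer events the hit count drops to zero for any affordable n, which is where variance-reduction techniques such as the two methods mentioned in the abstract become necessary.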
338

The Performance of Market Risk Measures on High and Low Risk Portfolios in the Norwegian and European Markets.

Bang, Christian Preben January 2012 (has links)
A basic overview of mathematical finance and pricing theory is given. The Black-Scholes model and the LIBOR Market Model are explained, and their assumptions are discussed and tested on historical data. The normality of log-returns of stocks and forward rates is tested for different time periods, and is found to vary greatly over time. The models are calibrated using the Exponentially Weighted Moving Average (EWMA) method and implemented to backtest two risk measures, Value at Risk and Expected Shortfall, against historical data. The backtesting is done on five portfolios of varying risk in the European and Norwegian markets: three unleveraged portfolios consisting of bonds and stocks in different proportions, and two leveraged portfolios consisting of stocks and interest rate caps respectively. The performance of the risk measures is found to be not satisfactory for all portfolios, but performance is better for riskier portfolios and assets. Performance is found to vary over different time periods. The periods of worst performance are those of turbulent market conditions, notably in late 2008. These periods are found to loosely correspond to the time periods in which log-returns of equity and forward rates are least normal. A sensitivity analysis of performance with respect to the weighting parameter in the EWMA is done. The sensitivity is found to be substantial for all portfolios except those holding stocks in the Norwegian market.
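The EWMA calibration mentioned above follows a simple recursion: tomorrow's variance estimate is a weighted average of today's estimate and today's squared return. This sketch uses the common RiskMetrics-style daily decay of 0.94 and a normal one-day Value-at-Risk on top of it; the return values and the initialisation choice are illustrative assumptions, not the thesis's data or exact procedure.

```python
# EWMA variance recursion (RiskMetrics-style) and a one-day normal VaR.
# lambda_ = 0.94 is a common daily decay choice; returns are made up.
def ewma_variance(returns, lambda_=0.94):
    var = returns[0] ** 2                      # simple initialisation
    for r in returns[1:]:
        var = lambda_ * var + (1.0 - lambda_) * r ** 2
    return var

def value_at_risk(returns, z=1.645, lambda_=0.94):
    # 95% one-day VaR under a normal assumption: z * sigma
    return z * ewma_variance(returns, lambda_) ** 0.5

returns = [0.01, -0.02, 0.015, -0.005]
var95 = value_at_risk(returns)
```

The weighting parameter that the sensitivity analysis varies is `lambda_`: values closer to 1 make the estimate respond more slowly to recent shocks, which matters most in turbulent periods like late 2008.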
339

Parameter estimation in a Markov mesh model by reversible jump MCMC simulation

Norstein, Johanne January 2012 (has links)
We have a model for simulating facies values in a rock. We can use the model to find facies structures in a two-dimensional area, which we can use to find properties of a rock in a petroleum reservoir. The model is a Markov mesh model with a conditional probability distribution for the facies values, governed by a set of parameters. Using a training image with known facies values, we can simulate the parameters of the model and then simulate facies values for a new area. In this text, we simulate the parameters using a reversible jump Markov chain Monte Carlo algorithm. This lets us simulate not only the values of the parameters, but also which parameters should be present in the model. We use the Metropolis-Hastings algorithm in the simulations. We use the model with the simulated parameters to generate new images with the Markov mesh model. The images should have a visual appearance similar to the training image. We are able to make images with some qualities similar to the training image, even though we are not convinced that the parameter values converge.
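The Metropolis-Hastings step underlying the reversible-jump sampler can be sketched generically: propose a new parameter value and accept it with probability min(1, posterior ratio x proposal ratio). The target below is a toy standard normal density, not the Markov mesh posterior, and a symmetric random-walk proposal is assumed so the proposal ratio cancels.

```python
# Hedged sketch of one Metropolis-Hastings step with a symmetric
# random-walk proposal; the log-target here is a toy standard normal,
# standing in for the (much more complex) Markov mesh posterior.
import math
import random

def mh_step(x, log_target, proposal_sd=1.0):
    x_new = x + random.gauss(0.0, proposal_sd)   # symmetric proposal
    log_alpha = log_target(x_new) - log_target(x)
    # accept with probability min(1, exp(log_alpha))
    if math.log(random.random()) < log_alpha:
        return x_new
    return x

def log_std_normal(x):
    return -0.5 * x * x

random.seed(0)
samples, x = [], 0.0
for _ in range(20_000):
    x = mh_step(x, log_std_normal)
    samples.append(x)
mean = sum(samples) / len(samples)   # should be near 0 for N(0,1)
```

The reversible-jump extension adds trans-dimensional moves (adding or removing a parameter), with the acceptance ratio augmented by a Jacobian term, which is what lets the sampler decide which parameters should be present in the model.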
340

Constrained Hydrogel swelling in Biological Sensors : A Finite Element Method Approach

Sveinsson, Hrafn Mar January 2012 (has links)
Material models have been developed for anionic and/or cationic hydrogels, with a simulation framework implemented in MATLAB and the finite element software ABAQUS. The geometry of the simulations is a hemispheroidal hydrogel, divided into a core with a shell, covalently attached to an optical fiber. The material models have been used to estimate the chemical parameters of polyacrylamide hydrogels containing anionic or cationic monomer groups. Simulations comparing free and constrained swelling have been conducted in order to determine the effect of the geometrical constraint imposed by the optical fiber. Constrained hydrogel swelling featuring shells with properties different from the core was also investigated. The aim of the study was to validate the material models and examine the effects of geometrical constraints together with shell impregnation. The anionic material model was shown to reproduce experimental swelling data, while the cationic material model only reproduced the data for ionic strengths greater than 100 mM. Restricting the hydrogel to an optical fiber resulted in a decreased change in volume and an increase in the axial swelling. The model was able to reproduce the reported reduction in swelling for an impregnated anionic hydrogel by using a neutral shell in the simulations, but failed to recreate the shape of the swelling curve. With the reduction in swelling as a basis, a new method for estimating thin-layer properties has been developed.
