11

On the Mapper Algorithm : A study of a new topological method for data analysis

Stovner, Roar Bakken January 2012 (has links)
Mapper is an algorithm for describing high-dimensional datasets in terms of simple geometric objects. We give a new definition of Mapper, with which we are able to prove that Mapper is a functor and that Mapper is a homotopy equivalence for certain "nice" input data. To establish these results we describe the statistical theory of functorial clustering and the topological machinery of homotopy colimits. At the end of the document we show, by means of numerical experiments, that the functoriality of Mapper is useful in applications.
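As a rough illustration of the construction the abstract refers to, the following is a minimal sketch of Mapper with a one-dimensional filter and single-linkage clustering at a fixed cut; it is not the functorial treatment developed in the thesis, and all parameter values are arbitrary placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def mapper(points, filter_values, n_intervals=10, overlap=0.25, cut=1.0):
    """points: (n, d) array, filter_values: (n,) array of filter outputs."""
    lo, hi = filter_values.min(), filter_values.max()
    spacing = (hi - lo) / n_intervals
    nodes, edges = [], set()
    for i in range(n_intervals):
        # overlapping interval of the cover of the filter range
        a = lo + i * spacing - overlap * spacing
        b = lo + (i + 1) * spacing + overlap * spacing
        idx = np.where((filter_values >= a) & (filter_values <= b))[0]
        if len(idx) == 0:
            continue
        if len(idx) == 1:
            labels = np.array([1])
        else:
            labels = fcluster(linkage(points[idx], method="single"),
                              t=cut, criterion="distance")
        for lab in np.unique(labels):
            members = set(idx[labels == lab].tolist())
            # one node per cluster; clusters sharing points get an edge (nerve)
            for j, other in enumerate(nodes):
                if members & other:
                    edges.add((j, len(nodes)))
            nodes.append(members)
    return nodes, edges
```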
12

Waves in Excitable Media

Theisen, Bjørn Bjørge January 2012 (has links)
This thesis is dedicated to the study of Barkley's equation, a stiff diffusion-reaction equation describing waves in excitable media. Several numerical solution methods will be derived and studied, ranging from the simple explicit Euler method to more complex integrating factor schemes. A C++ application with a graphical user interface, created for performing several of the numerical experiments in this thesis, will also be described.
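For orientation, a minimal explicit-Euler discretisation of Barkley's two-variable model on a periodic grid might look as follows; the parameter values and initial condition are illustrative placeholders, not those used in the thesis.

```python
import numpy as np

def barkley_step(u, v, dt, dx, a=0.75, b=0.02, eps=0.02):
    # five-point Laplacian with periodic boundaries
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    reaction = u * (1.0 - u) * (u - (v + b) / a) / eps
    return u + dt * (lap + reaction), v + dt * (u - v)

# the stiff reaction term (1/eps factor) forces a small explicit time step
n, dx, dt = 128, 0.25, 0.001
u = np.zeros((n, n)); u[:, : n // 2] = 1.0   # excited half-plane
v = np.zeros((n, n)); v[n // 2 :, :] = 0.5   # broken wave to start a spiral
for _ in range(2000):
    u, v = barkley_step(u, v, dt, dx)
```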
13

Ensemble Kalman Filter on the Brugge Field

Vo, Paul Vuong January 2012 (has links)
The purpose of modeling a petroleum reservoir is to find the underlying reservoir properties based on production data, seismic data and other available data. In recent years, progress in technology has made it possible to extract large amounts of data from the reservoir frequently. Hence, mathematical models that can rapidly characterize the reservoir as new data become available have gained much interest. In this thesis we present a formulation of the first-order Hidden Markov Model (HMM) that fits the description of a reservoir model under production. We use a recursive technique that gives the theoretical solution to the reservoir characterization problem. Further, we introduce the Kalman Filter, which gives the exact solution when certain assumptions about the HMM are made. However, these assumptions are not valid when describing the process of a reservoir under production. Thus, we introduce the Ensemble Kalman Filter (EnKF), which has been shown to give an approximate solution to the reservoir characterization problem. The EnKF depends on multiple realizations of the reservoir model, which we obtain from the reservoir production simulator Eclipse. When the number of realizations is kept small for computational reasons, the EnKF has been shown to give potentially unreliable results. Hence, we apply a shrinkage regression technique (DR-EnKF) and a localization technique (Loc-EnKF) that are able to correct the traditional EnKF. Both the traditional EnKF and these corrections are tested on a synthetic reservoir case called the Brugge Field. The results indicate that the traditional EnKF suffers from ensemble collapse when the ensemble size is small, which results in small and unreliable prediction uncertainty in the model variables. The DR-EnKF improves on the EnKF in terms of root mean squared error (RMSE) for a small ensemble size, while the Loc-EnKF makes considerable improvements over the EnKF and produces model variables that seem reasonable.
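As a point of reference, the analysis step of a stochastic EnKF (independent of the Eclipse-based workflow described above) can be sketched as follows; the variable names, shapes and the fixed random seed are assumptions for illustration only.

```python
import numpy as np

def enkf_update(X, H, d, R, rng=None):
    """X: (n_state, n_ens) forecast ensemble, H: (n_obs, n_state) linear
    observation operator, d: (n_obs,) data, R: (n_obs, n_obs) obs-error cov."""
    rng = rng or np.random.default_rng(0)
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # state anomalies
    Y = H @ X
    Yp = Y - Y.mean(axis=1, keepdims=True)           # predicted-data anomalies
    C = Yp @ Yp.T / (n_ens - 1) + R                  # innovation covariance
    K = (A @ Yp.T / (n_ens - 1)) @ np.linalg.inv(C)  # ensemble Kalman gain
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, n_ens).T
    # Loc-EnKF would taper K elementwise with a distance-based mask before this
    # update; DR-EnKF would instead regularize the regression behind K.
    return X + K @ (D - Y)                           # analysis ensemble
```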
14

A Framework for Constructing and Evaluating Probabilistic Forecasts of Electricity Prices : A Case Study of the Nord Pool Market

Stenshorne, Kim January 2011 (has links)
A framework for a 10-day-ahead probabilistic forecast based on a deterministic model is proposed. The framework is demonstrated on the system price of the Nord Pool electricity market. It consists of a two-component mixture model for the error terms (ET) generated by the deterministic model, where the components capture the dynamics of “balanced” and “unbalanced” ET respectively. The labels originate from a classification of prices according to their relative difference for consecutive hours. The balanced ET are modeled by a seemingly unrelated regression (SUR) model; for the unbalanced ET we only outline a model. The SUR model generates a 240-dimensional Gaussian distribution for the balanced ET. The resulting probabilistic forecast is evaluated by four point-evaluation methods, the Talagrand diagram and the energy score. The probabilistic forecast outperforms the deterministic model by the standards of both point and probabilistic evaluation. The evaluations were performed at four intervals in 2008 consisting of 20 days each. The Talagrand diagram diagnoses the forecasts as under-dispersed and biased, and the energy score shows that the optimal training-period length and set of explanatory variables of the SUR model change with time. The proposed framework demonstrates that a probabilistic forecast can be constructed from a deterministic model and evaluated in a probabilistic setting, which shows that implementing and evaluating probabilistic forecasts as scenario-generating tools in stochastic optimization is possible.
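For context, the energy score used to evaluate the 240-dimensional forecast can be estimated from ensemble draws as in the hedged sketch below; the function, its arguments and the placeholder data are illustrative, not taken from the thesis.

```python
import numpy as np

def energy_score(samples, y):
    """samples: (m, d) draws from the forecast distribution, y: (d,) outcome.
    Lower values indicate a better probabilistic forecast."""
    term1 = np.mean(np.linalg.norm(samples - y, axis=1))
    pairwise = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=2)
    return term1 - 0.5 * np.mean(pairwise)

# example with d = 240 hourly prices over the 10-day horizon
rng = np.random.default_rng(1)
forecast_draws = rng.normal(size=(100, 240))   # placeholder ensemble
observed = rng.normal(size=240)                # placeholder outcome
score = energy_score(forecast_draws, observed)
```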
15

Analysis of dominance hierarchies using generalized mixed models

Kristiansen, Thomas January 2011 (has links)
This master’s thesis investigates how well a generalized mixed model fits different dominance data sets. The data sets mainly represent disputes between individuals in a closed group, and the model used is an adjusted, intransitive extension of the Bradley-Terry model. Two approaches to model fitting are applied: a frequentist and a Bayesian one. The model is fitted to the data sets both with and without random effects (RE) added. The thesis investigates the relationship between the use of random effects and the accuracy, significance and reliability of the regression coefficients, and whether or not the random effects affect the statistical significance of a term modelling intransitivity. The results of the analysis generally suggest that models including random effects explain the data better than models without REs. In general, regression coefficients that appear to be significant in the model excluding REs remain significant when REs are taken into account. However, the variance of the regression coefficients has a clear tendency to increase when REs are included, indicating that the estimates may be less reliable than those obtained otherwise. Further, data sets that fit transitive models when REs are excluded generally remain transitive when REs are taken into account.
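As background, a plain (transitive, fixed-effects) Bradley-Terry model can be fitted by maximum likelihood as in the sketch below; the intransitive extension and the random effects studied in the thesis are deliberately omitted, and the toy data are made up.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_bradley_terry(contests, n_players):
    """contests: list of (winner, loser) index pairs from observed disputes."""
    w = np.array([c[0] for c in contests])
    l = np.array([c[1] for c in contests])

    def neg_log_lik(free):
        s = np.concatenate([[0.0], free])   # fix s_0 = 0 for identifiability
        # P(i beats j) = expit(s_i - s_j)
        return -np.sum(np.log(expit(s[w] - s[l])))

    res = minimize(neg_log_lik, np.zeros(n_players - 1), method="BFGS")
    return np.concatenate([[0.0], res.x])   # latent dominance scores

# toy data: 0 beats 1 twice, 1 beats 2, 2 beats 0 once (a mildly circular group)
scores = fit_bradley_terry([(0, 1), (0, 1), (1, 2), (2, 0)], 3)
```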
16

Sequential value information for Markov random field

Sneltvedt, Tommy January 2011 (has links)
Sequential value information for Markov random field.
17

Decoding of Algebraic Geometry Codes

Slaatsveen, Anna Aarstrand January 2011 (has links)
Codes derived from algebraic curves are called algebraic geometry (AG) codes. They provide a way to correct errors which occur during transmission of information. This paper concentrates on the decoding of algebraic geometry codes, in other words, how to find the errors. We begin with a brief overview of some classical results in algebra as well as the definition of algebraic geometry codes. Then the theory of cyclic codes and BCH codes is presented, and we discuss the problem of finding the shortest linear feedback shift register (LFSR) which generates a given finite sequence. A decoding algorithm for BCH codes is the Berlekamp-Massey algorithm. This algorithm has complexity O(n^2) and provides a general solution to the problem of finding the shortest LFSR that generates a given sequence, a problem whose standard solution has running time O(n^3). This algorithm may also be used for AG codes. Further we proceed with algorithms for decoding AG codes. The first algorithm we discuss is the so-called basic decoding algorithm. It depends on the choice of a suitable divisor F: by setting up a linear system of equations from the bases of spaces with prescribed zeroes and allowed poles, we can find an error-locator function which contains all the error positions among its zeros. This algorithm can correct up to (d* - 1 - g)/2 errors and has a running time of O(n^3). From this algorithm, two other algorithms that improve the error-correcting capability are developed. The first, the modified algorithm, depends on a restriction on the divisors used to build the code and on an increasing sequence of divisors F1, ..., Fs. It can correct up to (d* - 1)/2 - S(H) errors and has a complexity of O(n^4); its correction rate is higher than that of the basic algorithm, but it runs slower. The extended modified algorithm is obtained by using what we refer to as special divisors: the divisors in the sequence of the modified algorithm are chosen to have certain properties so that the algorithm runs faster. When s(E) is the Clifford defect of a set E of special divisors, the extended modified algorithm corrects up to (d* - 1)/2 - s(E) errors, which is an improvement over the basic algorithm, and its running time is O(n^3). The last algorithm we present is the Sudan-Guruswami list decoding algorithm, which searches for all possible code words within a certain distance from the received word. We show that AG codes are (e,b)-decodable and that this algorithm in most cases has a higher correction rate than the other algorithms presented here.
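For reference, the Berlekamp-Massey algorithm over GF(2) mentioned above can be sketched as follows; this is a generic textbook version operating on a binary sequence, not the implementation or field used in the thesis.

```python
def berlekamp_massey(s):
    """Shortest LFSR for a binary sequence s (list of 0/1). Returns the LFSR
    length L and connection coefficients c_0..c_L (c_0 = 1), so that
    s[i] = c_1*s[i-1] ^ ... ^ c_L*s[i-L] for all i >= L."""
    n = len(s)
    C = [1] + [0] * n       # current connection polynomial
    B = [1] + [0] * n       # copy of C from the last length change
    L, m = 0, 1             # current LFSR length, shift since last change
    for i in range(n):
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]           # discrepancy of the LFSR prediction
        if d == 0:
            m += 1
        elif 2 * L <= i:
            T = C[:]
            for j in range(n - m + 1):
                C[j + m] ^= B[j]           # C(x) <- C(x) + x^m * B(x)
            L, B, m = i + 1 - L, T, 1
        else:
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            m += 1
    return L, C[: L + 1]

L, poly = berlekamp_massey([1, 0, 1, 1, 1, 0, 0, 1])
```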
18

Lévy Processes and Path Integral Methods with Applications in the Energy Markets

Oshaug, Christian A. J. January 2011 (has links)
The objective of this thesis was to explore methods for valuation of derivatives in energy markets. One aim was to determine whether the Normal inverse Gaussian distributions would be better suited for modelling energy prices than normal distributions. Another aim was to develop working implementations of Path Integral methods for valuing derivatives, based on some one-factor model of the underlying spot price. Energy prices are known to display properties like mean-reversion, periodicity, volatility clustering and extreme jumps. Periodicity and trend are modelled as a deterministic function of time, while mean-reversion effects are modelled with auto-regressive dynamics. It is established that the Normal inverse Gaussian distributions are superior to the normal distributions for modelling the residuals of an auto-regressive energy price model. Volatility clustering and spike behaviour are not reproduced with the models considered here. After calibrating a model to fit real energy data, valuation of derivatives is achieved by propagating probability densities forward in time, applying the Path Integral methodology. It is shown how this can be implemented for European options and barrier options, under the assumptions of a deterministic mean function, mean-reversion dynamics and Normal inverse Gaussian distributed residuals. The Path Integral methods developed compare favourably to Monte Carlo simulations in terms of execution time. The derivative values obtained by Path Integrals are sometimes outside the Monte Carlo confidence intervals, and the relative error may thus be too large for practical applications. Improvements of the implementations, with a view to minimizing errors, can be subject to further research.
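To make the idea concrete, a single path-integration step that pushes a density forward through a mean-reverting AR(1) model with Normal inverse Gaussian residuals might look like the hedged sketch below; the distribution parameters, grid and initial density are placeholders rather than calibrated values.

```python
import numpy as np
from scipy.stats import norminvgauss

def path_integration_step(grid, density, phi=0.9, a=1.5, b=0.0, scale=0.1):
    """Push the density of a mean-reverting state one step forward:
    x' = phi * x + eps, with eps ~ NIG(a, b, scale)."""
    dx = grid[1] - grid[0]
    # transition kernel k[i, j] = f(grid[i] | grid[j])
    k = norminvgauss.pdf(grid[:, None] - phi * grid[None, :], a, b, scale=scale)
    new_density = k @ density * dx                  # rectangle-rule quadrature
    return new_density / (new_density.sum() * dx)   # renormalise

grid = np.linspace(-2.0, 2.0, 401)
p = np.exp(-grid**2 / 0.1) / np.sqrt(0.1 * np.pi)   # initial N(0, 0.05) density
for _ in range(30):                                  # propagate 30 steps ahead
    p = path_integration_step(grid, p)
```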
19

Numerical Solution of Stochastic Differential Equations by use of Path Integration : A study of a stochastic Lotka-Volterra model

Halvorsen, Gaute January 2011 (has links)
Some theory of real and stochastic analysis is presented in order to introduce the Path Integration method in terms of stochastic operators. A theorem giving sufficient conditions for convergence of the Path Integration method is then presented. The solution of a stochastic Lotka-Volterra model of a prey-predator relationship is then discussed, with and without the predator being harvested. Finally, an adaptive algorithm designed to solve the stochastic Lotka-Volterra model accurately is presented.
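As a companion illustration, a stochastic Lotka-Volterra prey-predator model with predator harvesting can be simulated by Euler-Maruyama as sketched below; the thesis itself solves the model by path integration rather than simulation, and the coefficients and noise structure here are placeholders.

```python
import numpy as np

def lotka_volterra_em(x0, y0, T=50.0, dt=0.001, a=1.0, b=0.5, c=0.5, d=0.3,
                      harvest=0.1, sigma=0.05, seed=0):
    """Euler-Maruyama path of a prey (x) / predator (y) SDE with multiplicative
    noise and constant-effort harvesting of the predator."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, y = np.empty(n + 1), np.empty(n + 1)
    x[0], y[0] = x0, y0
    for i in range(n):
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), 2)
        x[i + 1] = x[i] + (a * x[i] - b * x[i] * y[i]) * dt + sigma * x[i] * dw1
        y[i + 1] = y[i] + (c * x[i] * y[i] - (d + harvest) * y[i]) * dt \
                   + sigma * y[i] * dw2
    return x, y

prey, predator = lotka_volterra_em(2.0, 1.0)
```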
20

Analysis of portfolio risk and the LIBOR Market Model

Helgesen, Ole Thomas January 2011 (has links)
This master's thesis focuses on interest rate modeling and portfolio risk analysis. The LIBOR Market Model is the interest rate model chosen to simulate the forward rates in the Norwegian and American markets, two very different markets in terms of size and liquidity. At the same time, the Norwegian market is highly dependent on the American market, and the correlation can be seen clearly when the data sets are compared in the preliminary analysis. The data sets cover the period from 2000 to early 2011. Risk estimates are found by Monte Carlo simulation, in particular Value at Risk and Expected Shortfall, the two most commonly used risk measures. Interest rate modeling and risk analysis require parameter estimates from historical data, which means that the Financial Crisis has a strong effect. Two different approaches are studied: Exponentially Weighted Moving Averages (EWMA) and (equally weighted) Floating Averages. The main idea is to cancel out trend and capture the true volatility and correlation. Risk is estimated in several different settings: first an imaginary stable market is assumed, and in the next steps the Norwegian and the American markets, with their varying volatility and correlation, are analyzed. Finally we look at a swap depending on both Norwegian and American interest rates. In order to check the risk estimates, the actual losses of the test portfolios are compared to the Value at Risk and the Expected Shortfall. The majority of the losses larger than the risk estimates occur between 2007 and 2009, which confirms, not surprisingly, that the risk measures were unable to predict the Financial Crisis. The portfolios have a short time horizon, 1 or 5 days, and the EWMA procedure, which weights recent observations more heavily, therefore performs better than the Floating Averages procedure. However, both procedures consistently underestimate the risk. Still, the risk estimates can be used as triggers in investment strategies, and such strategies are tested in the final part of this thesis. Plotting the cumulative losses and testing the strategies shows that the risk estimates can be used with success in investment strategies, although the strategies are very sensitive to the choice of the risk threshold. Nonetheless, even though the model underestimates risk, the backtesting and the plots also show that the estimates are fairly proportional to the real losses. The risk estimates are therefore useful indicators of the development of the exposure of any financial position, which explains why they are the most commonly used risk measures in financial markets today.
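For context, the EWMA covariance update and Monte Carlo Value-at-Risk / Expected Shortfall estimates can be sketched as follows; the decay factor 0.94 is the common RiskMetrics choice and, like the placeholder loss sample, is not necessarily what the thesis uses.

```python
import numpy as np

def ewma_covariance(returns, lam=0.94):
    """returns: (T, k) matrix of daily changes in the k forward rates."""
    cov = np.cov(returns[:30], rowvar=False)        # warm-up sample estimate
    for r in returns[30:]:
        cov = lam * cov + (1.0 - lam) * np.outer(r, r)
    return cov

def var_es(simulated_losses, alpha=0.99):
    """Value at Risk and Expected Shortfall from Monte Carlo loss samples."""
    var = np.quantile(simulated_losses, alpha)
    es = simulated_losses[simulated_losses >= var].mean()
    return var, es

# placeholder check: losses simulated from a fitted model vs. a 99% threshold
losses = np.random.default_rng(2).normal(0.0, 1.0, 100_000)
var99, es99 = var_es(losses)
```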
