201

A Framework for Constructing and Evaluating Probabilistic Forecasts of Electricity Prices : A Case Study of the Nord Pool Market

Stenshorne, Kim January 2011 (has links)
A framework for a 10-day-ahead probabilistic forecast based on a deterministic model is proposed, and demonstrated on the system price of the Nord Pool electricity market. The framework consists of a two-component mixture model for the error terms (ET) generated by the deterministic model, where the components capture the dynamics of “balanced” and “unbalanced” ET respectively. These labels originate from a classification of prices according to their relative difference between consecutive hours. The balanced ET are modeled by a seemingly unrelated regression (SUR) model, which generates a 240-dimensional Gaussian distribution for them; for the unbalanced ET we only outline a model. The resulting probabilistic forecast is evaluated by four point-evaluation methods, the Talagrand diagram and the energy score, over four intervals in 2008 of 20 days each. The probabilistic forecast outperforms the deterministic model under both point and probabilistic evaluation. The Talagrand diagram diagnoses the forecasts as under-dispersed and biased, and the energy score shows that the optimal training-period length and set of explanatory variables for the SUR model change over time. The proposed framework demonstrates that a probabilistic forecast can be constructed from a deterministic model and evaluated in a probabilistic setting, which makes it possible to implement and evaluate probabilistic forecasts as scenario-generating tools in stochastic optimization.
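The energy score used for the probabilistic evaluation has a standard Monte Carlo estimator for ensemble forecasts. The following is a minimal sketch (not code from the thesis; the Gaussian draws stand in for samples from the fitted SUR forecast), assuming NumPy and SciPy:

```python
import numpy as np
from scipy.spatial.distance import pdist

def energy_score(ensemble, obs):
    """Monte Carlo estimate of the energy score.

    ensemble : (m, d) array of forecast samples, e.g. 240-dimensional
               hourly price paths drawn from the forecast distribution
    obs      : (d,) array holding the observed outcome
    Lower scores indicate better probabilistic forecasts.
    """
    m = ensemble.shape[0]
    # mean distance between ensemble members and the observation
    term1 = np.mean(np.linalg.norm(ensemble - obs, axis=1))
    # mean pairwise distance within the ensemble (all ordered pairs)
    term2 = 2.0 * pdist(ensemble).sum() / m**2
    return term1 - 0.5 * term2

# usage: 500 draws from a hypothetical 240-dimensional Gaussian forecast
rng = np.random.default_rng(0)
forecast = rng.normal(size=(500, 240))   # stand-in for the SUR output
observed = rng.normal(size=240)          # stand-in for realized prices
print(energy_score(forecast, observed))
```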
202

Analysis of dominance hierarchies using generalized mixed models

Kristiansen, Thomas January 2011 (has links)
This master’s thesis investigates how well a generalized mixed model fits different dominance data sets. The data sets mainly represent disputes between individuals in a closed group, and the model used is an adjusted, intransitive extension of the Bradley-Terry model. Two approaches to model fitting are applied: a frequentist and a Bayesian one. The model is fitted to the data sets both with and without random effects (RE) added. The thesis investigates the relationship between the use of random effects and the accuracy, significance and reliability of the regression coefficients, and whether the random effects affect the statistical significance of a term modelling intransitivity. The results of the analysis generally suggest that models including random effects explain the data better than models without them. Regression coefficients that appear significant in the model excluding REs generally remain significant when REs are taken into account. However, the underlying variance of the regression coefficients has a clear tendency to increase as REs are included, indicating that the estimates obtained may be less reliable than those obtained otherwise. Further, data sets that fit transitive models without REs generally remain transitive when REs are taken into account.
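For context, here is a minimal sketch of maximum likelihood fitting for the basic (transitive) Bradley-Terry model, without the intransitivity term or random effects studied in the thesis; the wins matrix is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function

# wins[i, j] = number of contests in which individual i beat individual j
wins = np.array([[0, 6, 4],
                 [2, 0, 5],
                 [1, 3, 0]])
n = wins.shape[0]

def neg_log_lik(theta):
    # ability of individual 0 is fixed to zero for identifiability
    a = np.concatenate(([0.0], theta))
    diff = a[:, None] - a[None, :]          # a_i - a_j
    # P(i beats j) = exp(a_i) / (exp(a_i) + exp(a_j)) = expit(a_i - a_j)
    return -np.sum(wins * np.log(expit(diff) + 1e-12))

fit = minimize(neg_log_lik, x0=np.zeros(n - 1), method="BFGS")
abilities = np.concatenate(([0.0], fit.x))
print("estimated abilities:", abilities)
```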
203

Sequential value information for Markov random field

Sneltvedt, Tommy January 2011 (has links)
204

Decoding of Algebraic Geometry Codes

Slaatsveen, Anna Aarstrand January 2011 (has links)
Codes derived from algebraic curves are called algebraic geometry (AG) codes. They provide a way to correct errors which occur during transmission of information. This paper concentrates on the decoding of algebraic geometry codes, in other words, how to find errors. We begin with a brief overview of some classical results in algebra as well as the definition of algebraic geometry codes. Then the theory of cyclic codes and BCH codes is presented. We discuss the problem of finding the shortest linear feedback shift register (LFSR) which generates a given finite sequence. A decoding algorithm for BCH codes is the Berlekamp-Massey algorithm. This algorithm has complexity O(n^2) and provides a general solution to the problem of finding the shortest LFSR that generates a given sequence, a problem whose generic solution has running time O(n^3). This algorithm may also be used for AG codes. We then proceed with algorithms for decoding AG codes. The first algorithm we discuss is the so-called basic decoding algorithm. It depends on the choice of a suitable divisor F. By creating a linear system of equations from the bases of spaces with prescribed zeros and allowed poles, we can find an error-locator function which contains all the error positions among its zeros. This algorithm can correct up to (d* - 1 - g)/2 errors and has a running time of O(n^3). From it, two other algorithms which improve on the error-correcting capability are developed. The first is the modified algorithm, which depends on a restriction on the divisors used to build the code and on an increasing sequence of divisors F1, ..., Fs. This gives an algorithm which can correct up to (d* - 1)/2 - S(H) errors, with a complexity of O(n^4); its correction rate is larger than that of the basic algorithm, but it runs slower. The extended modified algorithm is created by the use of what we refer to as special divisors: we choose the divisors in the sequence of the modified algorithm to have certain properties so that the algorithm runs faster. When s(E) is the Clifford defect of a set E of special divisors, the extended modified algorithm corrects up to (d* - 1)/2 - s(E) errors, an improvement on the basic algorithm, and its running time is O(n^3). The last algorithm we present is the Sudan-Guruswami list decoding algorithm, which searches for all possible code words within a certain distance of the received word. We show that AG codes are (e,b)-decodable and that this algorithm in most cases has a higher correction rate than the other algorithms presented here.
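As an illustration of the LFSR synthesis problem that the Berlekamp-Massey algorithm solves, here is a minimal sketch over the binary field GF(2) (BCH and AG codes generally work over larger fields, so this is a simplification):

```python
def berlekamp_massey(s):
    """Return (L, c): the length L of the shortest LFSR generating the
    bit sequence s, and its connection polynomial coefficients c, so that
    s[i] = c[1]&s[i-1] ^ ... ^ c[L]&s[i-L] for all i >= L (over GF(2))."""
    n = len(s)
    c = [0] * n; b = [0] * n   # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1               # current LFSR length, last update position
    for i in range(n):
        # discrepancy between the sequence and the LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:                  # prediction failed: update the register
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:     # the register must grow
                L, m, b = i + 1 - L, i, t
    return L, c[:L + 1]

# usage: s[i] = s[i-1] ^ s[i-3] gives a length-3 register
print(berlekamp_massey([0, 0, 1, 1, 0, 1, 0, 0]))
```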
205

Lévy Processes and Path Integral Methods with Applications in the Energy Markets

Oshaug, Christian A. J. January 2011 (has links)
The objective of this thesis was to explore methods for valuation of derivatives in energy markets. One aim was to determine whether the Normal inverse Gaussian (NIG) distributions are better suited for modelling energy prices than normal distributions. Another aim was to develop working implementations of Path Integral methods for valuing derivatives, based on a one-factor model of the underlying spot price. Energy prices are known to display properties like mean-reversion, periodicity, volatility clustering and extreme jumps. Periodicity and trend are modelled as a deterministic function of time, while mean-reversion effects are modelled with auto-regressive dynamics. It is established that the NIG distributions are superior to the normal distributions for modelling the residuals of an auto-regressive energy price model; volatility clustering and spike behaviour are not reproduced by the models considered here. After calibrating a model to fit real energy data, valuation of derivatives is achieved by propagating probability densities forward in time, applying the Path Integral methodology. It is shown how this can be implemented for European options and barrier options, under the assumptions of a deterministic mean function, mean-reversion dynamics and NIG-distributed residuals. The Path Integral methods developed compare favourably to Monte Carlo simulations in terms of execution time. However, the derivative values obtained by Path Integrals are sometimes outside the Monte Carlo confidence intervals, and the relative error may thus be too large for practical applications. Improvements of the implementations, with a view to minimizing errors, can be subject to further research.
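A minimal sketch of the distribution comparison described above, assuming NumPy/SciPy and a simple AR(1) detrending; the simulated price series is a stand-in for real market data:

```python
import numpy as np
from scipy import stats

# hypothetical daily system prices; replace with real market data
rng = np.random.default_rng(1)
prices = 300 + np.cumsum(rng.normal(0, 5, size=1000))

# remove a simple AR(1) mean-reversion component
x, y = prices[:-1], prices[1:]
phi, c = np.polyfit(x, y, 1)            # y ~ phi * x + c
residuals = y - (phi * x + c)

# fit NIG and normal distributions to the residuals
nig_params = stats.norminvgauss.fit(residuals)
norm_params = stats.norm.fit(residuals)

# compare via log-likelihood: higher favours the NIG model
ll_nig = stats.norminvgauss.logpdf(residuals, *nig_params).sum()
ll_norm = stats.norm.logpdf(residuals, *norm_params).sum()
print(f"log-lik NIG: {ll_nig:.1f}, normal: {ll_norm:.1f}")
```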
206

Numerical Solution of Stochastic Differential Equations by use of Path Integration : A study of a stochastic Lotka-Volterra model

Halvorsen, Gaute January 2011 (has links)
Some theory of real and stochastic analysis is presented in order to introduce the Path Integration method in terms of stochastic operators. A theorem giving sufficient conditions for convergence of the Path Integration method is then presented. The solution of a stochastic Lotka-Volterra model of a prey-predator relationship is then discussed, with and without the predator being harvested. Finally, an adaptive algorithm designed to solve the stochastic Lotka-Volterra model well is presented.
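A one-dimensional sketch of a Path Integration step, using the Euler transition kernel on a fixed grid (the thesis treats the two-dimensional Lotka-Volterra system with an adaptive algorithm; the dynamics below are illustrative only):

```python
import numpy as np

def path_integration_step(grid, density, drift, diffusion, dt):
    """Propagate the density of dX = drift(X) dt + diffusion(X) dW
    one time step forward on a fixed grid, via the Euler kernel."""
    dx = grid[1] - grid[0]
    mean = grid + drift(grid) * dt           # Euler drift update
    std = diffusion(grid) * np.sqrt(dt)      # Euler noise scale
    # k[i, j] = p(grid[i] at t+dt | grid[j] at t), a Gaussian kernel
    k = np.exp(-0.5 * ((grid[:, None] - mean[None, :]) / std[None, :]) ** 2)
    k /= std[None, :] * np.sqrt(2 * np.pi)
    new_density = k @ density * dx           # quadrature over the grid
    return new_density / (new_density.sum() * dx)  # renormalize

# usage: logistic growth with multiplicative noise, density started near 1
grid = np.linspace(0.01, 5, 400)
p = np.exp(-0.5 * ((grid - 1.0) / 0.1) ** 2)
p /= p.sum() * (grid[1] - grid[0])
for _ in range(100):
    p = path_integration_step(grid, p,
                              drift=lambda x: x * (1 - x),
                              diffusion=lambda x: 0.2 * x, dt=0.01)
```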
207

Effect of Baseline Hazard Misspecification on Regression Estimates and Dependence in Frailty Models

Mortensen, Bjørnar Tumanjan January 2007 (has links)
With lifetime data for a large number of families, frailty models can be used to identify risk factors and dependence within families. One way to do this is to assume a realistic distribution for the frailty variable and a distribution for the baseline hazard. No large studies of the effect of misspecifying the baseline hazard in frailty models have previously been carried out, because it has been common to assume a non-parametric baseline hazard. This is possible for simple frailty models, but for frailty models with varying degrees of correlation within a family it quickly becomes very difficult, so the effect of a misspecified baseline hazard is interesting to investigate. Throughout this thesis we assume that the baseline hazard is Weibull distributed, while the frailty distribution is assumed to be either gamma or stable. We simulate data where the true baseline hazard is either Gompertz, bathtub-shaped or log-logistic. Based on the maximum likelihood estimators of the dependence and the regression parameters, we investigate the effect of the misspecified baseline hazard. The simulations show that if there is large variation in the lifetimes and a large discrepancy between the true and the fitted baseline hazard, both the risk factors and the dependence are underestimated to a relatively large degree. This holds both when the frailty variable is stable distributed and when it is gamma distributed. The situation is even more serious if the frailty distribution is also misspecified.
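A minimal sketch of simulating lifetimes from a shared gamma-frailty model with a Weibull baseline hazard, of the kind used in such simulation studies; all parameter values are illustrative assumptions:

```python
import numpy as np
rng = np.random.default_rng(2)

n_families, family_size = 5000, 2
theta, beta = 0.5, 0.7        # frailty variance and covariate effect
lam, shape = 0.01, 1.5        # Weibull baseline: H0(t) = lam * t**shape

# shared gamma frailty per family, with mean 1 and variance theta
z = np.repeat(rng.gamma(1 / theta, theta, size=n_families), family_size)
x = rng.binomial(1, 0.5, size=n_families * family_size)  # binary covariate

# total cumulative hazard H(t) = z * lam * t**shape * exp(beta * x);
# invert H(T) = E with E standard exponential to draw lifetimes
e = rng.exponential(size=n_families * family_size)
t = (e / (z * lam * np.exp(beta * x))) ** (1 / shape)
print("median simulated lifetime:", np.median(t))
```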
208

Bandwidth selection based on a special choice of the kernel

Oksavik, Thomas January 2007 (has links)
We investigate methods of bandwidth selection in kernel density estimation for a wide range of kernels, both conventional and non-conventional.
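One standard bandwidth selector, sketched here for context, is least-squares cross-validation with a Gaussian kernel (the thesis considers a wider range of kernels; this sketch is illustrative only):

```python
import numpy as np
from scipy.stats import norm

def lscv(h, x):
    """Least-squares cross-validation criterion for a Gaussian-kernel KDE."""
    n = len(x)
    d = x[:, None] - x[None, :]
    # the integral of fhat^2 has a closed form: a Gaussian with std sqrt(2)*h
    int_f2 = norm.pdf(d, scale=np.sqrt(2) * h).sum() / n**2
    # leave-one-out density estimates at the data points
    k = norm.pdf(d, scale=h)
    np.fill_diagonal(k, 0.0)
    loo = k.sum(axis=1) / (n - 1)
    return int_f2 - 2 * loo.mean()

rng = np.random.default_rng(3)
x = rng.normal(size=200)
hs = np.linspace(0.05, 1.0, 60)
best_h = hs[np.argmin([lscv(h, x) for h in hs])]
print("LSCV bandwidth:", best_h)
```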
209

Parallel Multiple Proposal MCMC Algorithms

Austad, Haakon Michael January 2007 (has links)
We explore the variance reduction achievable through parallel implementation of multi-proposal MCMC algorithms and the use of control variates. Implemented sequentially, multi-proposal MCMC algorithms are of limited value, but they are very well suited for parallelization. Further, discarding the rejected states in an MCMC sampler can intuitively be interpreted as a waste of information; this becomes even more true for a multi-proposal algorithm, where we discard several states in each iteration. By creating an alternative estimator consisting of a linear combination of the traditional sample mean and zero-mean random variables called control variates, we can improve on the traditional estimator. We present a setting for the multi-proposal MCMC algorithm and study it in two examples. The first example considers sampling from a simple Gaussian distribution, while in the second we design the framework for a multi-proposal mode-jumping algorithm for sampling from a distribution with several separated modes. We find that the variance reduction achieved by our control variate estimator in general increases with the number of proposals in the sampler. For the Gaussian example we find that the benefit from parallelization is small, and that little is gained from increasing the number of proposals. The mode-jumping example, however, is very well suited for parallelization, and we obtain a relative variance reduction per unit time of roughly 80% with 16 proposals in each iteration.
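A minimal sketch of one multiple-try Metropolis step with a symmetric Gaussian proposal, for a Gaussian target as in the first example (the control-variate estimator and the parallelization itself are not shown; the k candidate evaluations are what a parallel implementation would distribute):

```python
import numpy as np
rng = np.random.default_rng(4)

def log_target(x):
    return -0.5 * x**2                  # standard Gaussian target

def mtm_step(x, k=16, sigma=1.0):
    """One multiple-try Metropolis step with k simultaneous proposals."""
    ys = x + sigma * rng.normal(size=k)         # k candidates (parallelizable)
    wy = np.exp(log_target(ys))                 # weights pi(y_j), symmetric proposal
    y = ys[rng.choice(k, p=wy / wy.sum())]      # select one candidate
    # reference points drawn from the proposal around the selected candidate
    xs = np.append(y + sigma * rng.normal(size=k - 1), x)
    wx = np.exp(log_target(xs))
    if rng.random() < min(1.0, wy.sum() / wx.sum()):
        return y, ys        # accept; ys could feed a control-variate estimator
    return x, ys

chain = np.empty(5000)
x = 0.0
for i in range(len(chain)):
    x, proposals = mtm_step(x)
    chain[i] = x
print("mean:", chain.mean(), "var:", chain.var())
```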
210

Using a Combination of an Explicit and Implicit Solver for the Numerical Simulation of Electrical Activity in the Heart

Kaarby, Martin January 2007 (has links)
Creating realistic simulations of an ECG signal on a computer can be very useful for understanding the relationship between the observed ECG signal and the state of the heart. A realistic simulation requires a good mathematical model. A popular model, developed by Winslow et al. in 1999 and called the Winslow model, consists of a set of 31 ordinary differential equations describing the electrochemical reactions that take place in a heart cell. Experience shows that evaluating this system is a computationally heavy operation, so the efficiency of a solver depends almost entirely on the number of such evaluations; to increase efficiency it is therefore important to limit this number. Studying the solution of the Winslow model more closely, we see that it begins with a transient phase in which explicit solvers are usually cheaper than implicit ones. The idea is therefore to start with an explicit solver and switch to an implicit one once the transient phase is over and the problem becomes too stiff for the explicit solver. This approach has been shown to reduce the number of evaluations of the Winslow model by about 25%, while preserving the accuracy of the solution.
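A minimal sketch of the explicit-to-implicit switching strategy using SciPy's solve_ivp; the toy system and the switch time are assumptions standing in for the 31-equation Winslow model, whose stiffness profile determines the real switch point:

```python
import numpy as np
from scipy.integrate import solve_ivp

# a stiff stand-in for the cell model: a Van der Pol-type system
def rhs(t, y, mu=50.0):
    return [y[1], mu * ((1 - y[0] ** 2) * y[1] - y[0])]

t_switch = 0.5   # assumed end of the transient phase (problem dependent)

# explicit Runge-Kutta solver through the non-stiff transient phase
part1 = solve_ivp(rhs, (0.0, t_switch), [2.0, 0.0], method="RK45")

# implicit BDF solver once the problem becomes too stiff for RK45,
# restarting from the explicit solver's final state
part2 = solve_ivp(rhs, (t_switch, 3.0), part1.y[:, -1], method="BDF")

# the cost measure of interest: total right-hand-side evaluations
print("rhs evaluations:", part1.nfev + part2.nfev)
```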
