  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Waves in Excitable Media

Theisen, Bjørn Bjørge January 2012 (has links)
This thesis is dedicated to the study of Barkley's equation, a stiff reaction-diffusion equation describing waves in excitable media. Several numerical solution methods will be derived and studied, ranging from the simple explicit Euler method to more complex integrating factor schemes. A C++ application with a graphical user interface, created for performing several of the numerical experiments in this thesis, will also be described.
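As a sketch of the simplest method mentioned above, one explicit Euler step for a 1D Barkley-type reaction-diffusion system might look as follows. This is Python with illustrative parameter values (a, b, eps, D); the thesis's own implementation is in C++ and is not reproduced here.

```python
import numpy as np

def barkley_euler_step(u, v, dt, dx, a=0.75, b=0.02, eps=0.02, D=1.0):
    """One explicit Euler step for a 1D Barkley-type system:
    u_t = D u_xx + u(1-u)(u - (v+b)/a)/eps,  v_t = u - v.
    Parameter values are illustrative, not from the thesis."""
    # Discrete Laplacian with no-flux (Neumann) boundaries
    lap = (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    # Stiff Barkley kinetics: fast activator u, slow inhibitor v
    f = u * (1 - u) * (u - (v + b) / a) / eps
    u_new = u + dt * (D * lap + f)
    v_new = v + dt * (u - v)
    return u_new, v_new
```

The stiffness is visible in the 1/eps factor: the explicit method needs a very small dt, which is what motivates the integrating factor schemes the thesis studies.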

Ensemble Kalman Filter on the Brugge Field

Vo, Paul Vuong January 2012 (has links)
The purpose of modeling a petroleum reservoir is to infer the underlying reservoir properties from production data, seismic data and other available sources. In recent years, progress in technology has made it possible to extract large amounts of data from the reservoir frequently. Hence, mathematical models that can rapidly characterize the reservoir as new data become available have gained much interest. In this thesis we present a formulation of the first-order Hidden Markov Model (HMM) that fits the description of a reservoir model under production. We use a recursive technique that gives the theoretical solution to the reservoir characterization problem. Further, we introduce the Kalman Filter, which serves as the exact solution when certain assumptions about the HMM are made. However, these assumptions are not valid when describing the process of a reservoir under production. Thus, we introduce the Ensemble Kalman Filter (EnKF), which has been shown to give an approximate solution to the reservoir characterization problem. The EnKF depends on multiple realizations from the reservoir model, which we obtain from the reservoir production simulator Eclipse. When the number of realizations is kept small for computational reasons, the EnKF has been shown to give potentially unreliable results. Hence, we apply a shrinkage regression technique (DR-EnKF) and a localization technique (Loc-EnKF) that are able to correct the traditional EnKF. Both the traditional EnKF and these corrections are tested on a synthetic reservoir case called the Brugge Field. The results indicate that the traditional EnKF suffers from ensemble collapse when the ensemble size is small, resulting in small and unreliable prediction uncertainty in the model variables.
The DR-EnKF improves on the EnKF in terms of root mean squared error (RMSE) for small ensemble sizes, while the Loc-EnKF makes considerable improvements over the EnKF and produces model variables that seem reasonable.
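The EnKF analysis step that these corrections build on can be sketched as follows. This is a generic stochastic EnKF with perturbed observations and a linear observation operator, written in Python as a minimal illustration, not the thesis's Eclipse-coupled implementation.

```python
import numpy as np

def enkf_update(X, y, H, r, rng):
    """One stochastic EnKF analysis step.
    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    r : observation-error variance (diagonal R = r*I assumed)
    """
    n_obs, n_ens = len(y), X.shape[1]
    Xm = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    # Sample covariances projected through H
    PHt = Xm @ (H @ Xm).T / (n_ens - 1)
    S = H @ PHt + r * np.eye(n_obs)
    K = PHt @ np.linalg.inv(S)                    # Kalman gain from the ensemble
    # Perturbed observations, one per ensemble member
    Y = y[:, None] + rng.normal(0.0, np.sqrt(r), size=(n_obs, n_ens))
    return X + K @ (Y - H @ X)
```

The ensemble-collapse problem described in the abstract shows up here when n_ens is small: the sample covariance PHt is noisy and rank-deficient, which is what the shrinkage (DR-EnKF) and localization (Loc-EnKF) corrections address.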

Gruppediskusjoner rundt en kraftplattform : En kvalitativ studie om bruk av språket for å avdekke og korrigere elevers misoppfatninger og alternative forestillinger i mekanikk. / Group Discussions around a Force Platform : A qualitative study of the use of language to reveal and correct pupils' alternative conceptions in mechanics

Sjøvik, Vegard Aas January 2010 (has links)
This thesis examines pupils' group discussions centred on a force platform, with an elevator as the arena. I have developed and tried out a teaching unit in which pupils use an electronic force platform that gives the user a real-time graphical display of the normal force. The unit is designed for Fysikk 1 and covers competence aims within the main area of classical physics. The pupils also make use of oral and digital skills, some of the basic skills that are also part of the subject competence in physics. The unit was tried out in an upper secondary school class, with the pupils working in groups of five. They carried out experiments in an elevator and discussed their measurements in relation to what they experienced bodily. The results are discussed in light of known conceptions in mechanics and established theories on the role of language in learning. The results of the study show that the force platform has good potential for the teaching of mechanics because it links together different forms of representation in physics. The measurements are presented in real time, and the pupils can relate theory and experimental results to their own experiences of phenomena involving forces and motion.

A Framework for Constructing and Evaluating Probabilistic Forecasts of Electricity Prices : A Case Study of the Nord Pool Market

Stenshorne, Kim January 2011 (has links)
A framework for a 10-day-ahead probabilistic forecast based on a deterministic model is proposed and demonstrated on the system price of the Nord Pool electricity market. The framework consists of a two-component mixture model for the error terms (ET) generated by the deterministic model. The components assume the dynamics of “balanced” or “unbalanced” ET, respectively. The label of an ET originates from a classification of prices according to their relative difference for consecutive hours. The balanced ET are modeled by a seemingly unrelated regression (SUR) model; for the unbalanced ET we only outline a model. The SUR model generates a 240-dimensional Gaussian distribution for the balanced ET. The resulting probabilistic forecast is evaluated by four point-evaluation methods, the Talagrand diagram and the energy score. The probabilistic forecast outperforms the deterministic model by the standards of both point and probabilistic evaluation. The evaluations were performed at four intervals in 2008, each consisting of 20 days. The Talagrand diagram diagnoses the forecasts as under-dispersed and biased. The energy score finds that the optimal length of the training period and the optimal set of explanatory variables of the SUR model change with time. The proposed framework demonstrates that a probabilistic forecast can be constructed from a deterministic model and evaluated in a probabilistic setting. This shows that implementing and evaluating probabilistic forecasts as scenario-generating tools in stochastic optimization is possible.
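The energy score used for the probabilistic evaluation can be estimated by Monte Carlo from an ensemble of forecast draws: ES(F, y) = E‖X − y‖ − ½ E‖X − X′‖, with X, X′ independent draws from the forecast F. A minimal sketch in Python (illustrative only, not the thesis's code):

```python
import numpy as np

def energy_score(samples, obs):
    """Monte Carlo estimate of the energy score.
    samples : (m, d) draws from the forecast distribution
    obs     : (d,) realized outcome; lower score = better forecast
    """
    m = samples.shape[0]
    # E||X - y||, averaged over the ensemble
    term1 = np.mean(np.linalg.norm(samples - obs, axis=1))
    # E||X - X'|| over all ordered pairs (diagonal terms are zero)
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.sum(np.linalg.norm(diffs, axis=2)) / (m * (m - 1))
    return term1 - 0.5 * term2
```

Being a proper scoring rule, the energy score rewards both calibration and sharpness, which is why it can discriminate between training-period lengths and explanatory-variable sets as described above.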

Analysis of dominance hierarchies using generalized mixed models

Kristiansen, Thomas January 2011 (has links)
This master’s thesis investigates how well a generalized mixed model fits different dominance data sets. The data sets mainly represent disputes between individuals in a closed group, and the model used is an adjusted, intransitive extension of the Bradley-Terry model. Two approaches to model fitting are applied: a frequentist and a Bayesian one. The model is fitted to the data sets both with and without random effects (REs). The thesis investigates the relationship between the use of random effects and the accuracy, significance and reliability of the regression coefficients, and whether or not the random effects affect the statistical significance of a term modelling intransitivity. The results of the analysis generally suggest that models including random effects explain the data better than models without REs. In general, regression coefficients that appear significant in the model excluding REs remain significant when REs are taken into account. However, the variance of the regression coefficients has a clear tendency to increase when REs are included, indicating that the estimates obtained may be less reliable than those obtained otherwise. Further, data sets that fit transitive models without REs generally seem to remain transitive when REs are taken into account.
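For reference, the basic Bradley-Terry model (without the thesis's intransitivity term or random effects) can be fitted with Hunter's MM algorithm. A minimal sketch in Python, with an illustrative win matrix:

```python
import numpy as np

def fit_bradley_terry(wins, n_iter=200):
    """MM algorithm for basic Bradley-Terry strengths.
    wins[i, j] = number of contests i won against j.
    Returns strengths normalized to sum to 1.  (No random effects
    or intransitivity term -- those are thesis-specific extensions.)
    """
    n = wins.shape[0]
    p = np.ones(n)
    games = wins + wins.T                      # total contests per pair
    W = wins.sum(axis=1)                       # total wins of each individual
    for _ in range(n_iter):
        # Hunter's MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
        denom = (games / (p[:, None] + p[None, :])).sum(axis=1)
        p = W / denom
        p /= p.sum()
    return p
```

Under the model, individual i beats j with probability p_i / (p_i + p_j); the intransitive extension studied in the thesis adds terms that let A beat B, B beat C, yet C beat A.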

Sequential value information for Markov random field

Sneltvedt, Tommy January 2011 (has links)
Sequential value information for Markov random field.

Decoding of Algebraic Geometry Codes

Slaatsveen, Anna Aarstrand January 2011 (has links)
Codes derived from algebraic curves are called algebraic geometry (AG) codes. They provide a way to correct errors which occur during transmission of information. This thesis concentrates on the decoding of algebraic geometry codes, in other words, how to find errors. We begin with a brief overview of some classical results in algebra as well as the definition of algebraic geometry codes. Then the theory of cyclic codes and BCH codes is presented. We discuss the problem of finding the shortest linear feedback shift register (LFSR) which generates a given finite sequence. A decoding algorithm for BCH codes is the Berlekamp-Massey algorithm. This algorithm has complexity O(n^2) and provides a general solution to the problem of finding the shortest LFSR that generates a given sequence (a problem whose naive solution has running time O(n^3)). This algorithm may also be used for AG codes. We then proceed with algorithms for decoding AG codes. The first algorithm we discuss is the so-called basic decoding algorithm. It depends on the choice of a suitable divisor F. By creating a linear system of equations from the bases of spaces with prescribed zeros and allowed poles, we can find an error-locator function which contains all the error positions among its zeros. This algorithm can correct up to (d* - 1 - g)/2 errors and has a running time of O(n^3). From it, two other algorithms which improve on the error-correcting capability are developed. The first is the modified algorithm, which depends on a restriction on the divisors used to build the code and on an increasing sequence of divisors F1, ..., Fs. This gives an algorithm which can correct up to (d* - 1)/2 - S(H) errors with a complexity of O(n^4). Its correction rate is larger than that of the basic algorithm, but it runs slower.
The extended modified algorithm is created by the use of what we refer to as special divisors. We choose the divisors in the sequence of the modified algorithm to have certain properties so that the algorithm runs faster. When s(E) is the Clifford defect of a set E of special divisors, the extended modified algorithm corrects up to (d* - 1)/2 - s(E) errors, an improvement on the basic algorithm, with a running time of O(n^3). The last algorithm we present is the Sudan-Guruswami list decoding algorithm, which searches for all code words within a certain distance of the received word. We show that AG codes are (e,b)-decodable and that this algorithm in most cases has a higher correction rate than the other algorithms presented here.
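The Berlekamp-Massey step described above can be sketched for binary sequences. The GF(2) version below (Python, illustrative and not tied to any particular code construction) returns the length L of the shortest LFSR and its connection polynomial C(x):

```python
def berlekamp_massey_gf2(s):
    """Shortest LFSR generating the binary sequence s (over GF(2)).
    Returns (L, C) where C = [c0=1, c1, ..., cL] are the coefficients
    of the connection polynomial C(x) = 1 + c1*x + ... + cL*x^L."""
    n = len(s)
    C = [1] + [0] * n      # current connection polynomial
    B = [1] + [0] * n      # copy from before the last length change
    L, m = 0, 1            # current LFSR length; steps since last change
    for i in range(n):
        # Discrepancy: does the current LFSR predict s[i]?
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:
            # Length must grow: C <- C + x^m * B, update L
            T = C[:]
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:
            # Fix the discrepancy without growing L
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            m += 1
    return L, C[:L + 1]
```

In BCH decoding, running this on the syndrome sequence yields the error-locator polynomial, whose roots point to the error positions.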

Lévy Processes and Path Integral Methods with Applications in the Energy Markets

Oshaug, Christian A. J. January 2011 (has links)
The objective of this thesis was to explore methods for valuation of derivatives in energy markets. One aim was to determine whether the Normal inverse Gaussian distributions are better suited for modelling energy prices than normal distributions. Another aim was to develop working implementations of Path Integral methods for valuing derivatives, based on a one-factor model of the underlying spot price. Energy prices are known to display properties like mean reversion, periodicity, volatility clustering and extreme jumps. Periodicity and trend are modelled as a deterministic function of time, while mean-reversion effects are modelled with auto-regressive dynamics. It is established that the Normal inverse Gaussian distributions are superior to the normal distributions for modelling the residuals of an auto-regressive energy price model. Volatility clustering and spike behaviour are not reproduced with the models considered here. After calibrating a model to fit real energy data, valuation of derivatives is achieved by propagating probability densities forward in time, applying the Path Integral methodology. It is shown how this can be implemented for European options and barrier options, under the assumptions of a deterministic mean function, mean-reversion dynamics and Normal inverse Gaussian distributed residuals. The Path Integral methods developed compare favourably to Monte Carlo simulations in terms of execution time. However, the derivative values obtained by Path Integrals are sometimes outside the Monte Carlo confidence intervals, and the relative error may thus be too large for practical applications. Improvements of the implementations, with a view to minimizing errors, can be a subject for further research.
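The forward propagation of densities that Path Integration relies on can be sketched on a grid: discretize the state space, build the one-step transition kernel, and apply it repeatedly. The toy version below (Python) uses Gaussian residuals in place of the NIG distribution for brevity, with illustrative AR(1) parameters phi and sigma:

```python
import numpy as np

def path_integration_step(grid, density, phi, sigma):
    """Propagate a density one step under X_{t+1} = phi * X_t + eps,
    eps ~ N(0, sigma^2), by quadrature over the grid.
    (Gaussian residuals for brevity; the thesis uses NIG residuals.)"""
    dx = grid[1] - grid[0]
    # Transition kernel K[i, j] ~ p(x_i at t+1 | x_j at t)
    K = np.exp(-0.5 * ((grid[:, None] - phi * grid[None, :]) / sigma) ** 2)
    K /= K.sum(axis=0, keepdims=True) * dx     # normalize each column
    new = K @ density * dx                     # one quadrature step
    return new / (new.sum() * dx)              # guard against truncation loss
```

Iterating this step carries today's density to the option's maturity, after which the derivative value is a quadrature of the payoff against the terminal density; barrier options are handled by zeroing the density beyond the barrier at each step.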

Numerical Solution of Stochastic Differential Equations by use of Path Integration : A study of a stochastic Lotka-Volterra model

Halvorsen, Gaute January 2011 (has links)
Some theory of real and stochastic analysis is presented in order to introduce the Path Integration method in terms of stochastic operators. A theorem giving sufficient conditions for convergence of the Path Integration method is then presented. The solution of a stochastic Lotka-Volterra model of a prey-predator relationship is then discussed, with and without the predator being harvested. Finally, an adaptive algorithm designed to solve the stochastic Lotka-Volterra model well is presented.
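A stochastic Lotka-Volterra model of the kind discussed can be simulated with the Euler-Maruyama scheme, which is also a natural reference for checking a Path Integration solver. The sketch below (Python) uses an illustrative multiplicative-noise form and parameter values, not necessarily the thesis's exact model:

```python
import numpy as np

def simulate_lotka_volterra(x0, y0, T, dt, a=1.0, b=0.5, c=0.5, d=1.0,
                            sigma=0.1, seed=0):
    """Euler-Maruyama for a stochastic prey-predator system
    dX = X(a - bY) dt + sigma X dW1   (prey)
    dY = Y(cX - d) dt + sigma Y dW2   (predator)
    Coefficients and noise form are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, y = np.empty(n + 1), np.empty(n + 1)
    x[0], y[0] = x0, y0
    for k in range(n):
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), 2)
        # Clamp at zero: populations cannot become negative
        x[k + 1] = max(x[k] + x[k] * (a - b * y[k]) * dt + sigma * x[k] * dW1, 0.0)
        y[k + 1] = max(y[k] + y[k] * (c * x[k] - d) * dt + sigma * y[k] * dW2, 0.0)
    return x, y
```

Harvesting of the predator, as discussed in the thesis, would enter as an extra removal term in the dY equation.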

Betydning av feilspesifisert underliggende hasard for estimering av regresjonskoeffisienter og avhengighet i frailty-modeller / Effect of Baseline Hazard Misspecification on Regression Estimates and Dependence in Frailty Models

Mortensen, Bjørnar Tumanjan January 2007 (has links)
With lifetime data for a large number of families, frailty models can be used to find risk factors and within-family dependence. One approach is to assume a realistic distribution for the frailty variable and a distribution for the baseline hazard. No large studies of the effect of a misspecified baseline hazard in frailty models have been carried out previously, because it has been common to assume a non-parametric baseline hazard. That is feasible for simple frailty models, but for frailty models with varying degrees of correlation within a family it quickly becomes very difficult. It is therefore of interest to investigate the effect of a misspecified baseline hazard. Throughout this thesis we assume that the baseline hazard is Weibull distributed. The frailty distribution is assumed to be either gamma or stable. We simulate data where the true baseline hazard is either Gompertz, bathtub-shaped or log-logistic. Based on the maximum likelihood estimators of the dependence and the regression parameters, we examine the effect of the misspecified baseline hazard. The simulations show that when there is large variation in the lifetimes and a large gap between the true and the fitted baseline hazard, both the risk factors and the dependence are underestimated to a relatively large degree. This holds both when the frailty variable is stable distributed and when it is gamma distributed. The situation is even more serious if the frailty distribution is also misspecified.
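The kind of shared-frailty data described can be simulated directly: with a gamma-distributed frailty Z per family (mean 1) and a Weibull baseline hazard, conditional survival is S(t|Z) = exp(-Z (t/scale)^shape), so lifetimes follow by inverse-CDF sampling. A hedged sketch in Python (the parametrization is illustrative, not the thesis's simulation setup):

```python
import numpy as np

def simulate_gamma_frailty(n_families, family_size, shape_w, scale_w,
                           frailty_var, seed=0):
    """Simulate correlated lifetimes: one shared gamma frailty Z per family,
    Weibull baseline hazard, conditional hazard h(t|Z) = Z * h0(t)."""
    rng = np.random.default_rng(seed)
    # Gamma frailty with mean 1 and variance frailty_var
    Z = rng.gamma(1.0 / frailty_var, frailty_var, size=n_families)
    # Inverse-CDF sampling from S(t|Z) = exp(-Z * (t/scale)^shape)
    U = rng.uniform(size=(n_families, family_size))
    T = scale_w * (-np.log(U) / Z[:, None]) ** (1.0 / shape_w)
    return T
```

Larger frailty variance gives stronger within-family dependence, which is the quantity whose estimate the thesis shows is biased when the baseline hazard is misspecified.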
