241

Decoding of Algebraic Geometry Codes

Slaatsveen, Anna Aarstrand January 2011 (has links)
Codes derived from algebraic curves are called algebraic geometry (AG) codes. They provide a way to correct errors which occur during transmission of information. This paper concentrates on the decoding of algebraic geometry codes, in other words, how to find errors. We begin with a brief overview of some classical results in algebra as well as the definition of algebraic geometry codes. Then the theory of cyclic codes and BCH codes is presented. We discuss the problem of finding the shortest linear feedback shift register (LFSR) which generates a given finite sequence. A decoding algorithm for BCH codes is the Berlekamp-Massey algorithm. This algorithm has complexity O(n^2) and provides a general solution to the problem of finding the shortest LFSR that generates a given sequence (a problem whose general solution usually has running time O(n^3)). This algorithm may also be used for AG codes. We then proceed with algorithms for decoding AG codes. The first algorithm we discuss is the so-called basic decoding algorithm. This algorithm depends on the choice of a suitable divisor F. By creating a linear system of equations from the bases of spaces with prescribed zeros and allowed poles, we can find an error-locator function which contains all the error positions among its zeros. We find that this algorithm can correct up to (d* - 1 - g)/2 errors and has a running time of O(n^3). From this algorithm, two other algorithms which improve on the error-correcting capability are developed. The first is the modified algorithm. It depends on a restriction on the divisors which are used to build the code and an increasing sequence of divisors F1, ..., Fs. This gives rise to an algorithm which can correct up to (d*-1)/2 - S(H) errors and has a complexity of O(n^4). The correction rate of this algorithm is larger than that of the basic algorithm, but it runs slower.
The extended modified algorithm is created by the use of what we refer to as special divisors. We choose the divisors in the sequence of the modified algorithm to have certain properties so that the algorithm runs faster. When s(E) is the Clifford defect of a set E of special divisors, the extended modified algorithm corrects up to (d*-1)/2 - s(E) errors, which is an improvement on the basic algorithm. The running time of the algorithm is O(n^3). The last algorithm we present is the Sudan-Guruswami list decoding algorithm. This algorithm searches for all possible code words within a certain distance from the received word. We show that AG codes are (e,b)-decodable and that the algorithm in most cases has a higher correction rate than the other algorithms presented here.
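The Berlekamp-Massey step described above can be sketched over GF(2); this is an illustrative version only (the thesis treats codes over general finite fields). It returns the connection polynomial C(x) and the length L of the shortest LFSR generating the sequence:

```python
def berlekamp_massey_gf2(s):
    """Shortest LFSR generating binary sequence s; returns (taps C, length L)."""
    n = len(s)
    C = [1] + [0] * n   # current connection polynomial
    B = [1] + [0] * n   # connection polynomial before the last length change
    L, m = 0, 1
    for i in range(n):
        # discrepancy: s[i] + sum_{j=1..L} C[j]*s[i-j] over GF(2)
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            L = i + 1 - L
            B = T
            m = 1
        else:
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            m += 1
    return C[:L + 1], L
```

For example, the sequence 1,0,1,1,0,1,1,0 satisfies s[i] = s[i-1] XOR s[i-2], so the algorithm recovers C(x) = 1 + x + x^2 with L = 2.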
242

Lévy Processes and Path Integral Methods with Applications in the Energy Markets

Oshaug, Christian A. J. January 2011 (has links)
The objective of this thesis was to explore methods for valuation of derivatives in energy markets. One aim was to determine whether the Normal inverse Gaussian distributions would be better suited for modelling energy prices than normal distributions. Another aim was to develop working implementations of Path Integral methods for valuing derivatives, based on some one-factor model of the underlying spot price. Energy prices are known to display properties like mean-reversion, periodicity, volatility clustering and extreme jumps. Periodicity and trend are modelled as a deterministic function of time, while mean-reversion effects are modelled with auto-regressive dynamics. It is established that the Normal inverse Gaussian distributions are superior to the normal distributions for modelling the residuals of an auto-regressive energy price model. Volatility clustering and spike behaviour are not reproduced with the models considered here. After calibrating a model to fit real energy data, valuation of derivatives is achieved by propagating probability densities forward in time, applying the Path Integral methodology. It is shown how this can be implemented for European options and barrier options, under the assumptions of a deterministic mean function, mean-reversion dynamics and Normal inverse Gaussian distributed residuals. The Path Integral methods developed compare favourably to Monte Carlo simulations in terms of execution time. The derivative values obtained by Path Integrals are sometimes outside of the Monte Carlo confidence intervals, and the relative error may thus be too large for practical applications. Improvements of the implementations, with a view to minimizing errors, can be subject to further research.
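The forward propagation step can be sketched for a one-factor mean-reverting model. This is a minimal illustration, not the thesis's implementation: it assumes Gaussian residuals (where the thesis argues for Normal inverse Gaussian ones) and a simple rectangle-rule quadrature for the Chapman-Kolmogorov integral:

```python
import math

def normal_pdf(z, sigma):
    return math.exp(-0.5 * (z / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def propagate(grid, density, a, sigma, steps):
    """One-factor mean-reverting model X_{t+1} = a*X_t + eps, eps ~ N(0, sigma^2).
    Propagates the density on `grid` forward `steps` times by integrating the
    transition kernel against the current density (path integral step)."""
    dx = grid[1] - grid[0]
    for _ in range(steps):
        new = []
        for y in grid:
            new.append(sum(density[i] * normal_pdf(y - a * grid[i], sigma)
                           for i in range(len(grid))) * dx)
        density = new
    return density
```

Each step costs O(m^2) for an m-point grid; a derivative value is then a quadrature of the payoff against the terminal density.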
243

Numerical Solution of Stochastic Differential Equations by use of Path Integration : A study of a stochastic Lotka-Volterra model

Halvorsen, Gaute January 2011 (has links)
Some theory of real and stochastic analysis is presented in order to introduce the Path Integration method in terms of stochastic operators. A theorem presenting sufficient conditions for convergence of the Path Integration method is then given. The solution of a stochastic Lotka-Volterra model of a prey-predator relationship is then discussed, with and without the predator being harvested. Finally, an adaptive algorithm designed to solve the stochastic Lotka-Volterra model well is presented.
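As a point of comparison for the Path Integration approach, paths of a stochastic Lotka-Volterra model can be simulated directly with the Euler-Maruyama scheme. The drift and multiplicative-noise form below is an illustrative assumption, not necessarily the model studied in the thesis:

```python
import random, math

def simulate_lotka_volterra(x0, y0, a, b, c, d, sigma, T, n, seed=1):
    """Euler-Maruyama path of a stochastic Lotka-Volterra prey-predator model:
        dX = (a*X - b*X*Y) dt + sigma*X dW1   (prey)
        dY = (c*X*Y - d*Y) dt + sigma*Y dW2   (predator)
    Noise structure and parameters are illustrative assumptions."""
    rng = random.Random(seed)
    dt = T / n
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n):
        dw1 = rng.gauss(0, math.sqrt(dt))
        dw2 = rng.gauss(0, math.sqrt(dt))
        x += (a * x - b * x * y) * dt + sigma * x * dw1
        y += (c * x * y - d * y) * dt + sigma * y * dw2
        x, y = max(x, 0.0), max(y, 0.0)  # populations stay non-negative
        xs.append(x)
        ys.append(y)
    return xs, ys
```

Harvesting of the predator could be modelled by subtracting a harvest term from the predator drift.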
244

Betydning av feilspesifisert underliggende hasard for estimering av regresjonskoeffisienter og avhengighet i frailty-modeller / Effect of Baseline Hazard Misspecification on Regression Estimates and Dependence in Frailty Models

Mortensen, Bjørnar Tumanjan January 2007 (has links)
With lifetime data for a large number of families, frailty models can be used to identify risk factors and dependence within a family. One approach is to assume a realistic distribution for the frailty variable and a distribution for the baseline hazard. No large studies of the effect of baseline hazard misspecification in frailty models have been carried out previously, because it has been common to assume a non-parametric baseline hazard. This is possible for simple frailty models, but for frailty models with varying degrees of correlation within a family it quickly becomes very difficult. It is therefore interesting to investigate the effect of a misspecified baseline hazard. Throughout this thesis we assume that the baseline hazard is Weibull distributed. The frailty distribution is assumed to be either gamma or stable. We simulate data where the baseline hazard is Gompertz, bathtub-shaped or log-logistic. Based on the maximum likelihood estimators of the dependence and the regression parameters, we investigate the effect of baseline hazard misspecification. The simulations show that when there is large variation in the lifetimes and a large discrepancy between the true and the fitted baseline hazard, both the risk factors and the dependence are underestimated to a relatively large degree. This holds both when the frailty variable is stable distributed and when it is gamma distributed. The situation is even more serious if the frailty distribution is also misspecified.
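Such family lifetime data can be sketched as follows. The shared gamma frailty with unit mean and the Weibull baseline match the assumptions described above, but the parameter names and the simulation recipe are illustrative, not the thesis's code:

```python
import random

def simulate_family_lifetimes(n_families, family_size, frailty_shape,
                              weib_shape, weib_scale, seed=42):
    """Shared gamma frailty Z ~ Gamma(k, 1/k) (mean 1); given Z, each member's
    hazard is Z times a Weibull baseline hazard. Since
    S(t|Z) = exp(-Z*(t/scale)^shape), lifetimes conditional on Z are Weibull
    with scale multiplied by Z**(-1/shape)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_families):
        z = rng.gammavariate(frailty_shape, 1.0 / frailty_shape)
        family = [weib_scale * z ** (-1.0 / weib_shape)
                  * rng.weibullvariate(1.0, weib_shape)
                  for _ in range(family_size)]
        data.append(family)
    return data
```

A misspecification study then fits, say, a gamma-frailty Weibull model to data generated with a different baseline and compares the estimates.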
245

Bandwidth selection based on a special choice of the kernel

Oksavik, Thomas January 2007 (has links)
We investigate methods of bandwidth selection in kernel density estimation for a wide range of kernels, both conventional and non-conventional.
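A conventional baseline such an investigation starts from is Silverman's rule of thumb for a Gaussian kernel; a minimal sketch (not the thesis's method):

```python
import math

def silverman_bandwidth(data):
    """Silverman's rule of thumb for a Gaussian kernel:
    h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    srt = sorted(data)
    q1, q3 = srt[int(0.25 * (n - 1))], srt[int(0.75 * (n - 1))]
    iqr = q3 - q1
    spread = min(sd, iqr / 1.34) if iqr > 0 else sd
    return 0.9 * spread * n ** (-0.2)

def kde(x, data, h):
    """Gaussian kernel density estimate at point x with bandwidth h."""
    n = len(data)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
               for xi in data) / (n * h * math.sqrt(2 * math.pi))
```

Non-conventional kernels change the kernel constant in the rule; cross-validation methods instead choose h by minimizing an estimated risk.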
246

Parallel Multiple Proposal MCMC Algorithms

Austad, Haakon Michael January 2007 (has links)
We explore the variance reduction achievable through parallel implementation of multi-proposal MCMC algorithms and use of control variates. Implemented sequentially, multi-proposal MCMC algorithms are of limited value, but they are very well suited for parallelization. Further, discarding the rejected states in an MCMC sampler can intuitively be interpreted as a waste of information. This becomes even more true for a multi-proposal algorithm, where we discard several states in each iteration. By creating an alternative estimator consisting of a linear combination of the traditional sample mean and zero-mean random variables called control variates, we can improve on the traditional estimator. We present a setting for the multi-proposal MCMC algorithm and study it in two examples. The first example considers sampling from a simple Gaussian distribution, while for the second we design the framework for a multi-proposal mode jumping algorithm for sampling from a distribution with several separated modes. We find that the variance reduction achieved from our control variate estimator in general increases as the number of proposals in our sampler increases. For our Gaussian example we find that the benefit from parallelization is small, and that little is gained from increasing the number of proposals. The mode jumping example, however, is very well suited for parallelization, and we get a relative variance reduction per time of roughly 80% with 16 proposals in each iteration.
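A minimal sequential sketch of a multi-proposal step is the multiple-try Metropolis scheme with a symmetric Gaussian proposal, where the selection weights reduce to the target density itself. The target, step size and proposal count below are illustrative assumptions, and no control variates are included:

```python
import random, math

def mtm_step(x, log_target, k, step, rng):
    """One multiple-try Metropolis step with k symmetric Gaussian proposals."""
    ys = [x + rng.gauss(0, step) for _ in range(k)]
    wy = [math.exp(log_target(y)) for y in ys]
    # select one proposal with probability proportional to its target weight
    y = rng.choices(ys, weights=wy)[0]
    # reference points: k-1 draws around y, plus the current state x
    xs = [y + rng.gauss(0, step) for _ in range(k - 1)] + [x]
    wx = [math.exp(log_target(xr)) for xr in xs]
    if rng.random() < min(1.0, sum(wy) / sum(wx)):
        return y
    return x

def mtm_sample(n, k=4, step=1.0, seed=0):
    """Sample n states from a standard Gaussian target with k-proposal MTM."""
    rng = random.Random(seed)
    log_target = lambda z: -0.5 * z * z
    x, out = 0.0, []
    for _ in range(n):
        x = mtm_step(x, log_target, k, step, rng)
        out.append(x)
    return out
```

The k proposal evaluations inside one step are independent, which is what makes the algorithm attractive for parallelization; rejected proposals are the states a control variate estimator would recycle.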
247

Kombinasjonen av eksplisitt og implisitt løser for simulering av den elektriske aktiviteten i hjertet. / Using a Combination of an Explicit and Implicit Solver for the Numerical Simulation of Electrical Activity in the Heart.

Kaarby, Martin January 2007 (has links)
Creating realistic simulations of an ECG signal on a computer can be very useful when one wants to understand the relationship between the observed ECG signal and the condition of the heart. A realistic simulation requires a good mathematical model. A popular model, known as the Winslow model, was developed by Winslow et al. in 1999. This model consists of a set of 31 ordinary differential equations describing the electrochemical reactions that take place in a cardiac cell. Experience shows that evaluating this system is an expensive operation for a computer, so the efficiency of a solver depends almost entirely on the number of such evaluations. To increase efficiency it is therefore important to limit this number. Studying the solution of the Winslow model more closely, we see that it begins with a transient phase in which explicit solvers are usually cheaper than implicit ones. The idea is therefore to start with an explicit solver and later switch to an implicit one, when the transient phase is over and the problem becomes too stiff for the explicit solver. This approach has been shown to reduce the number of evaluations of the Winslow model by around 25%, while preserving the accuracy of the solution.
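The explicit-to-implicit switching strategy can be illustrated on a scalar ODE. This sketch uses forward and backward Euler with a crude stiffness test on |df/dy|*dt, far simpler than the solvers needed for the 31-equation Winslow model, but showing the same idea:

```python
def solve_with_switch(f, dfdy, y0, t_end, dt, stiff_threshold):
    """Integrate y' = f(t, y) with explicit Euler, switching permanently to
    implicit (backward) Euler once |df/dy| * dt exceeds stiff_threshold."""
    t, y = 0.0, y0
    used_implicit = False
    while t < t_end:
        if not used_implicit and abs(dfdy(t, y)) * dt > stiff_threshold:
            used_implicit = True  # transient over: too stiff for explicit Euler
        if used_implicit:
            # backward Euler: solve y_new = y + dt*f(t+dt, y_new) by Newton
            y_new = y
            for _ in range(20):
                g = y_new - y - dt * f(t + dt, y_new)
                dg = 1.0 - dt * dfdy(t + dt, y_new)
                y_new -= g / dg
            y = y_new
        else:
            y = y + dt * f(t, y)  # explicit Euler step
        t += dt
    return y, used_implicit
```

For a stiff problem such as y' = -50y the switch fires immediately and backward Euler damps the solution stably; for a mild problem the cheap explicit steps are kept throughout.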
248

Security Analysis of the NTRUEncrypt Public Key Encryption Scheme

Sakshaug, Halvor January 2007 (has links)
The public key cryptosystem NTRUEncrypt is analyzed with a main focus on lattice based attacks. We give a brief overview of NTRUEncrypt and the padding scheme NAEP. We propose NTRU-KEM, a key encapsulation method using NTRU, and prove it secure. We briefly cover some non-lattice based attacks but most attention is given to lattice attacks on NTRUEncrypt. Different lattice reduction techniques, alterations to the NTRUEncrypt lattice and breaking times for optimized lattices are studied.
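For reference, NTRUEncrypt's arithmetic takes place in the convolution polynomial ring Z_q[x]/(x^N - 1). A toy sketch of multiplication in this ring (cyclic convolution of coefficient vectors, reduced mod q); this is illustrative background, not the thesis's code:

```python
def conv_mult(a, b, q):
    """Multiply polynomials a and b (coefficient lists of equal length N)
    in Z_q[x]/(x^N - 1): cyclic convolution with coefficients reduced mod q."""
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c
```

The lattice attacks studied exploit the structure of this ring: the public key defines a lattice in which the private key appears as an unusually short vector.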
249

The Investigation of Appropriate Control Algorithms for the Speed Control of Wind Turbine Hydrostatic Systems

Gulstad, Magnus Johan January 2007 (has links)
This report consists of two chapters. The first is concerned with a new approach to pipe flow modelling, and the second with the simulation of the hydrostatic system which will be applied to a wind turbine. For the pipe flow model, the main focus has been to create a flow model which accounts for the frequency-dependent friction, i.e. the fluid friction which occurs at non-steady conditions. The author is convinced that the solution to this problem lies in the velocity profile, as the friction is a direct result of the shear stresses in the pipe. At the same time, it is possible to keep track of the velocity profile in the pipe as the pressure evolves in time and space. The new model utilizes the continuity equation for pipe flow and the equation of motion for axisymmetrical flow of a Newtonian fluid to find both a pressure distribution in the pipe and velocity profiles throughout the pipe. There are uncertainties as to whether the approach used in the new model to find these velocity profiles is correct. The modelling of the hydrostatic transmission to a wind power turbine is done using SIMULINK software. The design of the system and the basics of the modelling are described in the second chapter. The motor speed is regulated using a PID controller, and the generator torque is varied based on the pressure drop over the hydraulic motor. The PID controller for motor speed seems to be of good enough quality, and speed deviations are within acceptable limits. Simulation results are given for one particular case with an initial rotor torque of 20 kNm and an additional step torque of 20 kNm.
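The speed regulation described can be sketched with a generic discrete PID controller. The gains and the first-order plant in the usage note below are illustrative assumptions, not the report's SIMULINK model:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

As a usage example, driving a simple first-order speed model y' = -y + u toward a unit setpoint with gains Kp = 2, Ki = 1 settles with zero steady-state error thanks to the integral term.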
250

Comparison of ACER and POT Methods for estimation of Extreme Values

Dahlen, Kai Erik January 2010 (has links)
We compare the performance of the ACER and POT methods for prediction of extreme values from heavy-tailed distributions. To be able to apply the ACER method to heavy-tailed data, the method was first modified to assume that the underlying extreme value distribution is a Fréchet distribution, not a Gumbel distribution as assumed earlier. The two methods were then tested on a wide range of synthetic and real-world data sets to compare their performance in estimation of extreme values. We found that the ACER method seems to consistently perform better in terms of accuracy than the asymptotic POT method.
