About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Bandwidth selection based on a special choice of the kernel

Oksavik, Thomas January 2007
We investigate methods of bandwidth selection in kernel density estimation for a wide range of kernels, both conventional and non-conventional.
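As a rough, hedged illustration of the bandwidth selection problem the abstract refers to (the kernel-specific selectors studied in the thesis are not reproduced here), the Python sketch below computes Silverman's rule of thumb for a Gaussian kernel and runs a simple leave-one-out likelihood search over a bandwidth grid; the synthetic sample and the grid are assumed illustration values.

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch only: Silverman's rule of thumb and a simple
# leave-one-out likelihood search for a Gaussian kernel. The
# kernel-specific selectors investigated in the thesis are not shown.
rng = np.random.default_rng(0)
x = rng.normal(size=200)                      # synthetic sample
n = x.size

# Silverman's rule of thumb for the Gaussian kernel
h_silverman = 1.06 * x.std(ddof=1) * n ** (-1 / 5)

def loo_log_likelihood(h):
    """Leave-one-out log-likelihood of the Gaussian KDE with bandwidth h."""
    diffs = (x[:, None] - x[None, :]) / h
    k = norm.pdf(diffs)
    np.fill_diagonal(k, 0.0)                  # leave each point out
    dens = k.sum(axis=1) / ((n - 1) * h)
    return np.log(dens).sum()

grid = np.linspace(0.05, 1.0, 60)
h_cv = grid[np.argmax([loo_log_likelihood(h) for h in grid])]
print(h_silverman, h_cv)
```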
302

Parallel Multiple Proposal MCMC Algorithms

Austad, Haakon Michael January 2007
We explore the variance reduction achievable through parallel implementation of multi-proposal MCMC algorithms and the use of control variates. Implemented sequentially, multi-proposal MCMC algorithms are of limited value, but they are very well suited for parallelization. Further, discarding the rejected states in an MCMC sampler can intuitively be interpreted as a waste of information. This becomes even more true for a multi-proposal algorithm, where we discard several states in each iteration. By creating an alternative estimator consisting of a linear combination of the traditional sample mean and zero-mean random variables called control variates, we can improve on the traditional estimator. We present a setting for the multi-proposal MCMC algorithm and study it in two examples. The first example considers sampling from a simple Gaussian distribution, while for the second we design the framework for a multi-proposal mode jumping algorithm for sampling from a distribution with several separated modes. We find that the variance reduction achieved from our control variate estimator in general increases as the number of proposals in our sampler increases. For our Gaussian example we find that the benefit from parallelization is small, and that little is gained from increasing the number of proposals. The mode jumping example, however, is very well suited for parallelization, and we get a relative variance reduction per unit time of roughly 80% with 16 proposals in each iteration.
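The following hedged sketch shows only the generic control-variate correction mentioned above, on a plain Monte Carlo toy problem; the multi-proposal MCMC estimator built from rejected proposals in the thesis is not reproduced, and the target E[exp(X)] with control variate Z = X is an assumed example.

```python
import numpy as np

# Toy illustration of the control-variate idea: estimate E[exp(X)] for
# X ~ N(0, 1) using Z = X (known mean 0) as a zero-mean control variate.
# The multi-proposal MCMC estimator from the thesis is not reproduced here.
rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
f = np.exp(x)                 # quantity of interest, true mean exp(0.5)
z = x                         # control variate with known mean 0

cov = np.cov(f, z)
b = cov[0, 1] / cov[1, 1]     # variance-minimising coefficient
plain = f.mean()
controlled = f.mean() - b * z.mean()

print(plain, controlled, np.exp(0.5))
```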
303

Using a Combination of an Explicit and Implicit Solver for the Numerical Simulation of Electrical Activity in the Heart

Kaarby, Martin January 2007
Creating realistic simulations of an ECG signal on a computer can be of great use when one wishes to understand the relationship between the observed ECG signal and the state of the heart. To obtain a realistic simulation, a good mathematical model is needed. A popular model, developed by Winslow et al. in 1999, is called the Winslow model. This model consists of a set of 31 ordinary differential equations describing the electrochemical reactions taking place in a heart cell. From experience, evaluating this system is a costly operation for a computer, so the efficiency of a solver depends mostly on the number of such evaluations. To increase efficiency, it is therefore important to limit this number. If we study the solution of the Winslow model more closely, we see that it begins with a transient phase where explicit solvers are usually cheaper than implicit ones. The idea is therefore to start with an explicit solver and later switch to an implicit one once the transient phase is over and the problem becomes too stiff for the explicit solver. This approach has been shown to reduce the number of evaluations of the Winslow model by about 25%, while preserving the accuracy of the solution.
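A minimal, hedged sketch of the explicit-then-implicit strategy described above, using SciPy's solve_ivp on the stiff Van der Pol oscillator as a stand-in for the 31-equation Winslow model (which is not reproduced here); the switching time and tolerances are assumed illustration values, not values from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Explicit solver for the transient phase, implicit solver afterwards.
# The Van der Pol oscillator stands in for the Winslow model; t_switch is
# an assumed illustration parameter, not a value from the thesis.
mu = 1000.0

def rhs(t, y):
    return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

t_switch = 0.05   # assumed end of the transient phase
y0 = [2.0, 0.0]

# Explicit solver (cheap per step) for the initial transient
phase1 = solve_ivp(rhs, (0.0, t_switch), y0, method="RK45",
                   rtol=1e-6, atol=1e-8)

# Implicit solver (handles stiffness) for the remainder
phase2 = solve_ivp(rhs, (t_switch, 3000.0), phase1.y[:, -1],
                   method="BDF", rtol=1e-6, atol=1e-8)

print(phase1.nfev, phase2.nfev)   # right-hand-side evaluations per phase
```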
304

Security Analysis of the NTRUEncrypt Public Key Encryption Scheme

Sakshaug, Halvor January 2007
The public key cryptosystem NTRUEncrypt is analyzed with a main focus on lattice based attacks. We give a brief overview of NTRUEncrypt and the padding scheme NAEP. We propose NTRU-KEM, a key encapsulation method using NTRU, and prove it secure. We briefly cover some non-lattice based attacks but most attention is given to lattice attacks on NTRUEncrypt. Different lattice reduction techniques, alterations to the NTRUEncrypt lattice and breaking times for optimized lattices are studied.
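For orientation only, the sketch below constructs the standard Coppersmith–Shamir NTRU lattice basis that lattice attacks of the kind mentioned above operate on; the toy parameters N, q and the public key h are assumed values, far smaller than any real NTRUEncrypt parameter set.

```python
import numpy as np

# Hedged sketch of the standard 2N x 2N NTRU lattice basis (rows are basis
# vectors):
#   [[ I  H ]
#    [ 0 qI ]]
# where H is the convolution (circulant) matrix of the public key h modulo
# x^N - 1. If f*h = g (mod q) for the private key (f, g), then a rotation of
# (f, g) is a short vector in this lattice.
N, q = 7, 41
h = np.array([1, 5, 12, 0, 30, 2, 9])        # assumed toy public key

H = np.vstack([np.roll(h, i) for i in range(N)])   # row i = x^i * h mod x^N - 1
B = np.block([
    [np.eye(N, dtype=int),            H],
    [np.zeros((N, N), dtype=int), q * np.eye(N, dtype=int)],
])
print(B.shape)   # (14, 14) basis fed to a lattice reduction algorithm
```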
305

THE INVESTIGATION OF APPROPRIATE CONTROL ALGORITHMS FOR THE SPEED CONTROL OF WIND TURBINE HYDROSTATIC SYSTEMS

Gulstad, Magnus Johan January 2007
This report consists of two chapters. The first is concerned with a new approach to pipe flow modelling, and the second has to do with the simulation of the hydrostatic system which will be applied to a wind turbine. For the pipe flow model, the main focus has been to create a flow model which accounts for the frequency-dependent friction, i.e. the fluid friction which occurs at non-steady conditions. The author is convinced that the solution to this problem lies in the velocity profile, as the friction is a direct result of the shear stresses in the pipe. At the same time, it is possible to keep track of the velocity profile in the pipe as the pressure evolves in time and space. The new model utilizes the continuity equation for pipe flow and the equation of motion for axisymmetrical flow of a Newtonian fluid to find both a pressure distribution in the pipe and velocity profiles throughout the pipe. There are uncertainties as to whether the approach used in the new model to find these velocity profiles is correct. The modelling of the hydrostatic transmission to a wind power turbine is done using SIMULINK software. The design of the system and the basics of the modelling are described in the second chapter. The motor speed is regulated using a PID controller, and the generator torque is varied based on the pressure drop over the hydraulic motor. The PID controller for motor speed seems to be of good enough quality, and speed deviations are within acceptable limits. Simulation results are given for one particular case with an initial rotor torque of 20 kNm and an additional step torque of 20 kNm.
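A hedged sketch of the kind of PID speed regulation described above, applied to a crude first-order motor model in Python rather than SIMULINK; all gains, the time constant and the setpoint are assumed illustration values, not parameters from the thesis.

```python
# Discrete PID speed controller acting on a crude first-order motor model.
# Gains, time constant and setpoint are assumed illustration values.
kp, ki, kd = 2.0, 1.5, 0.05
dt, tau = 0.01, 0.5          # time step [s], motor time constant [s]
setpoint = 150.0             # target motor speed [rad/s]

speed, integral, prev_err = 0.0, 0.0, 0.0
for step in range(2000):
    err = setpoint - speed
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * derivative   # controller output
    prev_err = err
    # first-order plant response to the control signal
    speed += dt / tau * (u - speed)

print(round(speed, 2))       # settles close to the setpoint
```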
306

Comparison of ACER and POT Methods for Estimation of Extreme Values

Dahlen, Kai Erik January 2010
We compare the performance of the ACER and POT methods for prediction of extreme values from heavy-tailed distributions. To be able to apply the ACER method to heavy-tailed data, it was first modified to assume that the underlying extreme value distribution is a Fréchet distribution, not a Gumbel distribution as assumed earlier. These two methods have then been tested on a wide range of synthetic and real-world data sets to compare their performance in estimation of these extreme values. I have found that the ACER method seems to consistently perform better in terms of accuracy compared to the asymptotic POT method.
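As a hedged illustration of the POT side of the comparison (the ACER method and its Fréchet modification are not reproduced here), the sketch below fits a generalized Pareto distribution to exceedances over a high threshold and reads off an extreme quantile; the synthetic Pareto sample and the threshold level are assumed.

```python
import numpy as np
from scipy.stats import genpareto, pareto

# Peaks-over-threshold sketch: fit a GPD to excesses over a high threshold
# and estimate a far-tail quantile. Sample and threshold are illustrative.
rng = np.random.default_rng(2)
data = pareto.rvs(b=3, size=20_000, random_state=rng)   # heavy-tailed sample

u = np.quantile(data, 0.95)                  # threshold (95th percentile)
exceed = data[data > u] - u                  # excesses over the threshold
xi, _, beta = genpareto.fit(exceed, floc=0)  # GPD shape and scale

# POT estimate of the p-quantile of the original data
p = 0.999
zeta_u = exceed.size / data.size             # exceedance probability
x_p = u + beta / xi * (((1 - p) / zeta_u) ** (-xi) - 1)
print(x_p, np.quantile(data, p))             # POT estimate vs empirical quantile
```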
307

Analysis of portfolio risk and the LIBOR Market Model

Helgesen, Ole Thomas January 2011
This master thesis focuses on interest rate modeling and portfolio risk analysis. The LIBOR Market Model is the interest rate model chosen to simulate the forward rates in the Norwegian and American markets, two very different markets in terms of size and liquidity. On the other hand, the Norwegian market is highly dependent on the American market, and the correlation can be seen clearly when the data sets are compared in the preliminary analysis. The data sets cover the period from 2000 to early 2011. Risk estimates are found by Monte Carlo simulations, in particular Value at Risk and Expected Shortfall, the two most commonly used risk measures. Interest rate modeling and risk analysis require parameter estimates from historical data, which means that the Financial Crisis will have a strong effect. Two different approaches are studied: Exponentially Weighted Moving Averages and (equally weighted) Floating Averages. The main idea is to cancel out trend and capture the true volatility and correlation. Risk is estimated in several different markets: first an imaginary stable market is assumed, and in the next steps the Norwegian and the American markets are analyzed. The volatility and correlation vary. Finally we look at a swap depending on both Norwegian and American interest rates. In order to check the risk estimates, the actual losses of the test portfolios are compared to the Value at Risk and the Expected Shortfall. The majority of the losses larger than the risk estimates occur between 2007 and 2009, which confirms, not surprisingly, that the risk measures were unable to predict the Financial Crisis. The portfolios have a short time horizon, 1 day or 5 days, and the EWMA procedure weighs the recent observations more heavily; thus it performs better than the Floating Averages procedure. However, both procedures consistently underestimate the risk. Still, the risk estimates can be used as triggers in investment strategies. In the final part of this thesis such investment strategies are tested. Plotting the cumulative losses and testing the strategies shows that the risk estimates can be used with success in investment strategies. However, the strategies are very sensitive to the choice of the risk threshold. Nonetheless, even though the model underestimates risk, the backtesting and the plots also tell us that the estimates are fairly proportional to the real losses. The risk estimates are therefore useful indicators of the development of the exposure of any financial position, which justifies why they are the most commonly used risk measures in financial markets today.
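The sketch below is a hedged illustration of two ingredients mentioned above, EWMA volatility estimation and Monte Carlo Value at Risk / Expected Shortfall, for a single synthetic position; the LIBOR Market Model simulation itself is not shown, and the decay factor, position size and sample sizes are assumed.

```python
import numpy as np

# EWMA volatility on synthetic daily returns, then Monte Carlo VaR / ES
# for a single position. All parameters are assumed illustration values.
rng = np.random.default_rng(3)
returns = rng.normal(0, 0.01, size=1_000)    # synthetic daily returns

lam = 0.94                                   # RiskMetrics-style decay factor
var_ewma = returns[0] ** 2
for r in returns[1:]:
    var_ewma = lam * var_ewma + (1 - lam) * r ** 2
sigma = np.sqrt(var_ewma)                    # current volatility estimate

# Monte Carlo 1-day loss distribution for a position of 1_000_000
position = 1_000_000
losses = -position * rng.normal(0, sigma, size=100_000)
alpha = 0.99
var_99 = np.quantile(losses, alpha)          # Value at Risk
es_99 = losses[losses >= var_99].mean()      # Expected Shortfall
print(sigma, var_99, es_99)
```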
308

Residuals and Functional Form in Accelerated Life Regression Models

Aaserud, Stein January 2011
This thesis examines misspecified log-location-scale regression models, in particular how the models' Cox–Snell residuals can be used to infer the functional form of possibly misspecified covariates in the regression. Two different methods are considered. One uses a transformation of the expected value of the residuals. The second is based on estimating the hazard rate function of the residuals using the covariate order method. Simulations and computations in the statistical computing environment R are used to obtain relevant and illustrative results. The conclusion is that both methods are able to recover the functional form of a misspecified covariate, but the covariate order method is best when high levels of censoring are introduced. The Kullback–Leibler theory, applied to misspecified regression models, is a part of the basis for the investigations. The thesis shows that a theoretical approach to this theory is consistent with the methods used in R.
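As a hedged illustration of the Cox–Snell residuals discussed above, the sketch below simulates a correctly specified Weibull (log-location-scale) model and computes the residuals using the true parameters rather than maximum-likelihood estimates; under correct specification they should behave like unit-exponential variables.

```python
import numpy as np

# Cox–Snell residuals for a Weibull accelerated-life model. For simplicity
# the true parameters of a simulated model are used in place of fitted ones.
rng = np.random.default_rng(4)
n = 5_000
x = rng.uniform(0, 1, size=n)                 # single covariate
beta0, beta1, sigma = 1.0, 0.8, 0.5           # assumed log-location-scale parameters

# log T = beta0 + beta1 * x + sigma * W,  W ~ standard (minimum) extreme value
w = np.log(rng.exponential(size=n))           # standard extreme-value draws
t = np.exp(beta0 + beta1 * x + sigma * w)

# Cox–Snell residuals: r = -log S(t | x) = (t * exp(-(beta0 + beta1 * x)))**(1/sigma)
r = (t * np.exp(-(beta0 + beta1 * x))) ** (1 / sigma)
print(r.mean(), r.var())                      # both close to 1 for Exp(1)
```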
309

Statistical Modeling and Analysis of Repeated Measures, using the Linear Mixed Effects Model.

Østgård, Eirin Tangen January 2011
Our main objective for this thesis is to present and discuss the linear mixed effects model and, in particular, the different possible covariance structures for the random effects and the residuals. The linear mixed effects model is widely used in biology and medical research. We use data from a diet intervention study where the aim was to investigate the difference between a diet rich in carbohydrates and a diet rich in fat and protein. Data from 32 participants were available. A series of biomarkers were measured before and after both diets, giving repeated measurements from each participant across time and diet. We have studied different linear mixed effects models varying in covariance structure for the random effects and the residuals. Further, we have focused on a thorough treatment of statistical contrasts. The contrasts of interest in this study are estimates of the effect of the two diets and the difference in effect between the two diets, which is especially relevant to biologists and medical researchers. Statistically, there is no common agreement on how degrees of freedom should be calculated when testing contrasts. We show that using different parameter codings for a between-subject factor in the same model yields different results. The linear mixed effects model allows complex structures in correlated data to be modeled, and so it is important to look at the implied marginal variance-covariance matrix to understand the structure. We have calculated the empirical variance-covariance matrix of the data and compared it to the estimated implied marginal variance-covariance matrix, in an attempt to get a more thorough understanding of the covariance structures for the random effects and the residuals. The estimated implied marginal variance-covariance matrix has also been used to estimate the intraclass correlations. Finally, we have fitted the linear mixed effects model using the Bayesian approach, integrated nested Laplace approximations (INLA), and compared the results to those of the frequentist approach.
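A hedged sketch of fitting a random-intercept linear mixed effects model with statsmodels on synthetic repeated-measures data; the variable names (subject, diet, time, biomarker) and effect sizes are assumed for illustration and do not come from the diet study analysed in the thesis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Random-intercept linear mixed effects model on synthetic repeated measures.
# Variable names and effect sizes are illustrative assumptions only.
rng = np.random.default_rng(5)
subjects = np.repeat(np.arange(32), 4)              # 32 participants, 4 measurements each
diet = np.tile(["carb", "carb", "fat", "fat"], 32)
time = np.tile(["pre", "post", "pre", "post"], 32)
subject_effect = np.repeat(rng.normal(0, 1.0, 32), 4)
biomarker = 5 + 0.5 * (time == "post") + subject_effect + rng.normal(0, 0.5, size=128)

data = pd.DataFrame({"subject": subjects, "diet": diet,
                     "time": time, "biomarker": biomarker})

# Random intercept per subject; fixed effects for diet, time and interaction
model = smf.mixedlm("biomarker ~ diet * time", data, groups=data["subject"])
result = model.fit()
print(result.summary())
```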
310

Statistical Analysis of Quantitative PCR Data

Lien, Tonje Gulbrandsen January 2011
This thesis seeks to develop a better understanding of the analysis of gene expression to find the amount of transcript in a sample. The mainstream method used is called the Polymerase Chain Reaction (PCR), and it exploits DNA's ability to replicate. The comparative CT method estimates the starting fluorescence level f0 by assuming constant amplification in each PCR cycle, and it uses the fluorescence level which has risen above a certain threshold. We present a generalization of this method, where different threshold values can be used. The main aim of this thesis is to evaluate a new method called the Enzymological method. It estimates f0 by considering a cycle-dependent amplification and uses a larger part of the fluorescence curves than the two CT methods. All methods are tested on dilution series, where the dilution factors are known. In one of the datasets studied, the Clusterin dilution dataset, we get better estimates from the Enzymological method compared to the two CT methods.
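As a hedged illustration of the threshold-based comparative CT idea described above (the Enzymological estimator evaluated in the thesis is not reproduced), the sketch below generates an idealised exponential amplification curve, finds the threshold-crossing cycle Ct, and backs out f0 assuming a constant efficiency E; the efficiency, threshold and starting level are assumed values.

```python
import numpy as np

# Comparative CT sketch: with constant efficiency E, fluorescence grows as
# F_c = f0 * (1 + E)**c, so f0 can be recovered from the cycle Ct at which a
# threshold is crossed. Parameters are assumed illustration values.
E = 0.95                      # assumed amplification efficiency per cycle
f0_true = 1e-6
cycles = np.arange(40)
fluor = f0_true * (1 + E) ** cycles          # idealised exponential phase

threshold = 1e-3
above = np.argmax(fluor > threshold)         # first cycle above the threshold
# linear interpolation of the crossing point on a log scale
ct = above - 1 + (np.log(threshold) - np.log(fluor[above - 1])) / \
     (np.log(fluor[above]) - np.log(fluor[above - 1]))

f0_hat = threshold / (1 + E) ** ct
print(ct, f0_hat)                            # f0_hat recovers f0_true
```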
