1

Accurate discretizations of torqued rigid body dynamics

Gustafsson, Einar, January 2010
This paper investigates the solution of the free rigid body equations of motion, as well as of the equations governing the torqued rigid body. We consider two semi-exact methods for the solution of the free rigid body equations, and we discuss the use of both rotation matrices and quaternions to describe the motion of the body; our focus is on the quaternion formulation. The approach to which we give the most attention is based on the Magnus series expansion, and we derive numerical methods of order 2, 4, 6, and 8, which are optimal in that they require a minimal number of commutators. The other approach uses Gaussian quadrature to approximate an elliptic integral of the third kind. Both methods rely on the exact solution of the Euler equation, which involves the exact computation of the elliptic integral of the first kind. For the solution of the torqued rigid body equations, we split the equations into two systems, one of which is the free rigid body equations; the solutions of these two systems are then combined in the Störmer-Verlet splitting scheme. We use these methods to solve the so-called marine vessel equations. Our numerical experiments suggest that the methods we present are robust and accurate numerical integrators of both the free and the torqued rigid body.
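As an illustration of the splitting strategy described in this abstract, here is a minimal Python sketch of a Störmer-Verlet (Strang) step for the torqued Euler equations. The inertia values and torque are hypothetical, and the RK4 substep stands in for the semi-exact elliptic-function solvers the thesis actually develops.

```python
import numpy as np

# Principal moments of inertia (hypothetical values).
I = np.array([1.0, 2.0, 3.0])

def torque(t, m):
    # Hypothetical constant external torque.
    return np.array([0.0, 0.0, 0.1])

def free_rigid_body_step(m, h):
    # Free rigid body in the body frame: dm/dt = m x (m / I).
    # A semi-exact solver would use elliptic functions here;
    # this sketch substitutes a classical RK4 step.
    def f(m):
        return np.cross(m, m / I)
    k1 = f(m); k2 = f(m + 0.5*h*k1)
    k3 = f(m + 0.5*h*k2); k4 = f(m + h*k3)
    return m + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

def stormer_verlet_step(t, m, h):
    # Strang/Stormer-Verlet splitting: half step of the torque part,
    # full step of the free rigid body, half step of the torque part.
    m = m + 0.5*h*torque(t, m)
    m = free_rigid_body_step(m, h)
    m = m + 0.5*h*torque(t + h, m)
    return m

m = np.array([1.0, 0.5, -0.3])   # initial angular momentum
h, t = 0.01, 0.0
for _ in range(1000):
    m = stormer_verlet_step(t, m, h)
    t += h
```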
2

Bandwidth Selection in Kernel Density Estimation

Kile, Håkon, January 2010
In kernel density estimation, the most crucial step is to select a proper bandwidth (smoothing parameter). There are two conceptually different approaches to this problem: a subjective and an objective approach. In this report, we only consider the objective approach, which is based on minimizing an error defined by an error criterion. The most common objective bandwidth selection method is to minimize some squared error expression, but this method is not without its critics: it is said not to perform satisfactorily in the tail(s) of the density, and to put too much weight on observations close to the mode(s) of the density. An approach which minimizes an absolute error expression is thought to be free of these drawbacks. We provide a new explicit formula for the mean integrated absolute error. The optimal mean integrated absolute error bandwidth is compared to the optimal mean integrated squared error bandwidth, and we argue that these two bandwidths are essentially equal. In addition, we study data-driven bandwidth selection and propose a new data-driven bandwidth selector, which shows promising behavior with respect to the visual error criterion, especially for limited sample sizes.
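For context, a minimal sketch of objective, squared-error-based bandwidth selection: the normal-reference (Silverman) rule, which minimizes the asymptotic mean integrated squared error under a Gaussian reference density. This is standard textbook material, not the thesis's new selector.

```python
import numpy as np

def silverman_bandwidth(x):
    # Normal-reference rule: MISE-optimal under a Gaussian reference.
    n = len(x)
    sigma = np.std(x, ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(sigma, iqr / 1.34) * n ** (-0.2)

def kde(x, data, h):
    # Gaussian kernel density estimate evaluated at points x.
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2*np.pi))

rng = np.random.default_rng(0)
data = rng.normal(size=200)
h = silverman_bandwidth(data)
grid = np.linspace(-4, 4, 101)
density = kde(grid, data, h)
```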
3

Analysis of the Transport Layer Security protocol

Firing, Tia Helene, January 2010
In this master thesis we present a security analysis of the TLS protocol, with particular emphasis on the recently discovered renegotiation attack. Our security proof shows that the Handshake protocol with renegotiation, including the fix from the IETF, is secure, and hence no longer vulnerable to the renegotiation attack. We also analyse the Handshake protocol with session resumption, and the Application data protocol together with the Record protocol; both of these are deemed secure as well. All the security proofs are based on the UC (Universal Composability) security framework.
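The IETF fix referred to here is the renegotiation_info extension of RFC 5746, which binds a renegotiated handshake to the verify_data of the preceding one. A simplified Python sketch of that binding check; the PRF stand-in and function names are illustrative, not the actual TLS key schedule.

```python
import hmac, hashlib

def verify_data(master_secret: bytes, transcript: bytes) -> bytes:
    # Simplified stand-in for the TLS PRF that produces the 12-byte
    # verify_data carried in the Finished message.
    return hmac.new(master_secret, transcript, hashlib.sha256).digest()[:12]

def check_renegotiation(ext_payload: bytes, previous_client_verify: bytes) -> bool:
    # RFC 5746: on renegotiation, the client's renegotiation_info
    # extension must carry the verify_data of the previous handshake,
    # cryptographically binding the new handshake to the old one. An
    # attacker who spliced the client's handshake onto their own
    # connection cannot produce this value.
    return hmac.compare_digest(ext_payload, previous_client_verify)
```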
4

Topology and Data

Brekke, Birger, January 2010
In recent years, research has been done on using topology as a new tool for studying data sets, typically high-dimensional data. These studies have produced new methods for qualitative analysis, simplification, and visualization of high-dimensional data sets. A good example of where these methods are useful is the study of microarray data (DNA data). To use these methods, one needs knowledge of various topics in topology. In this paper we introduce simplicial homology, persistent homology, Mapper, and some simplicial complex constructions.
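One of the standard constructions covered by such papers is the Vietoris-Rips complex, which includes a simplex whenever all of its vertices are pairwise within a chosen scale. A minimal sketch, with an arbitrary point cloud and scale parameter:

```python
import numpy as np
from itertools import combinations

def rips_complex(points, epsilon, max_dim=2):
    # Vietoris-Rips complex: a simplex is included whenever all of
    # its vertices lie pairwise within distance epsilon.
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    close = d <= epsilon
    simplices = [(i,) for i in range(n)]          # vertices
    for k in range(2, max_dim + 2):               # edges, triangles, ...
        for s in combinations(range(n), k):
            if all(close[i, j] for i, j in combinations(s, 2)):
                simplices.append(s)
    return simplices

points = np.random.default_rng(1).random((20, 2))
complex_ = rips_complex(points, epsilon=0.3)
```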
5

Flow-times in an M/G/1 Queue under a Combined Preemptive/Non-preemptive Priority Discipline. : Scheduled Waiting Time on Single Track Railway Lines

Fatnes, Johan Narvestad, January 2010
A priority based rule for use during the process of scheduling trains operating on a single track railway line was proposed by the Norwegian railway operator and owner, Jernbaneverket. The purpose of this study is to investigate the effect of the suggested scheduling rule on the scheduled waiting times suffered by trains operating on a segment of the railway line. It is shown that the scheduling rule, under certain limiting assumptions, can be studied in the setting of queuing theory and that it has properties in common with a theoretical priority discipline combining two well documented priority rules. The main part of this study is the development and analysis of a threshold based, combined preemptive/non-preemptive priority discipline. Under the combined discipline, preemptions are allowed during the early stage of processing only. Theoretical expressions for flow-times of jobs passing through the queuing system are reached through detailed studies of the non-preemptive and the preemptive priority discipline. The relationship between the suggested priority based scheduling rule and the theoretical, combined priority discipline is finally illustrated by simulations. When adjusted for actual time spent by trains on traversing the line segment, the steady state solution for flow-times obtained from queuing theory yields an accurate expression for the trains' average scheduled waiting times. The scheduling problem can in fact be modeled accurately by an M/G/1 queue under the combined priority discipline.
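One of the building blocks studied here, the classical non-preemptive priority M/G/1 queue, has a closed-form mean waiting time that is easy to evaluate. A sketch of that textbook formula, not the thesis's combined discipline; the arrival rates and deterministic service times are hypothetical.

```python
def nonpreemptive_priority_wait(lams, ES, ES2):
    # Mean waiting times in an M/G/1 queue under the classical
    # non-preemptive priority discipline (class 1 = highest priority):
    #   W_k = R / ((1 - sigma_{k-1}) * (1 - sigma_k)),
    # where R = sum_i lam_i * E[S_i^2] / 2 is the mean residual service
    # time and sigma_k = sum_{i<=k} rho_i is cumulative utilization.
    R = sum(l * m2 for l, m2 in zip(lams, ES2)) / 2.0
    waits, sigma = [], 0.0
    for lam, mean in zip(lams, ES):
        sigma_prev = sigma
        sigma += lam * mean
        waits.append(R / ((1.0 - sigma_prev) * (1.0 - sigma)))
    return waits

# Two classes: high-priority trains arriving at rate 0.2/min with
# 2-minute deterministic service, low-priority at 0.1/min, 3 minutes.
# For deterministic service, E[S^2] = E[S]^2.
print(nonpreemptive_priority_wait([0.2, 0.1], [2.0, 3.0], [4.0, 9.0]))
```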
6

Parameter Estimation in Extreme Value Models with Markov Chain Monte Carlo Methods

Gausland, Eivind Blomholm, January 2010
In this thesis I study how to estimate the parameters of an extreme value model with Markov chain Monte Carlo (MCMC), given a data set. This is done with synthetic Gaussian time series generated from spectral densities (spectra) with a "box" shape; three different spectra have been used. In the acceptance probability of the MCMC algorithm, the likelihood is built up by dividing the time series into blocks consisting of a constant number of points. In each block, only the maximum value, i.e. the extreme value, is used, and each extreme value is treated as independent. Since the time series are generated in this controlled way, theoretical values for the parameters of the extreme value model exist, so when the MCMC algorithm is used to fit a model to the generated data, the true parameter values are already known. For the first and widest spectrum, the method is unable to find estimates matching the true parameter values. For the two other spectra, I obtained good estimates for some block lengths, while other block lengths gave poor estimates compared to the true values. It appeared that an increasing block length gave more accurate estimates as the spectrum became more narrow-banded, but a final simulation on a time series generated from a narrow-banded spectrum disproved this hypothesis.
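A minimal sketch of the block-maxima-plus-MCMC recipe described here, using a Gumbel likelihood (GEV with zero shape) and a random-walk Metropolis sampler. The block length, proposal scales, and the Gumbel simplification are assumptions, not the thesis's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Block maxima: split the series into blocks and keep each block's maximum.
series = rng.standard_normal(10000)
block = 100
maxima = series.reshape(-1, block).max(axis=1)

def gumbel_loglik(mu, sigma, x):
    # Gumbel (GEV with shape xi = 0) log-likelihood, a simplified
    # stand-in for the full extreme value model.
    z = (x - mu) / sigma
    return np.sum(-np.log(sigma) - z - np.exp(-z))

# Random-walk Metropolis over (mu, log sigma).
mu, logs = maxima.mean(), np.log(maxima.std())
samples = []
for _ in range(5000):
    mu_p = mu + 0.05 * rng.standard_normal()
    logs_p = logs + 0.05 * rng.standard_normal()
    a = gumbel_loglik(mu_p, np.exp(logs_p), maxima) \
        - gumbel_loglik(mu, np.exp(logs), maxima)
    if np.log(rng.random()) < a:
        mu, logs = mu_p, logs_p
    samples.append((mu, np.exp(logs)))
```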
7

The Expectation Propagation Algorithm for use in Approximate Bayesian Analysis of Latent Gaussian Models

Skar, Christian, January 2010
Analyzing latent Gaussian models using approximate Bayesian inference methods has proven to be a fast and accurate alternative to running time-consuming Markov chain Monte Carlo simulations. A crucial part of these methods is the use of a Gaussian approximation, which is commonly found using an asymptotic expansion approximation. This study considers an alternative method for constructing a Gaussian approximation, the expectation propagation (EP) algorithm, which is known to be more accurate, but also more computationally demanding. By assuming that the latent field is a Gaussian Markov random field, specialized algorithms for factorizing sparse matrices were used to speed up the EP algorithm. The approximation methods were then compared with regard to both computational complexity and accuracy. The expectation propagation algorithm was shown to provide some improvement in accuracy over the asymptotic expansion approximation when tested on a binary logistic regression model. However, tests of the computational time required in simple examples show that the EP algorithm is as much as 15-20 times slower than the alternative method.
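The heart of EP is moment matching: each intractable "tilted" distribution is replaced by the Gaussian with the same mean and variance. A one-dimensional sketch using numerical integration; the logistic site term and grid are illustrative, whereas the thesis works with full sparse latent fields.

```python
import numpy as np

def match_moments(log_tilted, grid):
    # Core EP operation: replace an intractable "tilted" distribution
    # with the Gaussian that has the same mean and variance.
    p = np.exp(log_tilted - log_tilted.max())
    dx = grid[1] - grid[0]
    p /= p.sum() * dx
    mean = (grid * p).sum() * dx
    var = ((grid - mean) ** 2 * p).sum() * dx
    return mean, var

# Toy site update: N(0,1) prior times one logistic likelihood factor.
grid = np.linspace(-8.0, 8.0, 4001)
log_prior = -0.5 * grid**2
log_lik = -np.log1p(np.exp(-grid))   # log sigmoid, observation y = +1
m, v = match_moments(log_prior + log_lik, grid)
```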
8

Topology and Data

Brekke, Øyvind, January 2010
Today there is an immense production of data, and the need for better methods to analyze data is ever increasing. Topology has many features and good ideas which seem favourable for analyzing certain data sets where statistics is starting to struggle, for example data sets originating from microarray experiments. However, topological methods cannot be applied directly to finite point sets coming from such data, or at least doing so will not say anything interesting, so the data sets have to be modified in such a way that the topological machinery can work on them. This way of applying topology may be viewed as a kind of discrete version of topology. In this thesis we present some ways to construct simplicial complexes from a finite point cloud, in an attempt to model the underlying space. Together with simplicial homology, persistent homology, and barcodes, this gives us a tool to uncover topological features in finite point clouds. The theory is tested with a Java software package called JPlex, which is an implementation of these ideas. Lastly, a method called Mapper is covered. This is also a method for creating simplicial complexes from a finite point cloud, but Mapper is mostly used to create low-dimensional simplicial complexes that can easily be visualized, and structures are then detected that way. An implementation of the Mapper method is also tested, on a self-made data set.
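To complement the Rips sketch given under entry 4 above, here is a bare-bones caricature of the Mapper pipeline with a one-dimensional filter. Real Mapper runs a clustering algorithm inside each preimage; this sketch takes each preimage as a single cluster, so only the cover-and-connect skeleton is shown.

```python
import numpy as np

def mapper_1d(points, filter_values, n_intervals=5, overlap=0.25):
    # Cover the filter range with overlapping intervals, form one node
    # per non-empty preimage (a stand-in for per-preimage clustering),
    # and connect nodes that share data points.
    lo, hi = filter_values.min(), filter_values.max()
    length = (hi - lo) / n_intervals
    nodes, edges = [], []
    for i in range(n_intervals):
        a = lo + i * length - overlap * length
        b = lo + (i + 1) * length + overlap * length
        members = set(np.where((filter_values >= a) & (filter_values <= b))[0])
        if members:
            nodes.append(members)
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if nodes[i] & nodes[j]:
                edges.append((i, j))
    return nodes, edges

pts = np.random.default_rng(2).random((100, 2))
nodes, edges = mapper_1d(pts, pts[:, 0])   # filter = first coordinate
```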
9

A General Face Recognition System

Manikarnika, Achim Sanjay, January 2006
In this project a real-time face detection and recognition system has been discussed and implemented. The main focus has been on the detection process, which is the first and most important step before starting the actual recognition. Computationally intensive methods can give good results, but at the cost of execution speed. The algorithm implemented in this project builds upon the work of Garcia and Tziritas, but trades some accuracy for faster execution. The program needs between 1 and 5 seconds on a standard workstation to analyze an image. On an image database with a lot of variety in the images, the system found 70-75% of the faces.
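For comparison, the detection-before-recognition pipeline is easy to reproduce today with an off-the-shelf detector. A sketch using OpenCV's Viola-Jones (Haar cascade) detector; this is a different technique from the Garcia-Tziritas approach above, and the input filename is hypothetical.

```python
import cv2

# Pre-trained frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("group_photo.jpg")        # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces at multiple scales and draw bounding boxes.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```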
10

Hilbert Transform Relations and Negative Refraction

Lind-Johansen, Øyvind, January 2006
In recent years it has become possible to manufacture media whose permittivity $\epsilon_r=\chi_e+1$ and permeability $\mu_r=\chi_m+1$ have simultaneously negative real parts. In such media one gets negative refraction, and this can be exploited to build a lens which in principle can attain unlimited resolution at a single frequency. Let $\chi=u+iv$ denote either $\chi_e$ or $\chi_m$. I show that the resolution of the lens is given by $-\ln(|\chi+2|/2)/d$, provided that the thickness $d$ of the lens is somewhat smaller than a wavelength. From this we see that the resolution is infinite if $u=-2$ and $v=0$, and that, to obtain the highest possible resolution, we want to minimize $|\chi+2|=\sqrt{(u+2)^2+v^2}$. Causality implies that $\chi$ lies in the space $H^2$ of analytic and square-integrable functions in the upper half-plane. It follows that $u$ and $v$ are a Hilbert transform pair. Furthermore, we know that $\chi$ is Hermitian and that $v$ is positive for positive arguments, which reflects the passivity principle for electromagnetic media. Together, these constraints bound how high the resolution can be on a frequency interval. Recently, a parametrization of the imaginary part of such functions on a frequency interval has been found, given that the real part is constant on the interval. I identify these functions as elements of a larger class of Hermitian $H^2$ functions whose imaginary part can be parametrized. It is of particular interest to find absolute lower bounds for the $L^\infty$ norm of $|\chi+2|$ on the interval. It turns out that by setting the real part equal to $x^2/b^2-(a^2+b^2)/(2b^2)-2$ on the interval, one can roughly halve this lower bound compared to the case where the real part is constant and equal to $-2$.
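A quick numerical reading of the resolution bound quoted in this abstract, $-\ln(|\chi+2|/2)/d$: the closer $\chi$ gets to $-2$, the higher the resolution. The sample susceptibility values and slab thickness below are hypothetical.

```python
import numpy as np

# Resolution ~ -ln(|chi + 2| / 2) / d, diverging as chi approaches -2.
d = 0.5                      # slab thickness in units of the wavelength
for chi in (-2 + 0.01j, -2 + 0.1j, -1.8 + 0.1j):
    r = -np.log(abs(chi + 2) / 2) / d
    print(chi, r)
```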
