111

Electronic voting systems

Ødegård, Rune Steinsmo January 2006 (has links)
We present the cryptographic primitives needed in the construction of electronic voting systems based on homomorphic encryptions and on verifiable secret sharing. Then "The theory and implementation of an electronic voting system" by Ivan Damgård, Jens Groth and Gorm Salomonsen is presented as an example of electronic voting systems based on homomorphic encryptions, while "Multi-authority secret-ballot election with linear work" by Ronald Cramer, Matthew Franklin, Berry Schoenmakers and Moti Yung is presented as an example of electronic voting systems based on verifiable secret sharing. Moreover, the mathematical background for these systems is studied with particular emphasis on the security issues of the relevant sub-protocols. Comparing the two examples, we find that the presented voting system based on verifiable secret sharing is more secure than the one based on homomorphic encryptions, both with regard to privacy and to robustness. On the other hand, we find that the presented voting system based on homomorphic encryptions is more efficient than the one based on verifiable secret sharing.
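As a rough illustration of the homomorphic-encryption approach the abstract refers to, the sketch below tallies toy ballots with exponential ElGamal, where multiplying ciphertexts adds the underlying votes. The group parameters are tiny illustrative values, and the sketch omits the zero-knowledge proofs and threshold decryption a real voting system needs; it is not the construction of Damgård, Groth and Salomonsen.

```python
# Toy sketch of additively homomorphic tallying via "exponential ElGamal":
# E(v) = (g^r, g^v * h^r).  Multiplying ciphertexts adds the encrypted votes,
# so the tally can be decrypted without opening individual ballots.
# Parameters are tiny illustrative values, NOT secure.
import random

p, q, g = 2039, 1019, 4            # p = 2q + 1; g generates the order-q subgroup
x = random.randrange(1, q)          # election secret key (held by the authorities)
h = pow(g, x, p)                    # election public key

def encrypt(vote):                  # vote is 0 or 1
    r = random.randrange(1, q)
    return (pow(g, r, p), pow(g, vote, p) * pow(h, r, p) % p)

def tally(ciphertexts):
    # Homomorphic aggregation: component-wise product of all ballots.
    a, b = 1, 1
    for (c1, c2) in ciphertexts:
        a, b = a * c1 % p, b * c2 % p
    m = b * pow(a, q - x, p) % p    # decrypt: m = g^(sum of votes)
    for t in range(len(ciphertexts) + 1):   # recover the small exponent by search
        if pow(g, t, p) == m:
            return t

ballots = [encrypt(v) for v in [1, 0, 1, 1, 0]]
print(tally(ballots))               # -> 3
```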
112

Spatial variability in seismic depth tomography

Dumont-Kristiansen, Frédéric-Nicolas January 2007 (has links)
The location of a reflector or medium in the subsurface is correlated with the high wavenumbers, or high frequencies, in the velocity field. Indeed, determining the high frequencies of the velocity field, in both the normal and lateral directions, is the key step for improving seismic data and thereby gaining better insight into the position of a reflector in the subsurface. This project focuses on the velocity data processing involved in seismic tomography. We describe, compare and implement several highpass operators based on finite differences and the Hamming window in order to filter a seismic velocity dataset, and we study their behaviour in the frequency domain by examining their spectra. The main contribution of this project is the construction of two-dimensional anisotropic operators by rotating a one-dimensional operator based on linear interpolation. We test all the operators on a synthetic seismic velocity dataset and compare the results obtained with the isotropic and anisotropic filtering methods, showing that anisotropic filters can be useful in certain geological circumstances. Finally, we attempt to scale the different operators in order to fully incorporate them in the seismic tomography inversion problem using a Bayesian method. We show that a regularization parameter can be used to control the strength of the constraint with which the seismic dataset is filtered.
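As a sketch of the kind of operators the abstract compares, the snippet below builds a finite-difference highpass stencil and a Hamming-windowed highpass FIR filter, examines their spectra, and applies both to a synthetic one-dimensional velocity profile. The filter length, cutoff and synthetic profile are illustrative assumptions, not the values or data used in the project.

```python
# Sketch of two highpass operators of the kind compared in the thesis:
# a second-order finite-difference stencil and a Hamming-windowed highpass
# FIR filter.  Cutoff and length are illustrative choices, not the thesis values.
import numpy as np
from scipy.signal import firwin, freqz

fd_op = np.array([-1.0, 2.0, -1.0])             # finite-difference highpass stencil
ham_op = firwin(31, 0.3, pass_zero=False,        # 31-tap highpass, cutoff 0.3*Nyquist
                window="hamming")

# Compare the operators in the frequency domain by examining their spectra.
w, h_fd = freqz(fd_op, worN=512)
w, h_ham = freqz(ham_op, worN=512)
print("FD response at Nyquist:", abs(h_fd[-1]))
print("Hamming response at Nyquist:", abs(h_ham[-1]))

# Apply both to a synthetic 1-D velocity profile (smooth trend + sharp jump).
z = np.linspace(0, 1, 200)
velocity = 2000 + 1500 * z + 300 * (z > 0.5)     # m/s, synthetic
print(np.convolve(velocity, fd_op, mode="same")[95:105])
print(np.convolve(velocity, ham_op, mode="same")[95:105])
```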
113

Applications of Splitting Methods and Exponential Integrators to an Electro-Chemical Heart Cell Model

Gjerald, Sjur January 2007 (has links)
In this thesis we discuss how a system of ordinary differential equations (ODEs) describing electro-chemical processes in a heart cell can be solved by numerical methods. The system is stiff, and explicit numerical solvers are therefore slow. In order to overcome the stiffness, the system is split into a stiff and a non-stiff part. The split system is solved by a Strang splitting method and an exponential integrator, based on a commutator-free Lie group method. We outline a theory for estimating the computational cost of a numerical method. The solvers for the split system are compared to implicit solvers for the entire system. The conclusion is that it is possible to separate out two components that are responsible for the stiffness of the original system, but that more research is needed to construct efficient methods that take advantage of this fact.
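A minimal sketch of a Strang splitting step of the kind described, applied to a generic split system y' = Ay + g(y): the stiff linear part is advanced exactly with the matrix exponential (an exponential-integrator-like step) and the non-stiff part with an explicit step. The 2x2 test system is an illustrative toy, not the heart cell model.

```python
# Minimal sketch of a Strang splitting step for a stiff/non-stiff split system
#     y' = A y + g(y),
# advancing the stiff linear part exactly via the matrix exponential and the
# non-stiff part with an explicit Euler step.  The 2x2 test system is a toy,
# not the electro-chemical heart cell model studied in the thesis.
import numpy as np
from scipy.linalg import expm

A = np.array([[-1000.0, 0.0],
              [0.0, -2.0]])                          # stiff linear part
g = lambda y: np.array([np.sin(y[1]), 0.1 * y[0]])   # mild nonlinear part

def strang_step(y, dt):
    half = expm(0.5 * dt * A)
    y = half @ y                     # half step with the stiff part
    y = y + dt * g(y)                # full explicit step with the non-stiff part
    return half @ y                  # second half step with the stiff part

y, dt = np.array([1.0, 1.0]), 0.01
for _ in range(100):
    y = strang_step(y, dt)
print(y)                             # approximate solution at t = 1
```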
114

Stability Structures for Abelian and Triangulated Categories

Steine, Asgeir Bertelsen January 2007 (has links)
This thesis is intended to present some developments in the theory of algebraic stability. The main topics are stability for triangulated categories and the distinguished slopes of Hille and de la Peña for quiver representations.
115

Simple Mechanical Systems with Symmetry

Sydnes, Lars January 2007 (has links)
We go through the basic theory of simple mechanical systems on Riemannian manifolds with symmetry, in an attempt to understand some of the main features of configuration space reduction. As part of this, we look at some special cases for which this works out well, and also indicate a direction for further development.
116

Hierarchical Ensemble Kalman Filter for Observations of Production and 4-D Seismic Data

Sætrom, Jon January 2007 (has links)
Keywords: hierarchical Bayesian sequential reservoir history matching, seismic inversion, Ensemble Kalman Filter.
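The entry gives only keywords, but the Ensemble Kalman Filter they name has a standard analysis (update) step, sketched below with perturbed observations. The dimensions, linear observation operator and noise level are illustrative assumptions and are not taken from the thesis.

```python
# Sketch of the ensemble Kalman filter analysis (update) step, the building
# block behind the hierarchical filter named in the keywords above.
# Dimensions, the observation operator H and the noise level are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 50, 10, 100
X = rng.normal(size=(n_state, n_ens))          # prior ensemble (state x members)
H = rng.normal(size=(n_obs, n_state))          # linear observation operator
R = 0.1 * np.eye(n_obs)                        # observation error covariance
d = H @ rng.normal(size=n_state)               # observed data

A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
S = H @ A
C = S @ S.T / (n_ens - 1) + R                  # innovation covariance
K = (A @ S.T / (n_ens - 1)) @ np.linalg.inv(C) # Kalman gain
D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed obs
X_post = X + K @ (D - H @ X)                   # updated (posterior) ensemble
print(X_post.shape)
```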
117

Estimation of Reservoir Properties by Joint Inversion of Seismic AVO and CSEM Data

Holm, Andreas January 2007 (has links)
Porosity and water saturation in a horizontal top-reservoir are estimated jointly from seismic AVO (Amplitude Versus Offset) data and Controlled Source Electromagnetic (CSEM) data. A model connecting porosity and saturation both to AVO effects and to the phase shift of electromagnetic signals is constructed; Gassmann's equations, Archie's law, Zoeppritz' equations and ray tracing are involved in this model. We use a Bayesian approach to solve the inversion problem, and the solution is given as posterior distributions for the parameters of interest. We also investigate the noise levels in the two types of data, and how these affect the estimates of the reservoir properties. Gaussian assumptions and linearizations are made to ensure analytically tractable posterior distributions for porosity and saturation, and a Gibbs sampler is used to explore the joint posterior for porosity, saturation and noise levels. The method is applied to both synthetic data and field data from the Troll gas field. The results from the joint inversion are compared to results from using seismic data exclusively, and a clear improvement is found in the estimates for the synthetic case. The results from the Troll data are more ambiguous, probably owing to the difficulty of picking seismic data along the top-reservoir and to inaccuracies in the fixed parameters of the geophysical forward model.
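Of the rock-physics relations the abstract names, Archie's law is the one that ties porosity and water saturation to the resistivity seen by CSEM. A minimal sketch with generic textbook-style constants (a, m, n), not the values calibrated in the thesis:

```python
# Sketch of Archie's law, one of the rock-physics relations named in the
# abstract, linking porosity and water saturation to formation resistivity
# (which drives the CSEM response).  The constants a, m, n and the brine
# resistivity are generic illustrative values, not the ones used in the thesis.
def archie_resistivity(porosity, s_w, r_w=0.1, a=1.0, m=2.0, n=2.0):
    """Formation resistivity R_t = a * R_w / (phi**m * S_w**n), in ohm-m."""
    return a * r_w / (porosity**m * s_w**n)

print(archie_resistivity(0.25, 1.0))   # brine-filled sand: low resistivity
print(archie_resistivity(0.25, 0.2))   # gas-filled sand: much higher resistivity
```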
118

Security analysis of blind signatures and group signatures

Nordli, Børge January 2007 (has links)
We present the latest formal security definitions for blind signature schemes and for group signature schemes. We start by introducing theory about algorithms, probability distributions, distinguishers, protocol attacks and experiments, which is needed to understand the definitions. In the case of blind signatures, we define blindness and non-forgeability, and we present the blind Schnorr and Okamoto-Schnorr signature schemes and prove that the latter is secure. For group signatures, we define full-anonymity and full-non-forgeability (full-traceability). In the end, we sketch a secure general group signature scheme presented by Bellare, Micciancio and Warinschi.
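As an illustration of the blind Schnorr protocol the abstract mentions, the sketch below runs one signing session: the user blinds the signer's commitment and challenge, so the signer never sees the message or the final signature. The group parameters are tiny toy values and the hash-to-challenge mapping is a simplifying assumption; this is not a secure implementation.

```python
# Toy sketch of a blind Schnorr signing session.  Parameters are tiny
# illustrative values, NOT secure.
import hashlib, random

p, q, g = 2039, 1019, 4                      # p = 2q + 1; g has order q
x = random.randrange(1, q)                   # signer's secret key
y = pow(g, x, p)                             # signer's public key

def H(R, msg):                               # challenge hash, reduced mod q
    return int.from_bytes(hashlib.sha256(f"{R}|{msg}".encode()).digest(), "big") % q

# Signer, step 1: commitment.
k = random.randrange(1, q)
R = pow(g, k, p)

# User: blind the commitment and derive the blinded challenge.
msg = "ballot-42"
alpha, beta = random.randrange(1, q), random.randrange(1, q)
R_blind = R * pow(g, alpha, p) * pow(y, beta, p) % p
c_blind = H(R_blind, msg)
c = (c_blind + beta) % q                     # challenge sent to the signer

# Signer, step 2: respond to the (blinded) challenge.
s = (k + c * x) % q

# User: unblind.  (c_blind, s_final) is an ordinary Schnorr signature on msg.
s_final = (s + alpha) % q

# Anyone can verify without learning the blinding factors.
R_check = pow(g, s_final, p) * pow(y, q - c_blind, p) % p
print(H(R_check, msg) == c_blind)            # -> True
```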
119

In silico Investigation of Possible Mitotic Checkpoint Signalling Mechanisms

Kirkeby, Håkon January 2007 (has links)
The mitotic checkpoint is the major biochemical pathway acting to ensure stable genome content in cell division. A delay in chromosome segregation is enforced as long as at least one kinetochore lacks proper attachment to the mitotic spindle, which prevents premature initiation of anaphase and uneven chromosome distribution. The backbone of the mitotic checkpoint control system is established as the production of a wait-anaphase signal at the unattached kinetochores. However, how this signal is able to support a functional checkpoint is unclear. To explore the performance of the wait-anaphase signal in terms of providing the mitotic checkpoint with high fidelity, a mathematical modelling framework is constructed that simulates the spatially distinct production of anaphase inhibitors, their diffusion in the cytoplasm and their interference with the anaphase-promoting machinery. The model is used to analyse the performance of several different signalling mechanisms, with emphasis on testing the ability to maintain tight inhibition and allow rapid release of the anaphase promoter.
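As a rough sketch of the kind of model described, the snippet below lets an inhibitor be produced at a single localized (kinetochore-like) site, diffuse along a one-dimensional cytoplasm and decay. The geometry, rate constants, boundary conditions and explicit scheme are illustrative assumptions, not the framework built in the thesis.

```python
# Minimal reaction-diffusion sketch: an inhibitor is produced at a localized
# site, diffuses through the cytoplasm (periodic boundaries for simplicity)
# and decays.  All rates and the explicit finite-difference scheme are
# illustrative assumptions.
import numpy as np

L, nx, dt, steps = 10.0, 200, 1e-3, 20000    # domain (um), grid, time step, steps
dx = L / nx
D, k_prod, k_deg = 1.0, 5.0, 0.5             # diffusion, production, degradation rates
u = np.zeros(nx)                             # inhibitor concentration
source = np.zeros(nx)
source[nx // 2] = k_prod / dx                # point-like production site

for _ in range(steps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u += dt * (D * lap + source - k_deg * u)

print("peak / edge concentration:", u.max(), u[0])
```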
120

Bayesian Text Categorization

Næss, Arild Brandrud January 2007 (has links)
Natural language processing is an interdisciplinary field of research which studies the problems and possibilities of automated generation and understanding of natural human languages. Text categorization is a central subfield of natural language processing. Automatically assigning categories to digital texts has a wide range of applications in today's information society, from filtering spam to creating web hierarchies and digital newspaper archives. It is a discipline that lends itself more naturally to machine learning than to knowledge engineering; statistical approaches to text categorization are therefore a promising field of inquiry. We provide a survey of the state of the art in text categorization, presenting the most widespread methods in use, and placing particular emphasis on support vector machines, an optimization algorithm that has emerged as the benchmark method in text categorization over the past ten years. We then turn our attention to Bayesian logistic regression, a fairly new and largely unstudied method in text categorization. We see how this method has certain similarities to the support vector machine method, but also differs from it in crucial respects. Notably, Bayesian logistic regression provides us with a statistical framework. It can be claimed to be more modular, in the sense that it is more open to modification and supplementation by other statistical methods, whereas the support vector machine method remains more of a black box. We present results of thorough testing of the BBR toolkit for Bayesian logistic regression on three separate data sets. We demonstrate which of BBR's parameters are of importance, and we show that its results compare favorably to those of the SVMlight toolkit for support vector machines. We also present two extensions to the BBR toolkit. One attempts to incorporate domain knowledge by way of the prior probability distributions of single words; the other tries to make use of uncategorized documents to boost learning accuracy.
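The core of Bayesian logistic regression that the abstract contrasts with support vector machines can be sketched as MAP estimation of the weights under a Gaussian prior, which amounts to L2-regularized logistic regression. The toy term-count data and prior variance below are illustrative assumptions; this is not the BBR toolkit.

```python
# Minimal sketch of Bayesian logistic regression as MAP estimation under a
# Gaussian prior (ridge penalty), fitted by gradient ascent on toy
# bag-of-words counts.  Data and prior variance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(200, 50)).astype(float)   # toy term-count matrix
w_true = np.zeros(50); w_true[:5] = 1.5              # only a few informative terms
scores = X @ w_true
y = (1 / (1 + np.exp(-(scores - scores.mean()))) > rng.random(200)).astype(float)

tau2, lr = 1.0, 0.01                                 # prior variance, step size
w = np.zeros(50)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    grad = X.T @ (y - p) - w / tau2                  # log-likelihood + log-prior gradient
    w += lr * grad / len(y)

print("recovered weights for informative terms:", np.round(w[:5], 2))
```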
