191 |
Using a Combination of an Explicit and Implicit Solver for the Numerical Simulation of Electrical Activity in the Heart. Kaarby, Martin, January 2007
Creating realistic simulations of an ECG signal on a computer can be of great use when one wants to understand the relationship between the observed ECG signal and the condition of the heart. A realistic simulation requires a good mathematical model. A popular model, known as the Winslow model, was developed by Winslow et al. in 1999. This model consists of a set of 31 ordinary differential equations describing the electrochemical reactions that take place in a heart cell. Experience shows that evaluating this system is an expensive operation for a computer, so the efficiency of a solver depends almost entirely on the number of such evaluations. To increase efficiency, it is therefore important to keep this number low. If we study the solution of the Winslow model more closely, we see that it begins with a transient phase in which explicit solvers are usually cheaper than implicit ones. The idea is therefore to start with an explicit solver and switch to an implicit one later, once the transient phase is over and the problem becomes too stiff for the explicit solver. This approach has been shown to reduce the number of evaluations of the Winslow model by about 25%, while preserving the accuracy of the solution.
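The hybrid strategy can be sketched with SciPy's `solve_ivp`, using the stiff Van der Pol oscillator as a stand-in for the 31-equation Winslow model; the switch time and all parameters below are assumptions for illustration, not values from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in stiff system (Van der Pol oscillator), NOT the Winslow model.
MU = 50.0  # stiffness parameter, assumed for this sketch

def rhs(t, y):
    return [y[1], MU * (1.0 - y[0] ** 2) * y[1] - y[0]]

t_switch = 1.0   # assumed end of the "cheap" transient phase
t_end = 10.0

# Phase 1: explicit Runge-Kutta solver during the transient phase.
phase1 = solve_ivp(rhs, (0.0, t_switch), [2.0, 0.0], method="RK45")

# Phase 2: implicit BDF solver once stiffness dominates, restarted
# from the explicit solver's final state.
phase2 = solve_ivp(rhs, (t_switch, t_end), phase1.y[:, -1], method="BDF")

# The quantity the thesis seeks to minimise: right-hand-side calls.
total_rhs_calls = phase1.nfev + phase2.nfev
```

The `nfev` counters make the cost of each phase directly comparable, which mirrors how the 25% reduction would be measured.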
|
192 |
Security Analysis of the NTRUEncrypt Public Key Encryption Scheme. Sakshaug, Halvor, January 2007
The public key cryptosystem NTRUEncrypt is analyzed with a main focus on lattice based attacks. We give a brief overview of NTRUEncrypt and the padding scheme NAEP. We propose NTRU-KEM, a key encapsulation method using NTRU, and prove it secure. We briefly cover some non-lattice based attacks but most attention is given to lattice attacks on NTRUEncrypt. Different lattice reduction techniques, alterations to the NTRUEncrypt lattice and breaking times for optimized lattices are studied.
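The core tool in such attacks, lattice basis reduction, can be illustrated in the simplest rank-2 case by Lagrange (Gauss) reduction; this is only a toy 2D analogue of the LLL-type reductions used against NTRU lattices, not an attack on NTRU itself, and the example basis is chosen arbitrarily.

```python
def lagrange_reduce(u, v):
    """Lagrange (Gauss) reduction of a rank-2 integer lattice basis:
    repeatedly subtract the nearest-integer multiple of the shorter
    vector from the longer one until no further shortening is possible."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]

    while True:
        if norm2(v) < norm2(u):
            u, v = v, u            # keep u as the shorter vector
        # Nearest-integer multiple of u in the projection of v onto u.
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        if m == 0:
            return u, v            # basis is now Lagrange-reduced
        v = (v[0] - m * u[0], v[1] - m * u[1])

# A small example basis (chosen arbitrarily for illustration).
b1, b2 = lagrange_reduce((90, 123), (56, 76))
```

The reduced basis spans the same lattice (the determinant is unchanged up to sign) but consists of much shorter, nearly orthogonal vectors, which is exactly what attacks on the NTRUEncrypt lattice try to achieve in high dimension.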
|
193 |
The Investigation of Appropriate Control Algorithms for the Speed Control of Wind Turbine Hydrostatic Systems. Gulstad, Magnus Johan, January 2007
This report consists of two chapters. The first is concerned with a new approach to pipe flow modelling, and the second with the simulation of the hydrostatic system which will be applied to a wind turbine. For the pipe flow model, the main focus has been to create a flow model which accounts for the frequency-dependent friction, i.e. the fluid friction which occurs at non-steady conditions. The author is convinced that the solution to this problem lies in the velocity profile, as the friction is a direct result of the shear stresses in the pipe. At the same time, it is possible to keep track of the velocity profile in the pipe as the pressure evolves in time and space. The new model utilizes the continuity equation for pipe flow and the equation of motion for axisymmetric flow of a Newtonian fluid to find both a pressure distribution in the pipe and velocity profiles throughout the pipe. There are uncertainties as to whether the approach used in the new model to find these velocity profiles is correct. The modelling of the hydrostatic transmission for a wind power turbine is done using SIMULINK software. The design of the system and the basics of the modelling are described in the second chapter. The motor speed is regulated using a PID controller, and the generator torque is varied based on the pressure drop over the hydraulic motor. The PID controller for motor speed seems to be of good enough quality, and speed deviations are within acceptable limits. Simulation results are given for one particular case with an initial rotor torque of 20 kNm and an additional step torque of 20 kNm.
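The speed-regulation idea can be sketched as a minimal discrete PID loop driving a first-order plant; the plant gain, time constant, and controller gains below are invented for this sketch and are not taken from the thesis's SIMULINK model.

```python
def simulate(kp, ki, kd, setpoint=100.0, dt=0.01, steps=2000):
    """Discrete PID loop driving a first-order 'motor speed' model.
    Plant parameters (tau, gain) are assumed for illustration."""
    tau, gain = 0.5, 2.0
    speed, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - speed
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        # First-order plant: tau * d(speed)/dt = -speed + gain * u
        speed += dt * (-speed + gain * u) / tau
        prev_err = err
    return speed

final_speed = simulate(kp=2.0, ki=5.0, kd=0.05)
```

The integral term removes the steady-state offset that a pure proportional controller would leave, which is why the simulated speed settles at the setpoint rather than below it.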
|
194 |
Comparison of ACER and POT Methods for Estimation of Extreme Values. Dahlen, Kai Erik, January 2010
We compare the performance of the ACER and POT methods for prediction of extreme values from heavy-tailed distributions. To be able to apply the ACER method to heavy-tailed data, the method was first modified to assume that the underlying extreme value distribution is a Fréchet distribution, not a Gumbel distribution as assumed earlier. The two methods have then been tested on a wide range of synthetic and real-world data sets to compare their performance in estimating these extreme values. I found that the ACER method seems to consistently perform better in terms of accuracy than the asymptotic POT method.
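The POT (peaks-over-threshold) baseline can be sketched as follows: fit a generalized Pareto distribution to the excesses over a high threshold and invert it for a return level. The synthetic Pareto data, threshold quantile, and exceedance probability are all assumptions for this illustration, not the thesis's data sets.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# Synthetic heavy-tailed sample (classical Pareto, tail index 3),
# standing in for the data sets studied in the thesis.
data = rng.pareto(3.0, size=20_000) + 1.0

# Peaks-over-threshold: keep the excesses above a high quantile...
u = np.quantile(data, 0.95)
excesses = data[data > u] - u

# ...and fit a generalized Pareto distribution to them.
shape, _, scale = genpareto.fit(excesses, floc=0.0)

# Return level x_p with exceedance probability p (requires p < P(X > u)).
p = 1e-4
zeta_u = excesses.size / data.size          # empirical P(X > u)
x_p = u + genpareto.ppf(1.0 - p / zeta_u, shape, loc=0.0, scale=scale)
```

A positive fitted shape parameter corresponds to the Fréchet (heavy-tailed) domain of attraction, which is the regime the modified ACER method targets.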
|
195 |
Numerical approximation of conformal mappings. Luteberget, Bjørnar Steinnes, January 2010
A general introduction to conformal maps and the Riemann mapping theorem is given. Three methods for numerically approximating conformal maps from arbitrary domains to the unit disc are presented: the Schwarz-Christoffel method, the geodesic algorithm and the circle packing method. Basic implementations of the geodesic algorithm and the circle packing method were made, and program code is presented. Applications of these numerical methods to problems in physics and mathematical research are briefly discussed.
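As a classical closed-form instance of such a map, the Cayley transform sends the upper half-plane conformally onto the unit disc; it is shown here only as a reference example, since the thesis's methods approximate such maps numerically for arbitrary domains.

```python
def cayley(z):
    """Cayley transform: the classical explicit conformal map from
    the upper half-plane onto the unit disc."""
    return (z - 1j) / (z + 1j)

# Points in the upper half-plane land strictly inside the unit disc,
# while points on the real axis land on the unit circle.
pts = [0.5 + 1.0j, -2.0 + 0.1j, 3.0 + 5.0j]
mapped = [cayley(z) for z in pts]
```

Numerical methods like Schwarz-Christoffel or circle packing effectively construct analogues of this map for domains where no closed-form expression exists.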
|
196 |
Isogeometric Analysis and Degenerated Mappings. Raknes, Siv Bente, January 2011
In this thesis we have given an introduction to isogeometric finite element analysis of linear elasticity problems in 2D, using non-uniform rational B-splines (NURBS) as basis functions. We have studied the theory of B-splines and derived the equations needed to perform linear elasticity stress analysis. An isogeometric finite element solver has been programmed in MATLAB. We have also analyzed the effect degenerated mappings have on the derivatives of the basis functions. We started by looking at a quadrilateral collapsing to a triangle, considering different parameterizations and their impact on the derivatives. We found that the derivatives were no longer in H^1 and that our basis was not a proper basis for finite element analysis. Our solution to this problem is to form a new set of basis functions by summing the basis functions at the singular points. Finally, we applied this approach to a circular surface and to an infinite plate with a circular hole.
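The B-spline basis functions underlying NURBS can be evaluated with the standard Cox-de Boor recursion; the knot vector and evaluation point below are chosen for illustration only, and the partition-of-unity check is the textbook property these bases satisfy.

```python
def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion for the i-th B-spline basis function of
    degree p (standard textbook formulation with half-open spans)."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((x - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, x))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, x))
    return left + right

# Quadratic basis on an open knot vector: 5 functions that form a
# partition of unity at any parameter value in [0, 3).
knots = [0, 0, 0, 1, 2, 3, 3, 3]
vals = [bspline_basis(i, 2, knots, 1.5) for i in range(5)]
```

It is the derivatives of such bases, pushed through a degenerated (collapsed) geometry mapping, that lose their H^1 regularity in the cases the thesis studies.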
|
197 |
Evaluation of Modern Design Methods for use in Computer Experiments. Nesbakken, Anders, January 2011
We have compared the recently developed Multi-level binary replacement (MBR) design method for use in computer experiments with the Latin hypercube design (LHD) and the Orthogonal array (OA) design. For the purpose of comparison, we have suggested an algorithm for drawing permutations of the MBR design, so as to obtain what we call an MBR-based Latin hypercube design. In our comparison study, the main focus has been the design scores with respect to the root mean squared error (RMSE), Max and alias sum of squares criteria. We found that the MBR design generally performed well with respect to all criteria. It scored similarly to the OA design method and better than conventional Latin hypercube sampling. The score, however, varied with the number of samples and the set of design generators chosen for constructing the MBR design. The MBR design performed better for designs with a relatively high number of samples compared to the number of factors.
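The conventional Latin hypercube sampling used as a baseline can be sketched as follows; this is the generic stratified construction, not the thesis's MBR permutation algorithm, and the design size is chosen for illustration only.

```python
import numpy as np

def latin_hypercube(n_samples, n_factors, rng):
    """Conventional Latin hypercube sample in [0, 1)^d: each factor
    is stratified into n_samples equal bins with exactly one point
    per bin, placed by an independent random permutation."""
    jitter = rng.random((n_samples, n_factors))
    perms = np.column_stack(
        [rng.permutation(n_samples) for _ in range(n_factors)]
    )
    return (perms + jitter) / n_samples

# An 8-run design in 3 factors.
rng = np.random.default_rng(42)
X = latin_hypercube(8, 3, rng)
```

The one-point-per-bin property guarantees uniform one-dimensional projections, which is the baseline behaviour the MBR-based construction is compared against.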
|
198 |
Sampling on Quasicrystals. Grepstad, Sigrid, January 2011
We prove that quasicrystals are universal sets of stable sampling in any dimension. Necessary and sufficient density conditions for stable sampling and interpolation sets in one dimension are studied in detail.
|