131

Comparison of methods and software tools for availability assessment of production systems

Vesteraas, Astrid Hetland (January 2008)
This thesis presents and compares several methods for computing availability and production availability of production systems. The system is assumed to handle the flow of a fluid. Two software programs for computing reliability measures are presented, MIRIAM Regina and Relex Reliability Studio, along with several analytical methods, among them one especially adapted to the computation of production availability. For the methods that cannot compute production availability directly, a method is presented that makes it possible to estimate production availability from an availability computation.

Among the methods, Relex and three of the analytical methods compute the availability of the system. The analytical methods considered are standard availability computation based on the structure function of the system and the definitions of availability, and computation based on renewal and quasi-renewal processes. Relex can compute availability both by simulation and, if the system is simple enough, by analytical methods. The usefulness of the analytical methods is to some extent limited by the assumptions placed on the system. Relex makes it possible to take into account more of the features one would expect in a real-life system, but for analytical methods to be employed, the system must be quite simple.

Two methods made especially for computing production availability are presented: the software program MIRIAM Regina, which combines a sophisticated flow algorithm with Monte Carlo simulation, and a method that uses Markov chains to compute the probability distribution of flow through subsystems and then employs simple merging rules to compute the flow through the entire system. Both methods are very flexible and make it possible to take many different aspects of a real-life system into account.

The most important source of uncertainty in the results of a computation lies in the relation between the real-life system and the model system on which the computations are made. A model system will always be significantly simplified. When choosing a computation method and interpreting results, it is important to keep in mind all assumptions made about the system, both explicitly when building the model and implicitly in the computation method. Another source of uncertainty is uncertainty in the input data; a method for propagating uncertainty through the computations is presented and applied to some of the methods. For simulation, there is in addition the uncertainty that arises because simulation is a way of drawing a statistical sample: the size of the sample, given by the number of simulation iterations, determines the accuracy of the result.
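As a concrete illustration of the structure-function approach mentioned above, the following sketch estimates the steady-state availability of a small series-parallel system by Monte Carlo. The three-component system, its availabilities and the structure function phi are invented for illustration and are not taken from the thesis.

```python
import random

# Hypothetical 3-component system: component 1 in series with a
# parallel pair (2, 3). The structure function is 1 iff the system works.
def phi(x1, x2, x3):
    return x1 * max(x2, x3)

# Assumed steady-state availabilities A_i = MTTF_i / (MTTF_i + MTTR_i).
A = [0.95, 0.90, 0.85]

def simulate(n=100_000, seed=1):
    rng = random.Random(seed)
    up = 0
    for _ in range(n):
        states = [1 if rng.random() < a else 0 for a in A]
        up += phi(*states)
    return up / n

if __name__ == "__main__":
    # Exact value for comparison: A1 * (1 - (1-A2)*(1-A3))
    exact = A[0] * (1 - (1 - A[1]) * (1 - A[2]))
    print(f"simulated availability ~ {simulate():.4f}, exact = {exact:.4f}")
```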
132

Optimization Analysis of Risk/Return of Large Equity Portfolios, Applying Option Strategies

Tønnessen, Torstein (January 2008)
This is a study of the risk/return characteristics of large equity portfolios consisting of different option contracts. In times when nervousness is present in the financial market and the future prospects of the market are highly uncertain, traders emphasize the importance of appropriate mathematical models. Tools for implementing option strategies in different markets are investigated, with the aim of minimizing the risk and maximizing the return of the portfolios. Using various options and statistical simulation methods, the risk/return characteristics of options are studied. Monte Carlo simulation is used to obtain a thorough understanding of the risk/return characteristics of various option strategies applied to different indices. As the strategies investigated are meant to be applied by traders, the daily changes in portfolio value are the basis from which the risk of the strategies is estimated. Various methods of estimating this risk are available; since the probability of extreme outcomes is of particular interest, the Value-at-Risk and the expected shortfall are analyzed when examining the risk of the strategies. In analyzing the expected return of the portfolios, the tails of the densities of the portfolio values for the different strategies and indices are of interest.
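To make the two risk measures concrete, here is a minimal sketch of how Value-at-Risk and expected shortfall can be read off a Monte Carlo sample of daily portfolio value changes. The fat-tailed return model and all parameters are illustrative assumptions, not the models used in the thesis.

```python
import numpy as np

def var_es(pnl, alpha=0.99):
    """Value-at-Risk and expected shortfall (as positive loss numbers)
    at confidence level alpha, estimated from a sample of daily P&L."""
    losses = np.sort(-pnl)                     # losses in increasing order
    k = int(np.ceil(alpha * len(losses))) - 1  # index of the alpha-quantile
    var = losses[k]
    es = losses[k:].mean()                     # mean loss at or beyond VaR
    return var, es

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy model: fat-tailed one-day P&L sample (Student t, invented scale)
    pnl = rng.standard_t(df=4, size=100_000)
    var, es = var_es(pnl, alpha=0.99)
    print(f"99% VaR = {var:.2f}, 99% ES = {es:.2f} (same units as P&L)")
```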
133

Pricing a Bermudan Swaption using the LIBOR Market Model: An LSM approach

Anstensrud, Ole-Petter Bård (January 2008)
This study focuses on the pricing of interest rate derivatives within the framework of the LIBOR Market Model. First we introduce the mathematical and financial foundations behind the basic theory. Then we give a rather rigorous introduction to the LIBOR Market Model and show how to calibrate the model to a real data set. We use the model to price a basic swaption contract before concentrating on a more exotic Bermudan swaption. We use the Least Squares Monte Carlo (LSM) algorithm to handle the early-exercise features of the Bermudan swaption. All major results are visualised, and the C++ implementation code is included in Appendix B.
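As an illustration of the regression step at the heart of the LSM algorithm, the sketch below prices a simple Bermudan-style claim on simulated paths: a toy Bermudan put on geometric Brownian motion with a polynomial basis and zero rates. The thesis's LIBOR Market Model, swaption payoff and C++ implementation are all more involved.

```python
import numpy as np

def lsm_price(paths, payoff, df):
    """Least Squares Monte Carlo (Longstaff-Schwartz) for a claim
    exercisable at every column of `paths`.
    paths : (n_paths, n_steps+1) simulated state variable
    payoff: vectorised immediate-exercise value
    df    : one-period discount factor (assumed constant here)"""
    cashflow = payoff(paths[:, -1])            # value if held to the end
    for t in range(paths.shape[1] - 2, 0, -1):
        cashflow = df * cashflow               # discount back one period
        x = paths[:, t]
        itm = payoff(x) > 0                    # regress on in-the-money paths
        if itm.any():
            # Regress discounted continuation value on the basis {1, x, x^2}
            coef = np.polyfit(x[itm], cashflow[itm], 2)
            continuation = np.polyval(coef, x[itm])
            exercise = payoff(x[itm])
            swap = exercise > continuation     # exercise where it beats holding
            cashflow[itm] = np.where(swap, exercise, cashflow[itm])
    return df * cashflow.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Toy Bermudan put on GBM, strike 1.0 -- NOT the swaption of the thesis
    n, steps, dt, sigma = 50_000, 10, 0.1, 0.2
    z = rng.standard_normal((n, steps))
    logs = np.cumsum(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z, axis=1)
    paths = np.hstack([np.ones((n, 1)), np.exp(logs)])
    put = lambda s: np.maximum(1.0 - s, 0.0)
    print(f"LSM price ~ {lsm_price(paths, put, 1.0):.4f}")
```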
134

Product of Hyperfunctions with Disjoint Support

Eikrem, Kjersti Solberg (January 2008)
We prove that if two hyperfunctions on the unit circle have disjoint support, then the convolution of their Fourier coefficients, multiplied by a weight, tends to zero as the weight goes to 1. We prove this by using the Fourier-Borel transform and the G-transform of analytic functionals. The proof is inspired by an article by Yngve Domar; at the end of his article he proves the existence of a translation-invariant subspace of a certain weighted l^p-space. That proof has similarities to ours, so we compare them. We also look at other topics related to Domar's article, for example the existence of entire functions of order less than or equal to 1 subject to certain restrictions on the axes, and we will see how the Beurling-Malliavin theorem gives some answers to this question. Finally, we prove that if two hyperfunctions on the real line have compact and disjoint support, then the convolution of their Fourier transforms, multiplied by a weight, tends to zero as the weight goes to 1.
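To fix notation, one plausible formalisation of the main statement is given below. The weight r^{|k|+|n-k|} is our assumption (it makes the sum converge for r < 1, since the Fourier coefficients of a hyperfunction grow slower than any exponential); the weight used in the thesis may differ.

```latex
% If f, g are hyperfunctions on the unit circle with
% supp(f) \cap supp(g) = \emptyset, then (under the assumed weight)
\lim_{r \to 1^-} \sum_{k \in \mathbb{Z}}
    \hat{f}(k)\,\hat{g}(n-k)\, r^{|k| + |n-k|} = 0
\qquad \text{for every } n \in \mathbb{Z}.
```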
135

A Parallel Implementation of Mortal Random Walkers in the Pore Network of a Sandstone

Nedrebø, Per Mathias (January 2008)
Simulation of the nuclear magnetic resonance relaxation method is an important part of a digital laboratory developed by Numerical Rocks. The laboratory is used to model petrophysical properties and to simulate fluid flow at the pore scale of reservoir rocks. The nuclear magnetic resonance relaxation method can be simulated on a computer using random walkers, and this simulation can be parallelized to reduce computational time. The aim of this study has been to examine how overlapping boundaries affect speed-up and communication in a parallel simulation of random walkers. Several parallel algorithms have been proposed and implemented. It was found that an overlapping partitioning of the problem is recommended, and that communication decreases exponentially with increasing overlap, while increased overlap has only a small negative impact on memory usage and speed-up.
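The following toy illustrates, in one dimension, why a wider overlap reduces communication: each "rank" owns a subdomain plus a halo, and a walker is handed off only when it leaves the extended region. It is a serial stand-in for a real parallel code; the geometry, the step model and all parameters are invented, and walker mortality is omitted.

```python
import random

def count_handoffs(n_walkers=10_000, domain=100.0, n_ranks=4,
                   overlap=0.0, n_steps=200, seed=3):
    """Count walker hand-offs between ranks for a given halo width."""
    rng = random.Random(seed)
    width = domain / n_ranks
    handoffs = 0
    for _ in range(n_walkers):
        x = rng.uniform(0.0, domain)
        rank = int(x // width)                 # owning subdomain
        for _ in range(n_steps):
            x = min(max(x + rng.gauss(0.0, 0.5), 0.0), domain)
            lo = rank * width - overlap
            hi = (rank + 1) * width + overlap
            if not (lo <= x < hi):             # left the extended subdomain
                handoffs += 1                  # would be one message in MPI
                rank = min(int(x // width), n_ranks - 1)
    return handoffs

if __name__ == "__main__":
    for ov in (0.0, 0.5, 1.0, 2.0):
        print(f"overlap={ov:4.1f}: handoffs={count_handoffs(overlap=ov)}")
```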
136

Effect of Safe Failures on the Reliability of Safety Instrumented Systems

Kvam, Eva (January 2008)
Safety instrumented systems (SISs) are of prime importance to the process industry to avoid catastrophic consequences or even loss of human life. The dangerous situations that any equipment may face should be analysed in order to quantify the associated risk and to choose a design of the SIS that reduces the risk to a tolerable level. The safe failure fraction (SFF) is a parameter defined in the standards IEC 61508 and IEC 61511, and is used to determine the need for additional channels that can activate the safety function if a failure is present. The standards consider a high SFF an indicator of a safe design, and by increasing the SFF one may allow a lower redundancy level for a SIS and therefore reduce costs. Safety engineers debate the suitability of this parameter, and some argue that the negative effects of safe failures on reliability are so significant that the parameter should not be used.

For a safety shutdown valve installed to prevent overpressure, a safe failure is defined as a spurious closure whereby the source of high pressure is isolated. This thesis examines the effects of safe failures on the reliability of such systems by using a Markov model. According to IEC 61508 and IEC 61511, the reliability of a safety shutdown system is measured by the probability of failure on demand (PFD). From the results it can be concluded that the time needed to restore the system to its initial state after a safe failure does not have a significant effect on the PFD, although a long restoration time combined with a high frequency of safe failures is negative with respect to production downtime. The main contributor to the PFD is the long-run probability of being in a state where a dangerous undetected (DU) failure is present. DU failures are normally detected by function tests or sometimes upon demand, but they can also be revealed by a spurious closure. This effect rests on the assumption of perfect repair of safe failures, meaning that all possible failure modes are detected and the failed items repaired or replaced after restoration of a safe failure. The ability to reveal DU failures clearly depends on the frequency with which a DU failure and a safe failure occur in the same test interval.

This thesis demonstrates that safe failures only have a significant effect when the dangerous failure rate is high. Other parameters affect the PFD to a greater extent, and exact parameter estimation is crucial and more important than the positive effects of safe failures. The SFF must be close to 100% to have a significant effect on the PFD, and since the aim is always to minimise the number of dangerous failures, the only alternative is to add safe failures; this is probably not the intent of the SFF and is negative with respect to production downtime. Safe failures therefore do not justify a lower degree of redundancy. On the other hand, the positive effects of safe failures provide a satisfactory reason for adopting a longer test interval; this is an optimisation of the PFD and can reduce costs or even the frequency of dangerous situations during start-up and shutdown. Overall, the thesis demonstrates that the PFD is not significantly affected by safe failures, and finds no reason to doubt the PFD as a measure of reliability. The SFF, by contrast, gives hardly any information, and the choice of SIS architecture should not be based on the SFF alone; an alternative parameter that considers the different means of revealing DU failures seems a better choice.
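For orientation, the sketch below computes the two quantities discussed above under the standard IEC 61508 definitions: the SFF as the share of failures that are safe or dangerous-detected, and the common single-channel approximation PFD_avg of lambda_DU * tau / 2. The failure rates are invented, and the final line is only a crude heuristic standing in for the thesis's Markov treatment of DU failures being revealed by spurious closures.

```python
def sff(lam_s, lam_dd, lam_du):
    """Safe failure fraction: share of failures that are safe or detected."""
    return (lam_s + lam_dd) / (lam_s + lam_dd + lam_du)

def pfd_avg(lam_du, tau):
    """Single-channel average PFD with proof-test interval tau [h]."""
    return lam_du * tau / 2.0

if __name__ == "__main__":
    lam_s, lam_dd, lam_du = 2e-6, 1e-6, 5e-7   # assumed failure rates [1/h]
    tau = 8760.0                                # yearly proof test [h]
    print(f"SFF     = {sff(lam_s, lam_dd, lam_du):.1%}")
    print(f"PFD_avg = {pfd_avg(lam_du, tau):.2e}")
    # Crude heuristic: if a spurious closure also reveals DU failures (the
    # perfect-repair assumption), a DU failure stays hidden only until the
    # first of a safe failure or the next proof test.
    mean_hidden = 1.0 / (lam_s + 2.0 / tau)     # -> tau/2 when lam_s = 0
    print(f"PFD_avg with reveals ~ {lam_du * mean_hidden:.2e}")
```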
137

Contrasting broadly adopted model-based portfolio risk measures with current market conditions

Koren, Øystein Sand (January 2009)
The last two years have seen the most volatile financial markets for decades, with steep losses in asset values and a deteriorating world economy. The insolvency of several banks and their negative impact on the economy have led to criticism of their risk management systems as inadequate and lacking foresight. This thesis studies the performance of two broadly adopted portfolio risk measures before and during the current financial turbulence in order to examine their accuracy and reliability. The study is carried out on a case portfolio consisting of American and European fixed income and equity. The portfolio uses a dynamic asset allocation scheme to maximize the ratio between expected return and portfolio risk. The market risk of the portfolio is calculated on a daily basis using both Value-at-Risk (VaR) and expected shortfall (ES) in a Monte Carlo framework. These risk measures are then compared with prior measurements and the actual loss over the period. The results indicate that the implemented risk model does not give entirely reliable estimates, with more frequent and larger real losses than predicted. Moreover, the study finds a significant worsening in the performance of the risk measures during the current financial crisis, from June 2007 to December 2008, compared with the previous years. This thesis argues that VaR and ES are useful risk measures, but that users should be well aware of the pitfalls in the underlying models and take appropriate precautions.
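A minimal sketch of the kind of backtest described above: count the days on which the realised loss exceeded the 99% VaR forecast and compare with the expected 1% exception rate. The data is synthetic (a fat-tailed "reality" against a normal-model forecast), not the case portfolio of the thesis.

```python
import numpy as np

def backtest(realized_pnl, var_forecast, alpha=0.99):
    """Count VaR exceptions and a normal-approximation z-score for the
    binomial test; large |z| indicates a poorly calibrated model."""
    exceptions = int(np.sum(-realized_pnl > var_forecast))
    n = len(realized_pnl)
    expected = (1 - alpha) * n
    z = (exceptions - expected) / np.sqrt(n * alpha * (1 - alpha))
    return exceptions, expected, z

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n = 500                                    # roughly two trading years
    pnl = rng.standard_t(df=3, size=n)         # fat-tailed "real" P&L
    var99 = np.full(n, 2.33)                   # normal-model 99% VaR forecast
    exc, exp_, z = backtest(pnl, var99)
    print(f"exceptions: {exc} (expected {exp_:.1f}), z = {z:.2f}")
```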
138

Empirical Interpolation with Application to Reduced Basis Approximations

Aanonsen, Tor Øvstedal (January 2009)
Properties of the empirical interpolation (EI) method are investigated by solving selected model problems. In addition, a more challenging example with deformed geometry is solved within the online/offline computational framework of the reduced basis method.
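For readers unfamiliar with the EI method, the sketch below shows the greedy construction at its core: basis functions and interpolation ("magic") points are chosen by repeatedly taking the worst-approximated snapshot and the location of its largest residual. The parametrised family here is a toy example, not one of the thesis's model problems.

```python
import numpy as np

def eim(snapshots, m):
    """Greedy EIM: returns basis (m, n_grid) and interpolation point indices.
    snapshots: (n_snapshots, n_grid) array of sampled functions."""
    Q, pts = [], []
    for _ in range(m):
        if Q:
            Qm = np.array(Q)                            # (k, n_grid)
            A = Qm[:, pts].T                            # interpolation matrix
            C = np.linalg.solve(A, snapshots[:, pts].T)  # coefficients (k, n_snap)
            R = snapshots - C.T @ Qm                    # interpolation residuals
        else:
            R = snapshots
        # Worst-approximated snapshot and its worst grid point
        i = int(np.argmax(np.abs(R).max(axis=1)))
        t = int(np.argmax(np.abs(R[i])))
        Q.append(R[i] / R[i][t])                        # normalise: q_k(t_k) = 1
        pts.append(t)
    return np.array(Q), pts

if __name__ == "__main__":
    # Toy parametrised family g(x; mu) = 1 / (1 + (x - mu)^2)
    x = np.linspace(-1.0, 1.0, 201)
    mus = np.linspace(-0.8, 0.8, 40)
    snaps = np.array([1.0 / (1.0 + (x - mu) ** 2) for mu in mus])
    Q, pts = eim(snaps, 5)
    print("magic points:", [round(float(x[p]), 3) for p in pts])
```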
139

Multivariate Distributions Through Pair-Copula Construction: Theory and Applications

Nævestad, Markus (January 2009)
It is often very difficult, particularly in higher dimensions, to find a good multivariate model that describes both the marginal behavior and the dependence structure of data efficiently. The copula approach to multivariate models has been found to fit this purpose particularly well, and since it is a relatively new concept in statistical modeling, it is under constant development. In this thesis we focus on the decomposition of a multivariate model into pairwise copulas rather than the usual multivariate copula approach. We present the theory behind the decomposition of a multivariate model into pairwise copulas and apply it to both daily and intraday financial returns. The results are compared with the usual multivariate copula approach, and problems in applying the theory are discussed. On daily returns, the multivariate copula is rejected in favor of the pairwise decomposed model at a significance level below 1%, while our decomposed models on intraday data do not lead to a rejection of the models with multivariate copulas. On daily returns a pairwise decomposition with Student copulas is thus preferable to multivariate copulas, while the decomposed models on intraday data need further development before they outperform multivariate copulas.
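To illustrate what a pair-copula decomposition looks like in the simplest non-trivial case, the sketch below samples from a three-dimensional C-vine built from Gaussian pair-copulas via their conditional distribution (h) functions. The correlations are invented, and the thesis's models (including Student copulas) are richer.

```python
import numpy as np
from scipy.stats import norm

def h(u, v, rho):
    """Gaussian-copula conditional CDF C(u | v)."""
    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1 - rho**2))

def h_inv(w, v, rho):
    """Inverse of h in its first argument."""
    return norm.cdf(norm.ppf(w) * np.sqrt(1 - rho**2) + rho * norm.ppf(v))

def sample_cvine3(n, r12, r13, r23_1, seed=0):
    """Sample a 3-dim C-vine with pairs (1,2), (1,3) and (2,3 | 1)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(size=(n, 3))
    u1 = w[:, 0]
    u2 = h_inv(w[:, 1], u1, r12)
    t = h_inv(w[:, 2], h(u2, u1, r12), r23_1)  # invert in tree 2 ...
    u3 = h_inv(t, u1, r13)                     # ... then in tree 1
    return np.column_stack([u1, u2, u3])

if __name__ == "__main__":
    u = sample_cvine3(50_000, r12=0.6, r13=0.4, r23_1=0.3)
    print("empirical correlations of the Gaussian scores:")
    print(np.corrcoef(norm.ppf(u).T).round(2))
```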
140

Computing Metrics on Riemannian Shape Manifolds: Geometric shape analysis made practical

Fonn, Eivind (January 2009)
Shape analysis and recognition is a field ripe with creative solutions and innovative algorithms. We give a quick introduction to several different approaches before basing our work on a representation introduced by Klassen et al., which considers shapes as equivalence classes of closed curves in the plane under reparametrization, invariant under translation, rotation and scaling. We extend this to a definition for non-closed curves and prove a number of results, mostly concerning the conditions on these curves under which the set of shapes becomes a manifold. We then motivate the study of geodesics on these manifolds as a means to compute a shape metric, and present two methods for computing such geodesics: the shooting method from Klassen et al. and the "direct" method, new to this work. Some numerical experiments are performed, which indicate that the direct method performs better for realistically chosen parameters, albeit not asymptotically.
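As a taste of the representation underlying this work, the sketch below encodes a curve by its direction (angle) function, which is invariant to translation and scale after arc-length normalisation, and removes rotation by optimising over a constant offset. This flat L2 distance is only a crude stand-in for the geodesic metric on the curved shape manifold; the shooting and direct methods are considerably more involved.

```python
import numpy as np

def direction_function(curve, n=200):
    """Resample a polygonal curve (m, 2) by arc length; return tangent angles."""
    d = np.diff(curve, axis=0)
    s = np.concatenate([[0.0], np.cumsum(np.hypot(d[:, 0], d[:, 1]))])
    t = np.linspace(0.0, s[-1], n + 1)
    x = np.interp(t, s, curve[:, 0])
    y = np.interp(t, s, curve[:, 1])
    return np.unwrap(np.arctan2(np.diff(y), np.diff(x)))

def shape_distance(c1, c2):
    """Flat L2 distance between direction functions, minimised over rotation."""
    t1, t2 = direction_function(c1), direction_function(c2)
    offset = (t2 - t1).mean()          # best constant (rotation) in L2
    return np.sqrt(np.mean((t1 + offset - t2) ** 2))

if __name__ == "__main__":
    s = np.linspace(0.0, 2 * np.pi, 100)
    circle = np.column_stack([np.cos(s), np.sin(s)])
    ellipse = np.column_stack([2 * np.cos(s), np.sin(s)])
    print(f"d(circle, ellipse) ~ {shape_distance(circle, ellipse):.3f}")
    # Translation and scaling leave the distance (numerically) at zero:
    print(f"d(circle, circle)  ~ {shape_distance(circle, 3 * circle + 5):.3f}")
```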
