271

How to determine fair value for life insurance policies in a secondary market

Dedes, Vasilis January 2011 (has links)
This study presents a methodological approach for pricing transactions in the secondary market for life insurance policies fairly for both policyholders and life settlement companies. Monte Carlo simulation of mortality on a pool constructed from actual data on 85 life settlement transactions shows that a realistic range of offered prices is limited to between 15% and 20% of the policy's face amount, given a required return of 7%. The robustness of the proposed pricing approach is ensured by assessing and managing mortality risk, along with the other pertinent risks, using stress testing; mortality risk appears to be somewhat analogous to systematic risk in other asset markets.
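As a rough illustration of this kind of pricing, the sketch below simulates policyholder lifetimes from an assumed Gompertz mortality law and computes an offer price as the expected discounted death benefit minus expected discounted premiums at a 7% required return. The age, premium rate, and mortality parameters are invented for illustration and do not reproduce the thesis's pool of 85 transactions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_remaining_lifetimes(age, n_paths, b=9.5, m=86.0):
    """Sample remaining lifetimes (years) from an assumed Gompertz law with
    scale b and modal age m (illustrative values, not fitted to any data)."""
    u = rng.uniform(size=n_paths)
    # Inverse-CDF sampling of the Gompertz remaining-lifetime distribution.
    return b * np.log(1.0 - np.log(u) * np.exp((m - age) / b))

def offer_price_fraction(age, premium_rate, r=0.07, n_paths=50_000):
    """Offered price as a fraction of face value: expected discounted death
    benefit minus expected discounted premium outgo at required return r."""
    T = simulate_remaining_lifetimes(age, n_paths)
    v = 1.0 / (1.0 + r)
    pv_benefit = v ** T                              # face amount paid at death
    years = np.arange(0, int(T.max()) + 1)
    alive = T[:, None] > years[None, :]              # premiums paid annually in advance while alive
    pv_premiums = premium_rate * (alive * v ** years[None, :]).sum(axis=1)
    return float(np.mean(pv_benefit - pv_premiums))

if __name__ == "__main__":
    print(f"offer ~ {offer_price_fraction(age=78, premium_rate=0.04):.3f} of face amount")
```

Stress testing in this setting would amount to rerunning the same valuation under shifted mortality or discount assumptions and examining how the offer fraction moves.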
272

Single and Twin-Heaps as Natural Data Structures for Percentile Point Simulation Algorithms

Hatzinger, Reinhold, Panny, Wolfgang January 1993 (has links) (PDF)
Sometimes percentile points cannot be determined analytically. In such cases one has to resort to Monte Carlo techniques. In order to provide reliable and accurate results it is usually necessary to generate rather large samples. Thus the proper organization of the relevant data is of crucial importance. In this paper we investigate the appropriateness of heap-based data structures for the percentile point estimation problem. Theoretical considerations and empirical results give evidence of the good performance of these structures regarding their time and space complexity. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
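As a minimal illustration of the single-heap idea (the twin-heap variant is not reproduced here), the sketch below estimates a percentile point from a stream of Monte Carlo draws while retaining only the upper tail in a min-heap; the sample size and distribution are arbitrary.

```python
import heapq
import math
import random

def percentile_point(sample_iter, n, p):
    """Estimate the p-th percentile point of n Monte Carlo draws with a single
    min-heap that retains only the r = n - ceil(p*n) + 1 largest values, so the
    memory cost is O((1 - p) * n) rather than O(n)."""
    r = n - math.ceil(p * n) + 1
    heap = []                          # min-heap of the r largest values so far
    for x in sample_iter:
        if len(heap) < r:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            heapq.heapreplace(heap, x)
    return heap[0]                     # the ceil(p*n)-th order statistic

if __name__ == "__main__":
    random.seed(1)
    n = 1_000_000
    draws = (random.gauss(0.0, 1.0) for _ in range(n))
    # The 95th percentile of a standard normal is about 1.645.
    print(percentile_point(draws, n, 0.95))
```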
273

Metamodel-Based Probabilistic Design for Dynamic Systems with Degrading Components

Seecharan, Turuna Saraswati January 2012 (has links)
The probabilistic design of dynamic systems with degrading components is difficult. Design of dynamic systems typically involves the optimization of a time-invariant performance measure, such as energy, that is estimated using a dynamic response, such as angular speed. The mechanistic models developed to approximate this performance measure are too complicated to be used with simple design calculations and lead to lengthy simulations. When degradation of the components is assumed, estimation of the failure probability over the product lifetime is required in order to determine suitable service times. Again, complex mechanistic models lead to lengthy lifetime simulations when the Monte Carlo method is used to evaluate probability. To address these problems, an efficient methodology is presented for the probabilistic design of dynamic systems and for estimating the cumulative distribution function of the time to failure of a performance measure when degradation of the components is assumed. The four main steps are: 1) transforming the dynamic response into a set of static responses at discrete cycle-time steps and using Singular Value Decomposition to efficiently estimate a time-invariant performance measure that is based upon a dynamic response, 2) replacing the mechanistic model with an approximating function, known as a “metamodel”, 3) searching for the best design parameters using fast integration methods such as the First Order Reliability Method, and 4) building the cumulative distribution function using the summation of the incremental failure probabilities, estimated using the set-theory method, over the planned lifetime. The first step of the methodology uses design of experiments or sampling techniques to select a sample of training sets of the design variables. These training sets are then input to the computer-based simulation of the mechanistic model to produce a matrix of corresponding responses at discrete cycle-times. Although metamodels can be built at each time-specific column of this matrix, this method is slow, especially if the number of time steps is large. An efficient alternative uses Singular Value Decomposition to split the response matrix into two matrices containing only design-variable-specific and time-specific information. The second step of the methodology fits metamodels only for the significant columns of the matrix containing the design-variable-specific information. Using the time-specific matrix, a metamodel is quickly developed at any cycle-time step or for any time-invariant performance measure such as energy consumed over the cycle-lifetime. In the third step, design variables are treated as random variables and the First Order Reliability Method is used to search for the best design parameters. Finally, the components most likely to degrade are modelled using either a degradation path or a marginal distribution model and, using the First Order Reliability Method or Monte Carlo simulation to estimate probability, the cumulative failure probability is plotted. The speed and accuracy of the methodology using three metamodels, the Regression model, Kriging and the Radial Basis Function, are investigated. This thesis shows that the metamodel offers a significantly faster and still accurate alternative to using mechanistic models both for probabilistic design optimization and for estimating the cumulative distribution function.
For design using the First-Order Reliability Method to estimate probability, the Regression Model is the fastest and the Radial Basis Function is the slowest. Kriging is shown to be accurate and faster than the Radial Basis Function but its computation time is still slower than the Regression Model. When estimating the cumulative distribution function, metamodels are more than 100 times faster than the mechanistic model and the error is less than ten percent when compared with the mechanistic model. Kriging and the Radial Basis Function are more accurate than the Regression Model and computation time is faster using the Monte Carlo Simulation to estimate probability than using the First-Order Reliability Method.
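The SVD step of the methodology can be sketched as follows. This is a minimal illustration with a made-up dynamic response and a plain quadratic regression metamodel (no Kriging, radial basis functions, or FORM search), so every function and parameter here is an assumption rather than the thesis's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the mechanistic simulation: a dynamic response
# (e.g. angular speed) sampled at discrete cycle-time steps for a design x.
def mechanistic_response(x, t):
    return (1.0 + 0.5 * x[0]) * np.exp(-x[1] * t) * np.cos(2 * np.pi * t)

t = np.linspace(0.0, 5.0, 200)                             # discrete cycle-time steps
X = rng.uniform([0.5, 0.1], [1.5, 0.5], size=(40, 2))      # training designs
Y = np.array([mechanistic_response(x, t) for x in X])      # 40 x 200 response matrix

# Step 1: SVD splits Y into design-variable-specific (U*S) and time-specific (Vt) parts.
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
k = np.searchsorted(np.cumsum(S**2) / np.sum(S**2), 0.999) + 1   # number of significant modes
coeffs = U[:, :k] * S[:k]                                  # design-specific coefficients, 40 x k

# Step 2: fit a simple quadratic regression metamodel for each retained coefficient.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

B = np.linalg.lstsq(features(X), coeffs, rcond=None)[0]

def metamodel(x_new):
    """Predict the full response history for a new design point."""
    c = features(np.atleast_2d(x_new)) @ B
    return (c @ Vt[:k]).ravel()

x_test = np.array([1.2, 0.3])
err = np.max(np.abs(metamodel(x_test) - mechanistic_response(x_test, t)))
print(f"{k} modes kept, max abs prediction error = {err:.2e}")
```

A time-invariant performance measure such as energy over the cycle would then be computed from the reconstructed response, and the metamodel (rather than the mechanistic simulation) would be evaluated inside the reliability loop.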
274

Bulk electric system reliability simulation and application

Wangdee, Wijarn 19 December 2005 (has links)
Bulk electric system reliability analysis is an important activity in both vertically integrated and unbundled electric power utilities. Competition and uncertainty in the new deregulated electric utility industry are serious concerns. New planning criteria with broader engineering consideration of transmission access and consistent risk assessment must be explicitly addressed. Modern developments in high-speed computation facilities now permit the realistic utilization of the sequential Monte Carlo simulation technique in practical bulk electric system reliability assessment, resulting in a more complete understanding of bulk electric system risks and associated uncertainties. Two significant advantages of sequential simulation are the ability to obtain accurate frequency and duration indices, and the opportunity to synthesize reliability index probability distributions which describe the annual index variability.

This research work introduces the concept of applying reliability index probability distributions to assess bulk electric system risk. Bulk electric system reliability performance index probability distributions are used as integral elements in a performance based regulation (PBR) mechanism. An appreciation of the annual variability of the reliability performance indices can assist power engineers and risk managers to manage and control future potential risks under a PBR reward/penalty structure. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the system well-being of bulk electric systems and to evaluate the likelihood, not only of entering a complete failure state, but also of being very close to trouble. The system well-being concept presented in this thesis is a probabilistic framework that incorporates the accepted deterministic N-1 security criterion and provides valuable information on the degree of system vulnerability under a particular system condition, using a quantitative interpretation of the degree of system security and insecurity. An overall reliability analysis framework considering both adequacy and security perspectives is proposed using system well-being analysis and traditional adequacy assessment. The system planning process using combined adequacy and security considerations offers an additional reliability-based dimension. Sequential Monte Carlo simulation is also ideally suited to the analysis of intermittent generating resources such as wind energy conversion systems (WECS), as its framework can incorporate the chronological characteristics of wind. The reliability impacts of wind power in a bulk electric system are examined in this thesis. Transmission reinforcement planning associated with large-scale WECS and the utilization of reliability cost/worth analysis in the examination of reinforcement alternatives are also illustrated.
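A minimal sketch of the sequential (chronological) Monte Carlo idea is shown below for a toy single-area generation adequacy problem. The unit data, load shape, and indices are invented, and the composite-system, well-being, and wind-energy aspects of the thesis are not modelled.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative system: identical two-state generating units and a sinusoidal hourly load.
N_UNITS, UNIT_CAP = 12, 100.0          # MW
MTTF, MTTR = 1940.0, 60.0              # hours (assumed values)
HOURS = 8760
load = 900.0 + 150.0 * np.sin(2 * np.pi * np.arange(HOURS) / 24.0)   # hourly MW

def sample_chronological_capacity():
    """Synthesise one year of available capacity by alternating exponentially
    distributed up and down durations for every unit."""
    cap = np.zeros(HOURS)
    for _ in range(N_UNITS):
        t, up, avail = 0.0, True, np.zeros(HOURS)
        while t < HOURS:
            dur = rng.exponential(MTTF if up else MTTR)
            if up:
                avail[int(t):int(min(t + dur, HOURS))] = UNIT_CAP
            t += dur
            up = not up
        cap += avail
    return cap

def simulate(n_years=200):
    lole = lolf = eens = 0.0
    for _ in range(n_years):
        deficit = load - sample_chronological_capacity()
        short = deficit > 0
        lole += short.sum()                                   # shortage hours per year
        lolf += np.count_nonzero(short[1:] & ~short[:-1])     # shortage occurrences per year
        eens += deficit[short].sum()                          # unserved energy, MWh per year
    print(f"LOLE = {lole/n_years:.2f} h/yr, LOLF = {lolf/n_years:.2f} occ/yr, "
          f"EENS = {eens/n_years:.1f} MWh/yr")

simulate()
```

Because each simulated year is chronological, frequency and duration indices fall out directly, and the per-year index values can be collected into the probability distributions that the thesis uses for PBR risk assessment.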
275

Knotting statistics after a local strand passage in unknotted self-avoiding polygons in Z^3

Szafron, Michael Lorne 15 April 2009 (has links)
We study here a model for a strand passage in a ring polymer about a randomly chosen location at which two strands of the polymer have been brought “close” together. The model is based on Θ-SAPs, which are unknotted self-avoiding polygons in Z^3 that contain a fixed structure Θ that forces two segments of the polygon to be close together. To study this model, the Composite Markov Chain Monte Carlo (CMCMC) algorithm, referred to as the CMC Θ-BFACF algorithm, that I developed and proved to be ergodic for unknotted Θ-SAPs in my M.Sc. thesis, is used. Ten simulations (each consisting of 9.6×10^10 time steps) of the CMC Θ-BFACF algorithm are performed and the results from a statistical analysis of the simulated data are presented. To this end, a new maximum likelihood method, based on previous work of Berretti and Sokal, is developed for obtaining maximum likelihood estimates of the growth constants and critical exponents associated respectively with the numbers of unknotted (2n)-edge Θ-SAPs, unknotted (2n)-edge successful-strand-passage Θ-SAPs, unknotted (2n)-edge failed-strand-passage Θ-SAPs, and (2n)-edge after-strand-passage-knot-type-K unknotted successful-strand-passage Θ-SAPs. The maximum likelihood estimates are consistent with the result (proved here) that the growth constants are all equal, and provide evidence that the associated critical exponents are all equal.

We then investigate the question “Given that a successful local strand passage occurs at a random location in a (2n)-edge knot-type K Θ-SAP, with what probability will the Θ-SAP have knot-type K′ after the strand passage?”. To this end, the CMCMC data is used to obtain estimates for the probability of knotting given a (2n)-edge successful-strand-passage Θ-SAP and the probability of an after-strand-passage polygon having knot-type K given a (2n)-edge successful-strand-passage Θ-SAP. The computed estimates numerically support the unproven conjecture that these probabilities, in the n→∞ limit, go to a value lying strictly between 0 and 1. We further prove here that the rate of approach to each of these limits (should the limits exist) is less than exponential.

We conclude with a study of whether or not there is a difference in the “size” of an unknotted successful-strand-passage Θ-SAP whose after-strand-passage knot-type is K when compared to the “size” of a Θ-SAP whose knot-type does not change after strand passage. The two measures of “size” used are the expected lengths of, and the expected mean-square radius of gyration of, subsets of Θ-SAPs. How these two measures of “size” behave as a function of a polygon's length and its after-strand-passage knot-type is investigated.
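As a loose illustration of extracting a growth constant and critical exponent from polygon counts, the sketch below fits the assumed asymptotic form c_{2n} ≈ A μ^{2n} (2n)^{α−1} to synthetic counts by ordinary least squares. This is not the Berretti–Sokal-style maximum likelihood method developed in the thesis, and all numerical values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "counts" following c_2n = A * mu**(2n) * (2n)**(alpha - 1) with noise.
# mu, alpha and A below are made-up values used only to exercise the fit.
true_mu, true_alpha, A = 4.68, 0.24, 1.0
n_edges = np.arange(10, 200, 2)                      # values of 2n
counts = A * true_mu**n_edges * n_edges**(true_alpha - 1.0)
log_counts = np.log(counts) + rng.normal(0.0, 0.02, size=n_edges.size)

# Least-squares fit of log c_2n = log A + (2n) log mu + (alpha - 1) log(2n).
design = np.column_stack([np.ones_like(n_edges, dtype=float), n_edges, np.log(n_edges)])
(logA, logmu, am1), *_ = np.linalg.lstsq(design, log_counts, rcond=None)
print(f"mu ~ {np.exp(logmu):.4f} (true {true_mu}), alpha ~ {am1 + 1:.3f} (true {true_alpha})")
```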
276

Lattice Simulations of the SU(2)-Multi-Higgs Phase Transition

Wurtz, Mark Bryan 29 July 2009 (has links)
The Higgs boson has an important role in the theoretical formulation of the Standard Model of fundamental interactions. Symmetry breaking of the vacuum via the Higgs field allows the gauge bosons of the weak interaction and all fermions to acquire mass in a way that preserves gauge invariance, and thus renormalizability. The Standard Model can accommodate an arbitrary number of Higgs fields with appropriate charge assignments. To explore the effects of multiple Higgs particles, the SU(2)-multi-Higgs model is studied using lattice simulations, a non-perturbative technique in which the fields are placed on a discrete space-time lattice. The formalism and methods of lattice field theory are discussed in detail. Standard results for the SU(2)-Higgs model are reproduced via Monte Carlo simulations, in particular the single-Higgs phase structure, which has a region of analytic connection between the symmetric and Higgs phases. The phase structure of the SU(2)-multi-Higgs model is explored for the case of N >= 2 identical Higgs fields. There is no remaining region of analytic connection between the phases, at least when interactions between different Higgs flavours are omitted. An explanation of this result in terms of enhancement from overlapping phase transitions is explored for N = 2 by introducing an asymmetry in the hopping parameters of the Higgs fields.
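For readers unfamiliar with lattice simulations, the sketch below runs Metropolis updates for a single real scalar field on a small 2D lattice written in a hopping-parameter form of the action. It omits the gauge field, the SU(2) doublet structure, and multiple flavours, so it only illustrates the general Monte Carlo technique, not the thesis's SU(2)-multi-Higgs simulations; the couplings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

L, KAPPA, LAM = 16, 0.30, 1.0          # lattice size, hopping parameter, quartic coupling
phi = rng.normal(0.0, 0.5, size=(L, L))

def site_action(phi, x, y, val):
    """Local action of site (x, y) with trial value val, in the lattice-scalar form
    S_x = phi^2 + lam*(phi^2 - 1)^2 - 2*kappa*phi*sum(nearest neighbours)."""
    nn = (phi[(x + 1) % L, y] + phi[(x - 1) % L, y] +
          phi[x, (y + 1) % L] + phi[x, (y - 1) % L])
    return val**2 + LAM * (val**2 - 1.0)**2 - 2.0 * KAPPA * val * nn

def metropolis_sweep(phi, step=0.5):
    accepted = 0
    for x in range(L):
        for y in range(L):
            old = phi[x, y]
            new = old + rng.uniform(-step, step)
            dS = site_action(phi, x, y, new) - site_action(phi, x, y, old)
            if dS < 0 or rng.random() < np.exp(-dS):
                phi[x, y] = new
                accepted += 1
    return accepted / L**2

# Thermalise, then measure |<phi>| as a crude order parameter.
for _ in range(200):
    metropolis_sweep(phi)
mags = [abs(phi.mean()) for _ in range(500) if metropolis_sweep(phi) is not None]
print(f"kappa = {KAPPA}: <|phibar|> ~ {np.mean(mags):.3f}")
```

Scanning such a measurement over the hopping parameter is the basic way a phase structure like the one discussed above is mapped out on the lattice.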
277

Monte Carlo Methods for Multifactor Portfolio Credit Risk

Lee, Yi-hsi 08 February 2010 (has links)
This study develops a dynamic importance sampling (DIS) method for numerical simulation of rare events. The DIS method is flexible, fast, and accurate. Most importantly, it is very easy to implement. It can be applied to any multifactor copula model constructed from arbitrary independent random variables. First, the key common factor (KCF) is determined by the maximum value among the factor-loading coefficients. Second, by searching for the indicator using order statistics and applying truncated sampling techniques, the probability of large losses (PLL) and the expected excess loss above threshold (EELAT) can be estimated precisely. Except for the assumption that the factor loadings of the KCF contain no zero elements, we do not impose any restrictions on the composition of the portfolio. The DIS method developed in this study can therefore be applied to a very wide range of credit risk models. In a numerical comparison between the method of Glasserman, Kang and Shahabuddin (2008) and the DIS method developed in this study, under the multifactor Gaussian copula model and a high market impact condition (marketwide factor loadings of 0.8), both the variance reduction ratio and the efficiency ratio of the DIS model are much better than those of Glasserman et al. (2008). The two methods give comparable results when the marketwide factor loadings decrease to the range of 0.5 to 0.25. However, the DIS method is superior to the method of Glasserman et al. (2008) in terms of practicability. Numerical simulation results demonstrate that the DIS method is feasible not only under general market conditions but also, in particular, under high market impact conditions, especially in credit contagion or market collapse environments. The numerical results also indicate that the DIS estimators exhibit bounded relative error.
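The general flavour of importance sampling for portfolio credit risk can be sketched for a one-factor homogeneous Gaussian copula portfolio: the common factor is drawn under a shifted mean and each path is reweighted by the likelihood ratio. This is a standard construction, not the DIS method of the thesis, and the shift, loadings, and portfolio size are assumed values chosen by hand.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Hypothetical homogeneous portfolio: m obligors, default probability p,
# common-factor loading a; the loss L is the number of defaults.
m, p, a, x_threshold = 1000, 0.01, 0.8, 300
c = norm.ppf(p)                                     # latent-variable default threshold

def estimate_pll(n_paths=20_000, mu=-3.0):
    """Estimate P(L > x) with the common factor Z drawn from N(mu, 1) and each
    path reweighted by the likelihood ratio exp(-mu*Z + mu^2/2).  The shift mu
    is picked by hand here; choosing it systematically is what IS schemes refine."""
    z = rng.normal(mu, 1.0, size=n_paths)
    lr = np.exp(-mu * z + 0.5 * mu**2)
    p_z = norm.cdf((c - a * z) / np.sqrt(1.0 - a**2))    # default probability given Z = z
    losses = rng.binomial(m, p_z)
    w = lr * (losses > x_threshold)
    return w.mean(), w.std(ddof=1) / np.sqrt(n_paths)

est, se = estimate_pll()
print(f"P(L > {x_threshold}) ~ {est:.3e}  (std. err. {se:.1e})")
```

Comparing the standard error above with that of a plain Monte Carlo run of the same size gives the kind of variance reduction ratio used in the comparison with Glasserman et al. (2008).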
278

Effect of cumulative seismic damage and corrosion on life-cycle cost of reinforced concrete bridges

Kumar, Ramesh 15 May 2009 (has links)
Bridge design should take into account not only safety and functionality, but also the cost effectiveness of investments throughout a bridge's life-cycle. This work presents a probabilistic approach to compute the life-cycle cost (LCC) of corroding reinforced concrete (RC) bridges in earthquake-prone regions. The approach is developed by combining cumulative seismic damage and damage associated with corrosion due to environmental conditions. Cumulative seismic damage is obtained from a low-cycle fatigue analysis. Chloride-induced corrosion of the steel reinforcement is computed based on Fick's second law of diffusion. The proposed methodology accounts for the uncertainties in the ground motion parameters, the distance from the source, the seismic demand on the bridge, and the corrosion initiation time. The statistics of the accumulated damage and the cost of repairs throughout the bridge life-cycle are obtained by Monte Carlo simulation. As an illustration of the proposed approach, the effect of design parameters on the life-cycle cost of an example RC bridge is studied. The results are shown to be valuable in better estimating the condition of existing bridges (i.e., total accumulated damage at any given time) and, therefore, can help schedule inspection and maintenance programs. In addition, by taking into consideration the deterioration process over a bridge's life-cycle, it is possible to estimate the optimum design parameters by minimizing, for example, the expected cost throughout the life of the structure.
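The corrosion-initiation part of such an analysis can be sketched as below: initiation times follow from inverting Fick's second law for the time at which the chloride concentration at the rebar depth reaches a critical level, with Monte Carlo sampling over assumed parameter distributions. All distributions and values below are illustrative, not those of the thesis.

```python
import numpy as np
from scipy.special import erfinv

rng = np.random.default_rng(9)

def corrosion_initiation_time(n):
    """Sample chloride-induced corrosion initiation times (years) from Fick's
    second law, C(x, t) = Cs * (1 - erf(x / (2*sqrt(D*t)))).  Solving
    C(x, Ti) = Ccr for Ti gives Ti = x^2 / (4 * D * erfinv(1 - Ccr/Cs)^2)."""
    x = rng.lognormal(np.log(0.05), 0.2, n)        # cover depth, m (assumed)
    D = rng.lognormal(np.log(2e-12), 0.3, n)       # diffusion coefficient, m^2/s (assumed)
    Cs = rng.lognormal(np.log(3.5), 0.2, n)        # surface chloride, kg/m^3 (assumed)
    Ccr = rng.lognormal(np.log(1.0), 0.15, n)      # critical chloride, kg/m^3 (assumed)
    # Clip so samples with Ccr >= Cs simply give a very large initiation time.
    ratio = np.clip(1.0 - Ccr / Cs, 1e-6, 1 - 1e-6)
    Ti = x**2 / (4.0 * D * erfinv(ratio)**2)       # seconds
    return Ti / (365.25 * 24 * 3600)

Ti = corrosion_initiation_time(100_000)
for q in (0.05, 0.50, 0.95):
    print(f"{int(q*100)}th percentile of initiation time: {np.quantile(Ti, q):.1f} yr")
print(f"P(corrosion starts within 50 yr) ~ {np.mean(Ti < 50):.3f}")
```

In the full approach, each sampled initiation time would be combined with sampled earthquake occurrences and low-cycle fatigue damage to accumulate repair costs over the simulated life-cycle.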
279

Net pay evaluation: a comparison of methods to estimate net pay and net-to-gross ratio using surrogate variables

Bouffin, Nicolas 02 June 2009 (has links)
Net pay (NP) and net-to-gross ratio (NGR) are often crucial quantities to characterize a reservoir and assess the amount of hydrocarbons in place. Numerous methods have been developed in the industry to evaluate NP and NGR, depending on the intended purposes. These methods usually involve the use of cut-off values of one or more surrogate variables to discriminate non-reservoir from reservoir rocks. This study investigates statistical issues related to the selection of such cut-off values by considering the specific case of using porosity (φ) as the surrogate. Four methods are applied to permeability-porosity datasets to estimate porosity cut-off values. All the methods assume that a permeability cut-off value has been previously determined, and each method is based on minimizing the prediction error when particular assumptions are satisfied. The results show that delineating NP and evaluating NGR require different porosity cut-off values. In the case where porosity and the logarithm of permeability are jointly normally distributed, NP delineation requires the use of the Y-on-X regression line to estimate the optimal porosity cut-off, while the reduced major axis (RMA) line provides the optimal porosity cut-off value to evaluate NGR. Alternatives to the RMA and regression lines are also investigated, such as discriminant analysis and a data-oriented method using a probabilistic analysis of the porosity-permeability crossplots. Joint normal datasets are generated to test the ability of the methods to accurately predict the optimal porosity cut-off value for sampled sub-datasets. These different methods have been compared to one another on the basis of the bias, standard error and robustness of the estimates. A set of field data from the Travis Peak formation has been used to test the performance of the methods. The conclusions of the study are confirmed when applied to field data: as long as the initial assumptions concerning the distribution of the data are verified, it is recommended to use the Y-on-X regression line to delineate NP, while either the RMA line or discriminant analysis should be used for evaluating NGR. In the case where the assumptions on the data distribution are not verified, the quadrant method should be used.
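A sketch of the two cut-off constructions on synthetic joint-normal data is given below: the Y-on-X regression line of log-permeability on porosity for net-pay delineation and the reduced major axis line for net-to-gross evaluation. The permeability cut-off and distribution parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic joint-normal porosity / log-permeability data (illustrative values).
n, rho = 500, 0.8
mean = [0.15, np.log(10.0)]                 # porosity (fraction), ln k (k in mD)
sd = [0.05, 1.5]
cov = [[sd[0]**2, rho * sd[0] * sd[1]], [rho * sd[0] * sd[1], sd[1]**2]]
phi, lnk = rng.multivariate_normal(mean, cov, n).T

k_cut = 1.0                                 # assumed permeability cut-off, mD
lnk_cut = np.log(k_cut)

# Y-on-X regression of ln k on porosity -> porosity cut-off for net-pay delineation.
slope, intercept = np.polyfit(phi, lnk, 1)
phi_cut_reg = (lnk_cut - intercept) / slope

# Reduced major axis (RMA) line -> porosity cut-off for net-to-gross evaluation.
r = np.corrcoef(phi, lnk)[0, 1]
rma_slope = np.sign(r) * lnk.std(ddof=1) / phi.std(ddof=1)
rma_intercept = lnk.mean() - rma_slope * phi.mean()
phi_cut_rma = (lnk_cut - rma_intercept) / rma_slope

print(f"porosity cut-off (Y-on-X, for NP):  {phi_cut_reg:.3f}")
print(f"porosity cut-off (RMA, for NGR):    {phi_cut_rma:.3f}")
```

The two cut-offs differ whenever the correlation is imperfect, which is exactly the point made above about NP and NGR requiring different porosity cut-off values.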
280

In-Jet Tracking Efficiency Analysis for the STAR Time Projection Chamber in Polarized Proton-Proton Collisions at sqrt(s) = 200GeV

Huo, Liaoyuan May 2012
As one of the major mid-rapidity tracking devices of the STAR detector at the Relativistic Heavy-Ion Collider (RHIC), the Time Projection Chamber (TPC) plays an important role in measuring the trajectory and energy of high-energy charged particles in polarized proton-proton collision experiments. The TPC's in-jet tracking efficiency represents the largest systematic uncertainty on the jet energy scale at high transverse momentum, whose measurement contributes to the understanding of the spin structure of the proton. The objective of this analysis is to obtain a better estimate of this systematic uncertainty through methods of pure Monte Carlo simulation and real-data embedding, in which simulated tracks are embedded into real-data events. Besides, simulated tracks are also embedded into Monte Carlo events to make a strict comparison for the uncertainty estimation. The result indicates that the unexplained part of the systematic uncertainty is reduced to 3.3%, from a previously quoted value of 5%. This analysis also suggests that future analyses, such as embedding jets into zero-bias real data and analyses with much higher event statistics, will benefit the understanding of the systematic uncertainty of the in-jet TPC tracking efficiency.
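A minimal sketch of how an embedding sample yields a tracking efficiency with a binomial uncertainty is given below; the momentum bins, yields, and efficiencies are fabricated placeholders, not STAR results.

```python
import numpy as np

rng = np.random.default_rng(4)

def efficiency_with_error(n_matched, n_embedded):
    """Tracking efficiency per pT bin from an embedding sample, with the
    simple binomial uncertainty sqrt(eff * (1 - eff) / N)."""
    eff = n_matched / n_embedded
    err = np.sqrt(eff * (1.0 - eff) / n_embedded)
    return eff, err

# Hypothetical embedding yields per transverse-momentum bin (GeV/c).
pt_bins  = np.array([1.0, 2.0, 4.0, 8.0, 15.0])
embedded = np.array([50_000, 40_000, 30_000, 20_000, 10_000])
matched  = rng.binomial(embedded, [0.88, 0.87, 0.86, 0.84, 0.82])  # fake "reconstructed" counts

for pt, m, n in zip(pt_bins, matched, embedded):
    eff, err = efficiency_with_error(m, n)
    print(f"pT = {pt:5.1f} GeV/c : efficiency = {eff:.3f} +/- {err:.3f}")
```

Comparing such efficiencies from real-data embedding against those from pure Monte Carlo embedding is what isolates the unexplained part of the systematic uncertainty quoted above.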
