261

Image Analysis and Diffraction by the Myosin Lattice of Vertebrate Muscle

Yoon, Chunhong January 2008 (has links)
Closely packed myosin filaments are an example of a disordered biological array responsible for the contraction of muscle. X-ray fiber diffraction data are used to study these biomolecular assemblies, but the inherent disorder in muscle makes interpretation of the diffraction data difficult. Limited knowledge of the precise nature of the myosin lattice disorder and of its effects on X-ray diffraction data currently limits advances in studies of muscle structure and function. This thesis covers theoretical and computational efforts to incorporate the myosin lattice disorder into X-ray diffraction analysis. An automated image analysis program is developed to rapidly and accurately quantitate the disorder from electron micrographs of muscle cross-sections. The observed disorder is modelled as an antiferromagnetic Ising model, and the model is verified using Monte Carlo simulations. Theory and methods are developed for efficient calculation of cylindrically averaged X-ray diffraction from two-dimensional lattices that incorporate this disorder.
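A hedged sketch of the kind of Monte Carlo check mentioned above: a Metropolis simulation of a nearest-neighbour antiferromagnetic Ising model. This is not the thesis code — it assumes a simple square lattice with periodic boundaries and an arbitrary inverse temperature, whereas the thesis treats the disorder of the myosin superlattice geometry specifically.

```python
import numpy as np

def metropolis_afm_ising(n=24, beta=1.0, J=1.0, sweeps=500, seed=0):
    """Metropolis sampling of an antiferromagnetic Ising model
    (energy E = +J * sum over nearest-neighbour pairs of s_i * s_j, J > 0)
    on an n x n square lattice with periodic boundaries."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps * n * n):
        i, j = rng.integers(0, n, size=2)
        # Sum of the four nearest neighbours (periodic boundaries).
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = -2.0 * J * spins[i, j] * nb          # energy change if this spin flips
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

lattice = metropolis_afm_ising()
# Staggered magnetisation: a +/-1 checkerboard mask times the spins, averaged.
checkerboard = (np.indices(lattice.shape).sum(axis=0) % 2) * 2 - 1
print("staggered magnetisation:", abs((lattice * checkerboard).mean()))
```

The staggered magnetisation printed at the end is a convenient order parameter: it approaches 1 when the checkerboard ordering favoured by the antiferromagnetic coupling has set in.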
262

Significance Tests for the Measure of Raw Agreement

von Eye, Alexander, Mair, Patrick, Schauerhuber, Michael January 2006 (has links) (PDF)
Significance tests for the measure of raw agreement are proposed. First, it is shown that the measure of raw agreement can be expressed as a proportionate reduction-in-error measure, sharing this characteristic with Cohen's Kappa and Brennan and Prediger's Kappa_n. Second, it is shown that the coefficient of raw agreement is linearly related to Brennan and Prediger's Kappa_n. Therefore, using the same base model for the estimation of expected cell frequencies as Brennan and Prediger's Kappa_n, one can devise significance tests for the measure of raw agreement. Two tests are proposed. The first uses Stouffer's Z, a probability pooler. The second test is the binomial test. A data example analyzes the agreement between two psychiatrists' diagnoses. The covariance structure of the agreement cells in a rater-by-rater table is described. Simulation studies show the performance and power functions of the test statistics. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
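As a hedged illustration of the second proposed test, the sketch below computes raw agreement for a k x k rater-by-rater table and an exact one-sided binomial p-value against the uniform base model described in the abstract (chance agreement probability 1/k, as in Brennan and Prediger's Kappa_n). The table counts are invented, and this is not the authors' implementation; their first test, based on Stouffer's Z, is not shown.

```python
from math import comb

def raw_agreement_binomial_test(table):
    """Raw agreement for a k x k rater-by-rater table, with an exact one-sided
    binomial test against the uniform base model (chance agreement = 1/k)."""
    k = len(table)
    n = sum(sum(row) for row in table)
    agreements = sum(table[i][i] for i in range(k))
    p0 = 1.0 / k                                  # agreement probability under H0
    # Exact upper tail: P(X >= agreements) for X ~ Binomial(n, p0).
    p_value = sum(comb(n, x) * p0**x * (1 - p0)**(n - x)
                  for x in range(agreements, n + 1))
    return agreements / n, p_value

# Hypothetical 3-category diagnosis table for two raters (illustrative counts).
table = [[20, 3, 2],
         [4, 15, 3],
         [1, 2, 10]]
ra, p = raw_agreement_binomial_test(table)
print(f"raw agreement = {ra:.3f}, one-sided binomial p = {p:.3g}")
```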
263

A meta-analysis of Type I error rates for detecting differential item functioning with logistic regression and Mantel-Haenszel in Monte Carlo studies

Van De Water, Eva 12 August 2014 (has links)
Differential item functioning (DIF) occurs when individuals from different groups who have equal levels of a latent trait fail to earn commensurate scores on a testing instrument. Type I error occurs when DIF-detection methods result in unbiased items being excluded from the test, while a Type II error occurs when biased items remain on the test after DIF-detection methods have been employed. Both errors create potential issues of injustice amongst examinees and can result in costly and protracted legal action. The purpose of this research was to evaluate two methods for detecting DIF: logistic regression (LR) and Mantel-Haenszel (MH). To accomplish this, meta-analysis was employed to summarize Monte Carlo quantitative studies that used these methods in published and unpublished literature. The criteria employed for comparing the two methods were Type I error rates, the Type I error proportion (which also served as the Type I error effect size measure), deviation scores, and power rates. Monte Carlo simulation studies meeting the inclusion criteria, typically with 15 Type I error effect sizes per study, were compared to assess how the LR and MH statistical methods function to detect DIF. Studied variables included DIF magnitude, nature of DIF (uniform or non-uniform), number of DIF items, and test length. I found that MH was better at Type I error control, while LR was better at controlling Type II error. This study also provides a valuable summary of existing DIF methods and a summary of the types of variables that have been manipulated in DIF simulation studies with LR and MH. Consequently, this meta-analysis can serve as a resource for practitioners to help them choose between LR and MH for DIF detection with regard to Type I and Type II error control, and can provide insight for parameter selection in the design of future Monte Carlo DIF studies.
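The following sketch illustrates the kind of Monte Carlo Type I error study the meta-analysis aggregates: DIF-free Rasch data are simulated for a reference and a focal group, the Mantel-Haenszel chi-square is computed for one studied item using rest-score strata, and the empirical rejection rate at alpha = .05 is reported. Sample sizes, test length, and the use of rest scores as the matching variable are illustrative choices, not parameters taken from the reviewed studies.

```python
import numpy as np

def mh_chisq(item, strata, group):
    """Mantel-Haenszel chi-square (no continuity correction) for one studied
    item, stratifying examinees by `strata` (e.g. rest score) and comparing
    reference (group=0) with focal (group=1) examinees."""
    a_sum, e_sum, var_sum = 0.0, 0.0, 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((group[m] == 0) & (item[m] == 1))   # reference, correct
        b = np.sum((group[m] == 0) & (item[m] == 0))   # reference, incorrect
        c = np.sum((group[m] == 1) & (item[m] == 1))   # focal, correct
        d = np.sum((group[m] == 1) & (item[m] == 0))   # focal, incorrect
        n = a + b + c + d
        if n < 2:
            continue
        a_sum += a
        e_sum += (a + b) * (a + c) / n                 # E[a] under no DIF
        var_sum += (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))
    return (a_sum - e_sum) ** 2 / var_sum

def type1_error_rate(n_rep=500, n_per_group=500, n_items=20, seed=1):
    """Fraction of replications in which MH flags a DIF-free item at alpha=.05."""
    rng = np.random.default_rng(seed)
    b = rng.uniform(-1.5, 1.5, n_items)                # fixed item difficulties
    rejections = 0
    for _ in range(n_rep):
        group = np.repeat([0, 1], n_per_group)
        theta = rng.normal(0, 1, 2 * n_per_group)      # equal abilities: no DIF
        p = 1 / (1 + np.exp(-(theta[:, None] - b)))    # Rasch probabilities
        resp = (rng.random(p.shape) < p).astype(int)
        rest = resp[:, 1:].sum(axis=1)                 # rest-score strata
        if mh_chisq(resp[:, 0], rest, group) > 3.841:  # chi-square(1), alpha=.05
            rejections += 1
    return rejections / n_rep

print("empirical Type I error rate:", type1_error_rate())
```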
264

Pricing barrier options with numerical methods / Candice Natasha de Ponte

De Ponte, Candice Natasha January 2013 (has links)
Barrier options are becoming more popular, mainly due to the reduced cost of holding a barrier option compared to holding a standard call or put option, but exotic options are difficult to price since the payoff functions depend on the whole path of the underlying process rather than on its value at a specific time instant. A barrier option is path dependent, which implies that the payoff depends on the path followed by the price of the underlying asset, meaning that barrier option prices are especially sensitive to volatility. For basic exchange-traded options, analytical prices based on the Black-Scholes formula can be computed; these prices are influenced by supply and demand. There is not always an analytical solution for an exotic option, hence it is advantageous to have methods that efficiently provide accurate numerical solutions. This study gives a literature overview and compares the implementation of some available numerical methods applied to barrier options. The three numerical methods that will be adapted and compared for the pricing of barrier options are: • Binomial Tree Methods • Monte-Carlo Methods • Finite Difference Methods / Thesis (MSc (Applied Mathematics))--North-West University, Potchefstroom Campus, 2013
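Of the three methods listed, the Monte Carlo approach is the most direct to sketch. The example below prices a down-and-out call under geometric Brownian motion with discretely monitored barrier crossings; all parameter values are illustrative, and discrete monitoring slightly overprices the option relative to a continuously monitored barrier.

```python
import numpy as np

def down_and_out_call_mc(S0, K, B, r, sigma, T,
                         n_steps=252, n_paths=20_000, seed=42):
    """Monte Carlo price of a down-and-out call under geometric Brownian
    motion; the payoff is voided on any path that touches the barrier B < S0
    at one of the discrete monitoring dates."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # Risk-neutral log-price increments, accumulated along each path.
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    paths = S0 * np.exp(log_paths)
    knocked_out = (paths <= B).any(axis=1)        # the path-dependent barrier check
    payoff = np.where(knocked_out, 0.0, np.maximum(paths[:, -1] - K, 0.0))
    discounted = np.exp(-r * T) * payoff
    return discounted.mean(), discounted.std(ddof=1) / np.sqrt(n_paths)

price, stderr = down_and_out_call_mc(S0=100, K=100, B=90, r=0.05, sigma=0.2, T=1.0)
print(f"down-and-out call price ~ {price:.3f} +/- {1.96 * stderr:.3f}")
```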
266

Premiepensionens Marknadsrisk : En Monte Carlo-simulering av den allmänna pensionen (The Market Risk of the Premium Pension: A Monte Carlo Simulation of the Public Pension)

Sverresson, Carl-Petter, Östling, Christoffer January 2014 (has links)
A reform trend can be observed in which countries are shifting from defined benefit pension systems towards defined contribution systems. The reforms have been justified by predictions that the defined benefit systems will not manage to provide good enough pensions to their members in the future. The newer defined contribution pension plans often include individual financial accounts through which individuals can choose how a part of their pension savings should be invested. Sweden was early to introduce such a system, which at the moment offers more than 800 funds to choose from. The aim of this thesis is to capture the market risk associated with these individual investments, which is done by using Monte Carlo simulations for six selected pension funds. The method produces forecasts of replacement ratios (pension as a percentage of pre-retirement income) for two hypothetical individuals: one who starts to work right after elementary school and one who starts a five-year education and begins to work after graduation. The results show a slightly lower replacement ratio for the educated individual, which is also associated with a higher probability of ending up with a low replacement ratio. The market risk also varies between the funds, which implies that funds should be chosen with great care. The study ends with arguments for increased paternalism in the form of a carefully considered fund offering that provides fewer funds to choose from than today.
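A minimal sketch of the simulation idea: a fixed share of each year's wage is invested in a single fund with random annual returns, and the resulting replacement ratio is simulated many times. The 2.5% contribution rate mirrors the premium-pension share of pensionable income, but the return, volatility, wage-growth and annuitisation figures below are invented for illustration and are not the thesis's calibration.

```python
import numpy as np

def replacement_ratio_mc(years_worked=40, contrib_rate=0.025, mu=0.06, sigma=0.18,
                         wage_growth=0.02, annuity_factor=18.0,
                         n_sims=10_000, seed=7):
    """Distribution of the premium-pension replacement ratio when a fixed share
    of each year's wage is invested in one fund with i.i.d. lognormal annual
    returns.  All parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    wages = (1 + wage_growth) ** np.arange(years_worked)        # wage path, start = 1
    growth = rng.lognormal(mu - 0.5 * sigma**2, sigma, (n_sims, years_worked))
    balance = np.zeros(n_sims)
    for t in range(years_worked):
        balance = (balance + contrib_rate * wages[t]) * growth[:, t]
    pension = balance / annuity_factor                          # crude annuitisation
    return pension / wages[-1]                                  # ratio to final wage

ratios = replacement_ratio_mc()
print(f"median replacement ratio: {np.median(ratios):.1%}, "
      f"5th percentile: {np.percentile(ratios, 5):.1%}")
```

The 5th percentile is one way to express the market risk the thesis studies: the downside outcome an individual faces when the chosen fund performs poorly.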
267

Dosimetric verification of radiation therapy including intensity modulated treatments, using an amorphous-silicon electronic portal imaging device

Chytyk-Praznik, Krista January 2009 (has links)
Radiation therapy is continuously increasing in complexity due to technological innovation in delivery techniques, necessitating thorough dosimetric verification. Comparing accurately predicted portal dose images to measured images obtained during patient treatment can determine if a particular treatment was delivered correctly. The goal of this thesis was to create a method to predict portal dose images that was versatile and accurate enough to use in a clinical setting. All measured images in this work were obtained with an amorphous silicon electronic portal imaging device (a-Si EPID), but the technique is applicable to any planar imager. A detailed, physics-motivated fluence model was developed to characterize fluence exiting the linear accelerator head. The model was further refined using results from Monte Carlo simulations and schematics of the linear accelerator. The fluence incident on the EPID was converted to a portal dose image through a superposition of Monte Carlo-generated, monoenergetic dose kernels specific to the a-Si EPID. Predictions of clinical IMRT fields with no patient present agreed with measured portal dose images within 3% and 3 mm. The dose kernels were applied ignoring the geometrically divergent nature of incident fluence on the EPID. A computational investigation into this parallel dose kernel assumption determined its validity under clinically relevant situations. Introducing a patient or phantom into the beam required the portal image prediction algorithm to account for patient scatter and attenuation. Primary fluence was calculated by attenuating raylines cast through the patient CT dataset, while scatter fluence was determined through the superposition of pre-calculated scatter fluence kernels. Total dose in the EPID was calculated by convolving the total predicted incident fluence with the EPID-specific dose kernels. The algorithm was tested on water slabs with square fields, agreeing with measurement within 3% and 3 mm. The method was then applied to five prostate and six head-and-neck IMRT treatment courses (~1900 clinical images). Deviations between the predicted and measured images were quantified. The portal dose image prediction model developed in this thesis work has been shown to be accurate, and it was demonstrated to be able to verify patients’ delivered radiation treatments.
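A schematic of the superposition step only, not the full prediction model: incident fluence is convolved with a single dose kernel under the parallel-kernel approximation discussed above. The thesis uses Monte Carlo-generated, EPID-specific monoenergetic kernels weighted over the beam spectrum; the exponential kernel and square field here are toy stand-ins.

```python
import numpy as np

def portal_dose(fluence, kernel):
    """Superpose an EPID dose kernel onto an incident fluence map by FFT
    convolution -- the parallel-kernel approximation, in which the divergence
    of the beam across the imager is ignored."""
    ny, nx = fluence.shape
    ky, kx = kernel.shape
    # Zero-pad to suppress wrap-around, centre the kernel at (0, 0), convolve.
    fpad = np.zeros((2 * ny, 2 * nx))
    fpad[:ny, :nx] = fluence
    kpad = np.zeros_like(fpad)
    kpad[:ky, :kx] = kernel
    kpad = np.roll(kpad, (-(ky // 2), -(kx // 2)), axis=(0, 1))
    dose = np.real(np.fft.ifft2(np.fft.fft2(fpad) * np.fft.fft2(kpad)))
    return dose[:ny, :nx]

# Toy example: a square open field (1 mm pixels) and a crude exponential kernel.
y, x = np.mgrid[-64:64, -64:64].astype(float)
fluence = ((np.abs(x) < 50) & (np.abs(y) < 50)).astype(float)
r = np.hypot(*np.mgrid[-16:17, -16:17].astype(float))
kernel = np.exp(-r / 3.0)
kernel /= kernel.sum()
image = portal_dose(fluence, kernel)
print("central-axis dose:", round(image[64, 64], 3),
      "| dose ~3 mm outside the field edge:", round(image[64, 116], 3))
```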
268

Microbeam design in radiobiological research

Hollis, Kevin John January 1995 (has links)
Recent work using low doses of ionising radiations, both in vitro and in vivo, has suggested that the responses of biological systems in the region of less than 1 Gray may not be predicted by simple extrapolation from the responses at higher doses. Additional experiments, using high-LET radiations at doses of much less than one alpha particle traversal per cell nucleus, have shown responses in a greater number of cells than have received a radiation dose. These findings, and increased concern over the effects of the exposure of the general population to low levels of background radiation, for example due to radon daughters in the lungs, have stimulated the investigation of the response of mammalian cells to ionising radiations in the extreme low-dose region. In all broad field exposures to particulate radiations at low-dose levels, an inherent dose uncertainty exists due to random counting statistics. This dose variation produces a range of values for the measured biological effect within the irradiated population, therefore making the elucidation of the dose-effect relationship extremely difficult. The use of the microbeam irradiation technique will allow the delivery of a controlled number of particles to specific targets within an individual cell with a high degree of accuracy. This approach will considerably reduce the level of variation of biological effect within the irradiated cell population and will allow low-dose responses of cellular systems to be determined. In addition, the proposed high spatial resolution of the microbeam developed will allow the investigation of the distribution of radiation sensitivity within the cell, to provide a better understanding of the mechanisms of radiation action. The target parameters for the microbeam at the Gray Laboratory are a spatial resolution of less than 1 µm and a detection efficiency of better than 99 %. The work of this thesis was to develop a method of collimation, in order to produce a microbeam of 3.5 MeV protons, and to develop a detector to be used in conjunction with the collimation system. In order to determine the optimum design of collimator necessary to produce a proton microbeam, a computer simulation based upon a Monte-Carlo simulation code, written by Dr S J Watts, was developed. This programme was then used to determine the optimum collimator length and the effects of misalignment and divergence of the incident proton beam upon the quality of the collimated beam produced. Designs for silicon collimators were produced, based upon the results of these simulations, and collimators were subsequently produced for us using techniques of micro-manufacturing developed in the semiconductor industry. Other collimator designs were also produced both in-house and commercially, using a range of materials. These collimators were tested to determine both the energy and spatial resolutions of the transmitted proton beam produced. The best results were obtained using 1.6 mm lengths of 1.5 µm diameter bore fused silica tubing. This system produced a collimated beam having a spatial resolution with 90 % of the transmitted beam lying within a diameter of 2.3 ± 0.9 µm and with an energy spectrum having 75 % of the transmitted protons within a Gaussian fit to the full-energy peak. Detection of the transmitted protons was achieved by the use of a scintillation transmission detector mounted over the exit aperture of the collimator. 
An approximately 10 µm thick ZnS(Ag) crystal was mounted between two 30 µm diameter optical fibres and the light emitted from the crystal transmitted along the fibres to two photomultiplier tubes. The signals from the tubes were analyzed, using coincidence counting techniques, by means of electronics designed by Dr B Vojnovic. The lowest counting inefficiencies obtained using this approach were a false positive count level of 0.8 ± 0.1 % and an uncounted proton level of 0.9 ± 0.3 %. The elements of collimation and detection were then combined in a rugged microbeam assembly, using a fused silica collimator having a bore diameter of 5 µm and a scintillator crystal having a thickness of ~15 µm. The microbeam produced by this initial assembly had a spatial resolution with 90 % of the transmitted protons lying within a diameter of 5.8 ± 1.6 µm, and counting inefficiencies of 0.27 ± 0.22 % and 1.7 ± 0.4 % for the levels of false positive and missed counts respectively. The detector system in this assembly achieves the design parameter of 99 % efficiency, however, the spatial resolution of the beam is not at the desired 1 µm level. The diameter of the microbeam produced is less than the nuclear diameter of many cell lines and so the beam may be used to good effect in the low-dose irradiation of single cells. In order to investigate the variation in sensitivity within a cell the spatial resolution of the beam would require improvement. Proposed methods by which this may be achieved are described.
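A purely geometric toy version of the collimator Monte Carlo: protons are sampled with a Gaussian spot and divergence and ray-traced down a straight cylindrical bore, counting those that enter and exit within the bore radius. The bore dimensions echo the 1.5 µm x 1.6 mm fused silica collimator quoted above, but the beam spot and divergence are invented, and the slit-scattering and energy-loss physics handled by the simulation code described in the thesis are ignored.

```python
import numpy as np

def collimator_transmission(bore_diameter_um=1.5, length_mm=1.6,
                            beam_sigma_um=2.0, divergence_mrad=0.2,
                            n_protons=200_000, seed=3):
    """Geometric Monte Carlo estimate of the fraction of incident protons that
    traverse a straight cylindrical bore without striking the wall.  Slit
    scattering, wall penetration and energy loss are deliberately ignored."""
    rng = np.random.default_rng(seed)
    R = bore_diameter_um / 2.0
    L = length_mm * 1000.0                                   # bore length in µm
    x0 = rng.normal(0.0, beam_sigma_um, n_protons)           # entrance positions
    y0 = rng.normal(0.0, beam_sigma_um, n_protons)
    tx = rng.normal(0.0, divergence_mrad * 1e-3, n_protons)  # direction tangents
    ty = rng.normal(0.0, divergence_mrad * 1e-3, n_protons)
    x1, y1 = x0 + tx * L, y0 + ty * L                        # straight-line exit
    # A straight track that lies inside the (convex) bore cross-section at both
    # the entrance and the exit stays inside everywhere in between.
    transmitted = (x0**2 + y0**2 <= R**2) & (x1**2 + y1**2 <= R**2)
    exit_radius = np.hypot(x1[transmitted], y1[transmitted])
    return transmitted.mean(), exit_radius

frac, radii = collimator_transmission()
print(f"transmitted fraction: {frac:.4f}; 90% of transmitted protons exit "
      f"within a radius of {np.percentile(radii, 90):.2f} µm")
```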
269

Metamodel-Based Probabilistic Design for Dynamic Systems with Degrading Components

Seecharan, Turuna Saraswati January 2012 (has links)
The probabilistic design of dynamic systems with degrading components is difficult. Design of dynamic systems typically involves the optimization of a time-invariant performance measure, such as energy, that is estimated using a dynamic response, such as angular speed. The mechanistic models developed to approximate this performance measure are too complicated to be used with simple design calculations and lead to lengthy simulations. When degradation of the components is assumed, in order to determine suitable service times, estimation of the failure probability over the product lifetime is required. Again, complex mechanistic models lead to lengthy lifetime simulations when the Monte Carlo method is used to evaluate probability. To address these problems, an efficient methodology is presented for the probabilistic design of dynamic systems and for estimating the cumulative distribution function of the time to failure of a performance measure when degradation of the components is assumed. The four main steps are: 1) transforming the dynamic response into a set of static responses at discrete cycle-time steps and using Singular Value Decomposition to efficiently estimate a time-invariant performance measure that is based upon a dynamic response, 2) replacing the mechanistic model with an approximating function, known as a “metamodel”, 3) searching for the best design parameters using fast integration methods such as the First Order Reliability Method, and 4) building the cumulative distribution function using the summation of the incremental failure probabilities, which are estimated using the set-theory method, over the planned lifetime. The first step of the methodology uses design of experiments or sampling techniques to select a sample of training sets of the design variables. These training sets are then input to the computer-based simulation of the mechanistic model to produce a matrix of corresponding responses at discrete cycle-times. Although metamodels can be built at each time-specific column of this matrix, this method is slow, especially if the number of time steps is large. An efficient alternative uses Singular Value Decomposition to split the response matrix into two matrices containing only design-variable-specific and time-specific information. The second step of the methodology fits metamodels only for the significant columns of the matrix containing the design-variable-specific information. Using the time-specific matrix, a metamodel is quickly developed at any cycle-time step or for any time-invariant performance measure such as energy consumed over the cycle-lifetime. In the third step, design variables are treated as random variables and the First Order Reliability Method is used to search for the best design parameters. Finally, the components most likely to degrade are modelled using either a degradation path or a marginal distribution model and, using the First Order Reliability Method or a Monte Carlo Simulation to estimate probability, the cumulative failure probability is plotted. The speed and accuracy of the methodology using three metamodels, the Regression model, Kriging and the Radial Basis Function, are investigated. This thesis shows that the metamodel offers a significantly faster, yet accurate, alternative to using mechanistic models for both probabilistic design optimization and for estimating the cumulative distribution function. 
For design using the First-Order Reliability Method to estimate probability, the Regression Model is the fastest and the Radial Basis Function is the slowest. Kriging is shown to be accurate and faster than the Radial Basis Function but its computation time is still slower than the Regression Model. When estimating the cumulative distribution function, metamodels are more than 100 times faster than the mechanistic model and the error is less than ten percent when compared with the mechanistic model. Kriging and the Radial Basis Function are more accurate than the Regression Model and computation time is faster using the Monte Carlo Simulation to estimate probability than using the First-Order Reliability Method.
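A hedged sketch of the first two steps (an SVD split of the response matrix, then metamodels on the design-variable-specific scores), using a toy damped-oscillation "mechanistic model" and a simple polynomial regression in place of the Regression, Kriging and Radial Basis Function metamodels compared in the thesis.

```python
import numpy as np

def svd_metamodel(X, Y, n_components=3):
    """Split a (designs x time-steps) response matrix into design-specific
    scores and time-specific modes with the SVD, then fit a polynomial
    regression metamodel to each retained score."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]           # design-specific
    modes = Vt[:n_components]                                  # time-specific
    # Quadratic basis with a cross term in the two design variables.
    basis = lambda Z: np.hstack([np.ones((len(Z), 1)), Z, Z**2, Z[:, :1] * Z[:, 1:]])
    coeffs, *_ = np.linalg.lstsq(basis(X), scores, rcond=None)
    return lambda Znew: basis(Znew) @ coeffs @ modes           # predicted responses

# Toy 'mechanistic model': a damped oscillation whose amplitude and decay rate
# depend on two design variables (stand-ins for, say, inertia and damping).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)
X = rng.uniform(0.5, 2.0, (40, 2))                             # 40 training designs
Y = np.array([x[0] * np.exp(-x[1] * t) * np.cos(4 * t) for x in X])
predict = svd_metamodel(X, Y)
x_test = np.array([[1.2, 0.8]])
truth = 1.2 * np.exp(-0.8 * t) * np.cos(4 * t)
print("max |error| of the metamodel response:", np.abs(predict(x_test)[0] - truth).max())
```

In the thesis's workflow, a Kriging or Radial Basis Function metamodel would typically replace the polynomial regression used here, and the predicted scores would feed the First Order Reliability Method rather than a direct comparison against the true response.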
270

Estimation Of Expected Monetary Values Of Selected Turkish Oil Fields Using Two Different Risk Assessment Methods

Kaya, Egemen Tangut 01 January 2004 (has links) (PDF)
Most investments in the oil and gas industry involve considerable risk, with a wide range of potential outcomes for a particular project. However, many economic evaluations are based on the “most likely” results of variables that could be expected, without sufficient consideration given to other possible outcomes, and it is well known that initial estimates of all these variables have uncertainty. The data are usually obtained during drilling of the initial oil well, and the sources are geophysical (seismic surveys) for formation depths and the areal extent of the reservoir trap, well logs for formation tops and bottoms, formation porosity, water saturation and possible permeable strata, core analysis for porosity and saturation data, and DST (Drill-Stem Test) for possible oil production rates and for samples used in PVT (Pressure Volume Temperature) analysis to obtain the FVF (Formation Volume Factor), among others. The question, in evaluating the possible risks, is how certain the values of these variables are and what the probability is that these values occur in the reservoir. One of the most valuable applications of risk assessment is the estimation of the volumetric reserves of hydrocarbon reservoirs. The Monte Carlo and moment techniques consider the entire ranges of the variables in the Original Oil in Place (OOIP) formula rather than deterministic figures. In the present work, predictions were made about how the statistical distributions and descriptive statistics of porosity, thickness, area, water saturation, recovery factor, and oil formation volume factor affect the simulated OOIP values. The current work presents the case of two different oil fields in Turkey. It was found that both techniques produce similar results at the 95% probability level. The difference between the estimated values increases at the lower probability levels (50% and 5%).
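A minimal sketch of the volumetric Monte Carlo approach described above: the variables of the OOIP formula (in acre-ft field units, OOIP = 7758 A h phi (1 - Sw) / Bo stock-tank barrels) are sampled from assumed distributions and the P95/P50/P5 values are read off the simulated distribution. All distribution parameters are invented, not taken from the Turkish fields studied.

```python
import numpy as np

def ooip_monte_carlo(n=100_000, seed=11):
    """Monte Carlo distribution of Original Oil in Place (stock-tank barrels):
    OOIP = 7758 * A[acres] * h[ft] * phi * (1 - Sw) / Bo.
    The input distributions below are illustrative, not field data."""
    rng = np.random.default_rng(seed)
    area = rng.triangular(500, 800, 1200, n)            # drainage area, acres
    thickness = rng.triangular(10, 15, 25, n)           # net pay, ft
    porosity = rng.normal(0.18, 0.02, n).clip(0.05, 0.35)
    sw = rng.normal(0.35, 0.05, n).clip(0.05, 0.80)     # water saturation
    bo = rng.normal(1.20, 0.05, n).clip(1.05, 1.60)     # FVF, rb/stb
    return 7758 * area * thickness * porosity * (1 - sw) / bo

ooip = ooip_monte_carlo()
# Petroleum convention: P95 is the value exceeded with 95% probability,
# i.e. the 5th percentile of the simulated distribution.
for p in (95, 50, 5):
    print(f"P{p}: {np.percentile(ooip, 100 - p) / 1e6:.1f} MMstb")
```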
