311 |
A Monte Carlo-based Model of Gold Nanoparticle Radiosensitization. Lechtman, Eli. 10 January 2014.
The goal of radiotherapy is to operate within the therapeutic window - delivering doses of ionizing radiation to achieve locoregional tumour control, while minimizing normal tissue toxicity. A greater therapeutic ratio can be achieved by utilizing radiosensitizing agents designed to enhance the effects of radiation at the tumour. Gold nanoparticles (AuNP) represent a novel radiosensitizer with unique and attractive properties. AuNPs enhance local photon interactions, thereby converting photons into localized damaging electrons. Experimental reports of AuNP radiosensitization reveal this enhancement effect to be highly sensitive to irradiation source energy, cell line, and AuNP size, concentration and intracellular localization. This thesis explored the physics and some of the underlying mechanisms behind AuNP radiosensitization.
A Monte Carlo simulation approach was developed to investigate the enhanced photoelectric absorption within AuNPs, and to characterize the escaping energy and range of the photoelectric products. Simulations revealed a 10^3-fold increase in the rate of photoelectric absorption using low-energy brachytherapy sources compared to megavolt sources. For low-energy sources, AuNPs released electrons with ranges of only a few microns in the surrounding tissue. For higher energy sources, longer-ranged photoelectric products travelled orders of magnitude farther.
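As a hedged illustration of the basic photon Monte Carlo machinery behind results like these (not the thesis code, which used full radiation transport with tabulated cross sections), the core loop samples an exponential free path for each photon and then picks the interaction type from relative cross sections. A minimal sketch with placeholder coefficients:

```python
import random

# Minimal photon-transport sketch: sample an exponential free path for each
# photon, then pick the interaction type from relative cross sections.
# NOT the thesis code; both coefficients below are placeholders, and
# scattered photons are not followed any further in this sketch.

MU_TOTAL = 5.0         # hypothetical total attenuation coefficient (1/cm)
P_PHOTOELECTRIC = 0.8  # hypothetical photoelectric share of MU_TOTAL

def photoelectric_fraction(n_photons: int, slab_cm: float) -> float:
    """Fraction of incident photons photoelectrically absorbed in the slab."""
    absorbed = 0
    for _ in range(n_photons):
        depth = random.expovariate(MU_TOTAL)   # free path ~ Exp(mu)
        if depth < slab_cm and random.random() < P_PHOTOELECTRIC:
            absorbed += 1
    return absorbed / n_photons

print(photoelectric_fraction(100_000, slab_cm=0.1))
```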
A novel radiobiological model called the AuNP radiosensitization predictive (ARP) model was developed based on the unique nanoscale energy deposition pattern around AuNPs. The ARP model incorporated detailed Monte Carlo simulations with experimentally determined parameters to predict AuNP radiosensitization. This model compared well to in vitro experiments involving two cancer cell lines (PC-3 and SK-BR-3), two AuNP sizes (5 and 30 nm) and two source energies (100 and 300 kVp). The ARP model was then used to explore the effects of AuNP intracellular localization using 1.9 and 100 nm AuNPs, and 100 and 300 kVp source energies. The impact of AuNP localization was most significant for low-energy sources. At equal mass concentrations, AuNP size did not impact radiosensitization unless the AuNPs were localized in the nucleus. This novel predictive model of AuNP radiosensitization could help define the optimal use of AuNPs in potential clinical strategies by determining therapeutic AuNP concentrations, and recommending when active approaches to cellular accumulation are most beneficial.
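The abstract does not spell out the ARP model's equations, so the sketch below is not the ARP model; it only illustrates the generic pattern of feeding a dose-enhancement factor into a linear-quadratic (LQ) survival prediction, the standard radiobiological baseline. All parameter values are hypothetical:

```python
import math

# Generic linear-quadratic (LQ) survival illustration -- NOT the ARP model,
# whose formulation is not given in the abstract. A dose-enhancement factor
# crudely stands in for the local dose boost from AuNP photoelectric
# products; alpha and beta are hypothetical (fit per cell line in practice).

ALPHA = 0.2   # 1/Gy, hypothetical
BETA = 0.05   # 1/Gy^2, hypothetical

def surviving_fraction(dose_gy: float, enhancement: float = 1.0) -> float:
    d = dose_gy * enhancement
    return math.exp(-(ALPHA * d + BETA * d * d))

for enh in (1.0, 1.2):   # no AuNPs vs a hypothetical 20% dose enhancement
    print(f"enhancement {enh:.1f}: SF at 2 Gy = {surviving_fraction(2.0, enh):.3f}")
```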
|
312 |
Farm level economics of winter wheat production in the Canadian Prairies. Yang, Danyi. Unknown Date.
No description available.
|
313 |
Stochastic collocation methods for aeroelastic system with uncertainty. Deng, Jian. Unknown Date.
No description available.
|
314 |
The Economics of Beneficial Management Practices Adoption on Representative Alberta Crop Farms. Trautman, Dawn E. Unknown Date.
No description available.
|
315 |
Dosimetric verification of radiation therapy, including intensity modulated treatments, using an amorphous-silicon electronic portal imaging device. Chytyk-Praznik, Krista. January 2009.
Radiation therapy is continuously increasing in complexity due to technological innovation in delivery techniques, necessitating thorough dosimetric verification. Comparing accurately predicted portal dose images to measured images obtained during patient treatment can determine if a particular treatment was delivered correctly. The goal of this thesis was to create a method to predict portal dose images that was versatile and accurate enough to use in a clinical setting. All measured images in this work were obtained with an amorphous silicon electronic portal imaging device (a-Si EPID), but the technique is applicable to any planar imager. A detailed, physics-motivated fluence model was developed to characterize fluence exiting the linear accelerator head. The model was further refined using results from Monte Carlo simulations and schematics of the linear accelerator. The fluence incident on the EPID was converted to a portal dose image through a superposition of Monte Carlo-generated, monoenergetic dose kernels specific to the a-Si EPID. Predictions of clinical IMRT fields with no patient present agreed with measured portal dose images within 3% and 3 mm. The dose kernels were applied ignoring the geometrically divergent nature of incident fluence on the EPID. A computational investigation into this parallel dose kernel assumption determined its validity under clinically relevant situations. Introducing a patient or phantom into the beam required the portal image prediction algorithm to account for patient scatter and attenuation. Primary fluence was calculated by attenuating raylines cast through the patient CT dataset, while scatter fluence was determined through the superposition of pre-calculated scatter fluence kernels. Total dose in the EPID was calculated by convolving the total predicted incident fluence with the EPID-specific dose kernels. The algorithm was tested on water slabs with square fields, agreeing with measurement within 3% and 3 mm. The method was then applied to five prostate and six head-and-neck IMRT treatment courses (~1900 clinical images). Deviations between the predicted and measured images were quantified. The portal dose image prediction model developed in this thesis work has been shown to be accurate, and it was demonstrated to be able to verify patients’ delivered radiation treatments.
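The kernel-superposition step described above is, computationally, a 2-D convolution of the predicted incident fluence with EPID-specific dose kernels. A minimal sketch of that step, assuming a single stand-in kernel and FFT-based convolution (the thesis superposes many Monte Carlo-generated monoenergetic kernels, weighted by the local spectrum):

```python
import numpy as np
from scipy.signal import fftconvolve

# Sketch of the fluence-to-portal-dose step: dose = fluence (*) kernel.
# The Gaussian kernel here is a stand-in; the thesis used Monte Carlo-
# generated, monoenergetic dose kernels specific to the a-Si EPID and
# performed one convolution per energy bin.

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()  # normalize so integral dose is conserved

fluence = np.zeros((256, 256))
fluence[96:160, 96:160] = 1.0   # idealized open square field
dose = fftconvolve(fluence, gaussian_kernel(31, sigma=3.0), mode="same")
print(dose.max(), dose[128, 128])
```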
|
316 |
A computational fluid dynamic approach and Monte Carlo simulation of phantom mixing techniques for quality control testing of gamma cameras. Yang, Qing. January 2013.
To reduce unnecessary radiation exposure of clinical personnel, the optimization of procedures in quality control testing of gamma cameras was investigated. A significant component of the radiation dose incurred in quality control testing comes from handling radioactive phantoms, especially from mixing them to obtain a uniform activity concentration. Improving phantom mixing techniques therefore appeared to be a means of reducing radiation dose to personnel; however, mixing is difficult to study without a continuous dynamic tomographic acquisition system.
In the first part of this study a computational fluid dynamics model was investigated to simulate the mixing procedure. Mixing techniques of shaking and spinning were simulated using the computational fluid dynamics tool FLUENT. In the second part of this study a Siemens E.Cam gamma camera was simulated using the Monte Carlo software SIMIND. A series of validation experiments demonstrated the reliability of the Monte Carlo simulation. In the third part of this study the simulated mixing data from FLUENT were used as the source distribution in SIMIND to simulate a tomographic acquisition of the phantom. The planar data from the simulation were reconstructed using filtered back projection to produce a tomographic data set for the activity distribution in the phantom. This completed the simulation routine for phantom mixing and served as a proof of concept that the phantom mixing problem can be studied using a combination of computational fluid dynamics and nuclear medicine radiation transport simulations.
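The tomographic step of that pipeline, reconstructing the simulated planar views by filtered back projection, can be sketched with standard tools. A minimal example, assuming scikit-image's radon/iradon and a software disk phantom in place of the SIMIND projections of the FLUENT-derived activity distribution:

```python
import numpy as np
from skimage.transform import radon, iradon

# Sketch of the tomographic step only: forward-project a software phantom
# to planar views, then reconstruct with filtered back projection (FBP).
# The thesis obtained the projections from SIMIND; a uniform disk stands
# in here for the FLUENT-derived activity distribution.

size = 128
yy, xx = np.mgrid[:size, :size]
phantom = ((xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2).astype(float)

theta = np.linspace(0.0, 180.0, 120, endpoint=False)  # projection angles (deg)
sinogram = radon(phantom, theta=theta)                # simulated planar data
recon = iradon(sinogram, theta=theta)                 # FBP reconstruction

print("mean abs reconstruction error:", np.abs(recon - phantom).mean())
```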
|
317 |
Image Analysis and Diffraction by the Myosin Lattice of Vertebrate Muscle. Yoon, Chunhong. January 2008.
Closely packed myosin filaments are an example of a disordered biological array responsible for the contraction of muscle. X-ray fiber diffraction data are used to study these biomolecular assemblies, but the inherent disorder in muscle makes interpretation of the diffraction data difficult. Limited knowledge of the precise nature of the myosin lattice disorder and its effects on X-ray diffraction data is currently limiting advances in studies of muscle structure and function.
This thesis covers theoretical and computational efforts to incorporate the myosin lattice disorder in X-ray diffraction analysis. An automated image analysis program is developed to rapidly and accurately quantitate the disorder from electron micrographs of muscle cross-sections. The observed disorder is modelled as an antiferromagnetic Ising model, and the model is verified using Monte Carlo simulations. Theory and methods are developed for efficient calculation of cylindrically averaged X-ray diffraction from two-dimensional lattices that incorporate this disorder.
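For readers unfamiliar with the disorder model mentioned above: an antiferromagnetic Ising model favors anti-aligned neighboring spins, and Metropolis Monte Carlo is the standard way to sample its equilibrium configurations. A minimal sketch on a square lattice (the thesis works with the actual myosin lattice geometry, not a square grid):

```python
import numpy as np

# Metropolis Monte Carlo for a 2-D antiferromagnetic Ising model
# (H = -J * sum_<ij> s_i * s_j with J < 0, so neighbors prefer opposite
# signs). A square-lattice stand-in for the filament-orientation disorder.

rng = np.random.default_rng(0)
L, J, T = 32, -1.0, 1.5                    # lattice size, coupling, temperature
spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(spins: np.ndarray) -> None:
    """One sweep = L*L single-spin-flip Metropolis attempts."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn    # energy change of flipping s_ij
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for _ in range(200):                       # equilibrate
    metropolis_sweep(spins)

# Staggered magnetization ~ 1 in the ordered phase, ~ 0 when disordered.
checker = (-1.0) ** np.add.outer(np.arange(L), np.arange(L))
print("staggered magnetization:", abs((spins * checker).mean()))
```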
|
318 |
Significance Tests for the Measure of Raw Agreement. von Eye, Alexander; Mair, Patrick; Schauerhuber, Michael. January 2006.
Significance tests for the measure of raw agreement are proposed. First, it is shown that the measure of raw agreement can be expressed as a proportionate reduction-in-error measure, sharing this characteristic with Cohen's Kappa and Brennan and Prediger's Kappa_n. Second, it is shown that the coefficient of raw agreement is linearly related to Brennan and Prediger's Kappa_n. Therefore, using the same base model for the estimation of expected cell frequencies as Brennan and Prediger's Kappa_n, one can devise significance tests for the measure of raw agreement. Two tests are proposed. The first uses Stouffer's Z, a probability pooler. The second test is the binomial test. A data example analyzes the agreement between two psychiatrists' diagnoses. The covariance structure of the agreement cells in a rater by rater table is described. Simulation studies show the performance and power functions of the test statistics. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
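A hedged numerical sketch of the quantities involved: raw agreement as the diagonal proportion of a rater-by-rater table, its linear relation to Brennan and Prediger's Kappa_n, and a binomial test against the chance baseline 1/k (illustrative data, not the psychiatric-diagnosis example from the paper):

```python
import numpy as np
from scipy.stats import binomtest

# Illustrative 3x3 rater-by-rater table of counts; NOT the paper's data.
table = np.array([[20,  3,  2],
                  [ 4, 15,  1],
                  [ 1,  2, 12]])

n = int(table.sum())          # number of doubly rated objects
k = table.shape[0]            # number of rating categories
hits = int(np.trace(table))   # agreement cells
ra = hits / n                 # raw agreement: diagonal proportion

# Brennan & Prediger's Kappa_n uses 1/k as chance agreement, so the two
# measures are linearly related: ra = kappa_n * (1 - 1/k) + 1/k.
kappa_n = (ra - 1.0 / k) / (1.0 - 1.0 / k)

# Binomial test: are the diagonal counts compatible with chance agreement
# probability 1/k? (One of the two tests discussed in the paper.)
result = binomtest(hits, n=n, p=1.0 / k, alternative="greater")
print(f"raw agreement = {ra:.3f}, kappa_n = {kappa_n:.3f}, p = {result.pvalue:.2e}")
```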
|
319 |
A meta-analysis of Type I error rates for detecting differential item functioning with logistic regression and Mantel-Haenszel in Monte Carlo studies. Van De Water, Eva. 12 August 2014.
Differential item functioning (DIF) occurs when individuals from different groups who have equal levels of a latent trait fail to earn commensurate scores on a testing instrument. A Type I error occurs when DIF-detection methods result in unbiased items being excluded from the test, while a Type II error occurs when biased items remain on the test after DIF-detection methods have been employed. Both errors create potential issues of injustice amongst examinees and can result in costly and protracted legal action. The purpose of this research was to evaluate two methods for detecting DIF: logistic regression (LR) and Mantel-Haenszel (MH).
To accomplish this, meta-analysis was employed to summarize published and unpublished Monte Carlo studies that used these methods. The criteria for comparing the two methods were Type I error rates, the Type I error proportion (which also served as the Type I error effect size measure), deviation scores, and power rates. Monte Carlo simulation studies meeting the inclusion criteria, typically contributing 15 Type I error effect sizes each, were compared to assess how the LR and MH statistical methods function to detect DIF.
Studied variables included DIF magnitude, nature of DIF (uniform or non-uniform), number of DIF items, and test length. I found that MH was better at Type I error control while LR was better at controlling Type II error. This study also provides a valuable summary of existing DIF methods and a summary of the types of variables that have been manipulated in DIF simulation studies with LR and MH. Consequently, this meta-analysis can serve as a resource for practitioners to help them choose between LR and MH for DIF detection with regard to Type I and Type II error control, and can provide insight for parameter selection in the design of future Monte Carlo DIF studies.
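For context on one of the two compared methods, a minimal sketch of the Mantel-Haenszel DIF statistic: stratify examinees by total score, build a 2x2 (group by item-correct) table per stratum, and pool across strata via the Holland-Thayer chi-square with continuity correction. Illustrative simulated data only, not one of the meta-analyzed studies:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated Rasch-like responses with NO DIF, so the MH chi-square for any
# item should behave like a chi2(1) draw under the null.
n_per_group, n_items = 500, 20
theta = rng.normal(size=2 * n_per_group)     # abilities, both groups
group = np.repeat([0, 1], n_per_group)       # 0 = reference, 1 = focal
b = rng.normal(size=n_items)                 # item difficulties
p_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))
resp = (rng.random((2 * n_per_group, n_items)) < p_correct).astype(int)

def mh_chi2(resp, group, item):
    """Holland-Thayer MH chi-square for one studied item."""
    total = resp.sum(axis=1)                 # matching variable: total score
    A = EA = VarA = 0.0
    for s in np.unique(total):
        idx = total == s
        ref = idx & (group == 0)
        a = resp[ref, item].sum()            # reference-group correct count
        m1 = resp[idx, item].sum()           # stratum correct count
        nR, T = ref.sum(), idx.sum()
        if T < 2 or m1 == 0 or m1 == T:      # degenerate stratum: skip
            continue
        A += a
        EA += nR * m1 / T
        VarA += nR * (T - nR) * m1 * (T - m1) / (T**2 * (T - 1))
    return (abs(A - EA) - 0.5) ** 2 / VarA   # ~ chi2(1) under no DIF

print("MH chi-square, item 0:", round(mh_chi2(resp, group, 0), 3))
```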
|
320 |
Pricing barrier options with numerical methods. De Ponte, Candice Natasha. January 2013.
Barrier options are becoming more popular, mainly due to the reduced cost of holding a barrier option compared to holding a standard call/put option. Exotic options are, however, difficult to price, since the payoff functions depend on the whole path of the underlying process rather than on its value at a specific time instant. A barrier option is a path-dependent option: the payoff depends on the path followed by the price of the underlying asset, which makes barrier option prices especially sensitive to volatility.
For basic exchange-traded options, analytical prices based on the Black-Scholes formula can be computed; these prices are influenced by supply and demand. There is not always an analytical solution for an exotic option, so it is advantageous to have methods that efficiently provide accurate numerical solutions. This study gives a literature overview and compares implementations of some available numerical methods applied to barrier options. The three numerical methods that will be adapted and compared for the pricing of barrier options are:
• Binomial Tree Methods
• Monte Carlo Methods
• Finite Difference Methods
/ Thesis (MSc (Applied Mathematics))--North-West University, Potchefstroom Campus, 2013
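As a taste of the Monte Carlo method from the list above, a minimal pricer for a down-and-out European call under geometric Brownian motion. Parameter values are illustrative, and the barrier is monitored only at the simulation steps (a continuously monitored barrier would need a finer grid or a Brownian-bridge correction):

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo pricing of a down-and-out European call under GBM.
# Illustrative parameters; barrier checked at step dates only.
S0, K, B = 100.0, 100.0, 85.0        # spot, strike, knock-out barrier
r, sigma, T = 0.05, 0.25, 1.0        # rate, volatility, maturity (years)
n_paths, n_steps = 20_000, 252
dt = T / n_steps

z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(log_paths)           # simulated price paths

alive = (S.min(axis=1) > B) & (S0 > B)   # knocked out if barrier ever touched
payoff = np.where(alive, np.maximum(S[:, -1] - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"down-and-out call ~ {price:.3f} +/- {stderr:.3f}")
```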
|