471 |
Quantifying synergy value in mergers and acquisitions / De Graaf, Albert, 06 1900 (has links)
Mergers and acquisitions have been demonstrated to create synergies, but not in all cases.
Current research reveals that where synergies exist, these seem to accrue to the shareholders
of the selling companies. Given the limitations of our qualitative research design, we find that it
is important to quantify synergy before the acquisition, preferably by applying certain best
practices. In an attempt to enhance understanding of the phenomenon, we find that several
types of synergy exist and that their origins include efficiencies, such as economies of scale
and economies in innovative activity. We further find that the bid price is an important indicator
of success and that its maximum should not exceed the intrinsic value of the target, plus the
value of synergies between the bidder and target. We further find that best practices exist in
quantifying cost and revenue synergies and describe these separately per origin. / Management Accounting / M.Com. (Accounting)
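As a hedged restatement of the bid-price ceiling described above, in notation of our own rather than the author's:

```latex
% Bid-price ceiling (illustrative notation):
%   P_max : maximum defensible bid price
%   V_T   : intrinsic (stand-alone) value of the target
%   S_BT  : value of the synergies between bidder B and target T
P_{\max} \;\le\; V_{T} \;+\; S_{BT}
```

Any premium paid above V_T must be recovered from S_BT; otherwise the deal transfers value to the selling shareholders, consistent with the finding that synergies tend to accrue to them.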
|
472 |
A probabilistic comparison of times to flashover in a compartment with wooden and non-combustible linings considering variable fuel loads / Studhalter, Jakob, January 2012 (has links)
Prescriptive fire safety codes regulate the use of combustible room linings to reduce fire risk. These regulations are based on classification systems which designate materials according to their relative hazard when exposed to a standard fire scenario. However, no quantitative data sets on the fire risk of wooden lining materials exist which take into account relevant uncertainties, such as movable fuel loads in compartments.
This work is a comparative risk analysis on the influence of wooden linings on the time to flashover in a compartment, considering uncertainties in the fuel load configuration. A risk model is set up for this purpose using B-RISK, a probabilistic fire design and research tool currently under development at BRANZ (Building Research Association of New Zealand) and the University of Canterbury. The risk model calculates fire spread in a compartment between fuel load items and from fuel load items to combustible linings. Multiple iterations are performed considering varying fuel load arrangements and input values sampled from distributions (Monte-Carlo simulation).
The functionality and applicability of the risk model are demonstrated by comparing the model with experiments from the literature. The model assumptions are described in detail. Some of the model inputs are defined as distributions in order to account for uncertainty. Parametric studies are conducted in order to analyse the sensitivity of the results to input parameters which cannot be described as distributions.
Probabilistic times to flashover are presented and discussed for an ISO 9705 compartment considering varying movable fuel loads and different lining configurations. The fuel load is typical for a hotel room occupancy. Effects of suppression measures are not considered. It is shown that flashover occurs approximately 60 seconds earlier if walls and ceiling are lined with wooden materials than if all linings are non-combustible. This value refers to the 5th percentiles of the time to flashover, i.e. in 5% of the cases flashover has occurred and in 95% of the cases flashover has not (yet) occurred. Referring to 50th percentiles (median values), the difference is approximately 180 seconds.
Furthermore, it is shown that with wooden wall and ceiling linings flashover occurs in approximately 95% of the iterations, whereas with non-combustible linings 86% of the iterations lead to flashover. After 900 seconds, flashover has occurred in 90% of the iterations if walls and ceiling are lined with wooden materials, and in 77% of the iterations if the linings are non-combustible. Using different wooden lining materials (non-fire retardant plywood, fire retardant plywood, and MDF) has no significant effect on the probabilistic times to flashover. Varying the fuel load energy density has an influence only when all linings are non-combustible and when the fuel load energy density is relatively low (100–200 MJ/m²).
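A minimal sketch of how percentile and exceedance comparisons of this kind can be read off Monte Carlo output; the sample arrays below are invented stand-ins, not B-RISK results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Monte Carlo samples of time to flashover (seconds); np.inf marks
# iterations in which flashover never occurred within the simulated time.
t_wood = np.where(rng.random(5000) < 0.95, rng.gamma(4.0, 90.0, 5000), np.inf)
t_noncomb = np.where(rng.random(5000) < 0.86, rng.gamma(4.0, 130.0, 5000), np.inf)

def summarise(times, horizon=900.0):
    reached = np.isfinite(times)
    return {
        "P(flashover)": reached.mean(),
        "P(flashover <= horizon)": (times <= horizon).mean(),
        "5th percentile [s]": np.percentile(times[reached], 5),
        "median [s]": np.percentile(times[reached], 50),
    }

for label, t in [("wooden linings", t_wood), ("non-combustible", t_noncomb)]:
    print(label, summarise(t))
```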
This work contains recommendations regarding the further development of B-RISK, the research into the fire risk connected with wooden room linings, and suggestions regarding the further development of prescriptive fire safety codes.
|
473 |
A Computer-Based Decision Tool for Prioritizing the Reduction of Airborne Chemical Emissions from Canadian Oil Refineries Using Estimated Health Impacts / Gower, Stephanie Karen, January 2007 (has links)
Petroleum refineries emit a variety of airborne substances which may be harmful to human health. HEIDI II (Health Effects Indicators Decision Index II) is a computer-based decision analysis tool which assesses airborne emissions from Canada's oil refineries for reduction, based on ordinal ranking of estimated health impacts. The model was designed by a project team within NERAM (Network for Environmental Risk Assessment and Management) and assembled with significant stakeholder consultation. HEIDI II is publicly available as a deterministic Excel-based tool which ranks 31 air pollutants based on predicted disease incidence or estimated DALYs (disability adjusted life years). The model includes calculations to account for average annual emissions, ambient concentrations, stack height, meteorology/dispersion, photodegradation, and the population distribution around each refinery. Different formulations of continuous dose-response functions were applied to nonthreshold-acting air toxics, threshold-acting air toxics, and nonthreshold-acting CACs (criteria air contaminants). An updated probabilistic version of HEIDI II was developed using Matlab code to account for parameter uncertainty and identify key leverage variables. Sensitivity analyses indicate that parameter uncertainty in the model variables for annual emissions and for concentration-response/toxicological slopes has the greatest leverage on predicted health impacts. Scenario analyses suggest that the geographic distribution of population density around a refinery site is an important predictor of total health impact. Several ranking metrics (predicted case incidence, simple DALY, and complex DALY) and ordinal ranking approaches (deterministic model, average from Monte Carlo simulation, test of stochastic dominance) were used to identify priority substances for reduction; the results were similar in each case. The predicted impacts of primary and secondary particulate matter (PM) consistently outweighed those of the air toxics. Nickel, PAH (polycyclic aromatic hydrocarbons), BTEX (benzene, toluene, ethylbenzene and xylene), sulphuric acid, and vanadium were consistently identified as priority air toxics at refineries where they were reported as emissions. For many substances, the difference in rank order is indeterminate when parametric uncertainty and variability are considered.
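A hedged sketch of the Monte Carlo ordinal-ranking idea described above; the substances, emission rates, slope factors, and uncertainty factors are placeholder values, not HEIDI II inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder annual emissions (tonnes/yr) and DALY-per-tonne slope factors,
# each perturbed by multiplicative (lognormal) parameter uncertainty.
substances = {"benzene": (120.0, 2e-4), "nickel": (3.0, 8e-3), "PM2.5": (400.0, 5e-4)}

n_iter = 10_000
ranks = {name: [] for name in substances}
for _ in range(n_iter):
    impacts = {
        name: emis * rng.lognormal(0.0, 0.3) * slope * rng.lognormal(0.0, 0.5)
        for name, (emis, slope) in substances.items()
    }
    # Ordinal rank within this iteration: 1 = largest predicted DALY burden.
    ordered = sorted(impacts, key=impacts.get, reverse=True)
    for rank, name in enumerate(ordered, start=1):
        ranks[name].append(rank)

for name, r in ranks.items():
    print(f"{name}: mean rank {np.mean(r):.2f}")
```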
|
474 |
Economic analysis and Monte Carlo simulation of community wind generation in rural western Kansas / Halling, Todd, January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Anil Pahwa / Energy costs are rising, supplies of fossil fuels are diminishing, and environmental concerns surrounding power generation in the United States are at an all-time high. The United States is continuing to push all states for energy reform, and where better for Kansas to look than wind energy? Kansas is second among all states in wind generation potential; however, the best wind generation sites are located predominantly in sparsely populated areas, creating energy transportation problems. Due to these issues, interest in community wind projects has been increasing. To determine the economic potential of community wind generation, a distribution system in rural western Kansas where interest in community wind exists was examined, and a feasibility study based on historical data, economic factors, and current grid constraints was performed. Since the majority of the load in this area is from pivot-point irrigation systems, load distributions were created based on temperature ranges instead of a linear progression of concurrent days. To test the economic viability, three rate structures were examined: flat energy rate, demand rate, and critical peak pricing. A Monte Carlo simulation was designed and run to simulate twenty-year periods based on the available historical data; twenty-year net present worth calculations were performed to assess economic viability. A sensitivity analysis was then performed to examine the effects of changes in turbine size and energy rate scale. Finally, an energy storage analysis was performed to examine the economic viability of various sizes of battery storage systems.
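A simplified sketch of a twenty-year net present worth Monte Carlo of the kind described above; the capital cost, energy rate, O&M cost, and production distribution are assumed placeholder values, not the thesis data.

```python
import numpy as np

rng = np.random.default_rng(2)

def npv(cash_flows, rate):
    """Net present worth of a series of annual cash flows (year 0 first)."""
    years = np.arange(len(cash_flows))
    return np.sum(cash_flows / (1.0 + rate) ** years)

# Placeholder community-wind economics: capital cost, a flat energy rate,
# and a 20-year horizon with weather-driven variation in annual energy.
capital_cost = 3.5e6          # $ for a ~2 MW turbine (assumed)
energy_rate = 0.085           # $/kWh, assumed flat rate structure
discount_rate = 0.06

n_iter = 10_000
results = np.empty(n_iter)
for i in range(n_iter):
    annual_energy = rng.normal(5.5e6, 0.8e6, size=20).clip(min=0)  # kWh/yr
    revenue = annual_energy * energy_rate - 60_000                 # less assumed O&M
    cash_flows = np.concatenate(([-capital_cost], revenue))
    results[i] = npv(cash_flows, discount_rate)

print(f"P(NPV > 0) = {np.mean(results > 0):.2f}, median NPV = {np.median(results):,.0f} $")
```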
|
475 |
On estimating variances for Gini coefficients with complex surveys: theory and application / Hoque, Ahmed, 29 September 2016 (has links)
Obtaining variances for the plug-in estimator of the Gini coefficient of inequality has preoccupied researchers for decades, with the proposed analytic formulae often regarded as too cumbersome to apply and as usually based on the assumption of an iid structure. We examine several variance estimation techniques for a Gini coefficient estimator obtained from a complex survey, a sampling design often used to obtain sample data in inequality studies. In the first part of the dissertation, we prove that Bhattacharya’s (2007) asymptotic variance estimator when data arise from a complex survey is equivalent to an asymptotic variance estimator derived by Binder and Kovačević (1995) more than a decade earlier. In addition, to aid applied researchers, we also show how auxiliary regressions can be used to generate the plug-in Gini estimator and its asymptotic variance, irrespective of the sampling design.
In the second part of the dissertation, using Monte Carlo (MC) simulations with 36 data generating processes under the beta, lognormal, chi-square, and the Pareto distributional assumptions with sample data obtained under various complex survey designs, we explore two finite sample properties of the Gini coefficient estimator: bias of the estimator and empirical coverage probabilities of interval estimators for the Gini coefficient. We find high sensitivity to the number of strata and the underlying distribution of the population data. We compare the performance of two standard normal (SN) approximation interval estimators using the asymptotic variance estimators of Binder and Kovačević (1995) and Bhattacharya (2007), another SN approximation interval estimator using a traditional bootstrap variance estimator, and a standard MC bootstrap percentile interval estimator under a complex survey design. With few exceptions, namely with small samples and/or highly skewed distributions of the underlying population data where the bootstrap methods work relatively better, the SN approximation interval estimators using asymptotic variances perform quite well.
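A compact sketch of a weighted plug-in Gini estimator with a naive bootstrap standard-normal interval, as a point of reference for the comparisons above; the data are synthetic, and the bootstrap here ignores the stratification and clustering that a design-based (Binder–Kovačević/Bhattacharya) variance would respect.

```python
import numpy as np

rng = np.random.default_rng(3)

def gini_plugin(y, w):
    """Weighted plug-in Gini: half the weighted mean absolute difference
    divided by the weighted mean."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    mu = np.average(y, weights=w)
    mad = np.sum(w[:, None] * w[None, :] * np.abs(y[:, None] - y[None, :]))
    return mad / (2.0 * w.sum() ** 2 * mu)

# Hypothetical sample: lognormal "income" with unequal survey weights.
y = rng.lognormal(mean=0.0, sigma=0.8, size=400)
w = rng.uniform(0.5, 2.0, size=400)
g_hat = gini_plugin(y, w)

# Naive resampling bootstrap for the variance (illustrative only).
idx = np.arange(len(y))
boots = []
for _ in range(500):
    b = rng.choice(idx, size=len(y), replace=True)
    boots.append(gini_plugin(y[b], w[b]))
se = np.std(boots, ddof=1)

print(f"Gini = {g_hat:.3f}, 95% SN interval = ({g_hat - 1.96*se:.3f}, {g_hat + 1.96*se:.3f})")
```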
Finally, health data on the body mass index and hemoglobin levels for Bangladeshi women and children, respectively, are used as illustrations. Inequality analysis of these two important indicators provides a better understanding about the health status of women and children. Our empirical results show that statistical inferences regarding inequality in these well-being variables, measured by the Gini coefficients, based on Binder and Kovačević’s and Bhattacharya’s asymptotic variance estimators, give equivalent outcomes. Although the bootstrap approach often generates slightly smaller variance estimates in small samples, the hypotheses test results or widths of interval estimates using this method are practically similar to those using the asymptotic variance estimators.
Our results are useful, both theoretically and practically, as the asymptotic variance estimators are simpler and require less time to calculate compared to those generated by bootstrap methods, as often previously advocated by researchers. These findings suggest that applied researchers can often be comfortable in undertaking inferences about the inequality of a well-being variable using the Gini coefficient employing asymptotic variance estimators that are not difficult to calculate, irrespective of whether the sample data are obtained under a complex survey or a simple random sample design. / Graduate / 0534 / 0501 / 0463 / aahoque@gmail.com
|
476 |
Risk Estimation of Nonlinear Time Domain Dynamic Analyses of Large Systems / Azizsoltani, Hamoon, January 2017 (has links)
A novel concept of multiple deterministic analyses is proposed to design safer and more damage-tolerant structures, particularly when excited by dynamic loading, including seismic loading, in the time domain. Since the presence of numerous sources of uncertainty cannot be avoided or overlooked, the underlying risk is estimated to compare design alternatives. To represent the implicit performance functions explicitly, the basic response surface method is significantly improved. Then, several surrogate models are proposed. Advanced factorial design and the Kriging method are used as the major building blocks. Using these basic schemes, seven alternatives are proposed. The accuracies of these schemes are verified using basic Monte Carlo simulations. After verifying all seven alternatives, the capabilities of the three most desirable schemes are compared using a case study. They correctly identified and correlated damaged states of structural elements in terms of probability of failure using only a few hundred deterministic analyses. The modified Kriging method appears to be the best technique considering both efficiency and accuracy. Based on the estimated probabilities of failure, the post-Northridge seismic design criteria are found to be appropriate.
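A generic illustration of the surrogate-plus-simulation idea, using an off-the-shelf scikit-learn Gaussian-process (Kriging) regressor rather than the author's modified Kriging scheme; the performance function and input distributions are invented for the sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

# Stand-in for an expensive nonlinear time-domain analysis: the performance
# function g(x) < 0 means failure (e.g. a drift limit exceeded). Illustrative only.
def expensive_analysis(x):
    return 3.0 - x[:, 0] ** 2 / 8.0 - x[:, 1]

# A few hundred deterministic analyses at sampled design points ...
X_train = rng.normal(0.0, 1.0, size=(200, 2))
g_train = expensive_analysis(X_train)

# ... are used to fit a Kriging (Gaussian-process) surrogate of g(x).
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, g_train)

# Cheap Monte Carlo on the surrogate then estimates the probability of failure.
X_mc = rng.normal(0.0, 1.0, size=(50_000, 2))
pf = np.mean(gp.predict(X_mc) < 0.0)
print(f"Estimated probability of failure: {pf:.4f}")
```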
After verifying the proposed method, a site-specific seismic safety assessment method for nonlinear structural systems is proposed to generate a suite of ground excitation time histories. The risk information is used to design more damage-tolerant structures. The proposed procedure is verified and showcased by estimating the risks associated with three buildings designed by professional experts in the Los Angeles area satisfying the post-Northridge design criteria for the overall lateral deflection and inter-story drift. The accuracy of the estimated risk is again verified using the Monte Carlo simulation technique. In all cases, the probabilities of collapse are found to be less than 10% when excited by the risk-targeted maximum considered earthquake ground motion, satisfying the intent of the code. The spread in the reliability indexes for each building for both limit states cannot be overlooked, indicating the significance of the frequency content. The inter-story drift is found to be more critical than the overall lateral displacement. The reliability indexes for both limit states are similar only in a few cases. The author believes that the proposed methodology is an alternative to the classical random vibration and simulation approaches. The proposed site-specific seismic safety assessment procedure can be used by practicing engineers for routine applications.
The proposed reliability methodology is not problem-specific. It is capable of handling systems with different levels of complexity and scalability, and it is robust enough for multi-disciplinary routine applications.
In order to show the multi-disciplinary application of the proposed methodology, the probability of failure of lead-free solders in Ball Grid Array 225 surface-mount packaging for a given loading cycle is estimated. The accuracy of the proposed methodology is verified with the help of Monte Carlo simulation. After the verification, the profile of probability of failure versus loading cycles is calculated. Such a comprehensive study of the lifetime behavior and the corresponding reliability analyses can be useful for sensitive applications.
|
477 |
The Noncommutative Standard Model: Construction Beyond Leading Order in Theta and Collider Phenomenology / Das Nichtkommutative Standardmodell: Konstruktion jenseits der führenden Ordnung in Theta und Phänomenologie an Teilchenbeschleunigern / Alboteanu, Ana Maria, January 2007 (has links) (PDF)
Trotz seiner präzisen Übereinstimmung mit dem Experiment ist die Gültigkeit des Standardmodells (SM) der Elementarteilchenphysik bislang nur bis zu einer Energieskala von einigen hundert GeV gesichert. Abgesehen davon erweist sich schon das Einbinden der Gravitation in einer einheitlichen Beschreibung aller fundamentalen Wechselwirkungen als ein durch gewöhnliche Quantenfeldtheorie nicht zu lösendes Problem. Das Interesse an Quantenfeldtheorien auf einer nichtkommutativen Raumzeit wurde durch deren Vorhersage als niederenergetischer Limes von Stringtheorien erweckt. Unabhängig davon kann die Nichtlokalität einer solchen Theorie den Rahmen zur Einbeziehung der Gravitation in eine vereinheitlichende Theorie liefern. Die Hoffnung besteht, dass die Energieskala Lambda_NC, ab der solche Effekte sichtbar werden können und für die es keinerlei theoretische Vorhersagen gibt, schon bei der nächsten Generation von Beschleunigern erreicht wird. Auf dieser Annahme beruht auch die vorliegende Arbeit, in deren Rahmen eine mögliche Realisierung von Quantenfeldtheorien auf nichtkommutativer Raumzeit auf ihre phänomenologischen Konsequenzen hin untersucht wurde. Diese Arbeit ist durch fehlende LHC (Large Hadron Collider) Studien für nichtkommutative Quantenfeldtheorien motiviert. Im ersten Teil des Vorhabens wurde der hadronische Prozess pp -> Z gamma -> l+l- gamma am LHC sowie die Elektron-Positron-Paarvernichtung in ein Z-Boson und ein Photon am ILC (International Linear Collider) auf nichtkommutative Signale hin untersucht. Die phänomenologischen Untersuchungen wurden im Rahmen dieses Modells in erster Ordnung des nichtkommutativen Parameters Theta durchgeführt. Eine nichtkommutative Raumzeit führt zur Brechung der Rotationsinvarianz bezüglich der Strahlrichtung der einlaufenden Teilchen. Im differentiellen Wirkungsquerschnitt für Streuprozesse äussert sich dieses als eine azimuthale Abhängigkeit, die weder im SM noch in anderen Modellen jenseits des SM auftritt. Diese klare, für nichtkommutative Theorien typische Signatur kann benutzt werden, um nichtkommutative Modelle von anderen Modellen, die neue Physik beschreiben, zu unterscheiden. Auch hat es sich erwiesen, dass die azimuthale Abhängigkeit des Wirkungsquerschnittes am besten dafür geeignet ist, die Sensitivität des LHC und des ILC auf die nichtkommutative Skala Lambda_NC zu bestimmen. Im phänomenologischen Teil der Arbeit wurde herausgefunden, dass Messungen am LHC für den Prozess pp -> Z gamma -> l+l- gamma nur in bestimmten Fällen auf nichtkommutative Effekte sensitiv sind. Für diese Fälle wurde für die nichtkommutative Energieskala Lambda_NC eine Grenze von Lambda_NC > 1.2 TeV bestimmt. Diese ist um eine Größenordnung höher als die Grenzen, die von bisherigen Beschleunigerexperimenten hergeleitet wurden. Bei einem zukünftigen Linearbeschleuniger, dem ILC, wird die Grenze auf Lambda_NC im Prozess e^+e^- -> Z gamma -> l^+ l^- gamma wesentlich erhöht (bis zu 6 TeV). Abgesehen davon ist dem ILC gerade der für den LHC kaum zugängliche Parameterbereich der nichtkommutativen Theorie erschlossen, was die Komplementarität der beiden Beschleunigerexperimente hinsichtlich der nichtkommutativen Parameter zeigt. Der zweite Teil der Arbeit entwickelte sich aus der Notwendigkeit heraus, den Gültigkeitsbereich der Theorie zu höheren Energien hin zu erweitern. Dafür haben wir den neutralen Sektor des nichtkommutativen SM um die nächste Ordnung in Theta ergänzt.
Es stellte sich wider Erwarten heraus, dass die Theorie dabei um einige freie Parameter erweitert werden muss. Die zusätzlichen Parameter sind durch die homogenen Lösungen der Eichäquivalenzbedingungen gegeben, welche Ambiguitäten der Seiberg-Witten-Abbildungen darstellen. Die allgemeine Erwartung war, dass die Ambiguitäten Feldredefinitionen entsprechen und daher in den Streumatrixelementen verschwinden müssen. In dieser Arbeit wurde jedoch gezeigt, dass dies ab der zweiten Ordnung in Theta nicht der Fall ist und dass die Nichteindeutigkeit der Seiberg-Witten-Abbildungen sich durchaus in Observablen niederschlägt. Die Vermutung besteht, dass jede neue Ordnung in Theta neue Parameter in die Theorie einführt. Wie weit und in welche Richtung die Theorie auf nichtkommutativer Raumzeit entwickelt werden muss, kann jedoch nur das Experiment entscheiden. / Despite its precise agreement with the experiment, the validity of the standard model (SM) of elementary particle physics is ensured only up to a scale of several hundred GeV so far. Even more, the inclusion of gravity into a unifying theory poses a problem which cannot be solved by ordinary quantum field theory (QFT). String theory, which is the most popular ansatz for a unified theory, predicts QFT on noncommutative space-time as a low energy limit. Nevertheless, independently of the motivation given by string theory, the nonlocality inherent to noncommutative QFT opens up the possibility for the inclusion of gravity. There are no theoretical predictions for the energy scale Lambda_NC at which noncommutative effects arise, and it can be assumed to lie in the TeV range, which is the energy range probed by the next generation of colliders. Within this work we study the phenomenological consequences of a possible realization of QFT on noncommutative space-time relying on this assumption. The motivation for this thesis was given by the gap in the range of phenomenological studies of noncommutative effects in collider experiments, due to the absence in the literature of Large Hadron Collider (LHC) studies regarding noncommutative QFTs. In the first part we thus performed a phenomenological analysis of the hadronic process pp -> Z gamma -> l^+l^- gamma at the LHC and of electron-positron pair annihilation into a Z boson and a photon at the International Linear Collider (ILC). The noncommutative extension of the SM considered within this work relies on two building blocks: the Moyal-Weyl star-product of functions on ordinary space-time and the Seiberg-Witten maps. The latter relate the ordinary fields and parameters to their noncommutative counterparts such that ordinary gauge transformations induce noncommutative gauge transformations. This requirement is expressed by a set of inhomogeneous differential equations (the gauge equivalence equations) which are solved by the Seiberg-Witten maps order by order in the noncommutative parameter Theta. Thus, by means of the Moyal-Weyl star-product and the Seiberg-Witten maps, a noncommutative extension of the SM as an effective theory, expanded in powers of Theta, can be achieved, providing the framework of our phenomenological studies. A consequence of the noncommutativity of space-time is the violation of rotational invariance with respect to the beam axis. This effect shows up in the azimuthal dependence of cross sections, which is absent in the SM as well as in other models beyond the SM.
Thus, the azimuthal dependence of the cross section is a typical signature of noncommutativity and can be used in order to discriminate it against other new physics effects. We have found this dependence to be best suited for deriving the sensitivity bounds on the noncommutative scale Lambda_NC. By studying pp -> Z gamma -> l^+l^- gamma to first order in the noncommutative parameter Theta, we show in the first part of this work that measurements at the LHC are sensitive to noncommutative effects only in certain cases, giving bounds on the noncommutative scale of Lambda_NC > 1.2 TeV. Our result improved the bounds present in the literature coming from past and present collider experiments by one order of magnitude. In order to explore the whole parameter range of the noncommutativity, ILC studies are required. By means of e^+e^- -> Z gamma -> l^+l^- gamma to first order in Theta we have shown that ILC measurements are complementary to LHC measurements of the noncommutative parameters. In addition, the bounds on Lambda_NC derived from the ILC are significantly higher and reach Lambda_NC > 6 TeV. The second part of this work arose from the necessity to enlarge the range of validity of our model towards higher energies. Thus, we expand the neutral current sector of the noncommutative SM to second order in Theta. We found that, against the general expectation, the theory must be enlarged by additional parameters. The new parameters enter the theory as ambiguities of the Seiberg-Witten maps. The latter are not uniquely determined and differ by homogeneous solutions of the gauge equivalence equations. The expectation was that the ambiguities correspond to field redefinitions and therefore should vanish in scattering matrix elements. However, we proved that this is not the case, and the ambiguities do affect physical observables. Our conjecture is that every order in Theta will introduce new parameters to the theory. However, only the experiment can decide to what extent efforts with still higher orders in Theta are reasonable and will also give directions for the development of theoretical models of noncommutative QFTs.
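For orientation, the two building blocks named in the abstract in their conventional lowest-order form; this is our transcription of the standard literature expressions, not of the thesis, and the Seiberg-Witten map shown is one representative solution, fixed only up to the homogeneous (ambiguity) terms discussed above.

```latex
% Moyal-Weyl star product, expanded to first order in theta:
(f \star g)(x) \;=\; f(x)\,
   \exp\!\Big(\tfrac{i}{2}\,\theta^{\mu\nu}\,
   \overleftarrow{\partial}_{\mu}\,\overrightarrow{\partial}_{\nu}\Big)\, g(x)
 \;=\; f\,g \;+\; \tfrac{i}{2}\,\theta^{\mu\nu}\,\partial_{\mu}f\,\partial_{\nu}g
 \;+\; \mathcal{O}(\theta^{2})

% Gauge-equivalence condition defining the Seiberg-Witten maps ...
\hat{A}_{\mu}\big(A+\delta_{\lambda}A\big)
   \;=\; \hat{A}_{\mu}(A) \;+\; \hat{\delta}_{\hat{\lambda}}\hat{A}_{\mu}(A)

% ... and one representative first-order solution for the gauge field
% (unique only up to homogeneous terms, i.e. the ambiguities discussed above):
\hat{A}_{\mu} \;=\; A_{\mu}
   \;-\; \tfrac{1}{4}\,\theta^{\nu\rho}\,
   \big\{A_{\nu},\,\partial_{\rho}A_{\mu}+F_{\rho\mu}\big\}
   \;+\; \mathcal{O}(\theta^{2})
```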
|
478 |
Superstructure Bridge Selection Based on Bridge Life-Cycle Cost Analysis / Stefan Leonardo Leiva Maldonado (6853484), 14 August 2019 (has links)
Life cycle cost analysis (LCCA) has been defined as a method to assess the total cost of a project. It is a simple tool to use when a single project has different alternatives that fulfill the original requirements. Different alternatives could differ in initial investment and in operational and maintenance costs, among other long-term costs. The cost involved in building a bridge depends upon many different factors. Moreover, long-term cost needs to be considered to estimate the true overall cost of the project and determine its life-cycle cost. Without careful consideration of the long-term costs and full life-cycle costing, current investment decisions that look attractive could result in a waste of economic resources in the future. This research is focused on short and medium span bridges (between 30 ft and 130 ft), which represent 65% of the NBI Indiana bridge inventory.
Bridges are categorized in three different groups of span ranges. Different superstructure types are considered for both concrete and steel options. Types considered include bulb tees, AASHTO prestressed beams, slab bridges, prestressed concrete box beams, steel beams, steel girders, folded plate girders, and steel beams simply supported for dead load and continuous for live load (SDCL). A design plan composed of simply supported bridges and continuous span arrangements was carried out. An LCCA-based analysis for short and medium span bridges in Indiana is presented for different span ranges and span configurations.
Deterministic and stochastic analyses were done for all the span ranges considered. Monte Carlo simulations (MCS) were used, and the categorization of the different superstructure alternatives was based on stochastic dominance. First, second, almost first, and almost second stochastic dominance rules were used to determine the efficient set for each span length and all span configurations. Cost-effective life-cycle cost profiles for each superstructure type were proposed. Additionally, the top three cost-effective superstructure alternatives depending on the span length are presented, as well as the optimum superstructure type set for both simply supported and continuous beams. Results will help designers to consider the most cost-effective bridge solution for new projects, resulting in cost savings for the agencies involved.
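A small sketch of a first-order stochastic dominance check between two Monte Carlo life-cycle cost samples (lower cost preferred); the cost distributions are placeholders, and the second-order and "almost" dominance rules used in the thesis would need analogous integrated-CDF tests.

```python
import numpy as np

rng = np.random.default_rng(5)

def fsd_dominates(cost_a, cost_b, n_grid=500):
    """First-order stochastic dominance for life-cycle COSTS (lower is better):
    A dominates B if, at every cost level x, P(cost_A <= x) >= P(cost_B <= x),
    with strict inequality somewhere."""
    grid = np.linspace(min(cost_a.min(), cost_b.min()),
                       max(cost_a.max(), cost_b.max()), n_grid)
    cdf_a = np.searchsorted(np.sort(cost_a), grid, side="right") / len(cost_a)
    cdf_b = np.searchsorted(np.sort(cost_b), grid, side="right") / len(cost_b)
    return bool(np.all(cdf_a >= cdf_b) and np.any(cdf_a > cdf_b))

# Hypothetical Monte Carlo life-cycle costs (present worth, $) of two
# superstructure alternatives -- placeholder distributions, not thesis data.
lcc_prestressed = rng.normal(1.00e6, 0.12e6, 10_000)
lcc_steel = rng.normal(1.15e6, 0.10e6, 10_000)

print("Prestressed FSD-dominates steel:", fsd_dominates(lcc_prestressed, lcc_steel))
```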
|
479 |
Contribution à l'étude du comportement dynamique d'un système d'engrenage en présence d'incertitudes / Contribution to the study of the dynamic behavior of a gear system in the presence of uncertainties / Guerine, Ahmed, 19 September 2016 (links)
Dans le cadre de la présente thèse, on a procédé à l’étude du comportement dynamique d’un système d’engrenage comportant des paramètres incertains. Une des principales hypothèses faite dans l’utilisation des méthodes de prise en compte des incertitudes, est que le modèle est déterministe, c’est-à-dire que les paramètres utilisés dans le modèle ont une valeur définie et invariante. Par ailleurs, la connaissance du domaine de variation de la réponse dynamique du système dues aux incertitudes qui découle des coefficients d’amortissement, des raideurs d’engrènement, la présence de frottement entre les pièces, les défauts de montage et de fabrication ou l’inertie des pales dans le cas d’éolienne est essentielle. Pour cela, dans la première partie, on s’applique à décrire la réponse dynamique d’une transmission par engrenage comportant des paramètres modélisés par des variables aléatoires. Pour ce faire, nous utilisons la simulation de Monte Carlo, la méthode de perturbation et la méthode de projection sur un chaos polynomial. Dans la seconde partie, deux approches sont utilisées pour analyser le comportement dynamique d’un système d’engrenage d’éolienne : l’approche probabiliste et l’approche ensembliste basée sur la méthode d’analyse par intervalles. L'objectif consiste à comparer les deux approches pour connaitre leurs avantages et inconvénients en termes de précision et temps de calcul. / In the present work, the dynamic behavior of a gear system with uncertain parameters is studied. One of the principal hypotheses in the use of methods for taking into account uncertainties is that the model is deterministic, that is to say that parameters used in the model have a defined and fixed value. Furthermore, knowledge of the variation in the dynamic response of a gear system due to uncertainties in the damping coefficients, mesh stiffness, friction coefficient, assembly defects, manufacturing defects, or the blade inertia in the case of a wind turbine is essential. In the first part, we investigate the dynamic response of a gear system with uncertain parameters modeled as random variables. A Monte Carlo simulation, a perturbation method and a polynomial chaos method are carried out. In the second part, two approaches are used to analyze the dynamic behavior of a wind turbine gear system: the probabilistic approach and the interval analysis method. The objective is to compare the two approaches to define their advantages and disadvantages in terms of precision and computation time.
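A toy comparison of the two approaches on a single-DOF stand-in for the gear-mesh dynamics; the stiffness, mass, and tolerance values are assumptions for illustration, not the thesis model.

```python
import numpy as np

rng = np.random.default_rng(6)

def natural_frequency(k_mesh, m_eq):
    """Natural frequency (Hz) of a single-DOF gear-mesh model -- a stand-in
    for the full dynamic response of the transmission."""
    return np.sqrt(k_mesh / m_eq) / (2.0 * np.pi)

m_eq = 0.05                      # equivalent mass (kg), assumed fixed
k_nom, k_tol = 4.0e8, 0.10       # nominal mesh stiffness (N/m), +/-10% uncertain

# Probabilistic approach: Monte Carlo sampling over the uncertain stiffness.
k_samples = rng.uniform(k_nom * (1 - k_tol), k_nom * (1 + k_tol), 100_000)
f_mc = natural_frequency(k_samples, m_eq)

# Interval approach: propagate the interval endpoints (valid here because the
# response is monotonic in k; general interval arithmetic handles other cases).
f_lo = natural_frequency(k_nom * (1 - k_tol), m_eq)
f_hi = natural_frequency(k_nom * (1 + k_tol), m_eq)

print(f"Monte Carlo range : [{f_mc.min():.1f}, {f_mc.max():.1f}] Hz")
print(f"Interval bounds   : [{f_lo:.1f}, {f_hi:.1f}] Hz")
```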
|
480 |
Fator de aumento de dose em Radioterapia com nanopartículas: estudo por simulação Monte Carlo / Dose enhancement factor in radiation therapy with nanoparticles: a Monte Carlo simulation study / Santos, Vinicius Fernando dos, 29 November 2017 (has links)
A incorporação de nanopartículas metálicas em tecidos tumorais tem sido estudada em Radioterapia devido ao aumento de dose que pode ser obtido no volume alvo do tratamento. Estudos indicam que nanopartículas de ouro (AuNP) estão entre as de maior viabilidade biológica para essas aplicações, devido ao baixo potencial tóxico. Além disso, estudos mostram que AuNP de alguns nanômetros até alguns micrômetros podem permear vasos sanguíneos que alimentam tumores, permitindo sua incorporação nas células tumorais. Desta forma, este trabalho visou estudar os fatores de aumento de dose obtidos em Radioterapia com AuNP incorporadas ao tecido tumoral utilizando feixes de ortovoltagem, de braquiterapia e de teleterapia. Este trabalho utilizou de uma metodologia computacional, através de simulação Monte Carlo com o código PENELOPE. Foram simulados feixes clínicos de 50, 80, 150 e 250 kVp, Ir-192 e 6 MV, e um modelo de célula tumoral com AuNPs incorporadas com diferentes concentrações de ouro. O modelo de células utilizado possui 13 µm de diâmetro externo máximo e 2 µm de diâmetro no núcleo. Dois modelos de incorporação de AuNPs foram implementados: modelo homogêneo e modelo heterogêneo. No modelo homogêneo, as AuNP foram distribuídas homogeneamente no núcleo e as células foram irradiadas nas diferentes energias estudadas para avaliar o fator de aumento de dose (DEF) em função da concentração de ouro na célula e da energia do feixe. No modelo heterogêneo, aglomerados de AuNPs foram simulados individualmente dentro da célula. Neste modelo foram utilizados somente os espectros de radiação que apresentaram os melhores desempenhos no modelo homogêneo. Foram avaliadas a fluência de partículas ejetadas nas AuNPs, o DEF, as distribuições de doses e os perfis de dose com aglomerados de 50 a 220 nm na célula. Os resultados obtidos para o modelo homogêneo mostram que os feixes de baixa energia são os que proporcionam maior DEF para uma mesma concentração de AuNP. Os maiores DEFs obtidos foram de 2,80; 2,99; 1,62 e 1,61, para os feixes de 50 kVp, 80 kVp, 150 kVp, 250 kVp, respectivamente, sendo a maior incerteza de 1,9% para o feixe de 250 kVp. Através dos resultados obtidos com o modelo heterogêneo foi possível concluir que os elétrons ejetados possuem maior influência no aumento local da dose. Os perfis de dose, extraídos das distribuições de doses, para os aglomerados simulados permitiram obter os alcances das isodoses de 50, 20 e 10% da dose no entorno das AuNPs. Através desses perfis de dose pode-se concluir que o aumento de dose é local, da ordem de alguns micrômetros, dependendo do tamanho das nanopartículas e da energia do feixe primário. Para o feixe de 50 kVp, o DEF encontrado para uma incorporação heterogênea de seis aglomerados de AuNPs, correspondendo a um modelo clínico real, foi de 1,79, com incerteza de 0,4%. Com base nos resultados obtidos pode-se concluir que as energias de ortovoltagem proporcionam maior fator de aumento de dose que feixes de megavoltagem utilizados em teleterapia convencional. Além disso, o reforço local de dose pode proporcionar um fator de radiossensibilização celular se as AuNPs forem incorporadas no núcleo das células, nas redondezas do DNA, proporcionando um maior potencial de controle tumoral. / The incorporation of metal nanoparticles into tumor tissues has been studied in radiation therapy due to the dose enhancement that can be obtained in the target volume of the treatment.
Studies indicate that gold nanoparticles (AuNP) are among the most biologically viable for such applications, due to their low toxic potential. In addition, studies show that AuNP from a few nanometers to a few micrometers can permeate blood vessels that feed tumors, allowing their incorporation into tumor cells. Hence, this study's goal was to evaluate the dose enhancement factors obtained in radiation therapy with AuNP incorporated in the tumor using orthovoltage, brachytherapy and teletherapy beams. This work used a computational methodology, through Monte Carlo simulation with the PENELOPE package. Clinical beams of 50, 80, 150 and 250 kVp, Ir-192 and 6 MV were simulated with a tumor cell model with incorporated AuNPs. The cell model has a maximum outer diameter of 13 µm and a nucleus diameter of 2 µm. Two models of AuNP incorporation were implemented: homogeneous model and heterogeneous model. In the homogeneous model the AuNP were distributed homogeneously in the nucleus and the cells were irradiated in the different beams studied to evaluate the dose enhancement factors (DEF) as a function of the concentration of gold in the cell and of the radiation beam. In the heterogeneous model, clusters of AuNPs were simulated individually within the cell. In this model, the radiation spectra used were selected among those that presented the best performances in the homogeneous model. The fluence of particles ejected from the AuNPs, the DEFs, the dose distributions and dose profiles for clusters of 50 to 220 nm in the cell were evaluated. The results obtained for the homogeneous model show that lower energy beams provide the highest DEFs for the same concentration of AuNP. The highest DEFs obtained were 2.80; 2.99; 1.62 and 1.61, for the beams of 50 kVp, 80 kVp, 150 kVp, 250 kVp, respectively, with a maximum uncertainty of 1.9% for the 250 kVp beam. Through the results obtained with the heterogeneous model it was possible to conclude that the electrons ejected from the AuNPs have the major influence on the local dose enhancement. The dose profiles extracted from the dose distributions for the simulated clusters allowed the evaluation of the ranges for the 50, 20 and 10% isodoses in the surroundings of the AuNPs. Through these dose profiles, it can be concluded that the dose increase is local, in the order of a few micrometers, depending on the size of the nanoparticles and the energy of the primary beam. For the 50 kVp beam, the DEF found for a heterogeneous incorporation of six clusters of AuNPs, corresponding to an actual clinical model, was 1.79, with uncertainty of 0.4%. Based on the results obtained, it can be concluded that kilovoltage energies provide a higher dose enhancement factor than megavoltage beams used in teletherapy. In addition, local dose enhancement may provide a cellular radiosensitization factor if the nanoparticles are incorporated in the nucleus of the cells, in the vicinity of the DNA, providing an enhanced potential for tumor control.
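A minimal sketch of how a dose enhancement factor is formed from paired dose tallies; the tally values below are invented placeholders, not PENELOPE output.

```python
import numpy as np

# Dose enhancement factor: ratio of the dose scored in the target with AuNPs
# present to the dose scored without them, for the same beam and geometry.
def dose_enhancement_factor(dose_with_aunp, dose_without_aunp):
    return dose_with_aunp / dose_without_aunp

# Placeholder nucleus dose tallies from repeated runs (Gy per primary history).
dose_nucleus_aunp = np.array([2.95e-12, 3.02e-12, 2.99e-12])
dose_nucleus_plain = np.array([1.00e-12, 1.01e-12, 0.99e-12])

def_values = dose_enhancement_factor(dose_nucleus_aunp, dose_nucleus_plain)
print(f"DEF = {def_values.mean():.2f} +/- {def_values.std(ddof=1):.2f}")
```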
|