41 |
REEVALUATION OF THE AAPM TG-43 BRACHYTHERAPY DOSIMETRY PARAMETERS FOR AN ¹²⁵I SEED, AND THE INFLUENCE OF EYE PLAQUE DESIGN ON DOSE DISTRIBUTIONS AND DOSE-VOLUME HISTOGRAMS. Aryal, Prakash, 01 January 2014.
The TG-43 dosimetry parameters of the Advantage™ ¹²⁵I model IAI-125A brachytherapy seed were studied using the modern MCNP radiation transport code with updated cross-section libraries. Twelve simulation conditions were studied for a single seed by varying the coating thickness, mass density, photon energy spectrum and cross-section library. The dose rate at 1 cm was found to be 6.3% lower than published results, and new TG-43 dosimetry parameters are proposed.
The dose distribution for a brachytherapy eye plaque, model EP917, was investigated, including the effects of collimation by the high-Z slots. Dose distributions for 26 slot designs were determined using Monte Carlo methods and compared against the published literature, a clinical treatment planning system, and physical measurements.
The dosimetric effect of the composition and mass density of the gold backing was shown to be less than 3%. Slot depth, width, and length each changed the central axis (CAX) dose distribution by less than 1% per 0.1 mm of design variation. Seed shifts within the slot towards the eye, and shifts of the ¹²⁵I-laden silver rod within the seed, had the greatest impact on the CAX dose distribution, changing it by 14%, 9%, 4.3%, and 2.7% at 1, 2, 5, and 10 mm from the inner scleral surface, respectively.
The measured, full plaque slot geometry delivered 2.4% ± 1.1% higher dose along the plaque's CAX than the geometry provided by the manufacturer, and 2.2% ± 2.3% higher than the Plaque Simulator™ (PS) treatment planning software (version 5.7.6). The D10 for the simulated tumor, inner sclera, and outer sclera was 9%, 10%, and 19% higher, respectively, for the measured slot design than for the manufacturer-provided one. In comparison to the measured plaque design, a theoretical plaque having narrow and deep slots delivered 30%, 37%, and 62% lower D10 doses to the tumor, inner sclera, and outer sclera, respectively; CAX doses at –1, 0, 1, and 2 mm were lower by factors of 2.6, 1.72, 1.50, and 1.39, respectively. The study identified substantial sensitivity of the EP917 plaque dose distributions to slot design.
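For reference, the TG-43 parameters re-evaluated here enter the standard AAPM line-source dose-rate equation

$$\dot{D}(r,\theta) = S_K\,\Lambda\,\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,g_L(r)\,F(r,\theta), \qquad r_0 = 1\ \mathrm{cm},\quad \theta_0 = 90^\circ,$$

where $S_K$ is the air-kerma strength, $\Lambda$ the dose-rate constant, $G_L$ the line-source geometry function, $g_L(r)$ the radial dose function and $F(r,\theta)$ the 2D anisotropy function. By definition of $\Lambda$, the 6.3% change in dose rate at the 1 cm reference point translates one-to-one into the dose-rate constant.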
|
42 |
Bayesian Variable Selection in Spatial Autoregressive Models. Crespo Cuaresma, Jesus; Piribauer, Philipp, 07 1900.
This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. We present two alternative approaches that can be implemented via straightforward Gibbs sampling and that handle model uncertainty in spatial autoregressive models flexibly and at low computational cost. In a simulation study we show that the variable selection approaches tend to outperform existing Bayesian model averaging techniques in terms of both in-sample predictive performance and computational efficiency.
(authors' abstract) / Series: Department of Economics Working Paper Series
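As a sketch of the setting (the paper's two concrete prior specifications are not reproduced here, so this is a generic spike-and-slab variant), the first-order spatial autoregressive model with a variable selection prior reads

$$y = \rho W y + X\beta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n),$$

$$\beta_j \mid \gamma_j \sim (1-\gamma_j)\,N(0,\tau_0^2) + \gamma_j\,N(0,\tau_1^2), \qquad \gamma_j \sim \mathrm{Bernoulli}(\pi), \quad \tau_0^2 \ll \tau_1^2,$$

where $W$ is the known spatial weight matrix and the inclusion indicators $\gamma_j$ are sampled inside the Gibbs sweep alongside $\beta$, $\sigma^2$ and $\rho$, so that posterior inclusion probabilities $P(\gamma_j = 1 \mid y)$ come out of the same run.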
|
43 |
Anomalous diffusion and random walks on random fractals. Ngoc Anh, Do Hoang, 08 March 2010.
The purpose of this research is to investigate properties of diffusion processes in porous media. Porous media are modelled by random Sierpinski carpets; each carpet is constructed by mixing two different generators of the same linear size. Diffusion on porous media is studied by performing random walks on random Sierpinski carpets and is characterized by the random walk dimension $d_w$.
In the first part of this work we study $d_w$ as a function of the ratio of constituents in a mixture. The simulation results show that the resulting $d_w$ can be the same as, higher than, or lower than the $d_w$ of carpets made from a single constituent generator.
In the second part, we discuss the influence of static external fields on the behavior of diffusion. The biased random walk is used to model these phenomena and we report on many simulations with different field strengths and field directions. The results show that one structural feature of Sierpinski carpets called traps can have a strong influence on the observed diffusion properties.
In the third part, we investigate diffusion under external fields that reverse direction after a fixed duration. The results show a strong dependence on the period of oscillation, the field strength and the structural properties of the carpet.
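A minimal sketch of how $d_w$ is extracted from such simulations, using the deterministic carpet built from the single standard generator rather than the thesis's random mixed-generator carpets, and a "blind ant" walker that wastes a step whenever it picks a forbidden cell (all sizes and step counts are illustrative):

```python
import numpy as np

def carpet(level):
    """Deterministic Sierpinski carpet as a boolean occupancy grid."""
    gen = np.ones((3, 3), dtype=bool)
    gen[1, 1] = False                      # remove the centre sub-square
    c = gen
    for _ in range(level - 1):
        c = np.kron(c, gen)                # self-similar refinement
    return c

def walk_dimension(c, steps=4000, walkers=500, seed=0):
    """Estimate d_w from <R^2(t)> ~ t^(2/d_w) for blind-ant random walks."""
    rng = np.random.default_rng(seed)
    n = c.shape[0]
    sites = np.argwhere(c)
    start = sites[rng.integers(len(sites), size=walkers)]
    pos = start.copy()
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    r2 = np.empty(steps)
    for t in range(steps):
        trial = pos + moves[rng.integers(4, size=walkers)]
        ok = np.all((trial >= 0) & (trial < n), axis=1)
        ok[ok] = c[trial[ok, 0], trial[ok, 1]]   # only occupied cells allowed
        pos[ok] = trial[ok]                      # blocked walkers stay put
        r2[t] = np.mean(np.sum((pos - start) ** 2, axis=1))
    times = np.arange(1, steps + 1)
    late = slice(steps // 10, None)              # fit the late-time regime
    slope = np.polyfit(np.log(times[late]), np.log(r2[late]), 1)[0]
    return 2.0 / slope

print(walk_dimension(carpet(5)))   # roughly 2.5 for this carpet
```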
|
44 |
Pricing and hedging Asian options using Monte Carlo and integral transform techniques. Chibawara, Trust, 03 1900.
Thesis (MSc (Mathematics))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: In this thesis, we discuss and apply Monte Carlo and integral transform methods to the pricing of options. These methods have proved to be very effective in the valuation of options, especially when acceleration techniques are introduced. By first pricing European call options we motivate the use of these methods for arithmetic Asian options, which have proved difficult to price and hedge under the Black-Scholes framework. The arithmetic average of prices in this framework is a sum of correlated lognormal random variables, whose distribution does not admit a simple analytic expression; many approaches for pricing these options have nevertheless been reported in the academic literature. We provide a hedging strategy by manipulating the results of Geman and Yor [42] for continuous fixed-strike arithmetic Asian call options. We then derive a double Laplace transform formula for pricing continuous Asian call options, following the approach of Fu et al. [39]. By applying the multi-Laguerre and iterated Talbot inversion techniques for Laplace transforms to the resulting pricing formula we obtain the option prices. Finally, we discuss the shortcomings of using the Laplace transform in pricing options.
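The Monte Carlo side is compact enough to sketch. Below is a hedged Python illustration, assuming a discretely monitored arithmetic Asian call under Black-Scholes with the geometric-average call as control variate, one standard acceleration technique; the Geman-Yor and Fu et al. Laplace-transform machinery is not reproduced, and all parameter values are illustrative:

```python
import numpy as np
from math import erf, exp, log, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def geometric_asian_call(s0, k, r, sigma, t, n):
    """Closed-form price of a discretely monitored geometric Asian call
    (the geometric average of lognormals is itself lognormal)."""
    tbar = t * (n + 1) / (2 * n)
    sig2 = sigma**2 * t * (n + 1) * (2 * n + 1) / (6 * n**2)
    mu = log(s0) + (r - 0.5 * sigma**2) * tbar
    d1 = (mu + sig2 - log(k)) / sqrt(sig2)
    d2 = d1 - sqrt(sig2)
    return exp(-r * t) * (exp(mu + 0.5 * sig2) * phi(d1) - k * phi(d2))

def arithmetic_asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                             n=50, paths=100_000, seed=0):
    """MC price of the arithmetic Asian call, variance-reduced with the
    geometric-average payoff as a control variate."""
    rng = np.random.default_rng(seed)
    dt = t / n
    z = rng.standard_normal((paths, n))
    logs = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * z, axis=1)
    s = s0 * np.exp(logs)                        # monitored prices S(t_1..t_n)
    disc = exp(-r * t)
    pay_a = disc * np.maximum(s.mean(axis=1) - k, 0.0)
    pay_g = disc * np.maximum(np.exp(np.log(s).mean(axis=1)) - k, 0.0)
    c = np.cov(pay_a, pay_g)
    beta = c[0, 1] / c[1, 1]                     # optimal CV coefficient
    cv = pay_a - beta * (pay_g - geometric_asian_call(s0, k, r, sigma, t, n))
    return cv.mean(), cv.std(ddof=1) / sqrt(paths)

price, stderr = arithmetic_asian_call_mc()
print(price, stderr)
```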
|
45 |
Modélisation du smile de volatilité pour les produits dérivés de taux d'intérêt / Multi-factor stochastic volatility for interest rates modeling. Palidda, Ernesto, 29 May 2015.
This PhD thesis is devoted to the study of an affine term structure model in which Wishart-like processes model the stochastic variance-covariance of interest rates. The work was initially motivated by thoughts on calibration and model risk in hedging interest rate derivatives. It is standard market practice to hedge interest rate derivatives using models whose parameters are calibrated daily to fit the market prices of a set of well-chosen instruments (typically the instruments that will be used to hedge the derivative). The model assumes that the parameters are constant, and the model price is based on this assumption; however, since these parameters are re-calibrated, they become in fact stochastic, and the hedging portfolios built from sensitivities to them fail to be self-financing. Calibration therefore introduces additional terms in the price dynamics (precisely, in its drift) which can lead to poor P&L explain and to mishedging. The initial idea of our research is to replace the parameters by factors, assume a dynamics for these factors, and keep all the parameters involved in the model constant; instead of calibrating the parameters to the market, we fit the values of the factors to the observed market prices, and hedging portfolios are built by cancelling the sensitivities of prices to these factors, which makes them self-financing. A large part of this work has been devoted to an efficient numerical framework for the model: we study second-order discretization schemes for Monte Carlo simulation of the model, as well as efficient methods for pricing vanilla instruments such as swaptions and caplets, in particular expansion techniques for their prices and volatilities, the arguments for which rely on an expansion of the infinitesimal generator with respect to a perturbation factor. Finally, we study the calibration problem: fitting the initial values (or the variations) of the Wishart-like process to the market introduces a positive semidefinite constraint in the optimization problem, for which semidefinite programming (SDP) provides a natural framework.
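The abstract does not write out the dynamics; for orientation, a Wishart-like variance-covariance process of the kind referred to is usually specified as

$$dX_t = \left(\beta\, Q^{\top}Q + M X_t + X_t M^{\top}\right)dt + \sqrt{X_t}\; dW_t\, Q + Q^{\top}\, dW_t^{\top}\, \sqrt{X_t},$$

with $W$ a $d \times d$ matrix Brownian motion, $M$ a stabilizing drift matrix, and $\beta$ large enough (a Gindikin-type condition, e.g. $\beta \ge d-1$) to keep $X_t$ in the cone of positive semidefinite matrices; that cone constraint is exactly what resurfaces as the SDP condition in the calibration step.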
|
46 |
Target Discrimination Against Clutter Based on Unsupervised Clustering and Sequential Monte Carlo Tracking. January 2016.
The radar performance in detecting a target and estimating its parameters can deteriorate rapidly in the presence of high clutter, because measurements due to clutter returns can be falsely detected as originating from the actual target. Various data association methods and multiple hypothesis filtering approaches have been considered to solve this problem; such methods, however, can be computationally intensive for real-time radar processing. This work proposes a new approach based on unsupervised clustering of target and clutter detections before target tracking with particle filtering. In particular, Gaussian mixture modeling is first used to separate the detections into two distinct Gaussian components. Using eigenvector analysis, the eccentricities of the covariance matrices of the components are computed and compared to threshold values obtained a priori; the thresholding allows only target detections to be used for target tracking. Simulations demonstrate the performance of the new algorithm and compare it with using k-means for clustering instead of Gaussian mixture modeling. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2016
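A minimal sketch of the clustering-and-gating stage (the function name, the feature space and the threshold direction are assumptions for illustration; the thesis obtains its threshold values a priori):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gate_detections(detections, ecc_threshold=0.9):
    """Split radar detections into two Gaussian components and keep only
    those from components whose covariance eccentricity falls below the
    a-priori threshold (assumption: target returns form the compact,
    near-circular cluster; clutter forms the elongated one).
    detections: (N, 2) array of measurements, e.g. range/Doppler pairs."""
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(detections)
    labels = gmm.predict(detections)
    keep = []
    for k, cov in enumerate(gmm.covariances_):
        lam = np.linalg.eigvalsh(cov)          # ascending eigenvalues
        ecc = np.sqrt(1.0 - lam[0] / lam[1])   # eccentricity of 1-sigma ellipse
        if ecc < ecc_threshold:
            keep.append(k)
    mask = np.isin(labels, keep)
    return detections[mask]                    # fed to the particle filter
```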
|
47 |
Particle-Based Modeling of Reliability for Millimeter-Wave GaN Devices for Power Amplifier Applications. January 2018.
In this work, an advanced simulation study of reliability in millimeter-wave (mm-wave) GaN devices for power amplifier (PA) applications is performed by means of a particle-based full-band Cellular Monte Carlo device simulator (CMC). The goal of the study is a systematic characterization of GaN device performance under DC, small-signal AC and large-signal radio-frequency (RF) conditions, with emphasis on the microscopic properties that correlate with degradation of device performance, such as hot-carrier generation, material defects and self-heating effects. First, a review of concepts concerning GaN technology, devices, reliability mechanisms and PA design is presented in chapter 2. In chapter 3, a study of non-idealities of AlGaN/GaN heterojunction diodes demonstrates that mole fraction variations and the presence of unintentional Schottky contacts are the main factors limiting the current drive of the devices under study. Chapter 4 studies hot electron generation in GaN HEMTs in terms of the accurately simulated electron energy distribution function (EDF) under DC and RF operation, taking frequency and temperature variations into account. The calculated EDFs suggest that Class AB PAs operating at low frequency (10 GHz) are more robust to hot-carrier effects than under DC or high-frequency RF operation (up to 40 GHz); operation in Class A also yields higher EDFs than Class AB, indicating lower reliability. This study is followed in chapter 5 by the proposal of a novel π-shaped gate contact for GaN HEMTs, which effectively reduces hot electron generation while preserving device performance. Finally, in chapter 6, the electro-thermal characterization of GaN-on-Si HEMTs is performed by means of an expanded CMC framework in which charge and heat transport are self-consistently coupled; after the electro-thermal model is validated against experimental data, self-heating under lateral scaling is assessed. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2018
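The reliability metric at the heart of chapters 4 and 5 is the tail of the electron energy distribution function. In a particle-based code it is tabulated essentially as a (weighted) histogram over the carrier ensemble, as the hedged sketch below illustrates (bin range and count are arbitrary choices, not the CMC simulator's):

```python
import numpy as np

def energy_distribution(energies_eV, weights=None, bins=200, e_max=6.0):
    """Normalized electron energy distribution function (EDF) from an
    ensemble of simulated carrier energies; the high-energy tail is the
    hot-electron signature discussed above."""
    edf, edges = np.histogram(energies_eV, bins=bins, range=(0.0, e_max),
                              weights=weights, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, edf

# Toy usage with synthetic, thermal-like energies (illustrative only):
e, f = energy_distribution(np.random.default_rng(0).exponential(0.5, 100_000))
```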
|
48 |
Inférence bayésienne dans les modèles de croissance de plantes pour la prévision et la caractérisation des incertitudes / Bayesian inference in plant growth models for prediction and uncertainty assessment. Chen, Yuting, 27 June 2014.
Plant growth models aim to describe plant development and functional processes in interaction with the environment. They offer promising perspectives for many applications, such as yield prediction for decision support or virtual experimentation in the context of breeding. This PhD focuses on solutions to enhance the predictive capacity of plant growth models, with an emphasis on advanced statistical methods. Our contributions can be summarized in four parts. Firstly, from a model design perspective, the Log-Normal Allocation and Senescence (LNAS) crop model is proposed. It describes only the ecophysiological processes essential to the biomass budget, in a probabilistic framework, so as to avoid identification problems and to accentuate uncertainty assessment in model prediction. Secondly, thorough research is conducted on model parameterization. In a Bayesian framework, both Sequential Monte Carlo (SMC) methods and Markov chain Monte Carlo (MCMC) based methods are investigated to address the parameterization issues of plant growth models, which are frequently characterized by nonlinear dynamics, scarce data and a large number of parameters. In particular, when the prior distribution is non-informative, an iterative version of the SMC and MCMC methods is introduced so as to put more emphasis on the observation data while preserving the robustness of the Bayesian methods; it can be regarded as a stochastic variant of an EM-type algorithm. Thirdly, a three-step data assimilation approach is proposed to address model prediction issues. The most influential parameters are first identified by global sensitivity analysis and chosen by model selection. Model calibration is then performed, with special attention paid to uncertainty assessment. The posterior distribution obtained from this estimation step is used as prior information for the prediction step, in which an SMC-based on-line estimation method such as Convolution Particle Filtering (CPF) performs the data assimilation; both state and parameter estimates are updated with the purpose of improving prediction accuracy and reducing the associated uncertainty. Finally, from an application point of view, the proposed methodology is implemented and evaluated with two crop models, the LNAS model for sugar beet and the STICS model for winter wheat, and some indications are given on experimental design to optimize the quality of predictions. Applications to real case scenarios show encouraging predictive performance and open the way to potential tools for yield prediction in agriculture.
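A hedged, minimal sketch of the SMC machinery used in the prediction step (the transition sampler `f`, observation map `h` and kernel bandwidth are illustrative stand-ins, not the LNAS or STICS specifics; in a convolution particle filter the likelihood is replaced by a kernel centred on the simulated observation):

```python
import numpy as np

def kernel_particle_filter(ys, f, h, x0, n_particles=2000, bw=0.5, seed=0):
    """Minimal SMC in the spirit of convolution particle filtering:
    propagate, weight each particle with a Gaussian kernel around its
    simulated observation, then resample."""
    rng = np.random.default_rng(seed)
    x = x0(n_particles, rng)
    filtered = []
    for y in ys:
        x = f(x, rng)                                    # propagate particles
        w = np.exp(-0.5 * ((y - h(x)) / bw) ** 2)        # kernel weights
        w /= w.sum()
        filtered.append(float(np.sum(w * x)))            # filtered state mean
        x = x[rng.choice(n_particles, n_particles, p=w)] # multinomial resampling
    return np.array(filtered)

# Toy usage: a latent AR(1) 'biomass' proxy observed with noise.
f = lambda x, rng: 0.97 * x + rng.normal(0.0, 0.3, x.shape)
h = lambda x: x
x0 = lambda n, rng: rng.normal(0.0, 1.0, n)
rng = np.random.default_rng(1)
y_obs = 0.97 ** np.arange(50) + rng.normal(0.0, 0.2, 50)  # synthetic series
est = kernel_particle_filter(y_obs, f, h, x0)
```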
|
49 |
Desenvolvimento de um sistema computacional baseado no código Geant4 para avaliações dosimétricas em radioterapia / Development of a computational system based on the Geant4 code for dosimetric evaluations in radiotherapy. OLIVEIRA, Alex Cristóvão Holanda de, 29 April 2016.
The incidence of cancer has grown in Brazil, as around the world, following the change in the age profile of the population. One of the most important and commonly used techniques in cancer treatment is radiotherapy: around 60% of new cancer cases use radiation in at least one phase of treatment. The most widely used equipment for radiotherapy is the linear accelerator (Linac), which produces electron or X-ray beams in the energy range of 5 to 30 MeV. The most appropriate way to irradiate a patient is determined during treatment planning; currently, the treatment planning system (TPS) is the main and most important tool in the radiotherapy planning process. The main objective of this work was to develop a computational system based on the Monte Carlo (MC) code Geant4 for dose evaluations in photon beam radiotherapy. In addition to treatment planning, these dose evaluations can be performed for research and for quality control of equipment and TPSs. The computer system, called Quimera, consists of a graphical user interface (qGUI) and three MC applications (qLinacs, qMATphantoms and qNCTphantoms). The qGUI acts as the interface for the MC applications, creating or editing the input files, running simulations and analyzing the results. qLinacs is used for modeling Linacs and generating their beams (phase spaces). qMATphantoms and qNCTphantoms are used for dose calculations in virtual models of physical phantoms and in computed tomography (CT) images, respectively. From the manufacturer's data, models of a Varian Linac photon beam and a Varian multileaf collimator (MLC) were implemented in qLinacs; the Linac and MLC models were validated against experimental data. qMATphantoms and qNCTphantoms were validated using IAEA (International Atomic Energy Agency) phase spaces. In this first version, Quimera can be used for research, for radiotherapy planning of simple treatments, and for quality control in photon beam radiotherapy. The MC applications work independently of the qGUI, and the qGUI can be used for handling CT images and analyzing results from other MC applications. Owing to its modular structure, new MC applications can be added to Quimera, enabling new research, the modeling of Linacs and MLCs from other manufacturers, the use of other techniques (electron beams, protons, heavy ions, tomotherapy, etc.) and applications in related areas (brachytherapy, radiation protection, etc.). This work is an initiative towards the collaborative development of a complete computer system that can be used in radiotherapy, in clinical and technical practice as well as in research.
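Quimera itself is built on C++/Geant4; the Python toy below is only meant to show the core Monte Carlo idea a dose engine builds on, namely sampling interaction depths from the exponential free-path law (the attenuation coefficient, beam energy and local-deposition assumption are all illustrative simplifications):

```python
import numpy as np

def toy_depth_dose(n_photons=500_000, mu_cm=0.07, slab_cm=30.0,
                   bins=60, e0_mev=6.0, seed=0):
    """Depth distribution of first interactions for a monoenergetic photon
    beam entering a slab; deposits the full photon energy locally, ignoring
    scatter and secondary electrons (which real codes like Geant4 track)."""
    rng = np.random.default_rng(seed)
    depth = rng.exponential(scale=1.0 / mu_cm, size=n_photons)
    depth = depth[depth < slab_cm]          # the rest traverse the slab
    counts, edges = np.histogram(depth, bins=bins, range=(0.0, slab_cm))
    dose = e0_mev * counts / n_photons      # MeV deposited per incident photon
    return 0.5 * (edges[:-1] + edges[1:]), dose

depths, dose = toy_depth_dose()
```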
|
50 |
Desenvolvimento de uma metodologia para caracterização do filtro cuno do reator IEA-R1 utilizando o método de Monte Carlo / Development of a methodology for characterization of the cartridge filter from the IEA-R1 using the Monte Carlo method. Priscila Costa, 28 January 2015.
The Cuno filter is part of the water treatment circuit of the IEA-R1 reactor; when saturated, it is replaced and becomes a radioactive waste that must be managed. In this work, the primary characterization of the Cuno filter of the IEA-R1 nuclear reactor at IPEN was carried out using gamma spectrometry associated with the Monte Carlo method. The gamma spectrometry was performed using a hyperpure germanium (HPGe) detector. The germanium crystal represents the active detection volume of the HPGe detector and contains a region known as the dead (or inactive) layer. A difference between theoretical and experimental values when obtaining the efficiency curves of these detectors has been reported in the literature. In this study, the MCNP-4C code was used to obtain the detector efficiency calibration for the geometry of the Cuno filter, and the influences of the dead layer and of the cascade summing effect in the HPGe detector were studied. Corrections of the dead layer values were made by varying the thickness and the radius of the germanium crystal. The detector has 75.83 cm³ of active detection volume according to the manufacturer; the results, however, showed that the actual active volume is smaller than specified, with the dead layer representing 16% of the total crystal volume. Analysis of the Cuno filter by gamma spectrometry enabled the identification of energy peaks, from which three radionuclides were identified in the filter: ¹⁰⁸ᵐAg, ¹¹⁰ᵐAg and ⁶⁰Co. From the efficiency calibration obtained by the Monte Carlo method, the activity estimated for these radionuclides is of the order of MBq.
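The final step, turning a net peak area into an activity with the MC-computed full-energy-peak efficiency, is the standard relation $A = N / (\varepsilon\, I_\gamma\, t_{\mathrm{live}})$; a one-function sketch with illustrative numbers (not the thesis's measured values):

```python
def peak_activity_bq(net_counts, live_time_s, fep_efficiency, gamma_yield):
    """Source activity in Bq from a net full-energy peak area:
    A = N / (efficiency * emission probability * live time)."""
    return net_counts / (fep_efficiency * gamma_yield * live_time_s)

# e.g. the 60Co 1332 keV line (gamma yield ~ 1.0) with a made-up
# MC full-energy-peak efficiency of 1.0e-3 for the Cuno filter geometry:
print(peak_activity_bq(5.4e4, 3600.0, 1.0e-3, 1.0))   # ~1.5e4 Bq
```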
|