1091 |
Metodologia de aquisição de dados e análise por software, para sistemas de coincidências 4πβ-γ e sua aplicação na padronização de radionuclídeos, com ênfase em transições metaestáveis / Data acquisition with software analysis methodology for 4πβ-γ coincidence systems and application in radionuclide standardization, with emphasis on metastable transitions
Franco Brancaccio, 06 August 2013
O Laboratório de Metrologia Nuclear (LMN) do Instituto de Pesquisas Energéticas e Nucleares (IPEN) desenvolveu recentemente o Sistema de Coincidência por Software (SCS), para a digitalização e registro dos sinais de seus sistemas de coincidências 4πβ-γ utilizados na padronização de radionuclídeos. O sistema SCS possui quatro entradas analógicas independentes que possibilitam o registro simultâneo dos sinais de até quatro detectores (vias β e γ). A análise dos dados é realizada a posteriori, por software, incluindo discriminação de amplitudes, simulação do tempo-morto da medida e definição do tempo de resolução de coincidências. O software então instalado junto ao SCS estabeleceu a metodologia básica de análise, aplicável a radionuclídeos com decaimento simples, como o 60Co. O presente trabalho amplia a metodologia de análise de dados obtidos com o SCS, de modo a possibilitar o uso de detectores com alta resolução em energia (HPGe), para padronização de radionuclídeos com decaimentos mais complexos, com diferentes ramos de decaimento ou com transições metaestáveis. A expansão metodológica tem suporte na elaboração do programa de análise denominado Coincidence Analyzing Task (CAT). A seção de aplicação inclui as padronizações do 152Eu (diferentes ramos de decaimento) e do 67Ga (nível metaestável). A padronização do 152Eu utilizou uma amostra de uma comparação internacional promovida pelo BIPM (Bureau International des Poids et Mesures), podendo-se comparar a atividade obtida com os valores de laboratórios mundialmente reconhecidos, de modo a avaliar e validar a metodologia desenvolvida. Para o 67Ga, foram obtidas: a meia-vida do nível metaestável de 93 keV, por três diferentes técnicas de análise do conjunto de dados (βpronto-γatrasado-HPGe, βpronto-γatrasado-NaI e βpronto-βatrasado); as atividades de cinco amostras, normalizadas por Monte Carlo; e as probabilidades de emissão gama por decaimento, para nove transições.
The Nuclear Metrology Laboratory (LMN) at the Nuclear and Energy Research Institute (IPEN, São Paulo, Brazil) has recently developed the Software Coincidence System (SCS) for the digitization and recording of signals from its 4πβ-γ detection systems. The SCS features up to four independent analog inputs, enabling the simultaneous recording of signals from up to four detectors (β and γ channels). The analysis is performed a posteriori, by means of specialized software, including the setting of energy discrimination levels, measurement dead time and coincidence resolving time. The software initially installed with the SCS established the basic analysis methodology, applicable to radionuclides with simple decay schemes, such as 60Co. The present work extends the SCS analysis methodology in order to enable the use of high-resolution detectors (HPGe) for the standardization of radionuclides with more complex decays, including metastable transitions or different decay branches. A program called Coincidence Analyzing Task (CAT) was implemented for the data processing. The work also includes an application section, where the standardization results for 152Eu (different decay branches) and 67Ga (with a metastable level) are presented. The 152Eu standardization was used to validate the methodology, since it was accomplished by measuring a sample previously standardized in an international comparison sponsored by the BIPM (Bureau International des Poids et Mesures); the activity value obtained in this work, as well as its accuracy, could therefore be compared with those obtained by internationally recognized laboratories. The 67Ga standardization includes the measurement of five samples, with activity values normalized by Monte Carlo simulation. The half-life of the 93 keV metastable level and the gamma emission probabilities per decay for nine transitions of 67Ga are also presented. The metastable half-life was obtained by three different methods: βprompt-γdelayed-HPGe, βprompt-γdelayed-NaI and βprompt-βdelayed.
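The software coincidence analysis described above rests on the classical 4πβ-γ relation N0 ≈ Nβ·Nγ/Nc, applied after amplitude discrimination, an imposed dead time and a coincidence resolving window. The sketch below is a minimal, self-contained illustration of that pipeline on synthetic event lists; the thresholds, dead time, resolving time and detection efficiencies are hypothetical placeholders, and none of this is the LMN's SCS/CAT code.

```python
import numpy as np

def discriminate(times, amplitudes, threshold):
    """Keep only events whose pulse amplitude exceeds the discrimination level."""
    return times[amplitudes >= threshold]

def apply_dead_time(times, tau_dead):
    """Impose a non-extending dead time tau_dead (s) on a sorted event list."""
    accepted, t_last = [], -np.inf
    for t in times:
        if t - t_last >= tau_dead:
            accepted.append(t)
            t_last = t
    return np.asarray(accepted)

def count_coincidences(t_beta, t_gamma, tau_res):
    """Count beta events with a gamma event within the resolving time tau_res (s)."""
    idx = np.searchsorted(t_gamma, t_beta)
    n_c = 0
    for i, tb in zip(idx, t_beta):
        for j in (i - 1, i):                      # nearest gamma neighbours
            if 0 <= j < len(t_gamma) and abs(t_gamma[j] - tb) <= tau_res:
                n_c += 1
                break
    return n_c

# Synthetic decay data (placeholder efficiencies and rates, not SCS measurements).
rng = np.random.default_rng(0)
T, activity_true = 100.0, 1000.0                  # live time (s), activity (Bq)
t_decay = np.sort(rng.uniform(0.0, T, rng.poisson(activity_true * T)))
t_b = t_decay[rng.random(t_decay.size) < 0.60]    # beta channel, assumed 60 % efficiency
t_g = t_decay[rng.random(t_decay.size) < 0.25]    # gamma channel, assumed 25 % efficiency
a_b = rng.uniform(0.0, 1.0, t_b.size)             # fake pulse amplitudes
a_g = rng.uniform(0.0, 1.0, t_g.size)

t_b = apply_dead_time(discriminate(t_b, a_b, 0.05), 5e-6)
t_g = apply_dead_time(discriminate(t_g, a_g, 0.05), 5e-6)

n_beta, n_gamma = len(t_b) / T, len(t_g) / T
n_coinc = count_coincidences(t_b, t_g, 1e-6) / T
print(f"N_beta*N_gamma/N_c = {n_beta * n_gamma / n_coinc:.0f} Bq "
      f"(true activity: {activity_true:.0f} Bq)")
```

Because the synthetic β and γ events stem from the same decay timestamps, the ratio recovers the true activity to within statistics, which is exactly the property the coincidence method exploits.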
|
1092 |
Effets thermoélectriques dans des liquides complexes : liquides ioniques et ferrofluides / Thermoelectric effects in complex liquids: ionic liquids and ferrofluids
Salez, Thomas, 10 November 2017
Les liquides complexes sont des matériaux très prometteurs pour réaliser la conversion bon marché et à grande échelle d'énergie thermique en énergie électrique, dans un contexte de réchauffement climatique et de maîtrise de la consommation d'énergie. Nous montrons qu'en présence d'un couple redox, les cellules thermogalvaniques à base de liquides ioniques (NEA et EMIMTFSI) présentent des propriétés remarquables, tels des coefficients Seebeck de plus de 5 mV/K (Eu³⁺/Eu²⁺ dans l'EMIMTFSI). De même, ces travaux présentent l'utilisation de ferrofluides, solutions colloïdales (aqueuses ou à base de solvants organiques) de nanoparticules magnétiques (maghémite), pour accroître le coefficient Seebeck et le courant extractible de générateurs thermoélectriques liquides. Les phénomènes réversibles d'adsorption des nanoparticules sur la surface des électrodes jouent également un rôle important sur les propriétés thermoélectriques de ces solutions, et sont modifiés par l'application de champs magnétiques homogènes parallèles ou perpendiculaires au gradient de température. En l'absence d'un couple redox, les liquides ioniques peuvent être utilisés pour fabriquer des supercondensateurs à charge thermique. Ces derniers exploitent les modifications avec la température des doubles couches électriques aux interfaces liquide/électrode. Nous avons étudié ici ces modifications de doubles couches dans l'EMIMBF4 par simulations numériques de Monte-Carlo. Les résultats démontrent un accroissement conséquent des propriétés thermoélectriques lors de la dilution du liquide ionique dans un solvant organique, l'acétonitrile, en accord qualitatif avec les résultats expérimentaux.
Complex liquids are promising materials for low-cost and large-scale conversion of thermal energy into electric energy, within a context of global warming and control of energy consumption. In this work we showed that, in the presence of a redox couple, ionic-liquid-based (EAN and EMIMTFSI) thermogalvanic cells present remarkable thermoelectric properties, such as Seebeck coefficients above 5 mV/K (Eu³⁺/Eu²⁺ in EMIMTFSI). Moreover, we demonstrated for the first time that ferrofluids, colloidal solutions (aqueous or organic-solvent based) of magnetic nanoparticles (maghemite), can be used to increase both the Seebeck coefficient and the extractable electric current in liquid thermoelectric generators, through physical processes that are not yet fully understood. The importance of reversible adsorption of the nanoparticles on the electrode surfaces for the thermoelectric properties of these solutions was also revealed; this adsorption can be further modified by a homogeneous magnetic field applied perpendicular or parallel to the temperature gradient. Without a redox couple, ionic liquids can be used to build thermally chargeable supercapacitors, which take advantage of the temperature-dependent electrical double layers formed at liquid/electrode interfaces. Here, we studied these double-layer modifications in EMIMBF4/platinum through Monte Carlo simulations. The results show substantial modifications of the thermoelectric properties when the ionic liquid is diluted in an organic solvent, acetonitrile, in qualitative agreement with experimental measurements.
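For the thermogalvanic measurements summarized above, the Seebeck coefficient is the slope of the open-circuit cell voltage against the temperature difference between the electrodes, Se = dΔV/dΔT. A minimal sketch of that extraction on invented data (not measurements from this thesis):

```python
import numpy as np

# Hypothetical open-circuit voltages (mV) measured at several temperature
# differences (K) across a thermogalvanic cell; illustrative values only.
delta_T = np.array([2.0, 4.0, 6.0, 8.0, 10.0])       # K
delta_V = np.array([10.4, 20.9, 31.1, 41.8, 52.0])   # mV

# The Seebeck coefficient is the slope of the dV-vs-dT line (here in mV/K).
slope, intercept = np.polyfit(delta_T, delta_V, 1)
print(f"Seebeck coefficient ~ {slope:.2f} mV/K")
# Values above ~5 mV/K, as reported for Eu3+/Eu2+ in EMIMTFSI, are large
# compared with typical solid-state thermoelectric materials (a few hundred uV/K).
```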
|
1093 |
Condensation et homogénéisation des sections efficaces pour les codes de transport déterministes par la méthode de Monte Carlo : Application aux réacteurs à neutrons rapides de GEN IV / Condensation and homogenization of cross sections for deterministic transport codes with the Monte Carlo method: application to GEN IV fast neutron reactors
Cai, Li, 30 October 2014
Dans le cadre des études de neutronique menées pour les réacteurs de GEN-IV, les nouveaux outils de calcul des cœurs de réacteur sont implémentés dans l'ensemble du code APOLLO3® pour la partie déterministe. Ces méthodes de calcul s'appuient sur des données nucléaires discrétisées en énergie (appelées multi-groupes et généralement produites par des codes déterministes eux aussi) et doivent être validées et qualifiées par rapport à des calculs basés sur la méthode de référence Monte-Carlo. L'objectif de cette thèse est de mettre au point une technique alternative de production des propriétés nucléaires multi-groupes par un code de Monte-Carlo (TRIPOLI-4®). Dans un premier temps, après avoir réalisé des tests sur les fonctionnalités existantes de l'homogénéisation et de la condensation avec les meilleures précisions accessibles aujourd'hui, des incohérences sont mises en évidence. De nouveaux estimateurs de paramètres multi-groupes ont été développés et validés pour le code TRIPOLI-4® à l'aide de ce code lui-même, puisqu'il dispose de la possibilité d'utiliser ses propres productions de données multi-groupes dans un calcul de cœur. Ensuite, la prise en compte de l'anisotropie de la diffusion, nécessaire pour un bon traitement de l'anisotropie introduite par les fuites des neutrons, a été étudiée. Une technique de correction de la diagonale de la matrice de la section efficace de transfert par diffusion à l'ordre P1 (nommée technique IGSC et basée sur une évaluation du courant des neutrons par une technique introduite par Todorova) est développée. Une amélioration de la technique IGSC dans la situation où les propriétés matérielles du réacteur changent drastiquement en espace est apportée. La solution est basée sur l'utilisation d'un nouveau courant qui est projeté sur l'axe X et plus représentatif dans la nouvelle situation que celui utilisant les approximations de Todorova, mais valable seulement en géométrie 1D. À la fin, un modèle de fuite B1 homogène est implémenté dans le code TRIPOLI-4® afin de produire des sections efficaces multi-groupes avec un spectre critique calculé avec l'approximation du mode fondamental. Ce modèle de fuite est analysé et validé rigoureusement en le comparant avec les autres codes, Serpent et ECCO, ainsi qu'avec un cas analytique. L'ensemble de ces développements dans TRIPOLI-4® permet de produire des sections efficaces multi-groupes qui peuvent être utilisées dans le code de calcul de cœur SNATCH de la plateforme PARIS. Ce dernier utilise la théorie du transport, qui est indispensable pour la nouvelle filière à neutrons rapides. Les principales conclusions sont :
- Le code de réseau en Monte-Carlo est une voie intéressante (surtout pour éviter les difficultés de l'autoprotection, de l'anisotropie limitée à un certain ordre du développement en polynômes de Legendre, du traitement des géométries exactes 3D), pour valider les codes déterministes comme ECCO ou APOLLO3® ou pour produire des données pour les codes déterministes ou Monte-Carlo multi-groupes.
- Les résultats obtenus pour le moment avec les données produites par TRIPOLI-4® sont comparables mais n'ont pas encore vraiment montré d'avantage par rapport à ceux obtenus avec des données issues de codes déterministes tel qu'ECCO.
In the framework of neutronics studies for Generation IV reactors, new core calculation tools are implemented in the APOLLO3® code system for the deterministic part. These calculation methods rely on energy-discretized nuclear data (known as multi-group data, generally produced by deterministic codes as well) and must be validated and qualified against reference Monte Carlo calculations. This thesis aims to develop an alternative technique for producing multi-group nuclear properties with a Monte Carlo code (TRIPOLI-4®). First, after testing the existing homogenization and condensation functionalities with the better precision attainable today, some inconsistencies were revealed. Several new multi-group parameter estimators were developed and validated for the TRIPOLI-4® code with the aid of the code itself, since it is able to use its own multi-group constants in a core calculation. Secondly, the scattering anisotropy effect, which must be taken into account to handle neutron leakage correctly, was studied. A correction technique for the diagonal of the first-order moment of the scattering matrix is proposed; it is named the IGSC technique and is based on an approximate current introduced by Todorova. An improvement of the IGSC technique is then presented for geometries with strong material heterogeneity. This improvement uses a more accurate current quantity, namely its projection on the X axis; the latter current represents the real situation better but is limited to 1D geometries. Finally, a B1 leakage model is implemented in the TRIPOLI-4® code for generating multi-group cross sections with a critical spectrum computed in the fundamental-mode approximation. This leakage model is analyzed and validated rigorously by comparison with other codes, Serpent and ECCO, as well as with an analytical case. The development work introduced in the TRIPOLI-4® code makes it possible to produce multi-group constants that can then be used in the core calculation solver SNATCH of the PARIS code platform. The latter uses transport theory, which is indispensable for the analysis of the new generation of fast reactors. The principal conclusions are as follows:
- A Monte Carlo lattice calculation code is an interesting way (avoiding the difficulties of self-shielding calculations, of anisotropy limited to a given order of the Legendre polynomial expansion, and of treating exact 3D geometries) to validate deterministic codes such as ECCO or APOLLO3® and to produce multi-group constants for deterministic or multi-group Monte Carlo calculation codes.
- The results obtained so far with the multi-group constants calculated by TRIPOLI-4® are comparable with those produced by ECCO, but have not yet shown a remarkable advantage.
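In the simplest case, the multi-group constants discussed above come from a flux-weighted condensation of fine-group cross sections, σ_G = Σ_g σ_g φ_g / Σ_g φ_g. A minimal sketch of that operation on arbitrary placeholder data (not TRIPOLI-4® output, and not the thesis's estimators, which also handle homogenization, anisotropy and leakage):

```python
import numpy as np

def condense(sigma_fine, flux_fine, fine_to_broad):
    """Flux-weighted condensation of fine-group cross sections into broad groups.

    sigma_fine, flux_fine : values per fine energy group
    fine_to_broad         : broad-group index assigned to each fine group
    """
    n_broad = int(fine_to_broad.max()) + 1
    sigma_broad = np.zeros(n_broad)
    for G in range(n_broad):
        in_G = fine_to_broad == G
        sigma_broad[G] = np.sum(sigma_fine[in_G] * flux_fine[in_G]) / np.sum(flux_fine[in_G])
    return sigma_broad

# Hypothetical 8-group data condensed to 2 broad groups (fast / slow).
sigma = np.array([1.2, 1.5, 2.0, 2.6, 4.0, 8.0, 15.0, 30.0])     # barns
flux  = np.array([0.30, 0.25, 0.18, 0.10, 0.07, 0.05, 0.03, 0.02])
mapping = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print("broad-group cross sections:", condense(sigma, flux, mapping))
```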
|
1094 |
CIRCE a new software to predict the steady state equilibrium of chemical reactions / CIRCE un nouveau logiciel pour prédire l'équilibre des réactions chimiques à l'état d'équilibre
Liu, Qi, 11 December 2018
L'objectif de cette thèse est de développer un nouveau code pour prédire l'équilibre final d'un processus chimique complexe impliquant beaucoup de produits, plusieurs phases et plusieurs processus chimiques. Des méthodes numériques ont été développées au cours des dernières décennies pour prédire les équilibres chimiques finaux en utilisant le principe de minimisation de l'enthalpie libre du système. La plupart des méthodes utilisent la méthode des « multiplicateurs de Lagrange » et résolvent les équations en employant une approximation du problème de Lagrange et en utilisant un algorithme de convergence pas à pas de type Newton-Raphson. Les équations mathématiques correspondantes restent cependant fortement non linéaires, de sorte que la résolution, notamment de systèmes multiphasiques, peut être très aléatoire. Une méthode alternative de recherche du minimum de l'énergie de Gibbs (MCGE) est développée dans ce travail, basée sur une technique de Monte-Carlo associée à une technique de pivot de Gauss pour sélectionner des vecteurs composition satisfaisant la conservation des atomes. L'enthalpie libre est calculée pour chaque vecteur et le minimum est recherché de manière très simple. Cette méthode ne présente a priori pas de limite d'application (y compris pour les mélanges multiphasiques) et l'équation permettant de calculer l'énergie de Gibbs n'a pas à être discrétisée. Il est en outre montré que la précision des prédictions dépend assez significativement des valeurs thermodynamiques d'entrée telles l'énergie de formation des produits et les paramètres d'interaction moléculaire. La valeur absolue de ces paramètres n'a pas autant d'importance que la précision de leur évolution en fonction des paramètres du process (pression, température, ...). Ainsi, une méthode d'estimation cohérente est requise. Pour cela, la théorie de la « contribution de groupe » est utilisée (ceux de UNIFAC) et a été étendue en dehors du domaine d'interaction moléculaire traditionnel, par exemple pour prédire l'enthalpie libre de formation, la chaleur spécifique... Enfin, l'influence du choix de la liste finale des produits est discutée. On montre que la prédictibilité dépend du choix initial de la liste de produits et notamment de son exhaustivité. Une technique basée sur le travail de Brignole et Gani est proposée pour engendrer automatiquement la liste des produits stables possibles. Ces techniques ont été programmées dans un nouveau code : CIRCE. Les travaux de Brignole et de Gani sont mis en œuvre sur la base de la composition atomique des réactifs pour prédire toutes les molécules « réalisables ». La théorie de la « contribution de groupe » est mise en œuvre pour le calcul des paramètres thermodynamiques. La méthode MCGE est enfin utilisée pour trouver le minimum absolu de la fonction d'enthalpie libre. Le code semble plus polyvalent que les codes traditionnels (CEA, ASPEN, ...) mais il est plus coûteux en termes de temps de calcul. Il peut aussi être plus prédictif. Des exemples de génie des procédés illustrent l'étendue des applications potentielles en génie chimique.
The objective of this work is to develop a new code to predict the final equilibrium of a complex chemical process with many species/reactions and several phases. Numerical methods have been developed over the last decades to predict final chemical equilibria using the principle of minimizing the Gibbs free energy of the system. Most of them use the "Lagrange multipliers" method and solve the resulting system of equations with an approximate step-by-step convergence technique. Notwithstanding the potential complexity of the thermodynamic formulation of the "Gibbs problem," the resulting mathematical formulation is always strongly non-linear, so that solving multiphase systems may be very tricky and reaching the absolute minimum may be difficult. An alternative resolution method (MCGE) is developed in this work, based on a Monte Carlo technique associated with a Gaussian elimination method to map the composition domain while satisfying the atom balance. The Gibbs energy is calculated at each point of the composition domain and the absolute minimum can be deduced very simply. In theory the technique is not limited: the Gibbs function need not be discretized and multiphase problems can be handled easily. It is further shown that the accuracy of the predictions depends to a significant extent on the "coherence" of the input thermodynamic data, such as the Gibbs energy of formation of the species and the molecular interaction parameters. The absolute value of such parameters does not matter as much as their evolution as a function of the process parameters (pressure, temperature, ...). So, a self-consistent estimation method is required. To achieve this, the group contribution theory is used (UNIFAC descriptors) and extended somewhat outside the traditional molecular interaction domain, for instance to predict the Gibbs energy of formation of the species, the specific heat capacity, etc. Lastly, the influence of the choice of the final list of products is discussed. It is shown that the relevancy of the prediction depends to a large extent on this initial choice. A first technique is proposed, based on Brignole and Gani's work, to avoid omitting species, and another one to select, from this list, the products likely to appear given the process conditions. These techniques were programmed in a new code named CIRCE. Brignole and Gani's method is implemented on the basis of the atomic composition of the reactants to predict all "realisable" molecules. The extended group contribution theory is implemented to calculate the thermodynamic parameters. The MCGE method is used to find the absolute minimum of the Gibbs energy function. The code seems to be more versatile than traditional ones (CEA, ASPEN, ...) but more expensive in computation time. It can also be more predictive. Examples are shown illustrating the breadth of potential applications in chemical engineering.
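A toy sketch of the MCGE idea described above, under strong simplifying assumptions: an ideal-gas mixture at fixed temperature and pressure, two elements and three species, with random composition vectors generated from one particular solution of the atom balance plus random multiples of its null-space directions (the role played by the Gaussian-elimination step), the Gibbs energy evaluated at each point, and the minimum retained. The species and constants are illustrative only; this is not the CIRCE code.

```python
import numpy as np
from scipy.linalg import null_space

R, T, P = 8.314, 298.15, 1.0            # J/(mol K), K, bar (reference P0 = 1 bar)

# Species: CO, CO2, O2.  Standard Gibbs energies of formation at 298 K (J/mol).
dGf = np.array([-137_200.0, -394_400.0, 0.0])

# Element balance matrix (rows: C, O) for a feed of 1 mol CO2.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, 2.0]])
n_particular = np.array([0.0, 1.0, 0.0])    # one solution of A n = b
basis = null_space(A)                       # directions that conserve every atom

def gibbs(n):
    """Total Gibbs energy (J) of an ideal-gas mixture with mole numbers n."""
    n = np.clip(n, 1e-12, None)             # avoid log(0)
    y = n / n.sum()
    return float(np.sum(n * (dGf + R * T * np.log(P * y))))

rng = np.random.default_rng(1)
best_n, best_G = None, np.inf
for _ in range(20_000):
    t = rng.uniform(-2.0, 2.0, basis.shape[1])
    n = n_particular + basis @ t            # random composition obeying the atom balance
    if np.all(n >= 0.0):                    # keep only physical (non-negative) compositions
        G = gibbs(n)
        if G < best_G:
            best_G, best_n = G, n
print("equilibrium moles [CO, CO2, O2] ~", np.round(best_n, 4))
```

At room temperature the minimum sits, as expected, at essentially pure CO2; no gradient information or discretization of the Gibbs function is needed, which is the point of the mapping approach.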
|
1095 |
巨災風險債券之計價分析 / Pricing Catastrophe Risk Bonds
吳智中 (Wu, Chih-Chung), Unknown Date
運用傳統再保險契約移轉風險受限於承保能量的逐年波動，尤其自90年代起，全球巨災頻繁，保險人損失巨幅增加，承保能量急遽萎縮，基於巨災市場之資金需求，再保險轉向資本市場，預期將巨災風險移轉至投資人，促成保險衍生性金融商品之創新，本研究針對佔有顯著交易量的巨災風險債券進行分析，基於Cummins和Geman (1995)所建構巨災累積損失模型，引用Duffie 與Singleton (1999)於違約債券的計價模式，將折現利率表示為短期利率加上事故發生率及預期損失比例之乘積，並將債券期間延長至多年期，以符合市場承保的需求，應用市場無套利假設及平賭測度計價的方法計算合理的市場價值，巨災損失過程將分成損失發展期與損失確定期，以卜瓦松過程表示巨災發生頻率，並利用台灣巨災經驗資料建立合適之損失幅度模型，最後以蒙地卡羅方法針對三種不同型態的巨災風險債券試算合理價值，並具體結論所得的數值結果與後續之研究建議。
Using traditional reinsurance treaties to transfer insurance risks is constrained by the year-to-year volatility of underwriting capacity. Catastrophe risks have increased substantially since the early 1990s and have directly resulted in significant claim losses for insurers, so insurers are pursuing additional capacity from the capital market. Transferring catastrophe risks to investors has stimulated financial innovation in the insurance industry. In this study, pricing issues for the heavily traded catastrophe risk bonds (CAT bonds) are investigated. The aggregate catastrophe loss model of Cummins and Geman (1995) is adopted, while the financial techniques for valuing defaultable bonds in Duffie and Singleton (1999) are employed to determine fair prices incorporating the claim hazard rates and the loss severity. The duration of the CAT bonds is extended from a single year to multiple years in order to meet the demand from the reinsurance market. No-arbitrage theory and martingale measures are employed to determine their fair market values. The contract term of the CAT bonds is divided into a loss period and a development period. The frequency of catastrophe events is modeled through a Poisson process, and Taiwan catastrophe loss experience is examined to build a plausible loss severity model. Three distinct types of CAT bonds are analyzed through the Monte Carlo method for illustration. The paper concludes with remarks on some pricing issues of CAT bonds.
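A stripped-down sketch of the Monte Carlo valuation step described above: catastrophe events arrive as a Poisson process over the risk period, severities are drawn from a lognormal model, and the bond repays full principal only if the aggregate loss stays below a trigger. Flat discounting at a constant rate stands in for the short-rate-plus-hazard discounting of Duffie and Singleton, coupons and the development period are omitted, and all parameter values are hypothetical.

```python
import numpy as np

def cat_bond_price(face=100.0, T=3.0, r=0.03, lam=0.5,
                   mu=2.0, sigma=1.0, trigger=50.0, n_paths=50_000, seed=0):
    """Monte Carlo price of a simple binary-trigger, principal-at-risk CAT bond.

    lam        : Poisson catastrophe frequency (events per year)
    mu, sigma  : parameters of the lognormal loss-severity model
    trigger    : aggregate-loss level above which the principal is wiped out
    """
    rng = np.random.default_rng(seed)
    payoffs = np.empty(n_paths)
    for i in range(n_paths):
        n_events = rng.poisson(lam * T)                     # number of catastrophes
        total_loss = rng.lognormal(mu, sigma, n_events).sum()
        payoffs[i] = face if total_loss < trigger else 0.0  # principal paid or lost
    return np.exp(-r * T) * payoffs.mean()                  # flat discounting

print(f"CAT bond value ~ {cat_bond_price():.2f} per 100 of face value")
```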
|
1096 |
A parallel/vector Monte Carlo MESFET model for shared memory machines
Huster, Carl R., 29 July 1992
The parallelization and vectorization of Monte Carlo algorithms for modelling charge transport in semiconductor devices are considered. The standard ensemble Monte Carlo simulation of a three-parabolic-band model for GaAs is first presented as partial verification of the simulation. The model includes scattering due to acoustic, polar-optical and intervalley phonons. This ensemble simulation is extended to a full device simulation by the addition of real-space positions and the solution for the electrostatic potential from the charge density distribution using Poisson's equation. Poisson's equation was solved using the cloud-in-cell scheme for charge assignment, finite differences for spatial discretization, and simultaneous over-relaxation for the solution. The particle movement (acceleration and scattering) and the solution of Poisson's equation are both separately parallelized. The parallelization techniques used in both parts are based on the use of semaphores for the protection of shared resources and for processor synchronization. The speed-increase results for parallelization with and without vectorization on the Ardent Titan II are presented. The results show saturation, due to memory access limitations, at a speed increase of approximately 3.3 times the serial case when four processors are used. Vectorization alone provides a speed increase of approximately 1.6 times when compared with the nonvectorized serial case. It is concluded that the speed increase achieved with the Titan II is limited by memory access considerations and that this limitation is likely to plague shared memory machines for the foreseeable future. For the program presented here, vectorization is concluded to provide a better speed increase per day of development time than parallelization. However, when vectorization is used in conjunction with parallelization, the speed increase due to vectorization is negligible.
Graduation date: 1993
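The shared-resource pattern described above (cloud-in-cell charge deposition protected by semaphores) can be sketched in a few lines. The version below uses Python threads and a lock as the binary semaphore, so it illustrates only the synchronization pattern, not the Titan II speedups (CPython's global interpreter lock prevents true parallel execution); the grid size and particle counts are arbitrary, and the Poisson solve is omitted.

```python
import threading
import numpy as np

NX = 64
rho = np.zeros(NX)                     # shared charge-density grid (1D for brevity)
rho_lock = threading.Lock()            # binary semaphore protecting the grid

def deposit(positions, weights):
    """Cloud-in-cell assignment: each particle's charge is split linearly
    between its two neighbouring grid points."""
    local = np.zeros(NX)               # accumulate privately first (no lock needed)
    for x, q in zip(positions, weights):
        i = int(x)
        f = x - i
        local[i] += q * (1.0 - f)
        local[i + 1] += q * f
    with rho_lock:                     # protected read-modify-write of shared data
        rho[:] += local

rng = np.random.default_rng(0)
threads = []
for _ in range(4):                     # four workers, echoing the four-processor runs
    pos = rng.uniform(0.0, NX - 1.0, 10_000)
    q = np.full(pos.shape, 1.0 / 40_000)
    t = threading.Thread(target=deposit, args=(pos, q))
    threads.append(t)
    t.start()
for t in threads:
    t.join()

print(f"total deposited charge = {rho.sum():.4f} (should be 1.0)")
```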
|
1097 |
Colloidal chemical potential in attractive nanoparticle-polymer mixtures: simulation and membrane osmometry
Quant, Carlos Arturo, 17 August 2004
The potential applications of dispersed and self-assembled nanoparticles depend critically on accurate control and prediction of their phase behavior. The chemical potential is essential in describing the equilibrium distribution of all components present in every phase of a system and is useful as a building block for constructing phase diagrams. Furthermore, the chemical potential is a sensitive indicator of the local environment of a molecule or particle and is defined in a mathematically rigorous manner in both classical and statistical thermodynamics. The goal of this research is to use simulations and experiments to understand how particle size and composition affect the particle chemical potential of attractive nanoparticle-polymer mixtures.
The expanded ensemble Monte Carlo (EEMC) simulation method for the calculation of the particle chemical potential for a nanocolloid in a freely adsorbing polymer solution is extended to concentrated polymer mixtures. The dependences of the particle chemical potential and polymer adsorption on the polymer concentration and particle diameter are presented. The perturbed Lennard-Jones chain (PLJC) equation of state (EOS) for polymer chains is adapted to calculate the particle chemical potential of nanocolloid-polymer mixtures. The adapted PLJC equation is able to predict the EEMC simulation results of the particle chemical potential by introducing an additional parameter that reduces the effects of polymer adsorption and the effective size of the colloidal particle.
Osmotic pressure measurements are used to calculate the chemical potential of nanocolloidal silica in an aqueous poly(ethylene oxide) (PEO) solution at different silica and PEO concentrations. The experimental data were compared with results calculated from the EEMC simulations. The simulation results agree qualitatively with the experimentally observed chemical potential trends and capture the observed dependence of the chemical potential on composition. Furthermore, as is the case with the EEMC simulations, polymer adsorption was found to play the most significant role in determining the chemical potential trends.
The simulation and experimental results illustrate the relative importance of particle size and composition, as well as the polymer concentration, for the particle chemical potential. Furthermore, a method for using osmometry to measure the chemical potential of nanoparticles in a nanocolloid mixture is presented that could be combined with simulation and theoretical efforts to develop accurate equations of state and phase behavior predictions. Finally, an equation of state originally developed for polymer liquid-liquid equilibria (LLE) was demonstrated to be effective in predicting the nanoparticle chemical potential behavior observed in the EEMC simulations of particle-polymer mixtures.
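A small sketch of how membrane osmometry data relate to chemical potentials in the dilute limit: the solvent chemical potential shift is Δμ_solvent = -Π·v̄_solvent, and a virial fit of Π/c against c yields an apparent molar mass and a second virial coefficient that reflect particle-polymer interactions. The data points below are invented for illustration and are not the silica/PEO measurements of this work, which require a fuller Gibbs-Duhem analysis to reach the particle chemical potential.

```python
import numpy as np

R, T = 8.314, 298.15                 # J/(mol K), K
v_solvent = 18.0e-6                  # molar volume of water, m^3/mol

# Hypothetical osmometry data: particle mass concentration c (kg/m^3) and
# osmotic pressure Pi (Pa).  Invented values, for illustration only.
c  = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
Pi = np.array([55.0, 118.0, 262.0, 430.0, 625.0])

# Virial analysis: Pi/c = R*T*(1/M + B2*c), so a straight-line fit of Pi/c vs c
# gives an apparent molar mass (intercept) and a second virial coefficient (slope).
slope, intercept = np.polyfit(c, Pi / c, 1)
M_app = R * T / intercept            # kg/mol
B2 = slope / (R * T)                 # m^3 mol / kg^2

# Solvent chemical potential shift at each concentration: dmu = -Pi * v_solvent.
dmu = -Pi * v_solvent                # J/mol

print(f"apparent molar mass ~ {M_app:.1f} kg/mol, B2 ~ {B2:.2e} m^3*mol/kg^2")
print("solvent chemical potential shifts (J/mol):", np.round(dmu, 4))
```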
|
1098 |
Implementation Strategies for Particle Filter based Target Tracking
Velmurugan, Rajbabu, 03 April 2007
This thesis contributes new algorithms and implementations for particle filter-based target tracking. From an algorithmic perspective, modifications that improve a batch-based acoustic direction-of-arrival (DOA), multi-target, particle filter tracker are presented. The main improvements are reduced execution time and increased robustness to target maneuvers. The key feature of the batch-based tracker is an image template-matching approach that handles data association and clutter in measurements. The particle filter tracker is compared to an extended Kalman filter (EKF) and a Laplacian filter and is shown to perform better for maneuvering targets. Using an approach similar to the acoustic tracker, a radar range-only tracker is also developed. This includes developing the state update and observation models, and proving observability for a batch of range measurements.
From an implementation perspective, this thesis provides new low-power and real-time implementations for particle filters. First, to achieve a very low-power implementation, two mixed-mode implementation strategies that use analog and digital components are developed. The mixed-mode implementations use analog, multiple-input translinear element (MITE) networks to realize nonlinear functions. The power dissipated in the mixed-mode implementation of a particle filter-based, bearings-only tracker is compared to a digital implementation that uses the CORDIC algorithm to realize the nonlinear functions. The mixed-mode method that uses predominantly analog components is shown to provide a factor of twenty improvement in power savings compared to a digital implementation. Next, real-time implementation strategies for the batch-based acoustic DOA tracker are developed. The characteristics of the digital implementation of the tracker are quantified using digital signal processor (DSP) and field-programmable gate array (FPGA) implementations. The FPGA implementation uses a soft-core or hard-core processor to implement the Newton search in the particle proposal stage. A MITE implementation of the nonlinear DOA update function in the tracker is also presented.
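A bare-bones sketch of the particle filter recursion underlying the trackers above (predict with the motion model, weight by the measurement likelihood, resample), applied to a toy one-dimensional constant-velocity target with noisy range measurements; it is not the batch-based acoustic DOA tracker, and all noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, steps, n_p = 1.0, 50, 500
q, r = 0.05, 1.0                                   # process / measurement noise std

# Simulate a 1-D constant-velocity target and noisy range (|position|) measurements.
x_true = np.zeros((steps, 2)); x_true[0] = [0.0, 0.8]
for k in range(1, steps):
    x_true[k, 1] = x_true[k - 1, 1] + q * rng.standard_normal()
    x_true[k, 0] = x_true[k - 1, 0] + dt * x_true[k, 1]
z = np.abs(x_true[:, 0]) + r * rng.standard_normal(steps)

# Particle filter: each particle is a [position, velocity] hypothesis.
particles = np.column_stack([rng.normal(0.0, 2.0, n_p), rng.normal(0.5, 0.5, n_p)])
estimates = []
for k in range(steps):
    # Predict: propagate every particle through the motion model with noise.
    particles[:, 1] += q * rng.standard_normal(n_p)
    particles[:, 0] += dt * particles[:, 1]
    # Weight: Gaussian likelihood of the range measurement (a small floor
    # guards against an all-zero weight vector).
    w = np.exp(-0.5 * ((z[k] - np.abs(particles[:, 0])) / r) ** 2) + 1e-12
    w /= w.sum()
    estimates.append(w @ particles[:, 0])
    # Resample: multinomial resampling keeps the sketch short (systematic
    # resampling would reduce variance).
    particles = particles[rng.choice(n_p, size=n_p, p=w)]

print(f"final position error = {abs(estimates[-1] - x_true[-1, 0]):.2f}")
```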
|
1099 |
Development of dosimetry and imaging techniques for pre-clinical studies of gold nanoparticle-aided radiation therapy
Jones, Bernard Lee, 05 April 2011
Cancer is one of the leading causes of death worldwide, and affects roughly 1.5 million new people in the United States every year. One of the leading tools in the detection and treatment of cancer is radiation. Tumors can be detected and identified using CT or PET scans, and can then be treated with external beam radiotherapy or brachytherapy. By taking advantage of the physical properties of gold and the biological properties of nanoparticles, gold nanoparticles (GNPs) can be used to improve both cancer radiotherapy and imaging. By infusing a tumor with GNPs, either using passive extravasation of nanoparticles by the tumor vasculature or active targeting of an antibody-conjugated nanoparticle to a specific tumor marker, the higher photon cross-section of gold will cause more radiation dose to be deposited in the tumor during photon-based radiotherapy. In principle, this would allow escalation of dose to the tumor while not increasing the dose to normal healthy tissue. Additionally, if a tumor infused with GNPs were irradiated by an external kilovoltage source, the fluorescence emitted by the gold atoms would allow one to localize and quantify the GNP concentration.
This work has two main aims: to quantify the GNP-mediated dose enhancement during gold nanoparticle-aided radiation therapy (GNRT) on a nanometer scale, and to develop a refined imaging modality capable of quantifying GNP location and concentration within a small-animal-sized object. In order to quantify the GNP-mediated dose enhancement on a nanometer scale, a computational model was developed. This model combines both large-scale and small-scale calculations in order to accurately determine the heterogeneous dose distribution of GNPs. The secondary electron spectra were calculated using condensed-history Monte Carlo, which is able to accurately take into account changes in beam quality throughout the tumor and calculate the average energy spectrum of the secondary charged particles created. Then, the dose distributions of these electron spectra were calculated on a nanometer scale using event-by-event Monte Carlo.
The second aim is to develop an imaging system capable of reconstructing a tomographic image of GNP location and concentration in a small-animal-sized object by capturing gold fluorescence photons emitted during irradiation of the object by an external beam. This would not only allow for localization of GNPs during GNRT, but also facilitate the use of GNPs as imaging agents for drug-delivery or other similar studies. The purpose of this study is to develop a cone-beam implementation of x-ray fluorescence computed tomography (XFCT) that meets realistic constraints on image resolution, detection limit, scan time, and dose. A Monte Carlo model of this imaging geometry was developed and used to test the methods of data acquisition and image reconstruction. The results of this study were then used to drive the production of a functioning benchtop, polychromatic cone-beam XFCT system.
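A first-order sketch of the macroscopic dose-enhancement estimate that motivates the nanometer-scale calculations above: for kilovoltage photons, the dose to gold-loaded tissue scales roughly with the mass energy-absorption coefficient of the tissue-gold mixture relative to plain tissue. The coefficient values below are rough placeholders for a single illustrative energy, and this simple ratio ignores the heterogeneous nanometer-scale dose distribution that the event-by-event Monte Carlo addresses.

```python
# Rough macroscopic dose-enhancement factor (DEF) for tissue uniformly loaded
# with gold, using a mass-weighted mixture of mass energy-absorption coefficients:
#     DEF ~ [(1 - w) * (mu_en/rho)_tissue + w * (mu_en/rho)_gold] / (mu_en/rho)_tissue
# The coefficients below are hypothetical placeholders for one kV-range energy.
MU_EN_TISSUE = 0.04   # cm^2/g, assumed value for soft tissue
MU_EN_GOLD = 1.5      # cm^2/g, assumed value for gold (much larger at kV energies)

def dose_enhancement_factor(gold_mass_fraction: float) -> float:
    """First-order DEF for a uniform tissue-gold mixture."""
    w = gold_mass_fraction
    mixture = (1.0 - w) * MU_EN_TISSUE + w * MU_EN_GOLD
    return mixture / MU_EN_TISSUE

for mg_per_g in (1, 5, 10, 20):       # gold loading, mg Au per g of tissue
    w = mg_per_g / 1000.0
    print(f"{mg_per_g:2d} mg/g -> DEF ~ {dose_enhancement_factor(w):.2f}")
```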
|
1100 |
The role and regulatory mechanisms of Nox1 in vascular systems
Yin, Weiwei, 28 June 2012
As an important endogenous source of reactive oxygen species (ROS), NADPH oxidase 1 (Nox1) has received tremendous attention in the past few decades. It has been identified as playing a key role as the initial "kindle," whose activation is crucial for amplifying ROS production through several propagation mechanisms in the vascular system. As a consequence, Nox1 has been implicated in the initiation and genesis of many cardiovascular diseases and has therefore been the subject of detailed investigations.
The literature on experimental studies of the Nox1 system is extensive. Numerous investigations have identified essential features of the Nox1 system in vasculature and characterized key components, possible regulatory signals and/or signaling pathways, potential activation mechanisms, a variety of Nox1 stimuli, and its potential physiological and pathophysiological functions. While these experimental studies have greatly enhanced our understanding of the Nox1 system, many open questions remain regarding the overall functionality and dynamic behavior of Nox1 in response to specific stimuli. Such questions include the following. What are the main regulatory and/or activation mechanisms of Nox1 systems in different types of vascular cells? Once Nox1 is activated, how does the system return to its original, unstimulated state, and how will its subunits be recycled? What are the potential disassembly pathways of Nox1? Are these pathways equally important for effectively reutilizing Nox1 subunits? How does Nox1 activity change in response to dynamic signals? Are there generic features or principles within the Nox1 system that permit optimal performance?
These types of questions have not been answered by experiments, and they are indeed quite difficult to address experimentally. I demonstrate in this dissertation that one can pose such questions and at least partially answer them with mathematical and computational methods. Two specific cell types, namely endothelial cells (ECs) and vascular smooth muscle cells (VSMCs), are used as "templates" to investigate distinct modes of regulation of Nox1 in different vascular cells. By using a diverse array of modeling methods and computer simulations, this research identifies different types of regulation and their distinct roles in the activation process of Nox1. In the first study, I analyze ECs stimulated by mechanical stimuli, namely shear stresses of different types. The second study uses different analytical and simulation methods to reveal generic features of alternative disassembly mechanisms of Nox1 in VSMCs. This study leads to predictions of the overall dynamic behavior of the Nox1 system in VSMCs as it responds to extracellular stimuli, such as the hormone angiotensin II. The studies and investigations presented here improve our current understanding of the Nox1 system in the vasculature and might help us to develop potential strategies for manipulating and controlling Nox1 activity, which in turn will benefit future experimental and clinical studies.
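A minimal sketch of the kind of dynamic question posed above, and not the dissertation's actual model: treat Nox1 as switching between inactive and active (assembled) pools with assumed first-order assembly and disassembly rates, drive assembly with a step stimulus standing in for an angiotensin II pulse, and integrate the ODE to see how quickly activity rises and relaxes back. All rate constants are arbitrary illustrative values.

```python
from scipy.integrate import solve_ivp

k_on, k_off = 0.5, 0.2                 # assumed assembly / disassembly rates (1/min)

def stimulus(t):
    """Step stimulus between t = 10 and t = 40 min (stand-in for an Ang II pulse)."""
    return 1.0 if 10.0 <= t <= 40.0 else 0.0

def rhs(t, y):
    inactive, active = y
    assemble = k_on * stimulus(t) * inactive
    disassemble = k_off * active
    return [disassemble - assemble, assemble - disassemble]

# Start with all Nox1 in the inactive pool and integrate for 80 minutes.
sol = solve_ivp(rhs, (0.0, 80.0), [1.0, 0.0], max_step=0.1)
print(f"peak active fraction ~ {sol.y[1].max():.2f}; "
      f"fraction at t = 80 min ~ {sol.y[1][-1]:.2f}")
```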
|