111

Modélisation des propriétés magnéto-électriques d'oxydes de métaux de transition anisotropes. / Modeling of the magnetoelectric properties of anisotropic transition metal oxides

Al Baalbaky, Ahmed 21 December 2017 (has links)
Transition metal oxides are widely used due to their interesting fundamental properties and important applications. In particular, CuCrO2 is of special interest because it enters the multiferroic state in zero magnetic field. In this thesis, we model the magnetoelectric properties of CuCrO2 using Monte Carlo simulations based on magnetic parameters determined from ab initio calculations. We also investigate the effect of Ga doping on the magnetoelectric properties of the compound CuCr1-xGaxO2 (0 ≤ x ≤ 0.3). Our results are in good qualitative agreement with the experimental observations.
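As an illustration of the simulation technique named here, the following is a minimal Metropolis Monte Carlo sketch for a classical Heisenberg antiferromagnet. It is hedged throughout: a square lattice with a single nearest-neighbour coupling stands in for the thesis's actual anisotropic Hamiltonian on the triangular Cr sublattice, and the values of L, J, and T are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 12     # lattice linear size (illustrative)
J = 1.0    # antiferromagnetic exchange, arbitrary units
T = 0.5    # temperature in units of J/k_B

# Classical Heisenberg spins: random unit vectors on an L x L lattice
spins = rng.normal(size=(L, L, 3))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)

def neighbour_sum(s, i, j):
    """Sum of the four nearest-neighbour spins, periodic boundaries
    (square lattice for brevity; the Cr sublattice of CuCrO2 is triangular)."""
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j]
            + s[i, (j + 1) % L] + s[i, (j - 1) % L])

def metropolis_sweep(s, T):
    """One Metropolis sweep for E = +J * sum_<ij> S_i . S_j."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        trial = rng.normal(size=3)
        trial /= np.linalg.norm(trial)
        dE = J * np.dot(trial - s[i, j], neighbour_sum(s, i, j))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = trial

for _ in range(200):
    metropolis_sweep(spins, T)
# Nearest-neighbour correlation along one axis; negative once AFM order sets in
print(np.mean(np.sum(spins * np.roll(spins, 1, axis=0), axis=-1)))
```

The same accept/reject loop underlies any such study; only the Hamiltonian entering dE changes.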
112

Spatial fractionation of the dose in charged particle therapy / Fractionnement spatial de la dose en radiothérapie par particules chargées

Peucelle, Cécile 04 November 2016 (has links)
Despite recent breakthroughs, radiotherapy (RT) treatments remain unsatisfactory: the tolerance of normal tissues to radiation still limits the possibility of delivering high (potentially curative) doses to the tumour. To overcome these difficulties, new RT approaches using distinct dose-delivery methods are being explored. Among them, the synchrotron minibeam radiation therapy (MBRT) technique has been shown to lead to a remarkable normal-tissue resistance to very high doses and to a significant tumour growth delay. MBRT combines sub-millimetric beams with a spatial fractionation of the dose. Adding the more selective energy deposition of charged particles (and their biological selectivity) to the well-established normal-tissue sparing of MBRT could yield a further gain in normal-tissue preservation. This innovative strategy was explored in this Ph.D. thesis. Two new avenues in particular were studied: proton MBRT (pMBRT) and very heavy ion MBRT. First, the experimental proof of concept of pMBRT was performed at a clinical facility (the Institut Curie Proton Therapy Centre in Orsay, France). In addition, potential optimisations of pMBRT, both in irradiation configuration and in minibeam generation, were evaluated in a Monte Carlo (MC) study. In the second part of this work, a potential renewed use of very heavy ions (neon and heavier) for therapy was assessed, also by MC simulation. Combining such ions with a spatial fractionation of the dose could exploit their high efficiency in the treatment of hypoxic radioresistant tumours, one of the main challenges in RT, while minimising their side effects. The promising results obtained in this thesis support further exploration of these two novel avenues. The dosimetric data compiled in this manuscript will serve to guide the forthcoming biological experiments.
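A central dosimetric quantity for any spatially fractionated technique is the peak-to-valley dose ratio (PVDR) of the lateral dose profile. The sketch below computes it for a synthetic profile; the Gaussian minibeams, their 4 mm spacing, the background level, and the valleys-at-midpoints convention are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def pvdr(dose, x_mm):
    """Peak-to-valley dose ratio of a minibeam lateral dose profile.
    Peaks: local maxima above half the global maximum; valleys: profile
    values midway between neighbouring peaks (a common convention,
    assumed here)."""
    peaks = [i for i in range(1, len(dose) - 1)
             if dose[i] >= dose[i - 1] and dose[i] >= dose[i + 1]
             and dose[i] > 0.5 * dose.max()]
    peak_dose = np.mean(dose[peaks])
    mids = [(x_mm[a] + x_mm[b]) / 2 for a, b in zip(peaks, peaks[1:])]
    valley_dose = np.mean([dose[np.abs(x_mm - m).argmin()] for m in mids])
    return peak_dose / valley_dose

# Toy profile: Gaussian minibeams 4 mm apart on a small scatter background
x = np.linspace(-10, 10, 2001)
d = 0.05 + sum(np.exp(-0.5 * ((x - c) / 0.3) ** 2) for c in (-8, -4, 0, 4, 8))
print(f"PVDR ~ {pvdr(d, x):.0f}")
```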
113

Neutrophil Extracellular Trap (NET) Formation: From Fundamental Biophysics to Delivery of Nanosensors

Meyer, Daniel 26 June 2019 (has links)
No description available.
114

Les isotopes d'azote au-delà de la limite de stabilité neutronique : 23N, 24N et 25N / Nitrogen isotopes beyond the neutron drip line: 23N, 24N and 25N

Deshayes, Quentin 04 December 2017 (has links)
To study the most neutron-rich nitrogen isotopes, an experiment was undertaken at the RIBF-RIKEN facility using the SAMURAI spectrometer and the NEBULA neutron array. The nuclei of interest (23N*, 24N and 25N) were produced via few-nucleon knockout and fragmentation reactions from high-energy (~250 MeV/nucleon) secondary beams. The technique of invariant-mass spectroscopy, which requires the measurement in complete kinematics of the beam-like reaction products, was employed to characterise these unbound systems. To interpret the results, a simulation of the complete experimental setup was used, and the detector calibrations and analysis techniques were tested by probing the known ground state of 16B. In the case of 23N, in-flight gamma-ray spectroscopy confirmed that it has no bound excited states. Three reaction channels (the knockout of a proton from 24O, the fragmentation of 27Ne, and inelastic scattering) were employed to search for unbound excited states. In all three channels, an l=0 resonance was observed at around 3.5 MeV excitation energy. This resonance is interpreted, through comparison with shell-model calculations, as the Jπ=3/2- first excited state of 23N. In all channels, a second resonance at an excitation energy of close to 5 MeV was needed to fully describe the measured fragment-neutron relative-energy spectra. The nucleus 24N was observed here for the first time, as a resonance-like peak some 1.3 MeV above the one-neutron decay threshold. Four reaction channels were investigated: the knockout of two and three protons from 26F and 27Ne, respectively, and the fragmentation of 27F and 28Ne. All the relative-energy spectra were consistent with the population of an l=2 resonance. Simple theoretical considerations suggest that this is the 2-,1- doublet predicted by the shell model as the 24N ground state. The nucleus 25N was also observed here for the first time. Despite the relatively limited statistics, both two-proton and three-proton removal from 27F and 28Ne exhibit a clear structure some 1.7 MeV above the two-neutron decay threshold, which may be identified with the expected 1/2- ground state.
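For reference, the invariant-mass method mentioned above reconstructs the decay energy of an unbound state from the measured four-momenta of its decay products. In natural units (c = 1), and for a fragment f plus one neutron n, the standard relations are:

```latex
% Invariant mass of the fragment + neutron system:
\[
  M_{\mathrm{inv}} = \sqrt{(E_f + E_n)^2 - \lVert \vec{p}_f + \vec{p}_n \rVert^2}
\]
% Relative (decay) energy, subtracting the rest masses:
\[
  E_{\mathrm{rel}} = M_{\mathrm{inv}} - m_f - m_n
\]
% Excitation energy, adding back the neutron separation energy:
\[
  E^{*} = E_{\mathrm{rel}} + S_n
\]
```

For 25N, which lies beyond the two-neutron threshold, the invariant mass is built from the fragment and both detected neutrons, and S_2n replaces S_n.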
115

Contrast agent imaging using an optimized table-top x-ray fluorescence and photon-counting computed tomography imaging system

Dunning, Chelsea Amanda Saffron 04 November 2020 (has links)
Contrast agents are often crucial in medical imaging for disease diagnosis. Novel contrast agents, such as gold nanoparticles (AuNPs) and lanthanides, are being explored for a variety of clinical applications. Preclinical testing of these contrast agents is necessary before they are approved for use in humans, which requires small animal imaging techniques. Small animal imaging demands the detection of these contrast agents in trace amounts at acceptable imaging time and radiation dose. Two such imaging techniques are x-ray fluorescence computed tomography (XFCT) and photon-counting CT (PCCT). XFCT combines the principles of CT with x-ray fluorescence by detecting fluorescent x-rays from contrast agents at various projections to reconstruct contrast-agent maps. XFCT can image trace amounts of AuNPs but is limited to small-animal imaging due to fluorescent x-ray attenuation and scatter. PCCT uses photon-counting detectors that separate the CT data into energy bins. This enables contrast-agent detection by exploiting the energy dependence of x-ray attenuation in different materials, independent of AuNP depth, and can provide anatomical information that XFCT cannot. To achieve the best of both worlds, we modeled and built a table-top x-ray imaging system capable of simultaneous XFCT and PCCT imaging. We used Monte Carlo simulation software for the following XFCT work. We simulated XFCT induced by x-ray, electron, and proton beams scanning a small-animal-sized object (phantom) containing AuNPs. XFCT induced by x-rays resulted in the best AuNP image quality; however, high-energy electron and medium-energy proton XFCT may be feasible for on-board x-ray fluorescence techniques during radiation therapy. We then simulated a scan of a phantom containing AuNPs on a table-top system to optimize the detector arrangement, size, and data-acquisition strategy based on the resulting XFCT image quality and the available detector equipment. To enable faster XFCT data acquisition, we separately simulated another AuNP phantom and determined the best collimator geometry for Au fluorescent x-ray detection. We also performed experiments on our table-top x-ray imaging system in the lab. Phantoms containing mixtures of three lanthanide contrast agents were scanned using a photon-counting detector capable of sustaining high x-ray fluxes, which enabled PCCT. We used a novel subtraction algorithm to reconstruct separate contrast-agent maps; all lanthanides were distinguishable at low concentrations, including gadolinium and holmium, which are close in atomic number. Finally, we performed the first simultaneous XFCT and PCCT scan of a phantom and of mice containing both gadolinium and gold, based on the optimized parameters from our simulations. This dissertation outlines the development of our table-top x-ray imaging system and the optimization of the complex parameters necessary to obtain XFCT and PCCT images of multiple contrast agents at biologically relevant concentrations.
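The energy-bin reasoning behind PCCT can be made concrete with a tiny per-voxel basis-material decomposition, solving Ax = b by least squares. Every number below (bin centres, attenuation entries, the measurement vector) is a made-up placeholder rather than NIST data or a value from this dissertation; only the K-edge energies quoted in the comments are physical constants.

```python
import numpy as np

# Rows: detector energy bins; columns: basis materials.
bins_keV = [30, 45, 60, 90]            # bin centres (hypothetical)
materials = ["water", "gadolinium", "gold"]

# A[i][j] = mass attenuation (cm^2/g) of material j in bin i; values made up.
A = np.array([[0.35, 7.9, 8.2],   # 30 keV
              [0.25, 3.9, 4.0],   # 45 keV
              [0.21, 6.5, 2.6],   # 60 keV: above the Gd K-edge (50.2 keV)
              [0.17, 2.9, 4.9]])  # 90 keV: above the Au K-edge (80.7 keV)

# b[i] = measured linear attenuation (1/cm) of one voxel in bin i (made up).
b = np.array([1.00, 0.60, 0.80, 0.50])

# Plain least squares keeps the sketch short; a non-negative solver
# would be more physical.
densities, *_ = np.linalg.lstsq(A, b, rcond=None)
for m, rho in zip(materials, densities):
    print(f"{m}: {rho:.3f} g/cm^3")
```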
116

Lattice model for amyloid peptides: OPEP force field parametrization and applications to the nucleus size of Alzheimer's peptides / Modèle réseau de peptides amyloïdes : paramétrisation du champ de forces OPEP et application aux noyaux de nucléation de peptides d'Alzheimer

Tran, Thanh Thuy 20 September 2016 (has links)
The neurodegenerative Alzheimer's disease (AD) affects more than 40 million people worldwide and is linked to the aggregation of the amyloid-β proteins of 40/42 amino acids. Despite many experimental and theoretical studies, the mechanism by which amyloid fibrils form and the 3D structures of the early toxic species in aqueous solution remain to be determined. In this thesis, I studied the structures of the early-formed oligomers of the amyloid-β peptide and the critical nucleus size of two amyloid-β peptide fragments, using either coarse-grained or all-atom simulations. First, at the coarse-grained level, I developed a lattice model for amyloid proteins, which allows us to study the nucleus sizes of two experimentally well-characterized fragments, (Aβ)16-22 and (Aβ)37-42, of the Alzheimer's peptide (Aβ)1-42. After presenting a comprehensive OPEP force-field parameterization using an on-lattice protein model with Monte Carlo simulations and atomistic simulations, I determined the nucleus sizes of the two fragments. My results show that the nucleation number is 10 chains for (Aβ)16-22 and larger than 20 chains for (Aβ)37-42. This knowledge is important for designing more effective drugs against AD. Second, I investigated the structures of the (Aβ)1-40 dimer using extensive atomistic replica-exchange molecular dynamics (REMD) simulations. This study provides insights into the equilibrium structure of the (Aβ)1-40 dimer in aqueous solution, opening a new avenue for a comprehensive understanding, at the molecular level, of the impact of pathogenic and protective mutations in early-stage Alzheimer's disease.
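The replica-exchange machinery used here rests on one textbook acceptance test for swapping configurations between replicas held at different temperatures. A minimal sketch; the temperatures and energies in the demo are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def remd_swap_accepted(beta_i, beta_j, E_i, E_j):
    """Standard replica-exchange acceptance test: a swap of the
    configurations at inverse temperatures beta_i and beta_j is accepted
    with probability min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or rng.random() < np.exp(delta)

# Made-up demo: one cold and one hot replica, energies in kcal/mol
k_B = 0.0019872  # kcal/(mol K)
beta_cold, beta_hot = 1 / (k_B * 300.0), 1 / (k_B * 450.0)
print(remd_swap_accepted(beta_cold, beta_hot, E_i=-120.0, E_j=-95.0))
```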
117

Microbial Cell Disruption Using Pressurized Gases to Improve Lipid Recovery from Wet Biomass: Thermodynamic Analysis

Howlader, Md Shamim 04 May 2018 (has links)
Microbial cell disruption using pressurized gas is a promising approach to improve the lipid-extraction yield directly from wet biomass by eliminating the energy-intensive drying step that is an integral part of traditional methods. Because the process starts with solubilization of the gas in the lipid-rich microbial cells, understanding the solubility of candidate gases in both lipid (triglyceride) and lipid-rich microbial cell culture is important for designing efficient cell-disruption processes. In this study, we determined the solubility of different gases (CO2, CH4, N2, and Ar) in canola oil (a triglyceride) using a pressure-drop apparatus developed in our laboratory. The solubility of the gases in the triglyceride followed the trend CO2 > CH4 > Ar > N2. Since CO2 was the most soluble, its solubility in lipid-rich cell culture, cell-culture media, and spent media was also determined: CO2 is more soluble in triglyceride, but less soluble in lipid-rich cell culture, than in water. The solubilities correlated from both thermodynamic models and Monte Carlo simulations were in good agreement with the experimental results. CO2 proved the most suitable gas for microbial cell disruption: almost 100% cell death occurred with CO2, whereas more than 85% of cells remained active after treatment with CH4, N2, or Ar. Cell disruption was optimized using a Box-Behnken design of experiments (DOE) combined with response surface methodology; the optimized conditions were 3900 kPa, 296.5 K, 360 min, and 325 rpm, at which the statistical model predicted almost 100% cell death. Finally, 86% of the total lipid content could be recovered from the wet biomass after treatment with pressurized CO2 under the optimized conditions, compared with up to 74% for the control, an increase of 12 percentage points in lipid-extraction yield.
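The pressure-drop measurement principle can be summarised in a few lines: the gas absorbed by the liquid is inferred from the fall in headspace pressure at fixed volume and temperature. The sketch below assumes an ideal gas (z = 1) for brevity, whereas a full analysis such as this study's would use an equation of state and thermodynamic modelling; the function name and all numbers are hypothetical.

```python
R = 8.314  # J/(mol*K)

def moles_dissolved(p_initial, p_final, V_headspace, T, z=1.0):
    """Gas absorbed by the liquid, from the headspace pressure drop.

    p_initial, p_final : Pa, before and after equilibration
    V_headspace        : m^3, gas volume above the liquid (assumed constant)
    T                  : K
    z                  : compressibility factor (z = 1 means ideal gas;
                         a real analysis would use an equation of state)
    """
    return (p_initial - p_final) * V_headspace / (z * R * T)

# Hypothetical numbers, not measurements from this work:
n = moles_dissolved(p_initial=3.9e6, p_final=3.4e6,
                    V_headspace=1.0e-4, T=296.5)
mass_oil_kg = 0.05
print(f"solubility ~ {n / mass_oil_kg:.2f} mol CO2 per kg oil")
```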
118

Adaptive Sampling Methods for Stochastic Optimization

Daniel Andres Vasquez Carvajal (10631270) 08 December 2022 (has links)
This dissertation investigates the use of sampling methods for solving stochastic optimization problems with iterative algorithms. Two sampling paradigms are considered: (i) adaptive sampling, where, before each iterate update, the sample size for estimating the objective function and the gradient is chosen adaptively; and (ii) retrospective approximation (RA), where iterate updates are performed using a fixed sample size for as long as progress is deemed statistically significant, at which point the sample size is increased. We investigate adaptive sampling within a trust-region framework for solving stochastic optimization problems in $\mathbb{R}^d$, and retrospective approximation within the broader context of solving stochastic optimization problems on a Hilbert space. In the first part of the dissertation, we propose Adaptive Sampling Trust-Region Optimization (ASTRO), a class of derivative-based stochastic trust-region (TR) algorithms developed to solve smooth, unconstrained stochastic optimization problems in $\mathbb{R}^{d}$ where the objective function and its gradient are observable only through a noisy oracle or a large dataset. Efficiency in ASTRO stems from two key aspects: (i) adaptive sampling, which ensures that the objective function and its gradient are sampled only to the extent needed, so that small sample sizes are chosen when the iterates are far from a critical point and large sample sizes when they are near one; and (ii) quasi-Newton Hessian updates using BFGS. We prove three main results for ASTRO and for general stochastic trust-region methods that estimate function and gradient values adaptively, using sample sizes that are stopping times with respect to the sigma algebra of the generated observations. The first asserts strong consistency when the adaptive sample sizes have a mild logarithmic lower bound, assuming the oracle errors are light-tailed. The second and third characterize the iteration and oracle complexities in terms of certain risk functions: the second asserts that the best achievable $\mathcal{O}(\epsilon^{-1})$ iteration complexity (of squared gradient norm) is attained when the total relative risk associated with the adaptive sample-size sequence is finite, and the third characterizes the corresponding oracle complexity in terms of the total generalized risk associated with that sequence. We report encouraging numerical results in certain settings.

In the second part of this dissertation, we consider RA as an alternate adaptive sampling paradigm for solving smooth stochastic constrained optimization problems in infinite-dimensional Hilbert spaces. RA generates a sequence of subsampled deterministic infinite-dimensional problems that are approximately solved within a dynamic error tolerance; the bottleneck then becomes solving this sequence of problems efficiently. To this end, we propose a progressive subspace expansion (PSE) framework for solving smooth deterministic optimization problems in infinite-dimensional Hilbert spaces with a trust-region Sequential Quadratic Programming (TR-SQP) solver. The infinite-dimensional problem is discretized, and a sequence of finite-dimensional problems is solved with progressively increasing problem dimension. Additionally, (i) we solve this sequence of finite-dimensional problems only to the extent necessary, spending just enough computational work to solve each within a dynamic error tolerance, and (ii) we use the solution of the current problem as the initial guess for the next. We prove two main results for PSE. The first asserts convergence to a first-order critical point of a subsequence of iterates generated by the PSE TR-SQP algorithm. The second characterizes the relationship between the error tolerance and the problem dimension, and provides an oracle-complexity result for the total computational work incurred by PSE. This amount of work is closely connected to three quantities: the convergence rate of the finite-dimensional spaces to the infinite-dimensional space, the rate of increase of the cost of oracle calls in finite-dimensional spaces, and the convergence rate of the solution method used. We also show encouraging numerical results on an optimal control problem supporting our theoretical findings.
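The adaptive-sampling idea ("sample only to the extent needed") can be illustrated with a toy rule that doubles the sample until the standard error of the gradient estimate is small relative to its norm. This is a simplification: ASTRO's actual conditions involve stopping times tied to the trust-region radius, and a plain gradient step stands in below for the TR update. The objective, noise model, and the constant kappa are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_grad(x, n):
    """Average n noisy oracle calls for grad f, with f(x) = ||x||^2 / 2,
    so the true gradient is x; oracle noise is standard normal (toy model)."""
    g = np.tile(x, (n, 1)) + rng.normal(size=(n, x.size))
    return g.mean(axis=0), g.std(axis=0, ddof=1) / np.sqrt(n)

def adaptive_grad(x, kappa=0.5, n0=4, n_max=2**14):
    """Double the sample until the gradient estimate's standard error is
    at most kappa times its norm: small samples far from a critical
    point, large samples near one."""
    n = n0
    while True:
        g, se = noisy_grad(x, n)
        if np.linalg.norm(se) <= kappa * np.linalg.norm(g) or n >= n_max:
            return g, n
        n *= 2

x = np.array([2.0, -1.0])
for step in range(8):
    g, n = adaptive_grad(x)
    x = x - 0.2 * g   # plain gradient step standing in for the TR update
    print(f"step {step}: sample size {n:5d}, ||x|| = {np.linalg.norm(x):.3f}")
```

As the iterates approach the critical point at the origin, the printed sample sizes grow, which is precisely the behaviour the abstract describes.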
119

A Computational Analysis of the Structure of the Genetic Code

Degagne, Christopher 11 1900 (has links)
The standard genetic code (SGC) is the cipher used by nearly all organisms to transcribe information stored in DNA and translate it into its amino acid counterparts. Since the early 1960s, researchers have observed that the SGC is structured so that similar codons encode amino acids with similar physicochemical properties. This structure has been hypothesized to buffer the SGC against transcription or translation error, because single-nucleotide mutations usually are either silent or impart minimal effect on the containing protein. We briefly review different theories for the origin of that structure, as well as different computational experiments designed to quantify the buffering capacity of the SGC. We report computational Monte Carlo simulations performed using AGCT, a computer program that we developed. In the simulations, the SGC was ranked against other, hypothetical genetic codes (HGCs) on its ability to minimize physicochemical distances between amino acids encoded by codons separated by single-nucleotide mutations. We analyzed unappreciated structural aspects of, and neglected properties in, the SGC. We found that the type of error measure affected the SGC ranking. We also found that altering stop-codon positions had no effect on the SGC ranking, but that including stop codons in the error calculations improved it. We analyzed 49 properties individually and identified conserved properties. Among these, we found that long-range non-bonded energy is more conserved than polar requirement, previously considered the most conserved property in the SGC. We also analyzed properties in combination, hypothesizing that the SGC is organized as a compromise among multiple properties. Finally, we used AGCT to test whether different theories on the origin of the SGC could explain the buffering capacity in the SGC more convincingly. We found that, without accounting for transition/transversion biases, the SGC ranking was modest enough under the constraints imposed by the coevolution and four-column theories that it could be explained by constraints associated with either theory (or both); however, when transition/transversion biases were included, only the four-column theory returned an SGC ranking modest enough to be explained by constraints associated with that theory. / Thesis / Master of Science (MSc) / The standard genetic code (SGC) is the cipher used almost universally to transcribe information stored in DNA and translate it to amino acid counterparts. Since the mid 1960s, researchers have recognized that the SGC is organized so that similar three-nucleotide RNA codons encode amino acids with similar properties; researchers consequently hypothesized that the SGC is structured to minimize effects from transcription or translation errors. This hypothesis has been tested using computer simulation. I briefly review results from those studies, complement them by analyzing unappreciated structural aspects and neglected properties, and test two theories on the origin of the SGC.
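The ranking procedure described here is easy to sketch: score a code by the mean squared property difference across all single-nucleotide codon neighbours, then count how many amino-acid permutations of the table score better. The property values and the stand-in code table below are fabricated (a real run would use measured properties such as polar requirement and the true SGC, with stop codons handled as discussed), and AGCT itself is not reproduced.

```python
import random
from itertools import product

random.seed(0)

BASES = "UCAG"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]

# Placeholder per-amino-acid property values and a stand-in code table
# (not the real SGC, which also has three stop codons).
AA = list("ACDEFGHIKLMNPQRSTVWY")
PROP = {a: 0.5 * i for i, a in enumerate(AA)}
CODE = {c: AA[i % 20] for i, c in enumerate(CODONS)}

def neighbours(codon):
    """All nine single-nucleotide mutants of a codon."""
    for pos in range(3):
        for b in BASES:
            if b != codon[pos]:
                yield codon[:pos] + b + codon[pos + 1:]

def error(code):
    """Mean squared property difference over single-mutation codon pairs."""
    d = [(PROP[code[c]] - PROP[code[n]]) ** 2
         for c in CODONS for n in neighbours(c)]
    return sum(d) / len(d)

def shuffled(code):
    """Hypothetical code: permute which amino acid each codon block gets,
    preserving the degeneracy pattern of the original table."""
    perm = dict(zip(AA, random.sample(AA, len(AA))))
    return {c: perm[a] for c, a in code.items()}

e0 = error(CODE)
better = sum(error(shuffled(CODE)) < e0 for _ in range(1000))
print(f"{better}/1000 hypothetical codes buffer better than the table")
```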
120

Eines computacionals avançades per a planificació radioterapèutica mitjançat simulacions Monte Carlo / Advanced computational tools for radiotherapy planning using Monte Carlo simulations

Oliver Gil, Sandra 27 April 2024 (has links)
Thesis by compendium of publications / [EN] The thesis presented in this document falls within the scope of medical physics. Within this branch of physics, computational tools are developed to improve the planning of treatments involving ionizing radiation. In these plans, dosimetric factors are calculated, such as the total absorbed dose both in the region of interest (the target of the treatment) and in the surrounding tissues and organs at risk near the target area. Different techniques exist to perform these calculations, with Monte Carlo simulations considered the most accurate tool. Such simulations allow the medical devices that deliver the treatment beam to the patient to be modelled in detail; they also describe the radiation sources minutely and account for the transport of the particles involved through the considered geometry. Different Monte Carlo codes were employed in the works comprising this thesis, depending on the problem addressed: MCNP6, for its capacity and ease in modelling complex geometries using volumetric meshes; penEasy, as a code to validate some of the designed tools; and penRed, for its features specialized for medical physics, such as automatic reading and processing of DICOM files and brachytherapy sources, which greatly facilitate simulations in the medical environment. Because penRed is open source and, unlike MCNP, requires no licence, it was decided to extend its missing capabilities so that it could be used interchangeably with the other codes on the problems addressed during the doctoral thesis. All of these works contribute to the development of tools that, through Monte Carlo simulation, optimize calculations in radiotherapy; the tools also have broader applicability in other fields, such as diagnosis based on medical imaging. The first work covers the need for MCNP6 to read and write phase-space files in the standard IAEA format, a capability many Monte Carlo codes already implement: a converter was developed between MCNP6's internal phase-space format and the IAEA format, in both directions. In the second work, Monte Carlo simulations were used to design a filter that homogenizes the 12 MeV electron beam at the output of an intraoperative radiotherapy accelerator; the resulting filter configuration was designed with Monte Carlo simulation and validated by an independent research group. The third work tackles the high computation times of Monte Carlo radiotherapy planning for treatments with different angular irradiations, aiming to speed up significantly the calculation of the dose distribution in the phantom or patient without simulating through all the components of the accelerator. Finally, the use of mesh-based geometries in the MCNP6 simulations highlighted the importance of this capability, especially in medical physics; since the definition of the geometry describing the system is a fundamental part of any simulation, regardless of the code used, the fourth work focuses on the development of a module for simulating on meshed geometries in penRed. / This study was supported by the program “Ayudas para la promoción de empleo joven e implantación de la Garantía Juvenil en I+D+i, Plan Estatal de Investigación Científica y Técnica e Innovación 2017-2020” from the “Iniciativa de Empleo Juvenil” (IEJ) and the “Fondo Social Europeo” (FSE). We would like to acknowledge the Spanish “Ministerio de Ciencia e Innovación” (MCIN) grant PID2021-125096NB-I00 funded by MCIN/AEI/10.13039 and the “Generalitat Valenciana” (GVA) grant PROMETEO/2021/064. / Oliver Gil, S. (2024). Eines computacionals avançades per a planificació radioterapèutica mitjançat simulacions Monte Carlo [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/203890
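The phase-space converter of the first work is, at heart, a loop that unpacks fixed-size binary particle records from one layout and repacks them into another. The sketch below invents a simplified record layout for both ends; the real MCNP and IAEA phase-space formats differ from it and from each other (headers, history markers, optional fields) and are specified in their own documentation.

```python
import struct

# Hypothetical record layout standing in for BOTH formats: particle type,
# then energy, weight, position (x, y, z) and direction (u, v, w).
SRC_FMT = struct.Struct("<i8d")   # source: int type + 8 doubles
DST_FMT = struct.Struct("<i8f")   # target: same fields, single precision

def convert(src_path, dst_path):
    """Core loop of any phase-space converter: unpack each particle
    record from the source layout and repack it into the target layout."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(SRC_FMT.size)
            if len(chunk) < SRC_FMT.size:
                break  # end of file (or a truncated trailing record)
            fields = SRC_FMT.unpack(chunk)
            dst.write(DST_FMT.pack(*fields))
```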
