1

Development of a computational model for calculating absorbed dose in organs and tissues of the human body in accidental exposure situations

SANTOS, Adriano Márcio dos January 2006 (has links)
Exposure to a radiation field may be medical, environmental, occupational, or accidental in nature, but in every case the main objective is to determine the absorbed dose in the whole body or the distribution of absorbed dose in specific organs and tissues. In recent years, estimates of absorbed dose in the human body have become more accurate thanks to advances in instrumentation and computing. In addition to dosimeters and biodosimetric methods, computational exposure models based on Monte Carlo (MC) methods are available for calculating absorbed dose in organs and tissues. To simulate radiation transport in the human body correctly, the MC code can be coupled to an anthropomorphic voxel phantom, which can currently be considered the best representation of the human body for the purpose of absorbed dose determination. In this work, a computational exposure model was developed by coupling the EGS4 Monte Carlo code to the MAX voxel phantom, which was suitably modified to allow, in particular, the assessment of absorbed dose in humans exposed to external radiation sources in accidental situations. To adapt the MAX/EGS4 exposure model easily to accidental situations, a generalized point source was developed that can be placed at arbitrary positions with respect to the human body. The functional properties of this generalized point source were verified with an Alderson-Rando (AR) phantom.
The physical AR phantom was scanned with a computed tomograph, and the segmented images of the resulting virtual AR phantom were subsequently coupled to the EGS4 MC code. Experimental exposure data for the physical AR phantom were compared with the results of corresponding simulated exposures of the virtual AR phantom using the EGS4 MC code. Applications of the MAX/EGS4 accidental exposure model were demonstrated in this study for two selected radiological accidents that occurred in Yanango (Peru) and Nesvizh (Belarus). Following the information given in the corresponding IAEA (International Atomic Energy Agency) reports, the exposure conditions of the two accidents were simulated with the MAX/EGS4 exposure model; in the case of the Nesvizh (Belarus) accident, this included a modification of the MAX phantom's posture. The results showed that the MAX/EGS4 exposure model can be correctly adjusted to specific irradiation conditions, and that absorbed doses in radiosensitive tissues and organs resulting from accidental exposures can be determined with sufficient accuracy, a crucial condition for the medical treatment of exposed individuals.
2

Simulation and Analysis of Human Phantoms Exposed to Heavy Charged Particle Irradiations Using the Particle and Heavy Ion Transport System (PHITS)

Lee, Dongyoul 2011 December 1900 (has links)
Anthropomorphic phantoms are commonly used for testing radiation fields without the need to expose human subjects. One of the most widely known is the RANDO phantom. This phantom is used primarily for medical X-ray applications, but a similar design known as "MATROSHKA" is now being used for space research and is exposed to heavy ion irradiations from the galactic environment. Since the radiation field in the phantom should behave much as it would in human tissues and organs under irradiation, the tissue substitute chosen for soft tissue and the level of complexity of the entire phantom are crucial issues. The phantoms, and the materials used to create them, were developed mainly for photon irradiations and have not been thoroughly tested under the heavy ion exposures found in the space environment or in external radiotherapy. The Particle and Heavy Ion Transport code System (PHITS) was used to test the phantoms and their materials for their potential as human surrogates under heavy ion irradiation. Stopping powers and depth-dose distributions of heavy charged particles (HCPs) important to space research and medical applications were first used in the simulations to test the suitability of current soft tissue substitutes. A detailed computational anthropomorphic phantom was then developed in which tissue substitutes and ICRU-44 tissue could be interchanged, in order to validate the soft tissue substitutes and determine the level of complexity of the entire phantom needed to achieve a specified precision as a replacement for the human body. The materials tested were common soft tissue substitutes in current use, together with materials with potential as soft tissue substitutes. Ceric sulfate dosimeter solution was closest to ICRU-44 tissue; however, it was not appropriate as a phantom material because it is a solution.
A150 plastic, ED4C (fhw), Nylon (Du Pont Elvamide 8062), RM/SR4, Temex, and RW-2 were within 1% of ICRU-44 tissue in the mean normalized difference of mass stopping powers (or stopping powers, for RW-2), and their depth-dose distributions were close; they were therefore the most suitable among the remaining solid materials. Overall, the soft tissue substitutes within 1% of ICRU-44 tissue in terms of stopping power produced reasonable organ-dose results in the developed phantom. RM/SR4 is the best anthropomorphic phantom soft tissue substitute because it has interaction properties similar to, and a density identical to, ICRU-44 tissue, and because it is a rigid solid polymer, which gives practical advantages in the manufacture of real phantoms.
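The 1% comparison metric used above can be sketched as follows. The stopping-power values below are made-up placeholders, not ICRU-44 or measured data; only the mean-normalized-difference computation itself reflects the comparison described in the abstract.

```python
def mean_normalized_difference(candidate, reference):
    """Mean of |S_cand - S_ref| / S_ref over a common energy grid,
    expressed in percent."""
    assert len(candidate) == len(reference)
    diffs = [abs(c - r) / r for c, r in zip(candidate, reference)]
    return 100.0 * sum(diffs) / len(diffs)

# Illustrative (made-up) mass stopping powers on a shared energy grid.
icru44  = [4.20, 2.60, 1.90, 1.55, 1.35]
subst_a = [4.18, 2.59, 1.89, 1.54, 1.34]   # hypothetical close match
subst_b = [4.60, 2.85, 2.05, 1.70, 1.48]   # hypothetical poor match

within_1pct = mean_normalized_difference(subst_a, icru44) < 1.0
```

Under this metric, `subst_a` would pass the 1% screen applied in the study while `subst_b` would not.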
3

Randomized Resource Allocation in Decentralized Wireless Networks

Moshksar, Kamyar January 2011 (has links)
Ad hoc networks and Bluetooth systems operating over the unlicensed ISM band are instances of decentralized wireless networks. By definition, a decentralized network is composed of separate transmitter-receiver pairs where there is no central controller to assign the resources to the users. As such, resource allocation must be performed locally at each node. Users are anonymous to each other, i.e., they are not aware of each other's codebooks. This implies that multiuser detection is not possible and users treat each other as noise. Multiuser interference is known to be the main factor that limits the achievable rates in such networks, particularly in the high Signal-to-Noise Ratio (SNR) regime. Therefore, all users must follow a distributed signaling scheme such that the destructive effect of interference on each user is minimized, while the resources are fairly shared. In chapter 2 we consider a decentralized wireless communication network with a fixed number of frequency sub-bands to be shared among several transmitter-receiver pairs. It is assumed that the number of active users is a realization of a random variable with a given probability mass function. Moreover, users are unaware of each other's codebooks and hence, no multiuser detection is possible. We propose a randomized Frequency Hopping (FH) scheme in which each transmitter randomly hops over a subset of sub-bands from transmission slot to transmission slot. Assuming all users transmit Gaussian signals, the distribution of the noise plus interference is mixed Gaussian, which makes calculation of the mutual information between the transmitted and received signals of each user intractable. We derive lower and upper bounds on the mutual information of each user and demonstrate that, for large SNR values, the two bounds coincide. This observation enables us to compute the sum multiplexing gain of the system and obtain the optimum hopping strategy for maximizing this quantity.
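The interference statistics of the hopping scheme can be illustrated with a toy simulation, assuming (as a simplification of the subset-hopping described above) that each user occupies exactly one uniformly chosen sub-band per slot:

```python
import random

def collision_rate(n_bands, n_users, n_slots=100_000, seed=7):
    """Empirical probability that user 0's sub-band is also chosen by at
    least one other user in a slot, when every user hops independently
    and uniformly over `n_bands` sub-bands."""
    random.seed(seed)
    hits = 0
    for _ in range(n_slots):
        choices = [random.randrange(n_bands) for _ in range(n_users)]
        if choices[0] in choices[1:]:
            hits += 1
    return hits / n_slots

# With K bands and u interfering users, the exact per-slot collision
# probability is 1 - (1 - 1/K)^u.
rate = collision_rate(n_bands=8, n_users=3)
exact = 1.0 - (1.0 - 1.0 / 8) ** 2
```

In slots with a collision the interference term is a Gaussian signal, and in collision-free slots it is absent, which is exactly why the aggregate noise-plus-interference seen by a receiver is mixed Gaussian.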
We compare the performance of the FH system with that of the Frequency Division (FD) system in terms of the following performance measures: average sum multiplexing gain and average minimum multiplexing gain per user. We show that (depending on the probability mass function of the number of active users) the FH system can offer a significant improvement in terms of the aforementioned measures. In the sequel, we consider a scenario where the transmitters are unaware of the number of active users in the network as well as the channel gains. Developing a new upper bound on the differential entropy of a mixed Gaussian random vector and using entropy power inequality, we obtain lower bounds on the maximum transmission rate per user to ensure a specified outage probability at a given SNR level. We demonstrate that the so-called outage capacity can be considerably higher in the FH scheme than in the FD scenario for reasonable distributions on the number of active users. This guarantees a higher spectral efficiency in FH compared to FD. Chapter 3 addresses spectral efficiency in decentralized wireless networks of separate transmitter-receiver pairs by generalizing the ideas developed in chapter 2. Motivated by random spreading in Code Division Multiple Access (CDMA), a signaling scheme is introduced where each user's code-book consists of two groups of codewords, referred to as signal codewords and signature codewords. Each signal codeword is a sequence of independent Gaussian random variables and each signature codeword is a sequence of independent random vectors constructed over a globally known alphabet. Using a conditional entropy power inequality and a key upper bound on the differential entropy of a mixed Gaussian random vector, we develop an inner bound on the capacity region of the decentralized network. 
To guarantee consistency and fairness, each user designs its signature codewords based on maximizing the average (with respect to a globally known distribution on the channel gains) of the achievable rate per user. It is demonstrated how the Sum Multiplexing Gain (SMG) in the network (regardless of the number of users) can be made arbitrarily close to the SMG of a centralized network with an orthogonal scheme such as Time Division (TD). An interesting observation is that in general the elements of the vectors in a signature codeword must not be equiprobable over the underlying alphabet, in contrast to the use of binary Pseudo-random Noise (PN) signatures in randomly spread CDMA where the chip elements are +1 or -1 with equal probability. The main reason for this phenomenon is the interplay between two factors appearing in the expression of the achievable rate, i.e., multiplexing gain and the so-called interference entropy factor. In the sequel, invoking an information-theoretic extremal inequality, we present an optimality result by showing that in randomized frequency hopping, which is the main idea in the prevailing Bluetooth devices in decentralized networks, transmission of independent signals in consecutive transmission slots is in general suboptimal regardless of the distribution of the signals. Finally, chapter 4 addresses a decentralized Gaussian interference channel consisting of two block-asynchronous transmitter-receiver pairs. We consider a scenario where the rate of data arrival at the encoders is considerably low and codewords of each user are transmitted at random instants depending on the availability of enough data for transmission. This makes the transmitted signals by each user look like scattered bursts along the time axis. Users are block-asynchronous meaning there exists a delay between their transmitted signal bursts.
The proposed model for asynchrony assumes the starting point of an interference burst is uniformly distributed along the transmitted codeword of any user. There is also the possibility that each user does not experience interference on a transmitted codeword at all. Due to the randomness of delay, the channels are non-ergodic in the sense that the transmitters are unaware of the location of interference bursts along their transmitted codewords. In the proposed scheme, upon availability of enough data in its queue, each user follows a locally Randomized Masking (RM) strategy where the transmitter quits transmitting the Gaussian symbols in its codeword independently from symbol interval to symbol interval. An upper bound on the probability of outage per user is developed using entropy power inequality and a key upper bound on the differential entropy of a mixed Gaussian random variable. It is shown that by adopting the RM scheme, the probability of outage is considerably less than the case where both users transmit the Gaussian symbols in their codewords in consecutive symbol intervals, referred to as Continuous Transmission (CT).
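The intractability invoked throughout this abstract, and the role of Gaussian upper bounds, can be sketched for a scalar mixed Gaussian: its differential entropy has no closed form, but Monte Carlo integration and the maximum-entropy Gaussian bound bracket it. The weights and variances below are arbitrary examples, not taken from the thesis.

```python
import math
import random

def entropy_mc(weights, sigmas, n=200_000, seed=11):
    """Monte Carlo estimate (in nats) of the differential entropy of a
    zero-mean Gaussian mixture sum_i w_i * N(0, sigma_i^2): sample from
    the mixture and average -log p(x)."""
    random.seed(seed)
    def pdf(x):
        return sum(w / (s * math.sqrt(2.0 * math.pi))
                   * math.exp(-x * x / (2.0 * s * s))
                   for w, s in zip(weights, sigmas))
    acc = 0.0
    for _ in range(n):
        i = random.choices(range(len(weights)), weights=weights)[0]
        x = random.gauss(0.0, sigmas[i])
        acc += -math.log(pdf(x))
    return acc / n

w, s = [0.5, 0.5], [1.0, 3.0]                    # arbitrary example mixture
h = entropy_mc(w, s)
# A Gaussian with the same variance maximizes entropy, giving the
# classic upper bound h <= 0.5 * log(2*pi*e*var).
var = sum(wi * si * si for wi, si in zip(w, s))  # zero-mean mixture variance
h_upper = 0.5 * math.log(2.0 * math.pi * math.e * var)
```

Tight, analytically tractable bounds of this kind on mixed-Gaussian entropies are what make the achievable-rate and outage analyses in chapters 2-4 possible.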
5

Radiotherapy Treatment Planning System for Linear Particle Accelerators (LinAc) based on the Monte Carlo method

Abella Aranda, Vicente 14 October 2014 (has links)
The main motivation behind the rapid progress of cancer prevention and treatment techniques in recent years has been, and continues to be, cancer's prominence among the leading causes of death: more than 10 million diagnoses per year worldwide and more than 160,000 in Spain. In this context, the clinical implementation of Radiotherapy Treatment Planning Systems (RTPS) has played a key role. It is widely acknowledged in nuclear medicine that the conventional, deterministic dose calculation algorithms used by RTPS lack the precision needed to determine lateral electron transport when a charged-particle beam crosses the interface between a low-density and a high-density material; moreover, they produce erroneous dose predictions in the presence of heterogeneities because of the strong electron scattering between the different materials. Dose calculation methods based on Monte Carlo (MC) have been shown to provide more accurate dose distributions than the conventional algorithms in commercial 3D planners. However, despite the substantial improvement they offer, MC methods have not yet been implemented extensively in the clinic because of the computation time they require to obtain results with acceptable statistics. This thesis presents a study of the integration of dosimetric calculations performed with a Monte Carlo particle transport code (MCNP) into a freely distributed Treatment Planning System (PlanUNC), analogous to the commercial ones.
The work comprises not only the development of a software package, named MCTPS-UPV, that enables the intercommunication of MCNP with PLUNC, but also an optimization study of the MC simulation aimed at speeding up the calculation and minimizing its computation time without compromising the statistical validity of the results. The results show that, by coupling MCNP version 5 1.40 into PLUNC (and assuming that MCNP5 results match experiment within a 5% error interval, since they were validated experimentally in a water tank with heterogeneities using the Elekta Precise linear accelerator (LinAc) and a multileaf collimator (MLC)), the simulation can be carried out on real patients with a methodology that yields computation times suitable for clinical application and accurate dose deposition in heterogeneous media. The research also provides an extensive practical and theoretical academic study of MC simulation in treatment planning systems and of the particular issues involved in the clinical implementation of MC dosimetric algorithms, such as the influence of heterogeneities on dose deposition in the patient, the influence of voxel size, and variance reduction in the statistical calculation, all of which are important in this context. The simulations are carried out with an Elekta Precise LinAc with MLC and different field sizes and configurations, allowing an exhaustive analysis of all the variables involved in the irradiation.
Finally, the work should lead to a future experimental validation of the dose distributions inside the RANDO phantom using dosimeters, as well as to the possibility of obtaining realistic calculation times with technologies more accessible to the user, of including beam shaping after the initial phase-space simulation, and of studying patient contamination by photoneutrons. / Abella Aranda, V. (2014). Sistema de Planificación de Tratamientos de Radioterapia para Aceleradores Lineales de Partículas (LinAc) basado en el método Monte Carlo [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/43219
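The computation-time/statistics trade-off that motivates the optimization study above can be illustrated with a toy tally (plain Python; the per-history energy distribution is an arbitrary stand-in, not MCNP output): the relative standard error of a Monte Carlo dose estimate falls as 1/sqrt(N), so each tenfold reduction in uncertainty costs a hundredfold in histories.

```python
import math
import random

def mc_dose_tally(n, seed=3):
    """Toy Monte Carlo tally: average a random per-history energy
    deposition and report the relative standard error of the mean,
    which shrinks as 1/sqrt(N)."""
    random.seed(seed)
    total = total_sq = 0.0
    for _ in range(n):
        e = random.expovariate(1.0)   # stand-in energy deposition per history
        total += e
        total_sq += e * e
    mean = total / n
    var = total_sq / n - mean * mean
    rel_err = math.sqrt(var / n) / mean
    return mean, rel_err

_, err_small = mc_dose_tally(1_000)
_, err_large = mc_dose_tally(100_000)
# 100x more histories should give roughly 10x lower relative error.
```

Variance-reduction techniques, as studied in the thesis, attack exactly this scaling by lowering the per-history variance instead of increasing N.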
