  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Computational Methods for Random Differential Equations: Theory and Applications

Navarro Quiles, Ana 01 March 2018 (has links)
Ever since the early contributions of Isaac Newton, Gottfried Wilhelm Leibniz, and Jacob and Johann Bernoulli in the seventeenth century, difference and differential equations have demonstrated their capability to model complex problems of great interest in Engineering, Physics, Chemistry, Epidemiology, Economics, etc. From a practical standpoint, however, applying difference or differential equations requires setting their inputs (coefficients, source term, initial and boundary conditions) from sampled data, which therefore carry uncertainty stemming from measurement errors. In addition, random external factors can affect the system under study. It is thus more advisable to consider the input data as random variables or stochastic processes rather than as deterministic constants or functions, respectively. Under this consideration, random difference and random differential equations appear.
This thesis solves, from a probabilistic point of view, different types of random difference and differential equations, applying mainly the Random Variable Transformation method. This technique is a useful tool for obtaining the probability density function of a random vector that results from mapping another random vector whose probability density function is known. The goal of the dissertation is the computation of the first probability density function of the solution stochastic process in different problems based on random difference or differential equations. The interest in determining the first probability density function lies in the fact that this deterministic function characterizes the one-dimensional probabilistic information, such as the mean, variance, skewness, kurtosis, etc., of the solution of the corresponding random difference or differential equation. It also allows the probability of any event of interest involving the solution to be determined. In addition, in some cases the theoretical study is completed by showing its application to modelling problems with real data, where the estimation of parametric statistical distributions for the inputs is addressed in the context of random difference and differential equations. / Navarro Quiles, A. (2018). Computational Methods for Random Differential Equations: Theory and Applications [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/98703
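For reference, the Random Variable Transformation technique invoked in this abstract is usually stated as follows; this is the standard textbook formulation, not a formula taken from the thesis itself.

```latex
% Random Variable Transformation (RVT) theorem, standard form.
% If X is an absolutely continuous random vector with density f_X and
% Y = g(X) for a one-to-one, continuously differentiable mapping g with
% inverse h = g^{-1}, then the density of Y is
\[
  f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}\bigl(h(\mathbf{y})\bigr)\,
  \bigl|\det J_{h}(\mathbf{y})\bigr| ,
\]
% where J_h denotes the Jacobian matrix of the inverse mapping h.
```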
42

Distribuições preditiva e implícita para ativos financeiros / Predictive and implied distributions of a stock price

Oliveira, Natália Lombardi de 01 June 2017 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / We present two different approaches to obtain a probability density function for a stock's future price: a predictive distribution, based on a Bayesian time series model, and the implied distribution, based on the Black & Scholes option pricing formula. Considering the Black & Scholes model, we derive the necessary conditions to obtain the implied distribution of the stock price on the exercise date. Based on predictive densities, we compare the market-implied model (Black & Scholes) with a historically based approach (the Bayesian time series model). After obtaining the density functions, it is straightforward to evaluate the probability that one price exceeds the other and to make a decision to sell or buy a stock. As an example, we also show how to use these distributions to build an option pricing formula.
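To illustrate the implied-distribution side of this comparison: under the Black & Scholes assumptions the stock price on the exercise date is lognormal, and its density can be sketched as below. The parameter names (s0, r, sigma, T) and the example values are illustrative assumptions, not the dissertation's notation or data.

```python
import numpy as np

def bs_implied_density(s_T, s0, r, sigma, T):
    """Lognormal density of the stock price S_T implied by the Black & Scholes
    model under the risk-neutral measure:
    ln S_T ~ N(ln s0 + (r - sigma**2 / 2) * T, sigma**2 * T)."""
    mu = np.log(s0) + (r - 0.5 * sigma**2) * T
    var = sigma**2 * T
    return np.exp(-(np.log(s_T) - mu) ** 2 / (2 * var)) / (
        s_T * np.sqrt(2 * np.pi * var)
    )

# Density of the price in one year for a stock at 100, with a 5% risk-free
# rate and 20% volatility (made-up values for illustration only).
grid = np.linspace(50.0, 200.0, 5)
print(bs_implied_density(grid, s0=100.0, r=0.05, sigma=0.2, T=1.0))
```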
43

Vizualizace vybraných proudění supratekutého hélia s využitím částic pevného vodíku / Visualization of selected flows of superfluid helium using solid hydrogen tracer particles

Duda, Daniel January 2013 (has links)
Author: Bc. Daniel Duda. Department: Department of Low Temperature Physics, Faculty of Mathematics and Physics, Charles University in Prague. Supervisor: prof. RNDr. Ladislav Skrbek, DrSc. Consultant: Dr. Marco La Mantia, PhD. Abstract: Quantum turbulence generated in thermal counterflow of He II is studied experimentally by visualization. The statistical properties of the motion of micron-sized solid deuterium particles are studied using the particle tracking velocimetry technique at length scales comparable to the mean distance between quantized vortices. The probability density function (PDF) of the longitudinal velocity displays two peaks that correspond to the two velocity fields of the two-fluid description of He II. The PDF of the transversal velocity displays a classical-like Gaussian core with non-classical power-law tails, confirming the quantum nature of turbulence in counterflowing He II. The distribution of the particle acceleration is found to be similar in shape to the classical one, in the range of investigated parameters. The observed de-...
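A minimal sketch of how a velocity PDF of this kind is typically estimated from particle-tracking data, assuming positions sampled at a fixed frame rate; the function, variable names, and synthetic data below are illustrative, not the processing pipeline of the thesis.

```python
import numpy as np

def velocity_pdf(positions, dt, bins=100):
    """Estimate the PDF of one velocity component from tracked particle
    positions (array of shape n_frames x n_particles, sampled every dt
    seconds) via finite differences and a normalized histogram."""
    velocities = np.diff(positions, axis=0) / dt      # frame-to-frame velocities
    pdf, edges = np.histogram(velocities.ravel(), bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])          # bin centres
    return centres, pdf

# Synthetic example: 1000 frames of 50 particles recorded at 100 fps.
rng = np.random.default_rng(0)
tracks = np.cumsum(rng.normal(scale=1e-4, size=(1000, 50)), axis=0)
bin_centres, pdf = velocity_pdf(tracks, dt=0.01)
```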
44

Inversion of seismic attributes for petrophysical parameters and rock facies

Shahraeeni, Mohammad Sadegh January 2011 (has links)
Prediction of rock and fluid properties such as porosity, clay content, and water saturation is essential for exploration and development of hydrocarbon reservoirs. Rock and fluid property maps obtained from such predictions can be used for optimal selection of well locations for reservoir development and production enhancement. Seismic data are usually the only source of information available throughout a field that can be used to predict the 3D distribution of properties with appropriate spatial resolution. The main challenge in inferring properties from seismic data is the ambiguous nature of geophysical information. Therefore, any estimate of rock and fluid property maps derived from seismic data must also represent its associated uncertainty. In this study we develop a computationally efficient mathematical technique based on neural networks to integrate measured data and a priori information in order to reduce the uncertainty in rock and fluid properties in a reservoir. The post-inversion (a posteriori) information about rock and fluid properties is represented by the joint probability density function (PDF) of porosity, clay content, and water saturation. In this technique the a posteriori PDF is modeled by a weighted sum of Gaussian PDFs. A so-called mixture density network (MDN) estimates the weights, mean vector, and covariance matrix of the Gaussians given any measured data set. We solve several inverse problems with the MDN, compare the results with a Monte Carlo (MC) sampling solution, and show that the MDN inversion technique provides a good estimate of the MC sampling solution. However, the computational cost of training and using the neural network is much lower than that of the MC sampling solution (by more than a factor of 10⁴ in some cases). We also discuss the design, implementation, and training procedure of the MDN, and its limitations in estimating the solution of an inverse problem. In this thesis we focus on data from a deep offshore field in Africa. Our goal is to apply the MDN inversion technique to obtain maps of petrophysical properties (i.e., porosity, clay content, water saturation) and petrophysical facies from 3D seismic data. Petrophysical facies (i.e., non-reservoir, oil- and brine-saturated reservoir facies) are defined probabilistically based on geological information and values of the petrophysical parameters. First, we investigate the relationship (i.e., the petrophysical forward function) between compressional- and shear-wave velocity and the petrophysical parameters. The petrophysical forward function depends on different properties of rocks and varies from one rock type to another. Therefore, after acquisition of well logs or seismic data from a geological setting, the petrophysical forward function must be calibrated with data and observations. The uncertainty of the petrophysical forward function comes from uncertainty in measurements and uncertainty about the type of facies. We present a method to construct the petrophysical forward function with its associated uncertainty from both of the sources above. The results show that introducing uncertainty in facies improves the accuracy of the petrophysical forward function predictions. Then, we apply the MDN inversion method to solve four different petrophysical inverse problems. In particular, we invert P- and S-wave impedance logs for the joint PDF of porosity, clay content, and water saturation using a calibrated petrophysical forward function.
Results show that the posterior PDF of the model parameters provides reasonable estimates of the measured well logs. Errors in the posterior PDF are mainly due to errors in the petrophysical forward function. Finally, we apply the MDN inversion method to predict 3D petrophysical properties from attributes of seismic data. In this application, the inversion objective is to estimate the joint PDF of porosity, clay content, and water saturation at each point in the reservoir from the compressional- and shear-wave impedance obtained from the inversion of AVO seismic data. Uncertainty in the a posteriori PDF of the model parameters is due to different sources, such as variations in effective pressure, bulk modulus and density of hydrocarbon, uncertainty of the petrophysical forward function, and random noise in recorded data. Results show that the standard deviations of all model parameters are reduced after inversion, which shows that the inversion process provides information about all parameters. We also apply the result of the petrophysical inversion to estimate 3D probability maps of non-reservoir facies and brine- and oil-saturated reservoir facies. The accuracy of the predicted oil-saturated facies at the well location is good, but due to errors in the petrophysical inversion the predicted non-reservoir and brine-saturated facies are ambiguous. Although the accuracy of results may vary due to different sources of error in different applications, the fast, probabilistic method of solving non-linear inverse problems developed in this study can be applied to invert well logs and large seismic data sets for petrophysical parameters in different applications.
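The central object of this approach, the Gaussian-mixture posterior that an MDN outputs, can be evaluated as in the sketch below; the network that predicts the weights, means, and covariances is omitted, and all names and numbers are illustrative assumptions.

```python
import numpy as np

def mixture_pdf(m, weights, means, covs):
    """Evaluate a weighted sum of multivariate Gaussian PDFs at the point m.
    weights: (K,), means: (K, D), covs: (K, D, D) -- the quantities an MDN
    would output for one measured data vector."""
    density = 0.0
    D = m.shape[0]
    for w, mu, cov in zip(weights, means, covs):
        diff = m - mu
        norm = np.sqrt((2.0 * np.pi) ** D * np.linalg.det(cov))
        density += w * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm
    return density

# Posterior density of (porosity, clay content, water saturation) for a
# two-component mixture with made-up parameters.
value = mixture_pdf(
    np.array([0.20, 0.15, 0.60]),
    weights=np.array([0.7, 0.3]),
    means=np.array([[0.22, 0.10, 0.55], [0.15, 0.30, 0.80]]),
    covs=np.array([np.eye(3) * 0.01, np.eye(3) * 0.02]),
)
```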
45

Experimental and numerical investigation of high viscosity oil-based multiphase flows

Alagbe, Solomon Oluyemi 05 1900 (has links)
Multiphase flows are of great interest to a large variety of industries because flows of two or more immiscible liquids are encountered in a diverse range of processes and equipment. However, the advent of high-viscosity oil requires further investigation to support good design of transportation systems and forestall its inherent production difficulties. Experimental and numerical studies were conducted on water-sand, oil-water and oil-water-sand flows in a 1-in. ID, 5 m long horizontal pipe. The densities of the CYL680 and CYL1000 oils employed are 917 and 916.2 kg/m³, while their viscosities are 1.830 and 3.149 Pa·s at 25 °C, respectively. The solid-phase concentration ranged from 2.15e-04 to 10% v/v, with a mean particle diameter of 150 µm and a material density of 2650 kg/m³. Experimentally, the observed flow patterns are Water Assist Annular (WA-ANN), Dispersed Oil in Water (DOW/OF), Oil Plug in Water (OPW/OF) with oil film on the wall, and Water Plug in Oil (WPO). These configurations were identified through visualisation, the trend and probability density function (PDF) of the pressure signals, and the statistical moments. Injection of water to assist high-viscosity oil transport reduced the pressure gradient by an order of magnitude. No significant differences were found between the pressure gradients of oil-water and oil-water-sand flows; however, an increase in sand concentration led to an increase in the pressure losses in oil-water-sand flow. Numerically, the Water Assist Annular (WA-ANN), Dispersed Oil in Water (DOW/OF), Oil Plug in Water (OPW/OF) with oil film on the wall, and Water Plug in Oil (WPO) flow patterns were successfully reproduced by imposing a concentric inlet condition at the pipe inlet, coupled with a newly developed turbulent kinetic energy budget equation coded as a user-defined function and hooked up to the turbulence models. These modifications aided satisfactory predictions.
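A short sketch of the kind of signal processing described here: the empirical PDF and the first statistical moments of a pressure trace, which are then used to discriminate flow patterns. The function name, binning, and synthetic signal are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def pressure_signal_features(p, bins=50):
    """Return the normalized histogram (empirical PDF) of a pressure signal
    together with the statistical moments commonly used to classify flow
    patterns: mean, standard deviation, skewness, and kurtosis."""
    pdf, edges = np.histogram(p, bins=bins, density=True)
    moments = {
        "mean": np.mean(p),
        "std": np.std(p),
        "skewness": stats.skew(p),
        "kurtosis": stats.kurtosis(p),
    }
    return pdf, edges, moments

# Example with a synthetic pressure trace (Pa).
rng = np.random.default_rng(1)
signal = 2.0e4 + 500.0 * rng.normal(size=10_000)
_, _, features = pressure_signal_features(signal)
```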
46

Non-parametric probability density function estimation for medical images

Joshi, Niranjan Bhaskar January 2008 (has links)
The estimation of probability density functions (PDFs) of intensity values plays an important role in medical image analysis. Non-parametric PDF estimation methods have the advantage of generality in their application. The two most popular estimators used in image analysis to perform the non-parametric PDF estimation task are the histogram and the kernel density estimator. But these popular estimators crucially need to be ‘tuned’ by setting a number of parameters and may be either computationally inefficient or need a large amount of training data. In this thesis, we critically analyse and further develop a recently proposed non-parametric PDF estimation method for signals, called the NP windows method. We propose three new algorithms to compute PDF estimates using the NP windows method. One of these algorithms, called the log-basis algorithm, provides an easier and faster way to compute the NP windows estimate, and allows us to compare the NP windows method with the two existing popular estimators. Results show that the NP windows method is fast and can estimate PDFs with a significantly smaller amount of training data. Moreover, it does not require any additional parameter settings. To demonstrate the utility of the NP windows method in image analysis we consider its application to image segmentation. To do this, we first describe the distribution of intensity values in the image with a mixture of non-parametric distributions. We estimate these distributions using the NP windows method. We then use this novel mixture model to evolve curves within the well-known level set framework for image segmentation. We also take into account the partial volume effect that assumes importance in medical image analysis methods. In the final part of the thesis, we apply our non-parametric mixture model (NPMM) based level set segmentation framework to segment colorectal MR images. The segmentation of colorectal MR images is made challenging by the sparsity and ambiguity of features, the presence of various artifacts, and the complex anatomy of the region. We propose to use the monogenic signal (local energy, phase, and orientation) to overcome the first difficulty, and the NPMM to overcome the remaining two. Results are improved substantially on those that have been reported previously. We also present various ways to visualise clinically useful information obtained with our segmentations in a 3-dimensional manner.
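For context, the two popular estimators the abstract compares against can be written in a few lines; the sketch below is a generic histogram/kernel density estimator with an assumed bandwidth rule, not the NP windows method itself.

```python
import numpy as np

def kde_gaussian(samples, grid, bandwidth=None):
    """Gaussian kernel density estimate of intensity values on a grid,
    using Silverman's rule of thumb if no bandwidth is supplied."""
    n = samples.size
    if bandwidth is None:
        bandwidth = 1.06 * samples.std() * n ** (-1.0 / 5.0)
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diffs**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

# Compare against a plain histogram on the same synthetic intensity samples.
rng = np.random.default_rng(2)
intensities = rng.normal(loc=120.0, scale=15.0, size=2_000)
x = np.linspace(60.0, 180.0, 200)
kde = kde_gaussian(intensities, x)
hist, edges = np.histogram(intensities, bins=32, density=True)
```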
47

A random matrix model for two-colour QCD at non-zero quark density

Phillips, Michael James January 2011 (has links)
We solve a random matrix ensemble called the chiral Ginibre orthogonal ensemble, or chGinOE. This non-Hermitian ensemble has applications to modelling particular low-energy limits of two-colour quantum chromodynamics (QCD). In particular, the matrices model the Dirac operator for quarks in the presence of a gluon gauge field of fixed topology, with an arbitrary number of flavours of virtual quarks and a non-zero quark chemical potential. We derive the joint probability density function (JPDF) of eigenvalues for this ensemble for finite matrix size N, which we then write in a factorised form. We then present two different methods for determining the correlation functions, resulting in compact expressions involving Pfaffians containing the associated kernel. We determine the microscopic large-N limits at strong and weak non-Hermiticity (required for physical applications) for both the real and complex eigenvalue densities. Various other properties of the ensemble are also investigated, including the skew-orthogonal polynomials and the fraction of eigenvalues that are real. A number of the techniques that we develop have more general applicability within random matrix theory, some of which we also explore in this thesis.
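One of the quantities mentioned, the fraction of eigenvalues that are real, is easy to estimate numerically for the plain (non-chiral) real Ginibre ensemble; the sketch below is that simpler illustration, not the chGinOE solved in the thesis.

```python
import numpy as np

def real_eigenvalue_fraction(n, trials=200, tol=1e-9, seed=0):
    """Monte Carlo estimate of the expected fraction of real eigenvalues of
    an n x n real Ginibre matrix (i.i.d. standard normal entries)."""
    rng = np.random.default_rng(seed)
    frac = 0.0
    for _ in range(trials):
        eig = np.linalg.eigvals(rng.normal(size=(n, n)))
        frac += np.mean(np.abs(eig.imag) < tol)
    return frac / trials

# For large n the expected fraction decays like sqrt(2 / (pi * n)).
print(real_eigenvalue_fraction(50))
```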
48

Echantillonnage aléatoire et estimation spectrale de processus et de champs stationnaires / Random sampling and spectral estimation of stationary processes and fields

Kouakou, Kouadio Simplice 14 June 2012 (has links)
In this work we are interested in kernel estimation of the spectral density for continuous-time processes and random fields observed along random discrete sampling schemes. Two kinds of random sampling are considered here: randomly dilated sampling schemes and Poissonian sampling schemes. No Gaussianity condition is imposed on the processes and fields under study; the hypotheses concern their cumulants. First, we consider a dilated random sampling scheme introduced by Hall and Patil (1994) and used more recently by Matsuda and Yajima (2009) for the estimation of the spectral density of a Gaussian random field. We establish the quadratic mean convergence in our more general setting, as well as the rate of convergence of the estimator. Next we apply the Poissonian sampling scheme in two different situations: spectral estimation of a process subject to a random time change (clock variation or jitter), and spectral estimation of a random field on R2. The problem of estimating the spectral density of a process subject to a time change is solved by projection onto the basis of eigenvectors of integral operators defined from the characteristic function of the increments of the random time change. We establish the quadratic mean convergence and the asymptotic normality of two estimators, one constructed from a continuous-time observation and the other from a Poissonian sampling of the time-changed process. The last part of this work is devoted to a random field on R2 observed along a sampling scheme based on two independent Poisson processes, one for each axis of R2. The convergence results are illustrated by simulations.
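As a rough illustration of spectral estimation from Poisson-sampled observations (not one of the estimators analysed in the thesis), one can smooth a periodogram built directly on the irregular sampling times; everything below, including the normalisation by the sampling rate, is a simplifying assumption.

```python
import numpy as np

def smoothed_periodogram(times, values, freqs, bandwidth):
    """Naive spectral estimate for a zero-mean signal observed at Poisson
    sampling times: a periodogram evaluated on the irregular times, then
    smoothed with a Gaussian kernel in frequency."""
    span = times.max() - times.min()
    rate = len(times) / span                              # Poisson intensity
    dft = np.array([np.sum(values * np.exp(-2j * np.pi * f * times)) for f in freqs])
    raw = np.abs(dft) ** 2 / (rate**2 * span)
    weights = np.exp(-0.5 * ((freqs[:, None] - freqs[None, :]) / bandwidth) ** 2)
    return (weights @ raw) / weights.sum(axis=1)

# Poisson sampling (rate ~5 samples/s) of a noisy 1 Hz oscillation over ~200 s.
rng = np.random.default_rng(3)
t = np.cumsum(rng.exponential(scale=0.2, size=1_000))
x = np.cos(2 * np.pi * 1.0 * t) + 0.5 * rng.normal(size=t.size)
f = np.linspace(0.1, 3.0, 300)
spectrum = smoothed_periodogram(t, x, f, bandwidth=0.05)
```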
49

Impact of Geometric Uncertainties on Dose Calculations for Intensity Modulated Radiation Therapy of Prostate Cancer

Jiang, Runqing January 2007 (has links)
IMRT uses non-uniform beam intensities within a radiation field to provide patient-specific dose shaping, resulting in a dose distribution that conforms tightly to the planning target volume (PTV). Unavoidable geometric uncertainty arising from patient repositioning and internal organ motion can lead to a lower conformality index (CI), a decrease in tumor control probability (TCP) and an increase in normal tissue complication probability (NTCP). The CI of the IMRT plan depends heavily on steep dose gradients between the PTV and the organ at risk (OAR). Geometric uncertainties reduce the planned dose gradients and result in a less steep or “blurred” dose gradient. The blurred dose gradients can be maximized by constraining the dose objective function in the static IMRT plan or by reducing geometric uncertainty during treatment with corrective verification imaging. Internal organ motion and setup error were evaluated simultaneously for 118 individual patients with implanted fiducials and MV electronic portal imaging (EPI). The Gaussian PDF is patient-specific, and the group standard deviation (SD) should not be used for accurate treatment planning for individual patients. Frequent verification imaging should be employed in situations where geometric uncertainties are expected. The dose distribution including geometric uncertainties was determined from integration of the convolution of the static dose gradient with the PDF. The local maximum dose gradient (LMDG) was determined via optimization of the dose objective function by manually adjusting DVH control points or selecting beam numbers and directions during IMRT treatment planning. EUDf is a useful QA parameter for interpreting the biological impact of geometric uncertainties on the static dose distribution, and it has been used as the basis for the time-course NTCP evaluation in this thesis. Relative NTCP values are useful for comparative QA checking by normalizing known complications (e.g. those reported in the RTOG studies) to specific DVH control points. For prostate cancer patients, rectal complications were evaluated from specific RTOG clinical trials and a detailed evaluation of the treatment techniques. Treatment plans that did not meet DVH constraints represented additional complication risk. Geometric uncertainties improved or worsened rectal NTCP depending on individual internal organ motion within the patient.
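The core computation described here, blurring a static dose distribution by the PDF of geometric uncertainty, amounts to a convolution; the one-dimensional sketch below uses an assumed Gaussian PDF and an idealized dose profile, not the clinical data of the thesis.

```python
import numpy as np

def blurred_dose(static_dose, dx, sigma):
    """Convolve a 1-D static dose profile (grid spacing dx, in mm) with a
    zero-mean Gaussian PDF of standard deviation sigma (mm) representing
    setup error and internal organ motion."""
    half = int(4 * sigma / dx)
    offsets = np.arange(-half, half + 1) * dx
    pdf = np.exp(-0.5 * (offsets / sigma) ** 2)
    pdf /= pdf.sum()                          # discrete PDF, sums to one
    return np.convolve(static_dose, pdf, mode="same")

# Steep planned dose gradient at the PTV edge, blurred by 3 mm of uncertainty.
x = np.arange(0.0, 100.0, 1.0)                # position along profile (mm)
planned = np.where(x < 60.0, 70.0, 10.0)      # Gy, idealized step profile
delivered = blurred_dose(planned, dx=1.0, sigma=3.0)
```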
